Drive Badger: open source platform for covert data exfiltration operations, ranging from small computers to big servers.

contact@drivebadger.com

Drive Badger is able to recursively exfiltrate virtual machines on VMware and Hyper-V virtualization servers.

Why is a separate exfiltration mode required?

The problem with virtualization servers is drive space reservation: they contain several virtual drive image files, in which only a small part of the reserved space is occupied by valuable files, while all the rest:

  • is free - just like in any other computer, but multiplied by the number of separate image files
  • is occupied by files and directories that would have been skipped by Drive Badger's exclusion rules

For example, a particular Windows 10 virtual machine can have a 300 GB drive, where:

  • 55 GB is occupied by Windows files (including the huge internal WinSxS directory)
  • 10 GB is occupied by pagefile.sys and other temporary files
  • only 18 GB is occupied by more or less valuable files (worth exfiltrating)
  • there is over 200 GB of free space

In the standard configuration, Drive Badger sees only these virtual drive image files and has to copy them as a whole, which wastes a lot of space on the Drive Badger device.

When exfiltrating normal Windows servers, which additionally host e.g. 1-5 virtual machines, this is usually not a problem. But if you want to exfiltrate big, specialized virtualization servers with hundreds of virtual machines, you would run out of space without this special mode.

How does it work?

  1. After start, Drive Badger enumerates physical drives and partitions.
  2. Each partition is mounted and exfiltrated - but before the actual exfiltration, it is processed by hooks.
  3. hook-virtual searches the partition for virtual drive image files (*.vmdk for VMware, *.vhd and *.vhdx for Hyper-V).
  4. All found files are recursively mounted as partitions and exfiltrated.
  5. If the exfiltration process fails for a particular file, it is copied (and compressed on the fly) as a raw drive image - so it can be processed manually later (see the sketch below this list).
  6. Additional exclusion rules from exclude-virtual prevent these virtual drive image files from also being exfiltrated as ordinary files.
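
The following is a minimal sketch of what steps 3-5 boil down to, assuming a source partition already mounted at /mnt/part1 and a target drive at /media/target (both paths, and the lz4 fallback, are illustrative - the actual hook-virtual scripts may differ):

SRC=/mnt/part1     # already mounted source partition (illustrative path)
DST=/media/target  # Drive Badger target drive (illustrative path)
find "$SRC" -type f \( -iname '*.vmdk' -o -iname '*.vhd' -o -iname '*.vhdx' \) | while read -r img; do
    mnt=$(mktemp -d)
    if guestmount -a "$img" -i --ro "$mnt"; then
        # mount succeeded: exfiltrate the filesystems inside the container
        rsync -a "$mnt/" "$DST/$(basename "$img").extracted/"
        guestunmount "$mnt"
    else
        # mount failed: fall back to copying the whole container, compressed on the fly
        lz4 -1 -c "$img" > "$DST/$(basename "$img").lz4"
    fi
    rmdir "$mnt"
done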

Real-life cases

The Cayman National: Hyper-V exfiltration case study provides a detailed performance analysis of the exfiltration of a big part of Cayman National's IT infrastructure (based on Hyper-V).

Installing

Here you will find instructions on how to install additional hooks, configuration repositories etc. All you need to do is install these 2 repositories (either before or after arming the device), and then optionally proceed with some fine-tuning (mostly when you have already seen the victim's servers and want to adjust the exact behavior of each Drive Badger device to each virtualization server).

git clone https://github.com/drivebadger/exclude-virtual /opt/drivebadger/config/exclude-virtual
git clone https://github.com/drivebadger/hook-virtual /opt/drivebadger/hooks/hook-virtual

Supported hypervisors

VMware

  • all products that use VMFS version 6 (ESXi version 6.5 or later)
  • all hosted-virtualization products (installed on a host operating system, e.g. Windows)

Hyper-V

  • all standalone Hyper-V servers using NTFS
  • all Windows-based Hyper-V instances

Limitations

  • VMFS versions earlier than 6 are not supported
  • nested VMFS filesystems are not supported, even with nested virtualization enabled

Drive encryption

Fine tuning

Nested virtualization

Searching for VHD/VHDX/VMDK containers inside another VHD/VHDX/VMDK container is very slow, and is therefore disabled by default (which may lead to data loss if the victim uses nested VMware/Hyper-V virtualization - fortunately this is very unusual, at least with VHD/VHDX/VMDK containers).

To avoid data loss on servers with nested VMware/Hyper-V virtualization, you need to create these 3 files (separately for each image type), as shown below:

  • /opt/drivebadger/config/.allow-nested-vhd
  • /opt/drivebadger/config/.allow-nested-vhdx
  • /opt/drivebadger/config/.allow-nested-vmdk
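
For example, to enable nested container search for all three image types at once:

touch /opt/drivebadger/config/.allow-nested-vhd
touch /opt/drivebadger/config/.allow-nested-vhdx
touch /opt/drivebadger/config/.allow-nested-vmdk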

Task spooler and Qemu: speed vs reliability

The hook-virtual hook uses task-spooler to queue exfiltration tasks for each VHD/VHDX/VMDK container. By default, it runs only 1 task at a time (except when a container needs to be copied as a raw file).

To speed up the exfiltration process, you can set task spooler to execute 2 (or more) simultaneous tasks by executing the following command on the console:

tsp -S 2

However, exfiltration of VHD/VHDX/VMDK containers relies on Qemu, which is very susceptible to all sorts of problems with these containers or the filesystems inside them. Error handling with multiple Qemu instances running at once is very unreliable, so running multiple tasks simultaneously can cause random rsync failures and, in the worst scenario, can even lock up the whole exfiltration process.

Also, running more than 3 simultaneous tasks for Hyper-V, or more than 4 for VMware, will almost certainly cause data loss.
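
You can inspect the queue and revert to the safe default at any time using standard task-spooler commands:

tsp -l     # list queued and running exfiltration tasks
tsp -S 1   # go back to 1 simultaneous task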

Compression methods

When a VHD/VHDX/VMDK container cannot be fully exfiltrated for some reason:

  • listing internal filesystems fails
  • exfiltrating any of these filesystems fails or ends prematurely (e.g. rsync or Qemu is killed)

then such a container is copied to your target drive as a file. Since it was already excluded by the rules from exclude-virtual, it needs to be copied separately - this is done by the copy-compress.sh script, which compresses copied images on the fly.
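
Conceptually, this fallback boils down to streaming the container file through the chosen compressor; a minimal sketch (with illustrative paths, not the actual copy-compress.sh code) looks like this:

dd if=/mnt/part1/vms/CN-FS01_C.vhdx bs=1M status=progress | lz4 -1 > /media/target/CN-FS01_C.vhdx.lz4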

There are many possible compression methods and levels available:

  • lz4 - the fastest one, minimal memory footprint, still achieves a very good compression ratio
  • gzip - the most compatible and universal, quite fast
  • pigz - multi-core alternative to gzip (best performance when compressing small number of images on big CPUs, but problematic otherwise)
  • xz - the best compression, but very slow (impractical in most cases)
  • bzip2 - very good compression (worse than xz/LZMA2), still very slow

What you should choose and when:

  • in most cases, or when you don't know the exact characteristics of the exfiltrated server(s), you should use either lz4 or gzip
  • pigz might be better for big CPUs and a small number of virtual machines with large virtual drives
  • xz might be better for big CPUs, lots of RAM and a big number of virtual machines with small virtual drives
  • bzip2 is better than xz for big CPUs but small RAM

Performance

Performance comparison for a freshly installed 80 GB Solaris 11 drive image, compressed on a Core i7-3770:

  • lz4 -1 - 12 minutes
  • gzip -6 - 31 minutes
  • size difference: ~0.8GB (1%)
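
If in doubt, you can reproduce a similar comparison on your own hardware before the operation, using any sample image (sample.vmdk below is just a placeholder):

time lz4 -1 -c sample.vmdk > /dev/null
time gzip -6 -c sample.vmdk > /dev/null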

Default settings

The current default is lz4 -1. You can change it:

  • by forking this repository and adjusting this script - and then deploying your fork instead of the original repository to your devices
  • by changing this script locally on selected devices - this is the preferred way when you have already seen the victim's servers and want to adjust the exact behavior of each Drive Badger device to each virtualization server
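
For example, assuming copy-compress.sh lives in the cloned hook-virtual repository and calls the compressor with a literal lz4 -1 (both assumptions - verify against your copy of the script first), a local switch to gzip could look like this:

sed -i 's/lz4 -1/gzip -6/' /opt/drivebadger/hooks/hook-virtual/copy-compress.sh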

Troubleshooting and recovering broken images

Hyper-V and unclean shutdown

The #1 reason (at least for Windows-based guests) why a virtual drive image file can't be properly opened and exfiltrated is that the underlying filesystems weren't properly shut down and need to be recovered (which usually only means that the journal needs to be replayed).

If you have a broken image that was exfiltrated as a raw file, unpack it (but keep a backup!) and execute the following command in its directory:

LIBGUESTFS_BACKEND=direct LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1 virt-filesystems -a CN-FS01_C.vhdx

You will probably see something like this:

qemu-img: Could not open '/test/CN-FS01_C.vhdx': VHDX image file '/test/CN-FS01_C.vhdx' opened read-only, but contains a log that needs to be replayed
To replay the log, run:
qemu-img check -r all '/test/CN-FS01_C.vhdx'

Now run the suggested command - and you should see:

# qemu-img check -r all '/test/CN-FS01_C.vhdx'
The following inconsistencies were found and repaired:

    0 leaked clusters
    1 corruptions

Double checking the fixed image now...
No errors were found on the image.

Beware that in some (fortunately rare) cases this might further break the filesystem - so always keep a backup (the original compressed file) to be able to start over. This is also the reason why Drive Badger never attempts to do this on original images during exfiltration.

Generic approach

If the underlying filesystem can't be recovered by qemu-img check, but the problem is still related to the filesystem(s) and not the container itself, you can try to convert it to a raw drive image:

qemu-img convert -f vhdx -O raw broken.vhdx broken.raw
qemu-img convert -f vmdk -O raw broken.vmdk broken.raw

This will allow you to work on this raw image using all filesystem recovery tools available in Kali Linux, not just QEMU.
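
For example, one possible follow-up on the converted raw image (the loop device name and partition number are system-dependent, and ntfsfix applies only to NTFS partitions):

losetup -fP broken.raw              # exposes partitions as /dev/loopXpY
ntfsfix /dev/loop0p1                # attempt basic NTFS repair
mkdir -p /mnt/recovered
mount -o ro /dev/loop0p1 /mnt/recovered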

Another generic approach

The popular 7-Zip file compression software can open most existing drive image containers as archives and unpack individual partitions from them. You can try it if the qemu-img convert command fails:

7z x -o/destination/directory CN-FS01_C.vhdx

Also, 7-Zip is a bit less sensitive than QEMU to various problems with the container itself. Most VHDX containers can be unpacked using p7zip 16.02 (the version included in Kali Linux), while some specific failure types can be handled only by 7-Zip 21.07 or newer, which needs to be installed separately (for Linux or Windows, also with a graphical file manager).
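
Before unpacking, you can also list what 7-Zip recognizes inside the container:

7z l CN-FS01_C.vhdx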