This is a quick and simple guide to getting and analysing volatile memory dumps from Linux VMs on Azure, AWS, and other cloud providers.
- Volatility and Linux
- A tale of two halves
- 1. A Memory capture
- 2. An Intermediate Symbol File (ISF)
- Using ISF
Volatility and Linux
Volatility is the main open source tool for forensically analysing volatile memory captures. But using Volatility to analyse Linux images can be confusing, especially as the process changed massively from Volatility 2 to 3, so I thought I’d write up a quick guide explaining how to get and analyse memory captures of Linux Virtual Machines (VMs) hosted on a cloud provider (Azure, AWS, etc.).
A tale of two halves
There are two components needed to analyse a Linux memory image: a volatile memory capture and an Intermediate Symbol File (ISF).
1. A Memory capture
The memory capture will be taken from the machine you wish to analyse.
The easiest way to do this is using AVML from Microsoft. This script will download the latest version of AVML and write a compressed memory capture to disk:

```shell
wget https://github.com/microsoft/avml/releases/latest/download/avml
# OR if wget is not installed:
curl -LO https://github.com/microsoft/avml/releases/latest/download/avml
chmod ugo+x ./avml
sudo ./avml --compress capture.lime.compressed
```
I don’t do the analysis on the same machine, but instead transfer the image to my workstation for further analysis. On some versions of Linux and Volatility I encounter an issue analysing compressed images, where the program hangs indefinitely. I work around this by using avml-convert to decompress the image on my workstation first:

```shell
wget https://github.com/microsoft/avml/releases/latest/download/avml-convert
chmod u+x ./avml-convert
./avml-convert capture.lime.compressed capture.lime
```
2. An Intermediate Symbol File (ISF)
This is the part that has changed the most from Volatility 2, and is the part that initially confuses most people (including me).
You need to generate a special JSON file that contains the symbols from a matching Linux kernel. This file is called an Intermediate Symbol File, or ISF. If the ISF and the capture’s kernel don’t match, Volatility won’t let you analyse the image, as it needs to be able to say precisely where to look in the memory capture. Even very minor version differences can move the location of kernel objects needed for the analysis, so it’s important to generate a matching ISF for each memory capture.
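To make the matching requirement concrete: Volatility pairs a capture with an ISF by comparing kernel ‘banner’ strings, and the comparison is exact. This simplified Python sketch (not Volatility’s actual code, and the banners are made-up placeholders) shows how even a one-second build-timestamp difference breaks the match:

```python
# Two kernel banners that differ ONLY in the build timestamp
# (19:07:51 vs 19:07:52) -- placeholder strings, not real captures.
capture_banner = ("Linux version 5.15.0-1015-aws (buildd@lcy02-amd64-063) "
                  "#19~20.04.1-Ubuntu SMP Wed Jun 22 19:07:51 UTC 2022")
isf_banner     = ("Linux version 5.15.0-1015-aws (buildd@lcy02-amd64-063) "
                  "#19~20.04.1-Ubuntu SMP Wed Jun 22 19:07:52 UTC 2022")

# The lookup is a byte-for-byte string comparison, so this match fails:
print(capture_banner == isf_banner)  # False
```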
There are lists of pre-generated ISF files, such as KevTheHermit’s excellent site. But as your kernel has to match the ISF file exactly, it’s possible that you may need to generate your own.
Thankfully, KevTheHermit has also created a super useful tool to generate new ISFs. It can create ISF files for:
- Ubuntu
- Debian
- AWS’ Amazon Linux 2
- Azure’s CBL-Mariner 2
To use KevTheHermit’s tool, first run this command on the target image, to get the exact kernel version:
```shell
uname -r
# output should be something like:
# "5.11.0-43-generic" or
# "4.14.281-212.502.amzn2.x86_64"
```
Then run the tool, selecting the Linux distribution and passing in the output from uname, e.g. to get a CBL-Mariner ISF:

```shell
python symbol_maker.py --distro cbl-mariner --kernel "18.104.22.168-2.cm2"
```
To generate an ISF for Debian or Ubuntu, use `--branch` to tell the tool what cloud service the image is on, either:

- `linux` if a non-cloud image
- `linux-aws` if on AWS
- `linux-azure` if on Azure
- `linux-gcp` if on Google Cloud Compute
e.g. to generate an ISF for an Ubuntu machine running on AWS:

```shell
python symbol_maker.py --distro ubuntu --branch linux-aws --kernel "5.13.0-1031-aws"
```
If generating an Amazon Linux ISF, use `--branch 2` for Amazon Linux 2 (the only version supported).
The tool will take a while as it downloads about 1GB of data, before writing the ISF JSON file to disk.
Unfortunately, sometimes even KevTheHermit’s tool won’t be able to get exactly the right kernel. If the ‘banner’ (the string that identifies the exact kernel version) is extremely similar (e.g. only the build timestamp is different), you can edit the banner in the ISF JSON file to match your image. But typically, if KevTheHermit’s tool can’t get the same kernel, it means you need to manually generate your own ISF.
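The banner edit can be done by hand in a text editor, or sketched in Python like this. The JSON layout shown (the banner base64-encoded under `symbols` → `linux_banner` → `constant_data`) is how dwarf2json emits it, but verify against your own file first; the version strings here are made-up placeholders:

```python
import base64

# A minimal stand-in for an ISF file's banner entry (real ISF files have
# far more keys). The banner bytes are placeholders, not a real kernel.
isf = {
    "symbols": {
        "linux_banner": {
            "constant_data": base64.b64encode(
                b"Linux version 5.15.0-1014-aws ...\n\x00"
            ).decode()
        }
    }
}

# Decode the current banner first, so you can compare it to the one
# banners.Banners reported from your capture:
old_banner = base64.b64decode(isf["symbols"]["linux_banner"]["constant_data"])
print("ISF banner:", old_banner)

# Re-encode the capture's banner in its place. Kernel banners typically
# end with a newline and a NUL byte -- mirror whatever the old value had:
new_banner = b"Linux version 5.15.0-1015-aws ...\n\x00"
isf["symbols"]["linux_banner"]["constant_data"] = base64.b64encode(new_banner).decode()
```

With a real file you would `json.load()` it, apply the same change, and `json.dump()` it back out.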
The advantage of the cloud is that VMs should be running from a known and re-usable base image. I recommend creating a new-but-matching VM to the one you captured memory from, so you don’t taint the original machine with unneeded data.
In AWS, the ID of the image is the AMI, and you can list the AMI ID of a running VM using either the Web UI or the aws command line:

```shell
# Look for "ImageId"
aws ec2 describe-instances --instance-ids i-1234567890abcdef0
```
You can then use this exact AMI ID to create a new machine that should have a matching kernel. Other cloud providers have different ways of getting the base image ID. You don’t need a particularly powerful VM (my tests were on a 1 CPU, 2GB RAM instance), but you will need ~15GB of disk space.
To generate an ISF, you need two files:
- A debug version of the current Linux kernel
- The matching `System.map` file
Most Linux distributions should have the `System.map` file under the /boot/ directory, with a name that matches the version of the kernel, i.e. `/boot/System.map-$(uname -r)`.
To download a debug kernel, the process is different depending on the Linux distribution. I’ve captured the most common ones below:
Ubuntu:
(Tested on AWS, Azure, and DigitalOcean)
```shell
sudo apt update
sudo apt install --yes ubuntu-dbgsym-keyring
sudo tee /etc/apt/sources.list.d/debug.list << EOF
deb http://ddebs.ubuntu.com $(lsb_release -cs) main restricted universe multiverse
deb http://ddebs.ubuntu.com $(lsb_release -cs)-updates main restricted universe multiverse
deb http://ddebs.ubuntu.com $(lsb_release -cs)-proposed main restricted universe multiverse
EOF
sudo apt update
sudo apt install --yes linux-image-$(uname -r)-dbgsym
# Debug kernel is at: /usr/lib/debug/boot/vmlinux-$(uname -r)
```
Debian:
(Tested on AWS)
```shell
sudo tee /etc/apt/sources.list.d/debug.list << EOF
deb http://deb.debian.org/debian-debug/ $(lsb_release -cs)-debug main
deb http://deb.debian.org/debian-debug/ $(lsb_release -cs)-proposed-updates-debug main
EOF
sudo apt update
sudo apt install --yes linux-image-$(uname -r)-dbg
# Debug kernel is at: /usr/lib/debug/boot/vmlinux-$(uname -r)
```
CentOS 8, Fedora, Amazon Linux:
(Tested on AWS, should work for other RHEL-like distros)
```shell
sudo yum --enablerepo='*-debuginfo' install kernel-debuginfo-$(uname -r)
# Debug kernel is at: /usr/lib/debug/lib/modules/$(uname -r)/vmlinux
```
CBL-Mariner 2:

```shell
sudo yum install mariner-repos-debug
sudo yum install kernel-debuginfo-$(uname -r)
# Debug kernel is at: /usr/lib/debug/lib/modules/$(uname -r)/vmlinux
```
Generating an ISF
Once I have a debug kernel, I transfer it from the VM to my workstation. You can in theory do this next step on the VM, but it requires a machine with 4GB+ of memory and a decent CPU, and I typically just run tiny VMs in the cloud and do bulky things on my workstation.
Next, build a copy of dwarf2json, a tool from the Volatility team. If you don’t already have Go installed, you can grab a pre-built version by me on GitHub here. Then run dwarf2json to generate the ISF, pointing it at the debug kernel you downloaded from the VM:

```shell
wget https://github.com/pathtofile/dwarf2json/releases/latest/download/dwarf2json-linux-amd64 -O dwarf2json
chmod u+x dwarf2json
./dwarf2json linux --system-map </path/to/System.map> --elf </path/to/vmlinux_from_vm> > dbgkernel_aws_ubuntu.json
```
Once you’ve generated the ISF JSON file, clone the Volatility 3 repo and install the Python 3 dependencies:

```shell
git clone https://github.com/volatilityfoundation/volatility3.git
cd volatility3
pip install -r requirements.txt
```
Next, copy the ISF JSON file to `volatility3/volatility3/framework/symbols/linux/`. Run this command to check that Volatility found your ISF file and was able to parse the banner text:

```
$> python vol.py isfinfo.IsfInfo
Volatility 3 Framework 2.3.0

URI	Valid	Number of base_types	Number of types	Number of symbols	Number of enums	Windows info	Linux banner	Mac banner
file:///home/path/code/volatility/volatility3/volatility3/framework/symbols/linux/dbgkernel_aws_ubuntu.json	Unknown	19	11925	199625	2110	-	Linux version 5.15.0-1015-aws (buildd@lcy02-amd64-063) (gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, GNU ld (GNU Binutils for Ubuntu) 2.34) #19~20.04.1-Ubuntu SMP Wed Jun 22 19:07:51 UTC 2022 (Ubuntu 5.15.0-1015.19~20.04.1-aws 5.15.39)	-
...
```
You can also check the banner in your memory capture by running:

```
$> python vol.py -f </path/to/memory_capture.lime> banners.Banners
Volatility 3 Framework 2.3.0
Progress:  100.00		Stacking attempts finished
Offset	Banner
0x5da8f38	Linux version 5.15.0-1015-aws (buildd@lcy02-amd64-063) (gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0, GNU ld (GNU Binutils for Ubuntu) 2.34) #19~20.04.1-Ubuntu SMP Wed Jun 22 19:07:51 UTC 2022 (Ubuntu 5.15.0-1015.19~20.04.1-aws 5.15.39)
...
```
If they match up, then you can start using Volatility to analyse the image. For example, to list the processes running at the time of capture:

```
$> python vol.py -f </path/to/memory_capture.lime> linux.pslist
Volatility 3 Framework 2.3.0
Progress:  100.00		Stacking attempts finished
OFFSET (V)	PID	TID	PPID	COMM
0x9ea501284b00	1	1	0	systemd
0x9ea501281900	2	2	0	kthreadd
0x9ea501280000	3	3	2	rcu_gp
...
```
Capturing and analysing memory from Linux machines does take a few steps, but hopefully this blog clears up some of the confusion around Volatility 3 and Linux, and provides a useful guide to getting and analysing cloud images.