Velociraptor-01: Velociraptor to Timesketch
In this post I will walk through all the steps of setting up an automated processing pipeline for generating timelines using Velociraptor, Plaso, and Timesketch.
Based on this fantastic project by ReconInfoSec, timeline generation from KAPE forensic artifacts collected by Velociraptor can be automated and implemented at enterprise scale using open-source software.
The purpose of this post is to provide a more detailed explanation of how to deploy velociraptor-to-timesketch.
Prerequisites:
- Velociraptor instance: I will cover setting up Velociraptor in another post.
- An AWS instance for installing Timesketch and velociraptor-to-timesketch (at least 8 GiB of memory). I am using Ubuntu Server 22.04 LTS on a t2.xlarge.
The pipeline is made up of the following components (a simplified sketch of the watch pattern they share follows this list):
- The Server.Utils.BackupS3 server monitoring artifact on Velociraptor watches clients for any collections matching "Windows.KapeFiles.Targets" and automatically uploads them to S3 storage.
- watch-s3-to-timesketch.py watches the S3 bucket for new zip files and downloads them to the Timesketch server.
- watch-to-timesketch.sh watches for new KAPE triage downloads, generates a Plaso file using log2timeline, and uploads it to Timesketch.
- watch-plaso-to-s3.sh backs up Plaso files to S3 storage.
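All three watch scripts rely on inotify to react to new files. Here is a minimal sketch of that pattern, assuming a placeholder directory and handler (this is not the project's actual code):

```bash
#!/bin/bash
# Minimal sketch of the inotify watch pattern used by the scripts above.
# /opt/watch is a placeholder; each real script watches its own directory
# and does real work (unzip, log2timeline, upload) in the loop body.
inotifywait -m -e create --format '%w%f' /opt/watch | while read -r NEWFILE; do
    echo "New file detected: ${NEWFILE}"
    # ... unzip / log2timeline / timesketch_importer steps would go here ...
done
```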
Step 1: Setup AWS instance for hosting Timesketch and processing timelines
I will be using AWS because that is what the ReconInfoSec/velociraptor-to-timesketch project uses, so less modification to the code is required.
1.1 Create an AWS EC2 instance and S3 bucket:
- Make sure you use an instance type with at least 8 GiB of memory.
- You can shut down your instance when not in use to save money.
- Ensure that you allow HTTP and HTTPS traffic from the internet. You can add a firewall to restrict access to specific IP ranges later.
- Create an S3 bucket and keep note of the bucket name.
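If you prefer the command line, the bucket can also be created with the AWS CLI (the bucket name and region below are examples; pick your own):

```bash
# Create the S3 bucket that Velociraptor will upload triage zips to.
# "my-velociraptor-triage" and "us-east-1" are example values.
aws s3 mb s3://my-velociraptor-triage --region us-east-1
```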
Step 2: Deploy Timesketch
- SSH to your AWS instance
- Follow the Google documentation to install Timesketch.
- Make sure you run the deployment script from the directory in which you wish to install Timesketch; I chose /opt.
Step 3: Deploy velociraptor-to-timesketch
Clone the project, install the dependencies, and configure your AWS credentials:
git clone https://github.com/DamonToumbourou/velociraptor-to-timesketch
sudo apt install python3 python3-pip unzip inotify-tools -y
pip3 install --upgrade awscli
aws configure
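Before wiring up the services, it is worth confirming that the credentials you just configured actually work (the bucket name is the example from earlier):

```bash
# Verify the configured identity and S3 access.
aws sts get-caller-identity
aws s3 ls s3://my-velociraptor-triage
```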
- watch-s3-to-timesketch.py:
- update bucket_name
- Your AWS credentials should be picked up automatically by this script, but be mindful that this will only be the case if you ran "aws configure" as the same user that runs the script. If you are having issues, you can specify the path to the credentials manually, as shown below.
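One way to hand the script credentials explicitly, for example when it runs as a systemd service under a different user, is via the standard AWS environment variables (all values below are placeholders):

```bash
# Standard AWS environment variables recognised by the AWS CLI and boto3.
export AWS_ACCESS_KEY_ID="AKIA...placeholder"
export AWS_SECRET_ACCESS_KEY="placeholder"
export AWS_DEFAULT_REGION="us-east-1"
# Or point at an explicit credentials file:
export AWS_SHARED_CREDENTIALS_FILE="/home/ubuntu/.aws/credentials"
```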
- watch-to-timesketch.sh:
- update the three lines where timesketch_importer is called:
- update --host to your Timesketch URL
- update the username and password to a Timesketch user that you have already added in Timesketch.
- I had trouble calling timesketch_importer from Docker, so I just called the library locally after installing it with pip (an example invocation follows this list):
- pip install timesketch-import-client (this will be called from the bash script, so if the bash script runs as a different user than the one the library was installed for, you may have trouble)
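For reference, a local timesketch_importer invocation looks roughly like this; the host, credentials, timeline name, and file path are placeholders, and you should confirm the exact flags with timesketch_importer --help for your version:

```bash
# Example invocation after "pip install timesketch-import-client".
# User, password, host, timeline name, and file path are placeholders.
timesketch_importer -u myuser -p mypassword \
    --host https://timesketch.example.com \
    --timeline_name kape-triage-host01 \
    /opt/timesketch/upload/host01.plaso
```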
- watch-plaso-to-s3.sh:
- update the bucket name (the backup itself is just an S3 copy; see the sketch below)
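Conceptually, the backup amounts to copying each new Plaso file to S3, something like the following (bucket name and file path are placeholders):

```bash
# Rough equivalent of what watch-plaso-to-s3.sh does for each new Plaso file.
aws s3 cp /opt/timesketch/upload/host01.plaso s3://my-velociraptor-triage/plaso/
```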
The deploy script will:
- install a couple of additional Python requirements
- register and start the three scripts listed above as services
- copy all code to /opt
Timesketch uploads will be written to /opt/timesketch/upload.
Run the script:
./deploy
The following services are created:
- watch-s3-to-timesketch.service -> /opt/watch-s3-to-timesketch.py
- data-to-timesketch.service -> /opt/watch-to-timesketch.sh
- watch-plaso-to-s3.service -> /opt/watch-plaso-to-s3.sh
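These are plain systemd units. A minimal sketch of what watch-s3-to-timesketch.service likely looks like is below; the interpreter path and user are assumptions, so check the unit files the deploy script writes under /etc/systemd/system/:

```ini
# Sketch only: verify against the unit the deploy script actually installs.
[Unit]
Description=Watch S3 for new Velociraptor triage uploads
After=network.target

[Service]
ExecStart=/usr/bin/python3 /opt/watch-s3-to-timesketch.py
Restart=always
User=ubuntu

[Install]
WantedBy=multi-user.target
```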
You can check on the services with the usual systemd commands:
- sudo systemctl status watch-s3-to-timesketch.service
- sudo journalctl -u watch-s3-to-timesketch.service
- sudo systemctl restart watch-s3-to-timesketch.service (to pick up configuration changes)
Step 4: Configure Velociraptor server monitoring
If you want to make changes to the S3 upload artifact, you will need to create a custom artifact. This can be done as follows:
- Navigate to the 'view artifact' tab on the left
- At the top left select the + sign to 'create artifact'
- Enter your artifact code and save
- Select the 'server events' tab on the left
- Select the 'update server monitoring table' button and configure the following:
- the artifact to watch: "Windows.KapeFiles.Targets"
- the name of your S3 bucket
- the region of the S3 bucket
- your AWS credentials
- review and then launch
Step 5: Test the pipeline
- Kick off a collection of type Windows.KapeFiles.Targets.
- Once the status is finished, you can check the logs in the 'server events' tab to see whether any errors were reported.
- Check S3 to see if the file was uploaded.
- Head to the velociraptor-to-timesketch server and check whether the file was downloaded from S3 by the watch-s3-to-timesketch service.
- We can see the zip file that was in our S3 bucket, so it worked. The commands below show how to check both ends.
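Both ends can be checked from the command line. The bucket name is the example from earlier, and the download directory is an assumption, so adjust it to wherever the watcher actually writes:

```bash
# Confirm the triage zip landed in S3...
aws s3 ls s3://my-velociraptor-triage/ --recursive
# ...and that watch-s3-to-timesketch pulled it down on the Timesketch server.
ls -lh /opt/timesketch/upload/
```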
Now check that Plaso file generation is working:
- sudo journalctl -u data-to-timesketch.service
- We should see a series of events related to unzipping the triage file and processing it with log2timeline.py, followed by entries indicating that timesketch_importer was successful and that a new sketch was created.
Now navigate to your Timesketch instance in your browser and you should see a new sketch. Wait for it to finish processing and that's it! You have successfully set up a DFIR workflow to collect and process KAPE collections from Velociraptor into Timesketch.