Velociraptor-01: Velociraptor to Timesketch

In this post I will go through all the steps of setting up an automated processing pipeline for generating timelines using Velociraptor, Plaso and Timesketch.

Based on this fantastic project by ReconInfoSec, timeline generation from KAPE forensic artifacts collected by Velociraptor can be automated and implemented at enterprise scale using open-source software.

The purpose of this post is to provide a more detailed explanation of how to deploy velociraptor-to-timesketch.



Prerequisites

  1. Velociraptor instance: I will cover setting up Velociraptor in another post.
  2. An AWS instance for installing Timesketch and velociraptor-to-timesketch (at least 8 GiB of memory). I am using Ubuntu Server 22.04 LTS on a t2.xlarge.

Overview of Components:
  1. A server monitoring artifact, Server.Utils.BackupS3, on Velociraptor watches clients for completed collections matching "Windows.KapeFiles.Targets" and automatically uploads them to S3 storage.
  2. watch-s3-to-timesketch.py watches the S3 bucket for new zip files and downloads them to the Timesketch server.
  3. watch-to-timesketch.sh watches for new KAPE triage downloads, generates a Plaso file using log2timeline and uploads it to Timesketch (a simplified sketch of this flow is shown after this list).
  4. watch-plaso-to-s3.sh backs up the Plaso files to S3 storage.
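
To make the moving parts concrete, here is a simplified, hypothetical sketch of what components 3 and 4 boil down to. It is not the actual ReconInfoSec script; the directory, flag values and S3 path are placeholders you would adapt to your environment:

  # Hypothetical sketch of the watch-and-process loop (no filtering or error handling)
  WATCH_DIR=/opt/timesketch/upload                                  # assumed working directory
  inotifywait -m -e close_write --format '%w%f' "$WATCH_DIR" | while read -r zipfile; do
      name=$(basename "$zipfile" .zip)
      unzip -o "$zipfile" -d "$WATCH_DIR/$name"                     # unpack the KAPE triage
      log2timeline.py --storage-file "$WATCH_DIR/$name.plaso" "$WATCH_DIR/$name"
      timesketch_importer --host https://<timesketch-url> -u <user> -p <password> \
          --timeline_name "$name" "$WATCH_DIR/$name.plaso"          # push the timeline to Timesketch
      aws s3 cp "$WATCH_DIR/$name.plaso" s3://<your-bucket>/plaso/  # back up the Plaso file
  done

The real project splits these responsibilities across the three services listed above, but the overall flow is the same.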

Setup

Step 1: Set up an AWS instance for hosting Timesketch and processing timelines

I will be using AWS as this is what the ReconInfoSec/velociraptor-to-timesketch project uses, so it requires less modification to the code.

1.1 Create an AWS EC2 instance and an S3 bucket:

  1. Make sure you use an instance type with at least 8 GiB of memory. 
  2. You can shut down your instance when not in use to save money. 
  3. Ensure that you allow HTTP and HTTPS traffic from the internet. You can add a firewall to restrict access to IP ranges later.
  4. Create an S3 bucket and keep note of the bucket name (a CLI alternative is shown below).
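
If you prefer the CLI over the console, the bucket can also be created with the AWS CLI once your credentials are configured; the bucket name and region below are placeholders:

  aws s3 mb s3://<your-triage-bucket> --region <your-region>
  aws s3 ls                                 # confirm the new bucket is listed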



Step 2: Deploy Timesketch

  1. SSH to your AWS instance
  2. Follow the Google documentation to install Timesketch.
  3. Make sure you run the deploy script from the directory you wish to install into; I chose /opt (the commands are sketched below).
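
At the time of writing, the Timesketch install documented by Google boils down to roughly the following (the script URL may change, so check the upstream docs; Docker and Docker Compose are assumed to already be installed):

  cd /opt
  curl -s -O https://raw.githubusercontent.com/google/timesketch/master/contrib/deploy_timesketch.sh
  chmod 755 deploy_timesketch.sh
  sudo ./deploy_timesketch.sh

The deploy script creates a timesketch directory under the directory you run it from (here /opt/timesketch), which the later steps refer to.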


Step 3: Deploy velociraptor-to-timesketch

Clone the velociraptor-to-timesketch repository. I made some changes to make it work in my environment, so feel free to clone my fork if that works better for you:

 git clone https://github.com/DamonToumbourou/velociraptor-to-timesketch
 
 Install the required packages and Python libraries:
  sudo apt install python3 python3-pip unzip inotify-tools -y
  pip3 install --upgrade awscli 

Configure the AWS CLI:
aws configure 
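
A quick way to confirm the credentials work for the account and bucket you intend to use (the bucket name is a placeholder):

  aws sts get-caller-identity               # should print the account and IAM identity you configured
  aws s3 ls s3://<your-triage-bucket>/      # should list the (still empty) bucket without an error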

Update scripts to suit your environment:
  1. watch-s3-to-timesketch.py:
    1. Update bucket_name.
    2. Your AWS credentials should be picked up automatically by this script, but be mindful this will only be the case if you ran "aws configure" as the same user that runs the script. If you are having issues you can specify the path to the credentials in the script manually.
  2. watch-to-timesketch.sh:
    1. Update the three lines where timesketch_importer is called (a sketch of the resulting call is shown after this list):
      1. Update --host to your Timesketch URL.
      2. Update the username and password to a Timesketch user that you have already added in Timesketch.
      3. I had trouble calling timesketch_importer from Docker, so I just called the library locally after installing it with pip:
        1. pip install timesketch-import-client (this will be called from the bash script, so if the bash script runs as a different user than the one you installed the library as, you may have trouble).
  3. watch-plaso-to-s3.sh:
    1. Update the bucket name.
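
For reference, once the host and credentials are filled in, each timesketch_importer call in watch-to-timesketch.sh ends up with roughly this shape; the flags shown are the commonly used ones, so verify them against timesketch_importer --help for your client version, and the paths and names are placeholders:

  timesketch_importer --host https://<your-timesketch-url> \
      -u <timesketch-user> -p <timesketch-password> \
      --timeline_name <hostname>-triage \
      /opt/timesketch/upload/<hostname>.plaso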

Run the deploy script, which will:
  • install a couple of additional Python requirements
  • register and start the three scripts listed above as services
  • copy all code to /opt
  • write Timesketch uploads to /opt/timesketch/upload
./deploy

Step 4: Check all 3 velociraptor-to-timesketch services are running and debug if required:
    1. watch-s3-to-timesketch.service -> /opt/watch-s3-to-timesketch.py
    2. data-to-timesketch.service -> /opt/watch-to-timesketch.sh
    3. watch-plaso-to-s3.service -> /opt/watch-plaso-to-s3.sh
Service configuration files are located in /opt.
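
A quick way to confirm all three units were registered and are active:

  systemctl list-units --type=service --all | grep -E 'timesketch|plaso'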



Check status of services:
  • sudo systemctl status watch-s3-to-timesketch.service


Get all the logs from a service:
  • sudo journalctl -u watch-s3-to-timesketch.service


Restart a service if you make a change to its script:
  • sudo systemctl restart watch-s3-to-timesketch.service
• The most common issues with installing and running will probably stem from the services running as root while the Python libraries or AWS credentials were set up as a different user. I am no expert, but the cleanest approach is probably to create a new user for the velociraptor-to-timesketch project and install and run everything as that user (a sketch follows below).
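
A minimal sketch of that approach, assuming a dedicated service account named velotimesketch (the name is arbitrary):

  sudo useradd -m -s /bin/bash velotimesketch
  sudo -iu velotimesketch                                   # switch to the new account
  pip3 install --user awscli timesketch-import-client      # libraries land under this user's home
  aws configure                                             # credentials stored in this user's ~/.aws

You would then also set User=velotimesketch in the three systemd unit files so the services actually run as that account.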

OK, everything on the Timesketch side should now be ready. Next we need to configure Velociraptor to send the KAPE triage files to the AWS S3 bucket.




Step 5: Configure Velociraptor to upload Kape files to S3

In the velociraptor-to-timesketch repository there is a Velociraptor artifact for uploading KAPE files to S3. There is also a built-in artifact for this, which I used instead. I made one minor change to the artifact so that the zip file upload name does not include spaces, as spaces were creating issues in the watch-to-timesketch.sh script (illustrated below).
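
The reason spaces matter: unquoted variable expansion in a bash watcher splits such a filename into several words. A hypothetical illustration of the failure and of a rename that avoids it:

  f="DESKTOP-123 2023-01-01 Collection.zip"
  unzip $f              # unquoted: bash passes three separate arguments and unzip fails
  unzip "$f"            # quoted: works, but downstream tooling may still trip over the spaces
  mv "$f" "${f// /_}"   # rename to DESKTOP-123_2023-01-01_Collection.zip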

5.1 Upload custom artifact: 

If you want to make changes to the S3 upload artifact you will need to create a custom artifact; this can be done as follows:
  1. Navigate to the 'view artifact' tab on the left
  2. At the top left select the + sign to 'create artifact'
  3. Enter your artifact code and save



5.2 Add the artifact to server event monitoring
  1. Select the 'server events' tab on the left 
  2. Select the 'update server monitoring table' button highlighted below
 


3. Select the built-in "Server.Utils.BackupS3" or the custom one you created, such as "Custom.Server.Utils.BackupS3".





4. Select the next tab, "configure parameters", and enter:
  • the artifact to watch: "Windows.KapeFiles.Targets"
  • the name of your S3 bucket
  • the region of the S3 bucket
  • the credentials
Then review and launch.



Step 6: See if it works

  • Kick off a collection of type Windows.KapeFiles.Targets.
  • Once the collection status is finished, you can check the logs in the 'server events' tab to see if any errors were reported:



  • Check S3 to see if the file was uploaded.
  • Head to the velociraptor-to-timesketch server and check whether the file was downloaded from S3 by the watch-s3-to-timesketch service (commands for both checks are sketched below).
  • We can see the zip file that was in our S3 bucket, so it worked.
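
The checks in the last few bullets can be done from the shell; the bucket name and download directory are placeholders for whatever your scripts are configured with:

  aws s3 ls s3://<your-triage-bucket>/ --recursive    # the KAPE triage zip should appear here
  ls -lh <download-directory>                         # and locally once watch-s3-to-timesketch.py has pulled it down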

  • Now check whether Plaso file generation is working by running:
    • sudo journalctl -u data-to-timesketch.service
  • We should see a bunch of events related to unzipping the triage file and processing it with log2timeline.py,

  • followed by some entries indicating that timesketch_importer was successful and that a new sketch was created.


Now navigate to your Timesketch instance in your browser and you should see a new sketch. Wait for it to finish processing and that's it! You have successfully set up a DFIR workflow to collect and process KAPE collections from Velociraptor to Timesketch.



 



