Automated Hashcat Cluster Deployment with Terraform: Part 2

Introduction

In this post, we’ll be adding a custom hashcat module to the Terraform configuration that we built in part 1. We’ll use that module to deploy hashcat nodes and register them to our Hashtopolis server. Finally, we’ll scale the cluster.

If you haven't seen part 1, I encourage you to check that out before reading this part, or at least grab the code from the example on GitHub.

Building the Hashcat Module

My first attempt at deploying the hashcat nodes used a similar process to the hashtopolis module: install required packages with user data, perform configuration with provisioners, profit. This turned out to be less than optimal, since the package and driver installs took several minutes to complete, reboots were required, and so on. So I decided to create a custom AMI with everything already completed except for registering the hashcat node with Hashtopolis. By taking the time to pre-build the AMI, we reduce the hashcat node deployment time from ~7 minutes to ~45 seconds. The neat part is that it's 45 seconds for 1 node or 45 seconds for 10 nodes, thanks to Terraform's parallelism.

Note: Storing a custom AMI will result in storage costs for the EBS snapshot. The current monthly cost is $0.10/GB for GP2, so expect a total of $0.80/month for keeping this AMI.

The AMI

We'll use a base Ubuntu 16.04 instance to build the AMI on. For size, a t2.micro is fine. Once the instance boots up, SSH into it and run the following bash script to install the NVIDIA drivers, 7zip, and other prerequisites:


#!/bin/bash

# add the graphics-drivers PPA for current NVIDIA driver packages
sudo add-apt-repository ppa:graphics-drivers/ppa -y
sudo apt-get update

# install NVIDIA drivers/OpenCL, 7zip, kernel headers, and build tools
sudo apt-get install -y unzip p7zip-full linux-image-extra-virtual nvidia-390 nvidia-libopencl1-390 nvidia-opencl-icd-390 python-pip build-essential linux-headers-$(uname -r)

# install python prerequisites for the Hashtopolis agent
sudo pip install requests

Note: Driver versions may need to be updated.

Once all of the packages are installed, power off the machine. In the EC2 console, select the Ubuntu instance, then select Actions->Image->Create Image. The virtualization type should automatically be HVM using this method, but if you create the image from a snapshot or by another method, make sure you select HVM virtualization.
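
If you prefer the CLI to the console, the AWS CLI can create the same image. This is just a sketch; the instance ID and image name below are placeholders:


# create an AMI from the powered-off builder instance (ID and name are placeholders)
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "hashcat-node-base" \
  --description "Ubuntu 16.04 with NVIDIA drivers for hashcat nodes"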

The Hashcat Module

If you've seen part 1, the hashcat module should look very similar to the hashtopolis module. In this module, we define a configuration template, create the instances, and run some provisioning scripts to register each node with the Hashtopolis server. Save this file as "main.tf" in the module's directory.


data "aws_subnet" "subnet" {
  id = "${var.subnet}"
}

# configure hashcat and hashtopolis client
data "template_file" "hashtopolis" {
  template = "${file("${path.module}/hashtopolis.template")}"

  vars {
    voucher_name   = "${var.voucher_name}"
    hashtopolis_ip = "${var.hashtopolis_ip}"
  }
}

resource "aws_instance" "hashcat" {
  count                  = "${var.count}"
  subnet_id              = "${data.aws_subnet.subnet.id}"
  ami                    = "${var.ami}"
  instance_type          = "${var.instance_type}"
  vpc_security_group_ids = ["${var.security_group_ids}"]
  key_name               = "${var.key_name}"

  tags {
    "Name" = "${var.name}${format("%02d", count.index + 1)}"
  }

  provisioner "file" {
    content     = "${data.template_file.hashtopolis.rendered}"
    destination = "/tmp/hashtopolis.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo hostnamectl set-hostname ${var.name}${format("%02d", count.index + 1)}",
    ]
  }

  # run hashtopolis template
  provisioner "remote-exec" {
    inline = [
      "chmod u+x /tmp/hashtopolis.sh",
      "/tmp/hashtopolis.sh",
      "rm /tmp/hashtopolis.sh",
    ]
  }

  connection {
    host        = "${self.private_ip}"
    type        = "ssh"
    user        = "ubuntu"
    private_key = "${file("~/.ssh/${var.key_name}")}"
  }
}
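
The module also needs variable declarations and an output that exposes the node IPs. Those files aren't shown above, so here is a minimal sketch based on what "main.tf" and the main configuration reference (the output name "hashcat_private_ips" matches what we use later). You can keep these in the module's "main.tf" or split them into "variables.tf" and "outputs.tf":


# variables.tf - inputs referenced by the module (sketch)
variable "name" {}
variable "ami" {}
variable "instance_type" {}
variable "key_name" {}
variable "subnet" {}
variable "voucher_name" {}
variable "hashtopolis_ip" {}
variable "count" {}

variable "security_group_ids" {
  type = "list"
}

# outputs.tf - private IPs of all hashcat nodes, consumed by the root configuration
output "hashcat_private_ips" {
  value = ["${aws_instance.hashcat.*.private_ip}"]
}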

The Configuration Template

Our configuration template is pretty simple. We'll download the python agent from the Hashtopolis server, populate the initial config file with connection data, and launch the agent in a backgrounded screen session. The rc.local entry is needed if we want to scale up later, since AWS stops and restarts the instance when resizing. Save the template as "hashtopolis.template" in the module's directory.


#!/bin/bash

# download hashtopolis client from server
wget "http://${hashtopolis_ip}/agents.php?download=2" -O hashtopolis.zip
mkdir hashtopolis
unzip hashtopolis.zip -d hashtopolis/

# create and populate initial config file
tee config.json << YOLO
{
  "url": "http://${hashtopolis_ip}/api/server.php",
  "voucher": "${voucher_name}"
}
YOLO

# restart on reboot or scale up
sudo tee /etc/rc.local << SWAG
#!/bin/sh -e
#
# rc.local
#
su - ubuntu -c "screen -S hashtopolis -d -m python3 hashtopolis/ &"
exit 0
SWAG

# start the hashtopolis agent in the background
screen -S hashtopolis -d -m python3 hashtopolis/ &

# force terraform to slow its roll
sleep 5

Updating the Main Configuration

We now need to update our main configuration to include the new hashcat module. Add the module block below to the main configuration from part 1, alongside the hashtopolis module block. Placement within the file shouldn't matter, since Terraform determines the correct order when you apply the configuration.


module "hashcat" {
  source             = "github.com/trillphil/terraform/modules//hashcat_ami"
  ami                = "${var.hashcat_ami}"
  instance_type      = "${var.hashcat_instancesize}"
  key_name           = "${var.key_name}"
  security_group_ids = ["${module.secgroup_internalssh.group_id}"]
  name               = "hashcat"
  subnet             = "${var.subnet_id}"
  voucher_name       = "${var.voucher_name}"
  hashtopolis_ip     = "${module.hashtopolis01.hashtopolis_privateip}"
  count              = 2
}

As with the hashtopolis module, we have a few variables that we'll need to declare and assign values to. Note that "hashtopolis_ip" references an output of the hashtopolis module. This means our configuration must wait for that module to finish before it starts building the hashcat nodes. Also notice the "count" variable: its value tells the module how many instances to create.
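
For context, this dependency works because the hashtopolis module from part 1 exposes the server's private IP as an output. Assuming the instance resource in that module is named "hashtopolis", the output looks roughly like this:


# in the hashtopolis module from part 1 (resource name assumed)
output "hashtopolis_privateip" {
  value = "${aws_instance.hashtopolis.private_ip}"
}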

Note: AWS limits how many EC2 instances of each size you can have running at one time, particularly the expensive ones. However, it is pretty straightforward to request a limit increase.

Updating Our Variables

We only need to add two new variables to our main "variables.tf" file. The "hashcat_instancesize" variable defaults to t2.micro. That size should only be used to test the configuration, since t2.micro instances don't come with a GPU. We'll also add an output to display all of the internal IPv4 addresses for the hashcat nodes.


# HASHCAT DETAILS
variable "hashcat_ami" {
  description = "AMI for hashcat instance"
}

variable "hashcat_instancesize" {
  description = "EC2 instance size for hashcat instance"
  default     = "t2.micro"
}

# OUTPUTS
output "hashcat_ips" {
  value = "${module.hashcat.hashcat_private_ips}"
}

Setting Variable Values

Many of the variables used by the hashcat module are already defined in our "terraform.tfvars" file from part 1. However, we do need to add "hashcat_ami" and "hashcat_instancesize".


vpc_id = "your_vpc_id"
subnet_id = "your_subnet_id"
key_name = "your_ssh_key_name"

hashcat_ami          = "your_hashcat_ami_id"
hashcat_instancesize = "a_gpu_instance_size"

voucher_name = "3RVsilewJHu6nwLO"
mysql_rootpass = "HfCZngZIgiLU2hAP"
mysql_apppass = "66aNo2BUmUbnB3kL"
web_adminpass = "unzJPYVmqdEasPA1"

Deploying Hashtopolis + Hashcat(s)

We now have everything in place to deploy a Hashtopolis server and register 2 GPU-based hashcat nodes. Our "main.tf" and "variables.tf" files should look something like these. If you're running all of this from the same directory as part 1, do a "rm -rf .terraform/" to clean up the modules downloaded in part 1. Then run "terraform init" followed by "terraform apply" to start the deployment.
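
For reference, the full sequence from the part 1 working directory looks like this:


# clean up modules/plugins cached from part 1
rm -rf .terraform/

# re-initialize and deploy the full cluster
terraform init
terraform apply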

Verify Agent Registration

Finally, verify that the agents are registered and checking in by navigating to the Hashtopolis web application. Log in and open the Agents->Show Agents tab. You should see both agents in the list, along with a recent timestamp and an IPv4 address in the "last activity" column.
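
If an agent doesn't show up, you can SSH into the node and confirm that the agent's screen session is running (our template names the session "hashtopolis"):


# on a hashcat node: the agent should be running in a detached screen session
screen -ls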

If everything looks good then you are ready to start cracking. Check out the Hashtopolis wiki if you are new to Hashtopolis.

Scaling

If you would like to scale out the cluster, simply increase the "count" value in the hashcat module block of the main configuration, then run "terraform apply" again to pick up the change.
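
For example, growing the cluster from 2 to 4 nodes only requires changing the count; every other argument stays the same:


module "hashcat" {
  source             = "github.com/trillphil/terraform/modules//hashcat_ami"
  ami                = "${var.hashcat_ami}"
  instance_type      = "${var.hashcat_instancesize}"
  key_name           = "${var.key_name}"
  security_group_ids = ["${module.secgroup_internalssh.group_id}"]
  name               = "hashcat"
  subnet             = "${var.subnet_id}"
  voucher_name       = "${var.voucher_name}"
  hashtopolis_ip     = "${module.hashtopolis01.hashtopolis_privateip}"
  count              = 4
}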

Scaling up also works. Just edit the "hashcat_instancesize" variable in "terraform.tfvars" to the new size and run "terraform apply" again. There may be a delay after Terraform finishes making the changes, but the agents should start checking in again shortly.
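
For example, moving the nodes to the GPU-backed p2.xlarge size is a one-line change in "terraform.tfvars":


# terraform.tfvars - switch the hashcat nodes to a GPU instance size
hashcat_instancesize = "p2.xlarge"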

Some server-side data may need adjusting when you scale up. For example, when I scaled up from t2.micro to p2.xlarge, Hashtopolis was still showing the agents as CPU-only. This information is stored server-side in the MySQL database, so if you do scale up, verify that the server-side data gets updated.

Note: Scaling will depend on your EC2 limits and availability in AWS.

Conclusion

In part 2, we created a custom hashcat AMI and hashcat module, and deployed the new module alongside the Hashtopolis module to create a working hashcat cluster managed by Hashtopolis. In part 3, we will work through the basics of the Hashtopolis UI (Release v0.6.0) and wrap up the series.
