Automated Hashcat Cluster Deployment with Terraform: Part 1

Introduction

In this series, I will describe a method to quickly deploy a scalable password hash cracking cluster. To do this, we will use Hashicorp’s Terraform to deploy and configure a Hashtopolis server that will manage an arbitrary number of GPU-based AWS EC2 nodes running hashcat. Part 1 will focus on setting everything up and deploying the Hashtopolis server. In part 2, we will build the hashcat nodes. Finally, part 3 will demonstrate basic Hashtopolis usage and highlight some of the advantages of using my method to deploy the cluster.

Acknowledgements

I started experimenting with Terraform after reading @_RastaMouse's post on automating red team infrastructure. His blog is awesome and you should check it out.

Hashtopolis, formerly Hashtopussy, is the work of @s3inlc and @winxp5421, and can be found at https://github.com/s3inlc/hashtopolis. I’ve wanted to test the tool for a while, but the old name had dissuaded me. Now that I can confidently talk about the tool in meetings without HR being called, I decided to check it out.

Hashcat is an awesome cross-platform password hash cracking tool and is the standard for CPU and GPU-based cracking.

Terraform

To build out the hashcat cluster, we are using HashiCorp’s Terraform. Terraform is infrastructure-as-code software that allows users to deploy resources across many platforms using configuration files written in HashiCorp Configuration Language (HCL) or JSON. A full list of Terraform’s supported platform providers can be found here.

I used Terraform v0.11.7 during my testing, along with AWS provider v1.29.0 and template provider v1.0.0. If you run into issues, compare the versions you are using against these.
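If you want to sanity-check your own binary against these releases, a quick semver comparison works. This is just an illustrative sketch: the "current" value is hard-coded here, but in practice it would come from the output of terraform version.

```shell
# Hedged sketch: warn when the local Terraform binary is older than the
# release tested in this post. "current" is hard-coded for illustration.
tested="0.11.7"
current="0.11.9"

# sort -V orders version strings numerically; the first line is the lowest
lowest=$(printf '%s\n%s\n' "$tested" "$current" | sort -V | head -n 1)

if [ "$lowest" = "$tested" ]; then
  echo "Terraform $current is at least the tested $tested"
else
  echo "Terraform $current is older than the tested $tested; expect drift"
fi
```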

Building the Hashtopolis Server

AWS Setup

In my AWS environment, I am using my default VPC and a default subnet. I also have an Ubuntu 16.04 server running Terraform with an IAM role that has the “AmazonEC2FullAccess” policy applied. This setup allows for quick deployment within the VPC without having to specify credentials for an IAM user.
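If you prefer to manage that role in Terraform as well, it can be expressed as a few resources. This is a hedged sketch, not part of the deployment in this post; the resource names are illustrative.

```hcl
# Hedged sketch of the IAM role setup described above (0.11-era syntax;
# names are illustrative, not part of the hashtopolis configuration).
resource "aws_iam_role" "terraform" {
  name = "terraform-deployer"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "ec2_full" {
  role       = "${aws_iam_role.terraform.name}"
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2FullAccess"
}

resource "aws_iam_instance_profile" "terraform" {
  name = "terraform-deployer"
  role = "${aws_iam_role.terraform.name}"
}
```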

Hashtopolis Module

The Hashtopolis instance will serve as the central management server for all of our hashcat nodes. In the hashtopolis module, we will deploy the server and run a couple of scripts to provision and configure the hashtopolis application.

Module Configuration

The hashtopolis module contains a configuration file called “main.tf”. We use a Terraform template called “user_data.template” to define a user data script, run when the instance first launches, that installs the necessary packages for Hashtopolis. We use another template called “hashtopolis.template” to install and configure Hashtopolis via the remote-exec provisioner. Once the instance is configured, we assign an Elastic IP (EIP) IPv4 address to the server.


data "aws_subnet" "subnet" {
  id = "${var.subnet}"
}

# install mysql, apache, php, etc.
data "template_file" "userdata" {
  template = "${file("${path.module}/user_data.template")}"

  vars {
    mysql_rootpass = "${var.mysql_rootpass}"
  }
}

# configure hashtopolis
data "template_file" "hashtopolis" {
  template = "${file("${path.module}/hashtopolis.template")}"

  vars {
    mysql_rootpass = "${var.mysql_rootpass}"
    mysql_apppass  = "${var.mysql_apppass}"
    web_adminpass  = "${var.web_adminpass}"
    voucher_name   = "${var.voucher_name}"
  }
}

resource "aws_instance" "hashtopolis" {
  subnet_id              = "${data.aws_subnet.subnet.id}"
  ami                    = "${var.ami}"
  instance_type          = "${var.instance_type}"
  user_data              = "${data.template_file.userdata.rendered}"
  vpc_security_group_ids = ["${var.security_group_ids}"]
  key_name               = "${var.key_name}"

  tags {
    "Name" = "${var.name}"
  }

  provisioner "file" {
    content     = "${data.template_file.hashtopolis.rendered}"
    destination = "/tmp/hashtopolis.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod u+x /tmp/hashtopolis.sh",
      "/tmp/hashtopolis.sh",
      "rm /tmp/hashtopolis.sh",
    ]
  }

  connection {
    host        = "${self.private_ip}" # self avoids a cycle when referencing the resource's own attribute
    type        = "ssh"
    user        = "ubuntu"
    private_key = "${file("~/.ssh/${var.key_name}")}"
  }
}

resource "aws_eip" "hashtopolis" {
  instance = "${aws_instance.hashtopolis.id}"
}

Variable Declarations

In order to use the variables referenced in the module, they need to be declared. To do this, we will create a file called “variables.tf”. The actual variable values will be passed to the module from the main configuration we will create in a moment.


# Instance

variable "ami" {
  description = "The AMI id for the instance"
}

variable "instance_type" {
  description = "The size of the instance"
  default     = "t2.micro"
}

variable "subnet" {
  description = "The subnet where the instance will reside"
}

variable "key_name" {
  description = "The name of the SSH private key used to connect to the instance"
}

variable "security_group_ids" {
  type        = "list"
  description = "The IDs of the security groups to be applied to the instance"
}

# Configuration Variables
variable "mysql_rootpass" {
  description = "The password for the MySQL root user"
}

variable "mysql_apppass" {
  description = "The password for the MySQL hashtopolis user"
}

variable "web_adminpass" {
  description = "The hashtopolis web app admin password"
}

variable "voucher_name" {
  description = "Hashtopolis voucher to register clients to server"
}

# Tagging
variable "name" {
  description = "The name of the instance"
}

# Outputs
output "hashtopolis_privateip" {
  value = "${aws_instance.hashtopolis.private_ip}"
}

output "hashtopolis_eip" {
  value = "${aws_eip.hashtopolis.public_ip}"
}

Install Prerequisites

Hashtopolis requires MySQL, PHP, and Apache for the web application to function. We will install these packages in user data via the user_data.template file. This template is a simple bash script that is populated with the mysql_rootpass variable when rendered.


#!/bin/bash

# update packages
sudo apt-get update

# set password for non-interactive mysql install
sudo debconf-set-selections <<< 'mysql-server mysql-server/root_password password ${mysql_rootpass}'
sudo debconf-set-selections <<< 'mysql-server mysql-server/root_password_again password ${mysql_rootpass}'

# install dependencies
sudo apt-get install -y mysql-server apache2 libapache2-mod-php php-mcrypt php-mysql php php-gd php-pear php-curl git

Hashtopolis Installation

With the proper packages installed, now it's time to install and configure Hashtopolis. We'll use another template, hashtopolis.template, to download and install Hashtopolis, configure MySQL, and perform the initial setup tasks related to Hashtopolis. As with the user_data template, this template is a bash script that gets populated with variables.

Note: The sleep 60 command gives the user data script time to finish. Terraform's provisioners run as soon as an SSH connection is available, which can be well before user data completes. There are more elegant ways to check for this.
Note 2: The sleeps between the curl commands give PHP and MySQL time to process each request. Again, there are probably more elegant ways to do it.
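One of those more elegant approaches is to poll for a marker file instead of sleeping a fixed 60 seconds. This sketch assumes you add touch /tmp/userdata.done as the last line of user_data.template; the original template does not create that file, so treat the marker name as an illustrative assumption.

```shell
# Hedged alternative to the fixed `sleep 60`: poll for a marker file.
# Assumes `touch /tmp/userdata.done` was appended as the final line of
# user_data.template (an addition -- the original script does not do this).
until [ -f /tmp/userdata.done ]; do
  sleep 5
done
echo "user data complete, continuing with hashtopolis setup"
```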


#!/bin/bash

# give user data time to finish
sleep 60

# MySQL Setup
mysql -u root -p${mysql_rootpass} -e "create database hashtopolis; GRANT all PRIVILEGES ON hashtopolis.* TO 'hashtopolis'@'localhost' IDENTIFIED BY '${mysql_apppass}';"

# server files
git clone https://github.com/s3inlc/hashtopolis.git
cd hashtopolis/src
sudo mkdir /var/www/hashtopolis
sudo cp -r * /var/www/hashtopolis
sudo chown -R www-data:www-data /var/www/hashtopolis

sudo tee /etc/apache2/sites-available/000-default.conf << YOLO
<VirtualHost *:80>
        ServerAdmin webmaster@localhost
        DocumentRoot /var/www/hashtopolis/
        <Directory /var/www/hashtopolis>
                Options Indexes FollowSymLinks
                AllowOverride All
                Require all granted
        </Directory>

        ErrorLog \$${APACHE_LOG_DIR}/error.log
        CustomLog \$${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
YOLO

# restart apache
sudo systemctl restart apache2

# use web app to configure hashtopolis
cd ~/
curl -c cookie.txt http://127.0.0.1/
sleep 5
curl -b cookie.txt http://127.0.0.1/install/ -c cookie.txt
sleep 5
curl -b cookie.txt http://127.0.0.1/install/index.php?type=install -c cookie.txt
sleep 5
curl -b cookie.txt http://127.0.0.1/install/index.php -c cookie.txt
sleep 5
curl -b cookie.txt -d "server=localhost&port=3306&user=hashtopolis&pass=${mysql_apppass}&db=hashtopolis&check=Continue" http://127.0.0.1/install/index.php -c cookie.txt
sleep 5
curl -b cookie.txt http://127.0.0.1/install/index.php -c cookie.txt
sleep 5
curl -b cookie.txt http://127.0.0.1/install/index.php?next=true -c cookie.txt
sleep 5
curl -b cookie.txt http://127.0.0.1/install/index.php -c cookie.txt
sleep 5
curl -b cookie.txt -d "username=admin&email=test@example.com&password=${web_adminpass}&repeat=${web_adminpass}&create=Create" http://127.0.0.1/install/index.php -c cookie.txt
sleep 5
curl -b cookie.txt http://127.0.0.1/install/index.php -c cookie.txt

# remove web server configuration scripts
sudo rm -r /var/www/hashtopolis/install
rm -rf ~/hashtopolis
rm cookie.txt

# setup hashtopolis vouchers
mysql -u root -p${mysql_rootpass} -e "UPDATE hashtopolis.Config SET value = 1 WHERE item='voucherDeletion'; INSERT INTO hashtopolis.RegVoucher (voucher,time) VALUES ('${voucher_name}', UNIX_TIMESTAMP());"

Using Our Module

At this point we have a complete module containing the following files:

  • main.tf
  • variables.tf
  • user_data.template
  • hashtopolis.template

This module can now be sourced to easily create Hashtopolis servers. Place the module files in their own directory or git repository. If you just want to use mine, click here.
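If you keep the module files in a local directory rather than a repository, the module's source argument can simply point at that path. A minimal sketch (the path and instance name are illustrative):

```hcl
# Hedged sketch: instantiating the module from a local directory instead
# of a git repository. The path is relative to the root configuration.
module "hashtopolis01" {
  source = "./hashtopolis"

  # ...the same variable assignments shown in the main configuration
}
```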

Creating the Main Configuration

The main configuration file will be used to instantiate the Hashtopolis module and, eventually, the hashcat nodes. To start, create another "main.tf" file in a separate directory. I'm using GitHub to source the hashtopolis module, but if you're storing the module locally, place the module directory within the directory holding the new main.tf file.

Note: If you are not using an IAM role on your terraform server, you will also need to set an AWS "access_key" and "secret_key" value in the "provider" block as noted here.


provider "aws" {
  region = "${var.region}"
}

data "aws_vpc" "default" {
  id = "${var.vpc_id}"
}

module "hashtopolis01" {
  source             = "github.com/trillphil/terraform/modules//hashtopolis"
  ami                = "ami-759bc50a"
  instance_type      = "t2.micro"
  key_name           = "${var.key_name}"
  security_group_ids = [
    "${module.secgroup_hashtopolishttp.group_id}",
    "${module.secgroup_internalssh.group_id}"
  ]
  name               = "hashtopolis01"
  subnet             = "${var.subnet_id}"
  mysql_rootpass     = "${var.mysql_rootpass}"
  mysql_apppass      = "${var.mysql_apppass}"
  web_adminpass      = "${var.web_adminpass}"
  voucher_name       = "${var.voucher_name}"
}

# Security group allowing http to hashtopolis
module "secgroup_hashtopolishttp" {
  source      = "github.com/trillphil/terraform/modules//security_group"
  name        = "SG_hashtopolishttp"
  vpc_id      = "${data.aws_vpc.default.id}"
  infrom_port = "80"
  into_port   = "80"
  inprotocol  = "tcp"
  incidr      = ["0.0.0.0/0"]
}

# Security group allowing ssh internally
module "secgroup_internalssh" {
  source      = "github.com/trillphil/terraform/modules//security_group"
  name        = "SG_internalssh"
  vpc_id      = "${data.aws_vpc.default.id}"
  infrom_port = "22"
  into_port   = "22"
  inprotocol  = "tcp"
  incidr      = ["172.31.48.0/20"]
}

In this configuration, we are creating an instance of the hashtopolis module and a couple of security groups to control access. In this example, I'm using my security groups module, but you could also define the groups directly in the configuration. If you want to use my security groups module, you can find it here, or create your own. As with the hashtopolis module, we are using variables to add some flexibility and customization.
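For reference, defining a group directly in the configuration instead of through the module looks like this. A hedged sketch of the HTTP group only, in 0.11-era syntax; the resource name is illustrative.

```hcl
# Hedged sketch: the HTTP security group defined inline rather than via
# the security_group module (equivalent intent, illustrative name).
resource "aws_security_group" "hashtopolis_http" {
  name   = "SG_hashtopolishttp"
  vpc_id = "${data.aws_vpc.default.id}"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```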

Main Configuration Variables

In order to use the variables in the main configuration, we need another "variables.tf" file. These variables allow us to change the SSH key that we are using, the credentials for hashtopolis, etc. Notice that several of the variables assigned here are the same ones that we defined in the hashtopolis module. The values that we assign here get passed to the module during creation.


# AWS DETAILS
variable "region" {
  description = "The AWS region"
  default     = "us-east-1"
}

variable "vpc_id" {
  description = "ID for the AWS VPC to use"
}

variable "subnet_id" {
  description = "ID of the AWS subnet to use"
}

variable "key_name" {
  description = "SSH key name to use for provisioning"
}

# HASHTOPOLIS DETAILS
variable "voucher_name" {
  description = "Hashtopolis voucher to register clients to server"
}

variable "mysql_rootpass" {
  description = "MySQL root password"
}

variable "mysql_apppass" {
  description = "MySQL password for hashtopolis application user"
}

variable "web_adminpass" {
  description = "Admin password for hashtopolis"
}

# OUTPUTS
output "hashtopolis_ip" {
  value = "${module.hashtopolis01.hashtopolis_eip}"
}

Note: The "hashtopolis_ip" output variable allows us to work with a custom output created in the hashtopolis module. Since this is the root configuration, the outputs will be displayed after successfully applying the configuration. You can read more about Terraform's outputs here.

Setting Variable Values

We'll need to set the variable values here since this is the root configuration. To do this, we'll create a file called "terraform.tfvars" and assign a value to each of the variables. Add your AWS VPC ID, subnet ID, and SSH key name, as well as arbitrary values for the Hashtopolis configuration variables.


vpc_id = "your_vpc_id"
subnet_id = "your_subnet_id"
key_name = "your_ssh_key_name"

voucher_name = "3RVsilewJHu6nwLO"
mysql_rootpass = "HfCZngZIgiLU2hAP"
mysql_apppass = "66aNo2BUmUbnB3kL"
web_adminpass = "unzJPYVmqdEasPA1"

Deploying Hashtopolis

Now that everything should be configured, we can build the Hashtopolis server. From the configuration directory, run "terraform init" followed by "terraform apply". The build should take less than 3 minutes. If everything completed successfully, navigate to the IPv4 address displayed in the "hashtopolis_ip" output to reach the Hashtopolis web server.

Test that everything worked correctly by logging in with the username "admin" and the password that you specified in the "web_adminpass" variable.

Conclusion

We now have a Terraform configuration to deploy a minimally configured Hashtopolis server in AWS. During my tests, the entire deployment occurs in under 3 minutes. In part 2, we will create a module for hashcat nodes, and register them to the Hashtopolis server.
