Updates to our Triton Terraform Provider

September 18, 2017 - by Justin Reagor

We've recently made updates to our Triton Terraform Provider to include some of the more helpful features found on our Triton platform. We'll review the latest changes, including CNS integration, querying the Triton Image and Network APIs, and more.

Don't care about the specifics and want to start working already? Skip ahead to get started with Triton and Terraform.

History

Triton already has history with Terraform. The original provider was first introduced on this blog alongside Terraform v0.6.14. Much of that article centered on defining the various resources an operator can configure, as well as the many types of compute that Triton supports (VMs, containers, and Docker).

With the release of Terraform v0.10, the core project and its provider libraries were split into separate projects. This means Terraform providers can be tracked and released independently of the main release cycle of the tool itself. At Joyent, we're supporting this transition by filling in feature gaps in our own provider while focusing on our customers' use cases with Terraform.
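
Because providers are now versioned independently, you can pin the provider release your configuration expects directly in the provider block. A minimal sketch (the version constraint below is illustrative only, not a recommendation for a specific release):

provider "triton" {
    version = "~> 0.3"
}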

Data sources

triton_image

Data sources allow Terraform to gather information from a provider (i.e. Triton) that is defined either outside of Terraform or in a separate Terraform configuration file. Data sources can either fetch read-only information or compute new values within Terraform. This is especially helpful when you need to provide an ID for a resource that can more easily be queried by name.

With our updated Terraform provider, you can now query the Triton Image API using a data source. You can query for machine images by attributes such as os, version, public, etc.

provider "triton" {}

data "triton_image" "lx-ubuntu" {
    name = "ubuntu-16.04"
    version = "20170403"
}

output "image_id" {
    value = "${data.triton_image.lx-ubuntu.id}"
}

Users who rebuild the same images over and over can also set the most_recent attribute to select the newest image when more than one matches the given name and/or version.
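
For example, a minimal sketch that drops the version pin and relies on most_recent instead (assuming the attribute takes a boolean, as with other Terraform data sources):

data "triton_image" "lx-ubuntu" {
    name        = "ubuntu-16.04"
    most_recent = true
}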

triton_network

The same is true when attaching a triton_machine to a network. You can now query Triton's Network API and return any network or network pool ID accessible under your Triton account.

provider "triton" {}

data "triton_image" "lx-ubuntu" {
    name = "ubuntu-16.04"
    version = "20170403"
}

data "triton_network" "private" {
    name = "Joyent-SDC-Private"
}

resource "triton_machine" "test-net" {
    name    = "test-net"
    package = "g4-highcpu-256M"
    image   = "${data.triton_image.lx-ubuntu.id}"

    networks = ["${data.triton_network.private.id}"]
}

NOTE: We've recently un-deprecated the networks attribute under triton_machine.

Resources

triton_machine

We've introduced a few new ways to configure triton_machine resources as well, including CNS and custom metadata.

CNS

Launching distributed applications is a pain without a way to bootstrap into service discovery. Triton Container Name Service (CNS) was built to assist that effort by automatically configuring DNS for newly created instances and containers. Configuring DNS automatically promotes self-assembling deployments by presenting a single entry point for service discovery and DNS-based load balancing. It was time that CNS was formalized into its own stanza under the triton_machine resource.

provider "triton" {}

data "triton_image" "lx-ubuntu" {
    name = "ubuntu-16.04"
    version = "20170403"
}

resource "triton_machine" "test-cns" {
    name    = "test-cns"
    package = "g4-highcpu-256M"
    image   = "${data.triton_image.lx-ubuntu.id}"

    cns {
        services = ["frontend", "app"]
    }
}
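
Once an instance with a cns stanza is provisioned, CNS publishes DNS names for each service. As a rough sketch of how to see them (assuming the json CLI tool is installed; the exact names depend on your account UUID and data center):

$ triton instance get test-cns | json dns_names
[
  "frontend.svc.<account-uuid>.us-west-1.cns.joyent.com",
  "app.svc.<account-uuid>.us-west-1.cns.joyent.com"
]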

Custom metadata

The same goes for custom metadata, an oft-overlooked utility for cloud deployment.

While tags are a convenient way to organize and group any number of instances, metadata is best used to inject configuration details into instances. Metadata can include any piece of information you might need to bootstrap an instance or start/update a process.

Normally you'll use metadata through one of the well-known attributes such as root_authorized_keys, user_data, user_script, cloud_config, etc. With our latest changes, you can use custom user-defined metadata for any purpose within your provisioning lifecycle.

As an example, we'll demonstrate how Terraform and custom metadata can be tied together to enhance a Redis deployment. Redis is installed via a user_script that reads redis_server_port through the mdata-get tool. Also note the use of a triton_machine tag to match our instance to a firewall rule.

provider "triton" {}

variable "redis_server_port" {
    default = "6379"
}

data "triton_image" "lx-ubuntu" {
    name = "ubuntu-16.04"
    version = "20170403"
}

resource "triton_machine" "test-mdata" {
    name = "test-mdata"
    package = "g4-highcpu-256M"
    image   = "${data.triton_image.lx-ubuntu.id}"

    firewall_enabled = true

    # in this example redis_install.sh installs redis-server
    user_script = "${file("redis_install.sh")}"

    # define a redis port which both terraform and our script can reference
    metadata {
        redis_server_port = "${var.redis_server_port}"
    }

    tags {
        service = "redis"
    }
}

# make sure the redis port is open through our cloud firewall
resource "triton_firewall_rule" "test" {
    rule = "FROM any TO tag service = redis ALLOW tcp PORT ${var.redis_port}"
    enabled = true
}

User-defined metadata can be read or updated within the instance using the mdata-client CLI utilities. All Joyent machine images come with these tools pre-installed.
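
For instance, a quick sketch of reading and updating the redis_server_port key from inside an instance (key name borrowed from the example above):

$ /usr/sbin/mdata-get redis_server_port
6379
$ /usr/sbin/mdata-put redis_server_port 6380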

Go SDK

Finally, much of this work has been driven by our Go client library, triton-go. Started by recent Joyeur and Terraform contributor James Nugent, this library has helped centralize and focus Joyent's efforts around supporting Go-related tools. We plan to continue formalizing much of our integration effort around the Triton Go SDK, especially when interfacing with Triton's CloudAPI and Object Storage (Manta).

If you enjoy supporting open source software like we do, check out the project and consider contributing. Better still, we're hiring software and product engineers.

Get started today!

Let's walk through what you need to get started with Terraform on Triton today. If you haven't already done so, create a Triton account. Read our getting started guide to complete the account setup and get your environment configured. Be sure to install the triton CLI (the CloudAPI tooling) and set up a Triton CLI profile.

Confirm triton is set up by loading your Triton environment and running triton info.

$ eval "$(triton env us-west-1)"
$ triton info
login: <username>
name: <full name>
email: <email@example.com>
url: https://us-west-1.api.joyent.com
totalDisk: 95.3 GiB
totalMemory: 7.3 GiB
instances: 1
  running: 1

Terraform will authenticate automatically if triton is properly set up, since the provider reads the same environment variables.
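
If you prefer not to rely on environment variables, the provider block can also be configured explicitly. A minimal sketch (placeholders shown in angle brackets; key_material can be omitted when your private key is loaded into an SSH agent):

provider "triton" {
    account = "<username>"
    key_id  = "<ssh key fingerprint>"
    url     = "https://us-west-1.api.joyent.com"
}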

After confirming your triton CLI, download and install Terraform v0.10. Once installed, make sure you can run terraform and see output similar to the following.

$ terraform -v
Terraform v0.10.4

Great, now we have Terraform and the triton CLI both installed and configured. We can start using one of the examples above. Let's try setting up Redis on Ubuntu KVM instances.

Create a new directory; we'll call it tf-demo.

$ mkdir tf-demo
$ cd tf-demo

Create a Terraform configuration file called provider.tf.

$ touch provider.tf

Open provider.tf in your text editor of choice, and add the following content.

Note: We use this file only to verify that the Triton provider is set up properly with Terraform early in the process. Normally you can include this same configuration in any Terraform file.

provider "triton" {}

Next, we'll initialize the provider in our current directory. Terraform will pull down the latest provider binary by default since we referenced it in our first Terraform file, provider.tf.

$ terraform init

Initializing provider plugins...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

We now have Triton and Terraform set up to work together. Getting started is that simple.

Let's exercise the features introduced above. We'll use Terraform to query Triton for an Ubuntu machine image, find the public network, then install Redis and Redis Sentinel onto three KVM instances. To wrap it up, we'll lock everything down using Triton's Cloud Firewall.

Start by defining our infrastructure in a file called main.tf.

# The path to our redis installation script.
variable "redis_install_script" {
    default = "redis_install.sh"
}

# The name of the service we're deploying.
variable "service_name" {
    default = "cache"
}

# Port of our redis server.
variable "redis_server_port" {
    default = "6379"
}

# Port of our redis sentinel server.
variable "redis_sentinel_port" {
    default = "26379"
}

# Query Triton for the machine image our instances will use.
data "triton_image" "ubuntu" {
    name = "ubuntu-certified-16.04"
    version = "20170619.1"
}

# The network we'll attach our machine instances to.
data "triton_network" "public" {
    name = "Joyent-SDC-Public"
}

# Define our instances launched onto Triton
resource "triton_machine" "cache" {
    count    = 3
    name     = "${var.service_name}-${format("%01d", count.index+1)}"
    package  = "k4-highcpu-kvm-250M"
    image    = "${data.triton_image.ubuntu.id}"

    networks = ["${data.triton_network.public.id}"]
    firewall_enabled = true

    # The redis script that will be executed when we bootstrap our instance.
    user_script = "${file(var.redis_install_script)}"

    # Metadata that will be accessible within our instance.
    metadata {
        redis_server_version = "4:4.0.1-4chl1~xenial1"
        redis_server_port = "${var.redis_server_port}"

        redis_sentinel_version = "4:4.0.1-4chl1~xenial1"
        redis_sentinel_port = "${var.redis_sentinel_port}"
    }

    # Each instance should be tagged as the same service name. We use this for
    # our firewall rule below.
    tags {
        service = "${var.service_name}"
    }

    # Each instance should also be accessible at a single domain name provided
    # by CNS.
    cns {
        services = ["${var.service_name}"]
    }
}

# Open up the Redis server port between each Redis instance.
resource "triton_firewall_rule" "redis_server" {
    rule = "FROM tag service = ${var.service_name} TO tag service = ${var.service_name} ALLOW tcp PORT ${var.redis_server_port}"
    enabled = true
}

# Open up the Redis Sentinel server port between each Redis instance.
resource "triton_firewall_rule" "redis_sentinel" {
    rule = "FROM tag service = ${var.service_name} TO tag service = ${var.service_name} ALLOW tcp PORT ${var.redis_sentinel_port}"
    enabled = true
}

# Open up SSH access to our instances for debugging at the end of our walk-through.
resource "triton_firewall_rule" "redis_ssh" {
    rule = "FROM any TO tag service = ${var.service_name} ALLOW tcp PORT 22"
    enabled = true
}

Notice the user_script attribute defined in resource "triton_machine" "cache". The value of this attribute holds a script that will execute after Triton has created each of our KVM instances. In the configuration above, the redis_install_script variable defines the file path to this shell script, redis_install.sh. Add this file to the same directory as your other files.

#!/usr/bin/env bash

set -o errexit
set -o pipefail

# Beginning timestamp of our bootstrap script
/usr/bin/printf "Started user-script at $(date -R)\n" >> /root/output.txt

# Access and set Redis server values through Triton metadata
REDIS_SERVER_VERSION="$(/usr/sbin/mdata-get redis_server_version)"
REDIS_SERVER_PORT="$(/usr/sbin/mdata-get redis_server_port)"

# Access and set Redis Sentinel server values through Triton metadata
REDIS_SENTINEL_VERSION="$(/usr/sbin/mdata-get redis_sentinel_version)"
REDIS_SENTINEL_PORT="$(/usr/sbin/mdata-get redis_sentinel_port)"

export DEBIAN_FRONTEND=noninteractive

# Install Redis
/usr/bin/add-apt-repository -y ppa:chris-lea/redis-server
/usr/bin/apt-get update
/usr/bin/apt-get install -y redis-server=${REDIS_SERVER_VERSION} redis-sentinel=${REDIS_SENTINEL_VERSION}

# Final timestamp of our bootstrap script
/usr/bin/printf "Ended user-script at $(date -R)\n" >> /root/output.txt

This shell script installs both redis-server and redis-sentinel using the REDIS_SERVER_VERSION and REDIS_SENTINEL_VERSION values we passed in through custom metadata. This is just an example, but you can use custom metadata for any configuration detail during provisioning.

Now we should have provider.tf, main.tf, and redis_install.sh.

Before we create everything, let's plan our infrastructure to make sure Terraform understands what we need.

$ terraform plan -out ./tf.plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.triton_image.ubuntu: Refreshing state...
data.triton_network.public: Refreshing state...

...

Plan: 6 to add, 0 to change, 0 to destroy.

––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––

This plan was saved to: ./tf.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "./tf.plan"

Running terraform plan shows how Terraform intends to create our resources. Each of our three instances is listed with references to the various pieces of information we configured in main.tf. Notice that each variable resolves to its computed value and that our data sources now reference the Triton image and network IDs.

This is looking good. Let's execute this plan and roll out our three new Redis nodes and three firewall rules. Run the following...

$ terraform apply ./tf.plan
triton_firewall_rule.redis_sentinel: Creating...
  enabled: "" => "true"
  global:  "" => "<computed>"
  rule:    "" => "FROM tag service = cache TO tag service = cache ALLOW tcp PORT 26379"
triton_firewall_rule.redis_server: Creating...
  enabled: "" => "true"
  global:  "" => "<computed>"
  rule:    "" => "FROM tag service = cache TO tag service = cache ALLOW tcp PORT 6379"
triton_machine.cache[0]: Creating...

...

triton_machine.cache[1]: Creation complete after 23s (ID: dabfcb4b-607d-c063-f3ac-874e432467ea)
triton_machine.cache[2]: Creation complete after 23s (ID: f7331b95-d150-c143-914b-88502a793137)
triton_machine.cache[0]: Creation complete after 23s (ID: 8f4d39c8-325d-475f-cc1f-f1f8146beb04)

Apply complete! Resources: 6 added, 0 changed, 0 destroyed.

We can make sure our instances have been created by running triton instance ls...

$ triton instance ls
SHORTID   NAME     IMG                                STATE    FLAGS  AGE
f13c1395  cache-3  ubuntu-certified-16.04@20170619.1  running  FK     2m
aa2fcb2b  cache-2  ubuntu-certified-16.04@20170619.1  running  FK     2m
8b4339c1  cache-1  ubuntu-certified-16.04@20170619.1  running  FK     2m

It'll take a few minutes for our user_script to complete. Once that's done, your instances will be running redis-server and redis-sentinel on the ports we defined in our configuration. Since these are KVM instances, you'll need to SSH in using triton ssh ubuntu@cache-1.
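
To spot-check a node once the script has finished, here's a quick sketch (assuming redis-cli was pulled in alongside redis-server by the install script):

$ triton ssh ubuntu@cache-1
ubuntu@cache-1:~$ redis-cli -p 6379 ping
PONG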

Wrapping up

With the recent hiring of prominent Terraform contributors, Joyent plans to continue supporting this tool moving forward. We love HashiCorp's tools and the workflow they bring to architecting infrastructure. We hope you become an active user as well, and visit our GitHub pages for more information and updates!