February 21, 2018 - by Alexandra White
The three-word roundup that everyone (including HashiCorp) uses to describe Terraform is: infrastructure as code. If that's too succinct to be informative, let's add a word and modify it: managing infrastructure with code. More on what that means in a moment.
I’ll be using Terraform to deploy the Happiness Randomizer, an HTML and JavaScript application. I'll be skipping the instructions for how to create an image, as I've already covered that in a post on how to create custom infrastructure images with Packer. You can also follow these instructions using one of Triton's automatically available infrastructure images.
In this post, I'll share a video overview and step-by-step instructions for deploying the application infrastructure on Triton. If you want a longer, more in-depth video experience, sign up to watch our on-demand webinar.
Watch our video to see how simple it is to create and manage application infrastructure on Triton.
HashiCorp's Terraform is a tool designed for creating, managing, updating, and versioning reproducible application infrastructure. Application infrastructure is composed of all physical and virtual resources (including compute resources and upstack services) which support the flow, storage, processing, and analysis of data.
We've recently updated our Triton Terraform provider, adding the ability to use Triton CNS, query Triton images, and work with Triton networks. Triton and Terraform already have a long history of integration, and Terraform works well with other providers, too.
With simple configuration files, you tell Terraform which images it needs to create which types of instances to run an application in a specific Triton data center. There's even a way to plan your use of Terraform before applying it, so you know what's going to happen before it happens. As you update those configuration files, Terraform sees what has changed and incrementally executes those changes as requested.
There's a full glossary for Terraform, but here are some of the basics you should know before we get started:
Provider: the underlying platforms which support Terraform. Providers are responsible for managing the lifecycle of a resource: create, read, update, delete. Triton is a Terraform provider. Other providers include AWS, Microsoft Azure, Heroku, and Terraform Enterprise.
Resources: resource blocks define components of your infrastructure. This could be a VM or container on Triton, or it could be an email provider, DNS record, or database provider.
Data sources: data sources allow data to be fetched or computed for use within Terraform configuration, allowing Terraform to build infrastructure based on information from outside of Terraform (or from a separate Terraform configuration file). Providers are responsible for defining and implementing data sources, which present read-only views of pre-existing data or compute new values on the fly.
Plan: the plan is the first of two steps required for Terraform to make changes to infrastructure. Running terraform plan determines what changes need to be made and outputs what will be done before it's done.
Apply: the second of two steps required to make changes to the infrastructure. With terraform apply, Terraform communicates with external APIs (i.e. the providers) to make changes.
State: the Terraform state is the state of your infrastructure stored from the last time Terraform was run or applied. By default, this is stored in a local file named terraform.tfstate.
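Putting those terms together, the typical workflow we'll follow in this post looks something like this (a sketch; the plan file name is up to you):
terraform init                # download the provider plugins your configuration references
terraform plan -out my.plan   # preview the changes and save them to a plan file
terraform apply "my.plan"     # execute exactly the changes recorded in that plan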
With Terraform, we're going to plan and apply an infrastructure plan to launch our web application.
To install Terraform, download the appropriate package for your operating system. All systems have available packages on the Terraform downloads page.
Note: Be sure to download version 0.10+. I used Terraform 0.10.5 in this example.
Once you've downloaded Terraform, unzip the package. Terraform will run as a binary named terraform. The final step is to ensure the binary is available on the PATH.
Open your terminal and run the following command, where /path/to/dir is the directory containing the terraform binary:
export PATH=$PATH:/path/to/dir
You can also symlink to the terraform binary:
cd /usr/bin
sudo ln -s /path/to/dir/terraform terraform
On Windows, go to: Control Panel -> System -> Advanced System settings -> Environment Variables. Scroll down in system variables until you find PATH. Click edit and add the directory containing terraform.exe. You will need to launch a new console for the settings to take effect.
After Terraform has been installed and PATH has been set, verify the installation by opening a new terminal session. Execute terraform and you should see help output similar to this:
$ terraform
Usage: terraform [--version] [--help] [args]
The available commands for execution are listed below.
The most common, useful commands are shown first, followed by
less common or more advanced commands. If you're just getting
started with Terraform, stick with the common commands. For the
other commands, please read the help and docs before usage.
Common commands:
apply Builds or changes infrastructure
console Interactive console for Terraform interpolations
# ...
If you receive an error that terraform could not be found, your PATH was not set up properly. Go back and ensure your PATH variable includes the directory where the Terraform binary was installed.
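Two quick checks can help: confirm the shell can find the binary, and inspect what's currently on your PATH.
$ which terraform
$ echo $PATH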
Terraform configuration is the set of files which describes how to build and manage the various components of an application and the resources required for it to run (i.e. infrastructure as code). Our configuration files will launch a single container on Triton, requiring the Triton provider. We'll be creating two files, one for environment variables and one for provider setup.
You don't need to have a local version of the Happiness Randomizer application to build your infrastructure. Create a new directory for your Terraform files:
$ mkdir happy-randomizer-tf
$ cd happy-randomizer-tf
From within the new directory, create and edit your variables file:
$ touch variables.tf
$ vi variables.tf
Copy the following variables into the empty file:
variable "image_name" {
type = "string"
description = "The name of the image for the deployment."
default = "happy_randomizer"
}
variable "image_version" {
type = "string"
description = "The version of the image for the deployment."
default = "1.0.0"
}
variable "image_type" {
type = "string"
description = "The type of the image for the deployment."
default = "lx-dataset"
}
variable "package_name" {
type = "string"
description = "The package to use when making a deployment."
default = "g4-highcpu-128M"
}
variable "service_name" {
type = "string"
description = "The name of the service in CNS."
default = "happiness"
}
variable "service_networks" {
type = "list"
description = "The name or ID of one or more networks the service will operate on."
default = ["Joyent-SDC-Public"]
}
This file defines six variables: the name, version, and type of the image we'll be deploying, the package name for our container, a Triton CNS service name, and the Triton network that ensures our application is publicly accessible.
The next configuration file will declare the provider, data sources, resources, and the outputs to display after running Terraform. Create and edit the new file:
$ touch main.tf
$ vi main.tf
The first piece of information to include ensures that you're using Terraform version 0.10.x, which is required to create this application infrastructure.
terraform {
  required_version = ">= 0.10.0"
}
Next, we'll add the Triton provider. We'll be creating our infrastructure within our default Triton data center.
provider "triton" {
# The provider takes the following environment variables:
# TRITON_URL, TRITON_ACCOUNT, and TRITON_KEY_ID
}
The "triton"
provider uses Triton environment variables including your Triton username, SSH fingerprint, and the CloudAPI endpoint.
NOTE: Though it is possible to proceed without setting up environment variables by replacing those comments with the corresponding values, we don't advise doing so. It is a best practice to keep credentials in your local environment rather than tying them to your application files.
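For example, you might export the variables in your shell before running Terraform. This is a sketch: substitute your own account name, data center endpoint, and SSH key path.
$ export TRITON_URL=https://us-east-1.api.joyent.com
$ export TRITON_ACCOUNT=your_account_name
$ export TRITON_KEY_ID=$(ssh-keygen -l -E md5 -f ~/.ssh/id_rsa.pub | awk '{print $2}' | cut -c 5-)
The last command derives the MD5 fingerprint of your public key, which identifies which of your account's SSH keys to use.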
By setting our provider, we establish that triton can be used to manage the lifecycle of our application.
Our Triton provider is responsible for implementing data sources, presenting read-only views of data. In particular, we need to define two pieces of data that already exist in our Triton data center: the infrastructure image for our container and Joyent's public network.
#
# Details about the deployment
#
data "triton_image" "happy_image" {
name = "${var.image_name}"
version = "${var.image_version}"
type = "${var.image_type}"
most_recent = true
}
data "triton_network" "service_networks" {
count = "${length(var.service_networks)}"
name = "${element(var.service_networks, count.index)}"
}
On Triton, we call these networks fabrics, and every customer automatically has access to a public network named "Joyent-SDC-Public" and a private network, "Joyent-SDC-Private."
While there are a number of possible network configurations, we've focused on adding a public network so that the application can be seen on the web.
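For example, if you also wanted the container attached to the private fabric, you could change the service_networks default in variables.tf to list both networks; the count and splat expressions in our data source and resource already handle more than one entry. A sketch:
variable "service_networks" {
  type        = "list"
  description = "The name or ID of one or more networks the service will operate on."
  default     = ["Joyent-SDC-Public", "Joyent-SDC-Private"]
}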
Most of the information in our data sources was defined in variables.tf.
Next, we'll need to create a resource. As a reminder, a resource is a component of your infrastructure. In this case, we're defining a container to be provisioned.
resource "triton_machine" "happy_machine" {
name = "happy_randomizer"
package = "${var.package_name}"
image = "${data.triton_image.happy_image.id}"
networks = ["${data.triton_network.service_networks.*.id}"]
cns {
services = ["${var.service_name}"]
}
}
Let's break down this block further:
name: the container will be named happy_randomizer.
package: g4-highcpu-128M, pulled from the package_name variable, determines the resources allotted to the container.
image: the image ID comes from the happy_image data source we defined above.
networks: the network IDs come from the service_networks data source, which places the container on the public network.
Finally, we'll add two outputs for our Terraform resources. By outputting the primaryIp and the DNS names, we can quickly connect to the application in our browser as soon as the infrastructure has been created.
output "primaryIp" {
value = ["${triton_machine.happy_machine.*.primaryip}"]
}
output "dns_names" {
value = ["${triton_machine.happy_machine.*.domain_names}"]
}
While you don't need both pieces of information to connect to the instance, this gives you both options.
You can view the completed main.tf on GitHub.
Once your configuration files have been saved, you must initialize the working directory, which downloads the provider plugin into your local directory. Terraform can't plan or apply anything until this step is complete.
Execute terraform init to download the Triton provider into the local application directory.
$ terraform init
Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "triton" (0.4.1)...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.triton: version = "~> 0.4"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.
The output informs us that version 0.4.1 of the Triton provider has been installed. If you require a different version of a provider, you can specify it within the configuration file, as shown below.
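For example, to pin the Triton provider to the 0.4 series as the init output suggests, add a version constraint to the provider block:
provider "triton" {
  version = "~> 0.4"
}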
Run terraform plan -out happy.plan to review what Terraform will build based on your configuration files. The -out parameter saves the plan to happy.plan so you know exactly what's going to happen when you're ready to deploy.
The result should look similar to the following:
$ terraform plan -out happy.plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
data.triton_network.service_networks: Refreshing state...
data.triton_image.happy_image: Refreshing state...
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
  + triton_machine.happy_machine
      id:                    <computed>
      cns.#:                 "1"
      cns.0.services.#:      "1"
      cns.0.services.0:      "happiness"
      created:               <computed>
      dataset:               <computed>
      disk:                  <computed>
      domain_names.#:        <computed>
      firewall_enabled:      "false"
      image:                 "45dff701-ce98-481d-94d3-ab0e66fbb8b6"
      ips.#:                 <computed>
      memory:                <computed>
      name:                  "happy_randomizer"
      networks.#:            "1"
      networks.0:            "31428241-4878-47d6-9fba-9a8436b596a4"
      nic.#:                 <computed>
      package:               "g4-highcpu-128M"
      primaryip:             <computed>
      root_authorized_keys:  <computed>
      type:                  <computed>
      updated:               <computed>
Plan: 1 to add, 0 to change, 0 to destroy.
This plan was saved to: happy.plan
To perform exactly these actions, run the following command to apply:
terraform apply "happy.plan"
If there have been any errors, you may have to go back and modify the configuration file before proceeding.
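A quick way to catch syntax mistakes before planning again is to validate the configuration files in the current directory:
$ terraform validate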
Once you know what Terraform will do, you can use terraform apply to make it happen. This will build our new container in our default Triton data center. Enough talk, let's make it happen.
$ terraform apply "happy.plan"
triton_machine.happy_machine: Creating...
  cns.#:                 "" => "1"
  cns.0.services.#:      "" => "1"
  cns.0.services.0:      "" => "happiness"
  created:               "" => "<computed>"
  dataset:               "" => "<computed>"
  disk:                  "" => "<computed>"
  domain_names.#:        "" => "<computed>"
  firewall_enabled:      "" => "false"
  image:                 "" => "45dff701-ce98-481d-94d3-ab0e66fbb8b6"
  ips.#:                 "" => "<computed>"
  memory:                "" => "<computed>"
  name:                  "" => "happy_randomizer"
  networks.#:            "" => "1"
  networks.0:            "" => "31428241-4878-47d6-9fba-9a8436b596a4"
  nic.#:                 "" => "<computed>"
  package:               "" => "g4-highcpu-128M"
  primaryip:             "" => "<computed>"
  root_authorized_keys:  "" => "<computed>"
  type:                  "" => "<computed>"
  updated:               "" => "<computed>"
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Outputs:
dns_names = [
[
359f20b2-673d-6300-e918-fcea6a314a26.inst.d9a01feb-be7d-6a32-b58d-ec4a2bf4ba7d.us-east-3.triton.zone,
happy-randomizer.inst.d9a01feb-be7d-6a32-b58d-ec4a2bf4ba7d.us-east-3.triton.zone
]
]
primaryIp = [
165.225.173.96
]
Congrats! You have deployed an instance of the Happiness Randomizer web application.
The results reiterate much of the same information from happy.plan. The container may take a while to actually be created (the first time I ran it, it took upwards of three minutes), so be patient.
At the end of the apply output, you can see both the available domain names and the primary IP address, either of which you can use to view the application in your browser.
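If you need those values again later, you don't have to re-run the apply; Terraform can read them back out of the state file at any time:
$ terraform output primaryIp
$ terraform output dns_names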
Terraform is an incredibly powerful tool to manage your infrastructure. With two short configuration files and two even shorter commands, we were able to spin up a container on Triton.
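Terraform can tear down what it builds just as easily. When you're done experimenting, run the following from the same directory and confirm when prompted:
$ terraform destroy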
Now that you know the basics of how to spin up an application with Terraform, add more resources and more containers to build something more complex.