Terraform Brew



Infracost shows cloud cost estimates for infrastructure-as-code projects such as Terraform. It helps DevOps, SRE and developers to quickly see a cost breakdown and compare different options upfront.

$ brew install terraform
$ terraform -version

Note: Homebrew needs to be preinstalled. To install Linuxbrew on a Linux distribution, you first need to install its build dependencies (for example gcc, git and curl).

  • The brew pin command will prevent Homebrew from updating/upgrading your version of Terraform (e.g. v0.11.14) when you run the brew upgrade command. I would strongly suggest pinning Terraform because otherwise the brew upgrade command will remove all older versions of Terraform from your system: $ brew pin terraform
  • Once I've installed Terraform on my laptop (brew install terraform), I hit the Option+Shift+P shortcut and VS Code pops up a window to download the formatting plugin. Everything completes automatically, and now I can enjoy "lazy" formatting for my Terraform templates.

If you're upgrading from an older version to v0.8, please see the migration guide.

Installation

1. Install Infracost

Assuming Terraform is already installed, get the latest Infracost release:

  • macOS Homebrew:

brew install infracost

Subsequent updates can be installed in the usual way: brew upgrade infracost (you might need to run brew update first if your brew isn't up-to-date).

  • macOS manual and Linux:

# Downloads the CLI based on your OS/arch and puts it in /usr/local/bin
curl -fsSL https://raw.githubusercontent.com/infracost/infracost/master/scripts/install.sh | sh

  • Docker:

docker run --rm \
  -e INFRACOST_API_KEY=see_following_step_on_how_to_get_this \
  -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
  -v $PWD/:/code/ infracost/infracost breakdown --path /code/
  # Add other required flags/envs for Infracost or Terraform
  # For example, these might be required if you are using AWS assume-role:
  # -e AWS_REGION=$AWS_REGION

  • Windows:

Download and unzip the latest release. Rename the infracost-windows-amd64 file to infracost.exe, then run it from the Command Prompt or PowerShell using .\infracost.exe --no-color alongside other required commands/flags (color output has a bug we need to fix on Windows). You should also move the exe file to a folder that is in your PATH environment variable, e.g. C:\Windows.

2. Get API key

Register for a free API key:
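In this release series the CLI can generate one for you:

infracost register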

The key is saved in ~/.config/infracost/credentials.yml.

3. Run it

Run Infracost using our example Terraform project to see how it works:

git clone https://github.com/infracost/example-terraform.git
cd example-terraform
infracost breakdown --path .
# Show diff of monthly costs, edit the yml file and re-run to compare costs
infracost diff --path . --sync-usage-file --usage-file infracost-usage.yml

Use our CI/CD integrations to automatically add pull request comments showing cost estimate diffs.

Usage

As mentioned in the FAQs, no cloud credentials, secrets, tags or resource identifiers are sent to the Cloud Pricing API. That API does not become aware of your cloud spend; it simply returns cloud prices to the CLI so calculations can be done on your machine. Infracost does not make any changes to your Terraform state or cloud resources.

The infracost CLI has the following main commands. Use the --path flag to point to a Terraform directory or plan JSON file:

  • breakdown: show full breakdown of costs
  • diff: show diff of monthly costs between current and planned state

If your repo has multiple Terraform projects or workspaces, use an Infracost config file to define them; their results will be combined into the same breakdown or diff output.
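For example, a config file along these lines defines two projects (the version/projects schema follows the Infracost config file docs; the paths and workspace names below are just placeholders):

version: 0.1
projects:
  - path: environments/dev
    terraform_workspace: dev
  - path: environments/prod
    terraform_workspace: prod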

Terraform directory

As shown below, any required Terraform flags can be passed using --terraform-plan-flags. The --terraform-workspace flag can be used to define a workspace.

Internally Infracost runs Terraform init, plan and show; Terraform init requires cloud credentials to be set, e.g. via the usual AWS_ACCESS_KEY_ID or GOOGLE_CREDENTIALS environment variables.
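For example, for AWS you might export credentials before running Infracost (the values here are placeholders):

export AWS_ACCESS_KEY_ID=your_access_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_access_key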

infracost breakdown --path /code --terraform-plan-flags '-var-file=my.tfvars'
infracost diff --path /code --terraform-plan-flags '-var-file=my.tfvars'

Terraform plan JSON

Point to a Terraform plan JSON file using --path. This assumes terraform init and terraform plan have already been run; Infracost only reads the plan JSON (produced by terraform show), so it does not require cloud creds to be set.

terraform init
terraform plan -out tfplan.binary
terraform show -json tfplan.binary > plan.json
infracost breakdown --path plan.json
infracost diff --path plan.json

See the advanced usage guide for other usage options.

Useful options

Run infracost breakdown --help to see the available options, which include:

--terraform-workspace Terraform workspace to use. Applicable when path is a Terraform directory
--format Output format: json, table, html (default 'table')
--config-file Path to Infracost config file. Cannot be used with path, terraform* or usage-file flags
--usage-file Path to Infracost usage file that specifies values for usage-based resources
--sync-usage-file Sync usage-file with missing resources, needs usage-file too (experimental)
--show-skipped Show unsupported resources, some of which might be free
--log-level Use 'debug' to troubleshoot, can be set to 'info' or 'warn' in CI/CD systems to reduce noise, turns off spinners in output
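For example, a few of these flags combined (the paths are placeholders):

infracost breakdown --path /code --format json --usage-file infracost-usage.yml --show-skipped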

The infracost diff --help and infracost output --help commands show related options.

Amazon AWS' Lambdas are incredibly powerful, mainly due to their stateless nature and ability to scale horizontally almost infinitely. But once you have written a Lambda function, how do you update it? Better yet, how do you automate deploying and updating it across multiple regions? Today, we're going to take a look at how to do exactly that using HashiCorp's Terraform.

What is Terraform?

Managing server resources can either be very manual, or you can automate the process. Automating the process can be tricky though, especially if you have a complex tree of resources that depend on one another. This is where Terraform comes in.

Terraform enables you to safely and predictably create, change, and improve production infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.

Terraform provides a DSL that allows you to describe the resources that you need and their dependencies, allowing Terraform to launch/configure resources in a particular order.

Installing Terraform

Installing Terraform is pretty straightforward.

If you're on macOS simply run:
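# Install Terraform with Homebrew (the same command shown at the top of this page)
brew install terraform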

If you're on Linux, depending on your distro and package manager of choice, it might be available, otherwise, follow the directions provided on the installation page.

Setting up AWS credentials

Before setting up the credentials, we're going to install the AWS command line interface.

On macOS, the awscli is available through homebrew:
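brew install awscli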

On Linux, you can often find the awscli in your package manager:
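# For example, on Debian/Ubuntu (package names vary by distro):
sudo apt-get install awscli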

You can also install it manually using pip:
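pip install awscli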

Once installed, simply run:
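aws configure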

And follow the prompts to provide your AWS credentials. This will generate the proper credentials file that Terraform will use when communicating with AWS.

Describe your infrastructure

Now that we have AWS configured, we can start to describe the AWS Lambda that we're going to deploy.

To start, create a new directory.

In that directory we're going to create a main.tf file that looks like this:

main.tf
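# A minimal provider block matching the description below
provider "aws" {
  region = "us-east-1"
}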


This is telling Terraform that we're going to be using the AWS provider and to default to the 'us-east-1' region for creating our resources.

Now, in main.tf, we're going to describe our lambda function:
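# A sketch of the Lambda resource described below (Terraform 0.12+ syntax);
# the resource and function names and the NodeJS runtime version are placeholders.
# The role attribute is added in a later step.
resource "aws_lambda_function" "test_lambda" {
  function_name    = "my_test_lambda"
  filename         = "function.zip"
  source_code_hash = filebase64sha256("function.zip")
  handler          = "index.handler"
  runtime          = "nodejs12.x"
}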

Here, we're saying that we want a NodeJS based lambda and will expose its handler as an exported function called 'handler' on the index.js file (don't worry, we'll create this shortly), and that it will be uploaded as a zip file called 'function.zip'. We're also taking a hash of the zip file to determine if we should re-upload everything.

Create an execution role


Next, what we need to do is set the execution role of our Lambda, otherwise it won't be able to run. In main.tf we're going to define a role in the following way:
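# A minimal execution role that the Lambda service can assume;
# the resource and role names are placeholders.
resource "aws_iam_role" "lambda_exec_role" {
  name = "lambda_exec_role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Effect": "Allow"
    }
  ]
}
EOF
}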

This creates an IAM role in AWS that the Lambda function will assume during execution. If you wanted to grant access to other AWS services, such as S3, SNS, etc, this role is where you would attach those policies.

Now, we need to add the 'role' property to our lambda definition:
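# Reference the role's ARN from the Lambda resource (Terraform 0.12+ syntax)
resource "aws_lambda_function" "test_lambda" {
  # ...attributes from the earlier definition...
  role = aws_iam_role.lambda_exec_role.arn
}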

Creating a test NodeJS function

We specified NodeJS as the runtime for our lambda, so let's create a function that we can upload and use.

index.js
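// A minimal handler exporting a function called 'handler', as described above
exports.handler = function(event, context, callback) {
  console.log('Lambda invoked with event:', JSON.stringify(event));
  callback(null, { message: 'hello from lambda' });
};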

Now let's zip it up:
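# Package index.js as the function.zip referenced in main.tf
zip function.zip index.js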


Test our Terraform plan

To generate a plan and show what Terraform will execute, run terraform plan:
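terraform plan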

This tells us that terraform is going to add both the role and the lambda when it applies the plan.

When you're ready, go ahead and run terraform apply to create your lambda:
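terraform apply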

To see if it worked properly, you can use the aws cli to list all of your lambda functions:
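aws lambda list-functions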

We can now invoke our lambda directly from the aws cli. In this script, I'm using a command-line utility called jq for parsing the JSON response. If you're on macOS, simply run brew install jq to install it:
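# Invoke the function and decode the tail of the execution log;
# the function name is a placeholder, use the one from your main.tf
aws lambda invoke \
  --function-name my_test_lambda \
  --log-type Tail \
  output.json | jq -r '.LogResult' | base64 --decode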

This will run your lambda and decode the last 4kb of the logfile. To view the full logfile, log into the AWS web console and head over to the CloudWatch logs.

Wrap up

That's it! From here, you'll be able to set up a lambda that gets run on certain triggers - SNS events, S3 operations, consuming data from a Kinesis Firehose, etc.

All of the files we've created here can be found on GitHub at seanmcgary/blog-lambda-terraform.