Storing State in S3

We've covered the internal details of Terraform's state in another document in this knowledge base, but here I want to go over some best practices for storing Terraform's state in S3.

The Problem

We explored the state in some detail previously, but we never talked about how to best store the state. Should we store it in a Git repository? Perhaps alongside our HCL code? Or a database?

Terraform ships with support for storing your state in remote storage. There are a lot of options, some developed by HashiCorp themselves, such as Consul, whilst others include PostgreSQL, etcd and my favourite, S3. Having Terraform store your state in a remote location is trivial, as we'll see shortly.

Terraform also takes state to another level with state locking, allowing you to prevent parallel executions against your state, thereby protecting you from race conditions and, as we'll see below, utter chaos.

In the grand scheme of things we have two ways of storing state: locally or remotely. Locally just means working with the state from your local disk, but still backing it up somewhere for safe keeping, like a Git repository. Remote storage refers to the act of having Terraform use internal functionality to pull state from some remote storage engine, work with it and then push it back.


Whether a Git repository is "safe" is debatable. Make sure you're keeping your state file locked away from unauthorised writes AND unauthorised reads.

Let's look at what storing state locally (in a Git repository) is like before moving on to using S3 and DynamoDB.

Local State

When you create a Terraform code base and use the apply command to instantiate it, you end up with a local file called terraform.tfstate. As discussed in Understanding State this is a local JSON file. Once you have this file how do you go about correctly storing it?
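To make that concrete, here's a rough sketch of the shape of a version-4 state file as produced by Terraform 0.12 (heavily trimmed, with illustrative values; the exact fields vary between versions):

```json
{
  "version": 4,
  "terraform_version": "0.12.23",
  "serial": 1,
  "lineage": "3b4b2f0a-...",
  "outputs": {},
  "resources": [
    {
      "mode": "managed",
      "type": "aws_vpc",
      "name": "main",
      "provider": "provider.aws",
      "instances": []
    }
  ]
}
```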

One idea that pops up often is storing it in the same Git repository as your Terraform code. This isn't an ideal solution for two primary reasons:

  1. Git is designed to make code branches cheap, but you can't "branch" the state in the same manner you can branch some Python code;
  2. You can't (easily) lock the state file when you're running an apply against it, allowing someone else with an independent copy of the state to run a different apply at the same time;


There are more reasons, such as secrets being stored in plain text and hackers being able to map your entire network from your states, but those are different problems I'll address another time.

Let's briefly discuss each of these in a bit more detail.

Git Branching

The state file isn't something you should edit directly; instead you're meant to write to it indirectly via your HCL code and Terraform commands. The thing with Git is that it's designed to make branching your code off in another direction easy, but the state isn't code.

The question therefore becomes: what happens if you create a new branch on a repository with a state file, edit that branch's state file with Terraform and then attempt to merge it with master? Well the answer is clear, actually: it'll work fine... provided no one else is messing with the repository as well.

Can you guarantee the latter with team communication alone? What happens if that communication fails, or never happens to begin with?

Let's try it and find out. Given the following simple HCL:

provider "aws" {
  region = "ap-southeast-2"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16" # illustrative value; the original was elided
}

I'm going to apply it against the master branch:

aws_vpc.main: Creating...
aws_vpc.main: Creation complete after 2s [id=vpc-0a3331df2d0d84ef9]

And commit the code to Git. I've also committed the terraform.tfstate file into the repository, because we're not storing our state in S3 at this point.

The new VPC is now in place. Now I'm going to create a develop branch and add the following subnet:

resource "aws_subnet" "az-a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24" # illustrative value; the original was elided
  availability_zone = "ap-southeast-2a"
}

At the same time my colleague elsewhere is going to clone my repository and add something else, but on the master branch and is also going to create a subnet:

resource "aws_subnet" "az-c" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.3.0/24" # illustrative value; the original was elided
  availability_zone = "ap-southeast-2c"
}

And we both apply at the same time. My changes went through fine: Creation complete after 0s [id=subnet-02278b8ea7dd5aa10]

And my state looks like this now:

$ terraform state list
aws_subnet.az-a
aws_vpc.main

But so did my colleague's change: Creation complete after 1s [id=subnet-039bebacbfcd3c103]

And their state looks like this:

$ terraform state list
aws_subnet.az-c
aws_vpc.main

So what happens if I now commit and push my code to the remote repository, just as my colleague is doing after their victory, and I issue a merge request to the master branch? Keep in mind this commit will also include the state file. In short GitLab is telling me "There are merge conflicts". That's not a good position to be in.

Let's resolve the merge conflicts locally by attempting to merge the develop branch into the master branch:

$ git merge --no-ff develop
CONFLICT (add/add): Merge conflict in terraform.tfstate
Auto-merging terraform.tfstate
CONFLICT (content): Merge conflict in
Automatic merge failed; fix conflicts and then commit the result.

Oh good... /s

<<<<<<< HEAD
#resource "aws_subnet" "az-a" {
#vpc_id            =
#cidr_block        = ""
#availability_zone = "ap-southeast-2a"
=======
resource "aws_subnet" "az-a" {
  vpc_id            =
  cidr_block        = ""
  availability_zone = "ap-southeast-2a"
>>>>>>> develop

The master has changes that conflict with our changes. Let's resolve that first, then move on to the terraform.tfstate file:

<<<<<<< HEAD
      "name": "az-c",
=======
      "name": "az-a",
>>>>>>> develop
      "provider": "",
      "instances": [
          "schema_version": 1,
          "attributes": {
<<<<<<< HEAD
            "arn": "arn:aws:ec2:ap-southeast-2:040925562967:subnet/subnet-039bebacbfcd3c103",
            "assign_ipv6_address_on_creation": false,
            "availability_zone": "ap-southeast-2c",
            "availability_zone_id": "apse2-az2",
            "cidr_block": "",
            "id": "subnet-039bebacbfcd3c103",
=======
            "arn": "arn:aws:ec2:ap-southeast-2:040925562967:subnet/subnet-02278b8ea7dd5aa10",
            "assign_ipv6_address_on_creation": false,
            "availability_zone": "ap-southeast-2a",
            "availability_zone_id": "apse2-az3",
            "cidr_block": "",
            "id": "subnet-02278b8ea7dd5aa10",
>>>>>>> develop

Er... that's a nasty mess to clean up. At this point we're literally doing open heart surgery on the state file and that's never a good idea. So what do we do here?

My advice in this situation would be:

  1. Checkout the master branch and terraform destroy -target the resources created there;
  2. Checkout the develop branch and implement your code there, but do not apply yet;
  3. Create a merge request to the master branch and have someone review the code;
  4. Once happy with the code, the reviewing engineer should merge it into master and apply it;
  5. Never do this again and keep reading this article...


Keep in mind that you'll need to use the -target flag to delete only the aws_subnet we created earlier. If you attempt to delete the aws_vpc at this point it will fail. That's because our master branch created a subnet that's not being managed by this state, so the AWS API will refuse to delete the VPC due to "dangling" dependencies. The same is true in the other direction as well: master -> develop.
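For step one above, the commands look something like this (assuming master's stray subnet is the az-c resource from our example):

```shell
$ git checkout master
$ terraform destroy -target=aws_subnet.az-c
```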

Remote State With Locking

The above problem stems entirely from using the Git repository to store the terraform.tfstate file. This clearly isn't an option when working in a multi-engineer environment because, as we saw, the state file isn't something that can survive living in a Git repository. So what are we to do?

When working in a team you're going to have to use a remote storage backend, and you're going to want state locking too. Let's look at using AWS S3 and DynamoDB to provide a highly resilient remote storage backend with state locking.

We'll go over a simple demo of using S3 and DynamoDB shortly, but first I just want to lightly touch on each topic to explain the benefits and the setup.


Using AWS S3 we gain a lot of advantages. The ones that stand out include:

  • Remote, highly resilient storage of your state file;
  • You can replicate your state file to another region for extra data security;
  • You can, and should, enable versioning on your S3 Bucket for data recovery;
  • Encryption can be applied to your state file, perhaps even using a custom key;
  • IAM policies let you lock down access to this very sensitive resource.

And no doubt more.

Setting up an S3 Bucket can be done from the same code that uses it, but with an extra step and a limitation:

  1. You have to create the Bucket before you can configure it as remote storage;
  2. You can't destroy the Bucket as it will not be empty - your state file will be in it;

You'll be stuck with a chicken-and-egg situation: you can't delete the Bucket because of the state file in it, and you can't delete the state file because then Terraform won't know about the remaining infrastructure. If the Bucket is the very last thing to be deleted then deleting it manually is an option, but that defeats the point of using Terraform to begin with.

Instead I recommend you have a single Bucket per AWS account and have it managed, along with the IAM policies and access control, from a separate "meta" repository of Terraform code. Keep this repository under lock and key.
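As a sketch of what that meta repository might manage (the Bucket name is illustrative, and the syntax matches the 0.12-era AWS provider), here's a Bucket with versioning, default encryption and public access blocked:

```hcl
resource "aws_s3_bucket" "terraform_states" {
  bucket = "generic-terraform-states"
  acl    = "private"

  # Versioning lets you recover an earlier state file if something
  # writes garbage over the top of it.
  versioning {
    enabled = true
  }

  # Encrypt every object at rest by default.
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

# Belt and braces: block every form of public access to the Bucket.
resource "aws_s3_bucket_public_access_block" "terraform_states" {
  bucket                  = aws_s3_bucket.terraform_states.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```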


The state file is super sensitive. You have to protect it. Read access to the state file reveals the entire layout of the resources it manages - gold to an attacker. Write access is even worse because you can (currently) trick Terraform into deleting infrastructure by writing to the state file. Protect the S3 Bucket.


The key doesn't have to end with terraform.tfstate. You can actually use any filename you like. There's also no need to "nest" the file in a directory (keep in mind S3 doesn't have directories, just flat key names).


I recommend using a single S3 Bucket for all Terraform states on a per-account basis. Just keep each state file under its own key prefix ("sub-directory") to avoid accidentally writing over the top of one another.

The S3 Bucket backend in its simplest form is configured like so:

terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "path/to/my/key"
    region = "us-east-1"
  }
}
This is pulled directly from the Terraform documentation.


Although locking isn't strictly required, it's so simple and cheap to set up that not doing so would be a weird choice to make.

DynamoDB is a NoSQL database that can be set up in minutes, has no infrastructure to manage, and will very likely cost you nothing, or something very close to nothing, if all you're using it for is Terraform state locking.

When creating a DynamoDB Table for use by Terraform, all you have to do is give the Table a primary key called LockID of type string. Done. Then you just add it to your S3 backend configuration (seen above):

terraform {
  backend "s3" {
    bucket         = "mybucket"
    key            = "path/to/my/key"
    region         = "us-east-1"
    dynamodb_table = "mytablename"
  }
}
Terraform Configuration

The terraform{} block is used to configure Terraform itself. The provider{} block (which can be used multiple times) doesn't configure Terraform; instead it instructs Terraform to fetch a provider we're going to need for certain resources and data sources. One of the things the terraform{} block lets us do is configure the remote backend, and in this case we're telling it to use S3.

With eight lines of code, an S3 Bucket and a DynamoDB Table we've completely abolished the messy scenario we witnessed earlier. But don't just take my word for it; let's take a look at this configuration in action.


Given our HCL code above and our master branch, which is now fixed and contains both subnets, let's make a slight change by adding in a terraform{} configuration block:

terraform {
  backend "s3" {
    bucket         = "generic-terraform-states"
    key            = "state-storage-article/terraform.tfstate"
    region         = "ap-southeast-2"
    dynamodb_table = "generic-terraform-locks"
  }
}
What we've added here is the terraform{} block and configured it with an S3 backend{}. What we need to do now is reinitialise Terraform and have it use the new remote backend:

$ terraform init

Initializing the backend...
Do you want to copy existing state to the new backend?
  Pre-existing state was found while migrating the previous "local" backend to the
  newly configured "s3" backend. No existing state was found in the newly
  configured "s3" backend. Do you want to copy this state to the new "s3"
  backend? Enter "yes" to copy and "no" to start with an empty state.

  Enter a value: yes

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

And all that's left to do now is remove the terraform.tfstate file from the Git index, because we don't want to store it in Git anymore:

$ git rm --cached terraform.tfstate

We'll also set up a .gitignore file to ignore the state file in future, and to prevent the .terraform directory from being added to the Git repository too (because it can get really big, really quickly!):

$ cat .gitignore
terraform.tfstate
terraform.tfstate.backup
.terraform/


The .terraform directory is where the AWS provider binary is stored, amongst other things. The AWS provider alone is over 100MB on Linux. Don't push that to Git!

And finally we'll push our work up to Git to finalise the repository for future work.

So with these new lines in place, what happens now? Let's replay the events from earlier and see if we can get two engineers to step on each other's toes.

I'm going to add another subnet in Availability Zone B (ap-southeast-2b) and my colleague is going to add an AWS EC2 Instance. We're both going to work on the master branch at this point, as it really doesn't matter which branch we're using.

resource "aws_subnet" "az-b" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.2.0/24" # illustrative value; the original was elided
  availability_zone = "ap-southeast-2b"
}

And my colleague's EC2 Instance (and data for fetching a Ubuntu AMI ID):

data "aws_ami" "ubuntu" {
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"
}

Just like we did last time we're both going to terraform apply our changes and cause a potential, nasty race condition. This time though we get a different result. I got this:

Terraform will perform the following actions:

  # aws_subnet.az-b will be created
  + resource "aws_subnet" "az-b" {
      + arn                             = (known after apply)
      + assign_ipv6_address_on_creation = false
      + availability_zone               = "ap-southeast-2b"
      + availability_zone_id            = (known after apply)
      + cidr_block                      = ""
      + id                              = (known after apply)
      + ipv6_cidr_block                 = (known after apply)
      + ipv6_cidr_block_association_id  = (known after apply)
      + map_public_ip_on_launch         = false
      + owner_id                        = (known after apply)
      + vpc_id                          = "vpc-0d65b1484fc5c502f"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Great! My colleague got this:

Error: Error locking state: Error acquiring the state lock: ConditionalCheckFailedException: The conditional request failed
    status code: 400, request id: S6E2OE2CP2OP3FB3VROTI06VVFVV4KQNSO5AEMVJF66Q9ASUAAJG
Lock Info:
  ID:        ea065726-bc59-ba42-3421-9f0ade71c075
  Path:      generic-terraform-states/state-storage-article/terraform.tfstate
  Operation: OperationTypeApply
  Who:       michaelc@workstation
  Version:   0.12.23
  Created:   2020-03-24 08:00:13.9892716 +0000 UTC

Terraform acquires a state lock to protect the state from being written
by multiple users at the same time. Please resolve the issue above and try
again. For most commands, you can disable locking with the "-lock=false"
flag, but this is not recommended.

Perfect. Now we know what happens when we use a DynamoDB Table to provide state locking.

After my changes have been completed my colleague can then retry their apply call and provided no one else is doing any work on the state/code their execution should go through. Now we've removed race conditions and can be certain things will be implemented in a safe manner.
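If an apply ever crashes and leaves a stale lock behind, Terraform can remove it with the force-unlock command, using the lock ID from the error output above. Only do this once you're certain no other run is genuinely in progress:

```shell
$ terraform force-unlock ea065726-bc59-ba42-3421-9f0ade71c075
```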


It's very clear at this point that using an S3 Bucket and a DynamoDB Table is not only extremely easy but also very much worthwhile. They're cheap and secure, too.

There are other options for storing your state file remotely, and providing locking, but I've found S3 and DynamoDB to be the quickest and easiest to manage at any scale.

Last update: March 24, 2020