Terraform Code Quality

Terraform is like any other coding language: there should be code quality and pride in what you produce. In this post I would like to describe why you should care about code quality and detail some of the practices you should apply consistently every time you produce code.

Code quality is like when you are learning to drive: you don’t indicate for yourself, you indicate for others, so they know what you are doing and where you are going. It is the same with your code; it should be easy enough to follow that another developer can come along and understand what has been produced. With Infrastructure as Code, this quality extends beyond your code: the resources it produces gain a continuity and consistency that administrators of the product can understand as well.

This isn’t something you should only do for production-ready work; you should apply code quality to your proofs of concept and even your learning projects. It builds habits and routines, so they become second nature.

What follows are some formatting and organisation practices I employ when writing Terraform code, which you may find beneficial to adopt.


File Names

Although Terraform lets you call the files anything you would like, I feel you should have a structure and a pattern to how you name them. It gives every reader an understanding of where to go to find what they need, without having to hunt through the files. Below are the standard files I always use, which ensure you always have a base level of consistency. Beyond these files, it generally comes down to your company or team standards as to what files you create.


main.tf

The ‘main.tf’ file is a standard file name that even HashiCorp uses in its examples, so it is a great starting point for your code journey. With this file as the starting point, all other developers will naturally go to it first to see where the resources start. I do not put everything in this file, like you might do for a smaller project; instead it normally contains templated ‘local’ variables. These can be things like resource prefixes, environment name manipulation, or variables converted into booleans. I might also have some shared data resources and even a resource group (if we are talking Azure, where all the resources tend to live in a resource group): basically all the artifacts that will be utilised across the other files and that provide the reader the base information.
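As a sketch of the kind of templated locals I mean (the variable names here are hypothetical):

```hcl
locals {
  # Shorten the environment name for resource naming, e.g. "development" -> "dev"
  env_short = substr(var.environment, 0, 3)

  # Convert a string variable into a boolean for use in conditional logic
  is_production = lower(var.environment) == "production"
}
```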


providers.tf

The ‘providers.tf’ file is what it says on the tin: it contains all the providers, their versioning, and their customised features. Providers should only ever be declared once, here in this file, so that the versioning can flow downstream and not cause dependency issues with other providers.
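A typical ‘providers.tf’ for the azurerm provider might look like this (the version constraints are only examples):

```hcl
terraform {
  required_version = ">= 1.3"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  # Customised provider features live here too
  features {}
}
```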


variables.tf

The ‘variables.tf’ file should only contain the variables and their attributes, with no local variables or modules within it. This keeps it clean, with a single purpose for the file.
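So a ‘variables.tf’ stays limited to blocks like these (the names are illustrative):

```hcl
variable "env" {
  type        = string
  description = "Short environment name, e.g. dev, test, prod."
}

variable "location" {
  type        = string
  description = "Azure region to deploy into."
  default     = "uksouth"
}
```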


output.tf

There might be an ‘output.tf’ file for the resource properties that you would like to output, but you should only output data, even if it is not sensitive, if you have to. The less information you output, the more secure the resources are, so you can consider this file optional.
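When you do need an output, keep it minimal and mark anything sensitive so it is hidden from the plan and apply output (the resource reference here is illustrative):

```hcl
output "storage_account_name" {
  description = "Name of the exports storage account."
  value       = azurerm_storage_account.exports.name
}

output "storage_primary_key" {
  description = "Primary access key; redacted in CLI output."
  value       = azurerm_storage_account.exports.primary_access_key
  sensitive   = true
}
```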


tfvars files

I like to place these within a folder called ‘environments’ (more on that below) and name each file after its environment, for example ‘dev.tfvars’. You can then also have a ‘local.tfvars’ file for local testing and working.
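For example, an ‘environments/dev.tfvars’ might just pin the per-environment values (these values are illustrative):

```hcl
env      = "dev"
location = "uksouth"
```

You then pass the file in on the command line, for example `terraform plan -var-file=environments/dev.tfvars`.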


override.tf

The ‘override.tf’ file is something you will exclude from your Git repository (via .gitignore) to avoid checking in sensitive data. It can be where you configure your remote state for plan testing, without needing to add values to the checked-in files or pass them via the CLI.
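As a sketch, an ‘override.tf’ could hold the backend details you don’t want checked in (all the values here are placeholders):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "my-state-rg"
    storage_account_name = "mystatesa"
    container_name       = "tfstate"
    key                  = "dev.terraform.tfstate"
  }
}
```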

Folders and File Patterns

This tends to be driven by the size of your project, your company, and the restrictions within that company. For larger companies with many independent projects, a standard approach is to keep a collection of shared modules in their own repository. This produces flexible, configurable modules that maintain a certain standard across the company. You should ensure a module has a big enough purpose, though, or you could end up creating a lot of modules for little impact. An example I would give is a database: you would create the server, users, security, networking, and possibly the databases themselves, so having a module for all of this makes sense.

For smaller companies or projects, creating separate module repositories might not make sense, requiring too much effort to maintain for minimal impact. For example, in a small single-product company it would mean one change to a database causes changes across multiple repositories. However, you can keep to this maintainable and flexible pattern by making your modules local. For this I suggest having a parent folder called ‘modules’; then, if you have multiple providers like Azure and AWS, create a folder for each of them. If you don’t have multiple providers like this, just keep everything at the modules folder level. Within this, add a folder for each module, named relative to its purpose and containing the Terraform files as per above.


> modules
>> azure
>>> postgresSql
>>>> main.tf
>>>> output.tf
>>>> variables.tf
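A local module laid out like this would then be consumed from the root with a relative source path (the module inputs here are hypothetical):

```hcl
module "postgres" {
  source = "./modules/azure/postgresSql"

  server_name = "${local.resource_prefix}-psql"
  environment = var.env
}
```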

Some people prefer not to use modules and instead keep everything flat within the root directory. This is not a bad thing, but you still need to give the reader a journey, to make finding resources easy and to avoid very large Terraform files. The consistent thing to do is to split your resources into multiple files, so each file has its own purpose. How far you split them down depends on how much content is within the files. For example, if you just have two storage accounts being created, you might keep them in one file; but if you have a Virtual Network and then multiple Subnets, you might want to split them into more files.

The standard would be to just drop these into the root and name them after their resource, but I feel this gives no order to the files, as they would be sorted alphabetically, leaving the files in effectively random order. To combat this, I have seen people prefix the file names with numbers, for example ‘00-providers.tf’, ‘01-network.tf’, ‘02-storage.tf’, so they create an order.



One challenge with this is that if you want to add a file in between 00 and 01, you need to rename all the following files, which causes a lot of work and pain.

My preferred approach is a pattern that merges both ideas together: prefix all resource files with ‘tf’ so all the resource files sit in one group together, then follow it with an acronym of the resource, and ‘main’ for the root file of the resource, for example ‘tf-kv-main.tf’ for a Key Vault. If I would then like to add another file for certificate generation, I would call it ‘tf-kv-cert.tf’. This keeps all the resource files together, keeps each related resource’s files together, and gives some indication of what each file does.




Variables

I feel variables sometimes get overlooked, because the person writing the code knows what they are and what they are for, and sees that Terraform will handle a lot for them. But what you want to ensure is that when someone else comes along to look at your variables, they’ll actually be able to make sense of them.

Naming is key, and variables should have a descriptive name: if you see one throughout the files, you know what it is and its purpose. They should be lowercase, use underscores, and follow a pattern. I prefer to prefix the name with the resource type and then the variable name; for example, a Storage Account Name would be either ‘storage_account_name’ or, to make it more compact, ‘sa_name’.

Resource type is one attribute that gets ignored, as Terraform can and will interpret whatever data you push in, but it is worth declaring so readers know what type it is and how they might be able to expand on it, especially if it is an object or a list of objects. I have seen variables without a type added, and then battled with what values I could pass in and what would work downstream with the different functions used on the data, like count vs for_each.
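For example, declaring a list of objects up front makes it obvious what shape of data the variable accepts and how it can be iterated (the names here are illustrative):

```hcl
variable "subnets" {
  type = list(object({
    name           = string
    address_prefix = string
  }))
  description = "Subnets to create inside the virtual network."
  default     = []
}
```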

Descriptions don’t need to be War and Peace, but they give easy, human-readable text about what the variable is for, plus you can add helpful information such as its limited values. This can be even more impactful if used with something like terraform-docs, which will use these descriptions to produce a README file for your project.

Conditions take some work, but they can also make life easier later down the line, as they make sure users do not put in values that are not going to work, or that you don’t want them to use. A great example of this is the SKU value for Azure resources. Not only might you want to restrict the string to certain values that match valid SKUs, but also restrict which of those SKUs can be used. This validates that the user only uses the SKUs you want, without them having to keep attempting with failures, or creating resources only for an admin to tell them to rebuild.

variable "mysql_sku_name" {
	type        = string
	description = "MySQL Server SKU. Limited values are B_Gen4_1, B_Gen4_2, B_Gen5_1, B_Gen5_2."
	default     = "B_Gen4_1"

	validation {
		condition     = contains(["B_Gen4_1", "B_Gen4_2", "B_Gen5_1", "B_Gen5_2"], var.mysql_sku_name)
		error_message = "MySQL Server SKU is limited to B_Gen4_1, B_Gen4_2, B_Gen5_1, B_Gen5_2."
	}
}


Resources

This is about the resources in general within each file. Each file should have a pattern and a flow to how each resource connects.

I always put the local variables at the top, so they are easy to find, and most of the time when you use these, they are setting up data for use in the following resources. Next should be your resources, starting with the parent and working down to the children. For example, you start with your Storage Account and then the Storage Containers within it, so there is a flow: start with the big box and work down to the boxes that fit inside it.
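Following that flow, a file might declare the parent Storage Account first and then the Storage Container that lives inside it (a sketch with illustrative names and values):

```hcl
resource "azurerm_storage_account" "exports" {
  name                     = "cmpdevsa"
  resource_group_name      = azurerm_resource_group.rg.name
  location                 = azurerm_resource_group.rg.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

# Child resource sits below its parent and references it directly
resource "azurerm_storage_container" "exports" {
  name                 = "exports"
  storage_account_name = azurerm_storage_account.exports.name
}
```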

Naming of the Terraform resources and the deployed resources should follow the same rules as the variables: there should be a pattern, with consistency and convention. Terraform resource names should be lowercase, use underscores as breaks, and have purpose. The resource address is already comprised of the provider and resource type, so you should not duplicate these in the custom name. You should give it an alias that describes what it is, or just use an acronym, for example:

An Azure Storage Account for exports I would name ‘exports’, so the full address would be ‘azurerm_storage_account.exports’; then for something like a single Azure Resource Group I would name it ‘rg’, producing ‘azurerm_resource_group.rg’.

You should then have a pattern for the deployed resources as well, so they also have a naming convention to follow. There is no strict rule to this, as it might depend on company policy and resource limitations. In general, I would prefix everything with the project, then the environment, and then the resource type; for example, the CMP project, in the development environment, with a Resource Group would be ‘cmp-dev-rg’. This easily groups the resources and keeps consistency between all of them. However, some resources don’t allow certain characters and have a maximum number of characters, so you need to think about how many characters you use so you don’t hit the limit; some resources might end up looking like ‘cmpdevsa’.

locals {
	resource_prefix         = "cmp-${var.env}"
	resource_prefix_no_dash = replace(local.resource_prefix, "-", "")
}

resource "azurerm_resource_group" "rg" {
	name     = "${local.resource_prefix}-rg"
	location = var.location
}

resource "azurerm_storage_account" "exports" {
	# Storage accounts don't allow dashes, so use the stripped prefix, e.g. "cmpdevsa"
	name = "${local.resource_prefix_no_dash}sa"
	# ...remaining arguments omitted
}

Ending Remarks

These are all guidelines for a well-written Terraform project, and they will vary depending on your setup. The key point is to have consistency, naming conventions, and a journey, to make the project easier to read, write, and develop with.

Lastly, I’d always recommend running the Terraform command ‘fmt’ before checking in code, to keep the style consistent as well.

Published by Chris Pateman - PR Coder

A Digital Technical Lead, constantly learning and sharing the knowledge journey.
