Terraform plan output to JSON

The Terraform CLI currently doesn’t output the plan to a human-readable file when running the plan command. It prints a readable summary to the console, at least within Azure DevOps, but the tfplan file it writes out is not human readable. This can be very unhelpful within a deployment pipeline when you save the output file to be processed with scripts.

My use case was to read the output to understand whether there were any outstanding changes, which would then determine what actions to take. I have seen some examples where people read the CLI output and parse that to find the information, but this didn’t seem standard or best practice.

The Terraform CLI does have a command, ‘terraform show’, that can read a plan file, and with the ‘-json’ flag it prints it as JSON output.

Details on the command can be found here > https://www.terraform.io/docs/cli/commands/show.html

You can then also read the JSON Schema here > https://www.terraform.io/docs/internals/json-format.html

This command will print the plan file as JSON, which you could process directly, but I also wanted it downloaded, so I needed it as a file. You can push this to a file by appending ‘ > outputfile.json’ to the command so it looks as per below:

terraform show -no-color -json output.tfplan > output.json
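For reference, the part of the JSON output we care about looks roughly like this. This is an abridged sketch: the field names follow Terraform’s documented JSON plan representation, and the address values are placeholders.

```json
{
  "resource_changes": [
    {
      "address": "azurerm_resource_group.example",
      "type": "azurerm_resource_group",
      "name": "example",
      "change": {
        "actions": ["create"]
      }
    }
  ]
}
```

Each entry’s change.actions array holds values such as "no-op", "create", "update" or "delete", which is what the script below filters on.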

One very annoying part of this is that it still needs a connection to the state file the plan was made from. Therefore, even though we have the plan file locally and just want to read it, we still need to connect to the remote state. This makes testing harder, as I can download the tfplan from the pipeline, but I then need to make sure I have the connection details for the state file in, for example, Azure Blob Storage.

Below is the PowerShell code I used to read the outstanding changes from the plan file. It reads the resource_changes array and inspects each entry’s actions to see what Terraform intends to do. The counts can then be used to create pipeline variables and drive conditions on the next steps.

$planObj = Get-Content "output.json" | ConvertFrom-Json
$resourceChanges = $planObj.resource_changes

# Wrap each filter in @() so .Count is correct even when zero or one item matches
$add = @($resourceChanges | Where-Object {$_.change.actions -contains "create"}).Count
$change = @($resourceChanges | Where-Object {$_.change.actions -contains "update"}).Count
$remove = @($resourceChanges | Where-Object {$_.change.actions -contains "delete"}).Count
$totalChanges = $add + $change + $remove

Write-Host "There are $totalChanges changes ($add to add, $change to change, $remove to remove)"
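One way to feed these counts into later pipeline steps is an Azure DevOps logging command, which creates a pipeline variable that subsequent steps can read. A sketch in shell (the counts are hard-coded for illustration, and the variable name terraformHasChanges is my own choice):

```shell
# Counts as produced by the script above (hard-coded here for illustration)
add=1
change=0
remove=2
total=$((add + change + remove))

# Azure DevOps picks up this logging command from stdout and creates
# a pipeline variable named terraformHasChanges for later steps
if [ "$total" -gt 0 ]; then
  echo "##vso[task.setvariable variable=terraformHasChanges]true"
else
  echo "##vso[task.setvariable variable=terraformHasChanges]false"
fi
```

A later step can then use a condition such as `eq(variables.terraformHasChanges, 'true')` to decide whether to run.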

Terraform remote backend for cloud and local with Azure DevOps Terraform Task

When working with Terraform, you will do a lot of work and testing locally. For that you do not want to store your state file in remote storage, and instead just store it locally. However, when deploying you don’t want to be converting the configuration at that point, which can get messy when working with Azure DevOps. This is a solution that works for both local development and production deployment with the Azure DevOps Terraform task.

The official Terraform Task in Azure DevOps by Microsoft is https://marketplace.visualstudio.com/items?itemName=ms-devlabs.custom-terraform-tasks

When using this task you configure the cloud provider you will be using as a Backend Service, such as Azure, Amazon Web Services (AWS) or Google Cloud Platform (GCP). These details are used to configure the Backend Service that stores the state file, but the Terraform code still needs to declare the backend.

You can see all the different types here: https://www.terraform.io/docs/backends/types/index.html

For this walkthrough I will use the Azure Resource Manager backend, which uses an Azure Storage Account, as the example, but as mentioned the same approach can be used with any provider.


This would be the standard Terraform configuration you would need for setting up the Backend Service for Azure:

terraform {
  backend "azurerm" {
    resource_group_name  = "StorageAccount-ResourceGroup"
    storage_account_name = "abcd1234"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}

When working locally, you don’t want any of this in your main.tf Terraform file, or else it will either error with no detail or write the state to the Azure Storage Account. Therefore, locally you will not add this.

Instead, during deployment with Azure DevOps Pipelines, we will inject the configuration by inserting a backend.tf file using PowerShell. Within the file we only need a minimal backend block, as the remaining parameters are supplied by the Terraform task.

We will inject just:

terraform {
  backend "azurerm" {
  }
}
Which, as a single-line string, we will need to stringify to:

"terraform { `r`n backend ""azurerm"" {`r`n} `r`n }"

Using a PowerShell task we can then check whether the file already exists and, if not, create it in the same location as the main.tf file. When Terraform runs, it will then process the configuration with a Backend Service, using the Azure details we have provided in the task.

- powershell: |
    $filename = "backend.tf"
    $path = "${{parameters.terraformPath}}"
    $pathandfile = "$path\$filename"
    if ((Test-Path -Path $pathandfile) -eq $false){
        New-Item -Path $path -Name $filename -ItemType "file" -Value "terraform { `r`n backend ""azurerm"" {`r`n} `r`n }"
    }
  failOnStderr: true
  displayName: 'Create Backend Azure'
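For trying the injection out away from a pipeline, the same check-and-create logic can be sketched in plain shell (mktemp stands in here for the module directory that terraformPath would point at):

```shell
# Stand-in for the Terraform module directory the pipeline would use
backend_dir="$(mktemp -d)"
backend_file="$backend_dir/backend.tf"

# Only create the file if it does not already exist,
# so a locally committed backend.tf is never overwritten
if [ ! -f "$backend_file" ]; then
  printf 'terraform {\n  backend "azurerm" {\n  }\n}\n' > "$backend_file"
fi

cat "$backend_file"
```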

- task: TerraformTaskV1@0
  inputs:
      provider: ${{parameters.provider}}
      command: 'init'
      workingDirectory: ${{parameters.terraformPath}}
      backendServiceArm: AzureServiceConnection
      backendAzureRmResourceGroupName: TerraformRg
      backendAzureRmContainerName: TerraformStateContainer
      backendAzureRmKey:  ***
      environmentServiceNameAzureRM:  AzureServiceConnection

With this solution you will be able to work locally with Terraform and also during deployment have a remote Backend Service configured.

I would suggest using the pipeline YAML to put an IF statement around the PowerShell step if using this in a template:

- ${{ if eq(parameters.provider, 'azurerm') }}:
    - powershell: |
        $filename = "backend.tf"
        $path = "${{parameters.terraformPath}}"
        $pathandfile = "$path\$filename"
        if ((Test-Path -Path $pathandfile) -eq $false){
            New-Item -Path $path -Name $filename -ItemType "file" -Value "terraform { `r`n backend ""azurerm"" {`r`n} `r`n }"
        }
      failOnStderr: true

Authenticate Terraform with Azure CLI

Sometimes there are no error messages at all, which is not helpful; other times there are error messages that are great for debugging, which is the best thing ever. Then again, that is only helpful if the error message points you to the correct problem to fix. I stumbled across an issue recently where I could not add a Secret to an Azure Key Vault via Terraform, and the error message did not help at all.

To paint the picture of where I was at: I had used Terraform to create a Resource Group, an Azure Container Instance and an Azure Key Vault. This had all deployed correctly, but the last part was to create a Secret in the Azure Key Vault. However, when doing this I was met with the error below:

Error: Error checking for presence of existing Secret "demo-container-registry-password" (Key Vault "https://demo-kv.vault.azure.net/"): keyvault.BaseClient#GetSecret: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="Forbidden" Message="The user, group or application 'appid=00000000-8ddb-461a-bbee-02f9e1bf7b46;oid=00000000-5015-4074-9780-4907e90957a8;numgroups=1;iss=https://sts.windows.net/00000000-a490-4728-9c9d-1d1446b68e5e/' does not have secrets get permission on key vault 'demo-kv;location=uksouth'. For help resolving this issue, please see https://go.microsoft.com/fwlink/?linkid=2125287" InnerError={"code":"AccessDenied"}

Now you would think this is to do with permissions, but I was logged in with my user, which has Owner permissions. Therefore it couldn’t be permissions; plus, I had just created all these resources in Azure correctly.

After some intense Googling, I found the issue wasn’t whether I was authenticated but how I was authenticated. There is a particular method to authenticating while using the Azure CLI, and my issue was that the subscription I was using was not in my default directory. Therefore, I could not access the secret from the default subscription being used. I am not sure why all the other operations worked fine and this one didn’t, but sometimes you just don’t question the insanity.

Here is the details from Terraform on authenticating with the Azure CLI correctly: https://www.terraform.io/docs/providers/azurerm/guides/azure_cli.html

For a simple overview of what is said there, you can follow these simple steps:

Sign in to the Azure CLI using the ‘az’ command:

az login

Once you are logged in, you can get the subscription details by listing the available subscriptions:

az account list
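The response is a JSON array with one entry per subscription. An abridged example of one entry (values are placeholders; the real output includes further fields such as the tenant ID):

```json
[
  {
    "id": "00000000-0000-0000-0000-000000000000",
    "isDefault": true,
    "name": "My Subscription"
  }
]
```

The "isDefault" flag shows which subscription the CLI, and therefore Terraform, is currently using.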

From the response you can see what you have access to, so copy the Subscription ID from the response and set the subscription context:

az account set --subscription="SUBSCRIPTION_ID"

For example:

az account set --subscription="00000000-0000-0000-0000-000000000000"

After this you should have no issues connecting and executing the Terraform for Azure.