Terraform plan output to JSON

The Terraform CLI doesn't currently output the plan to a human-readable file when running the plan command. It prints to the console in a readable format, at least within Azure DevOps, but the tfplan file it outputs is not readable. This can be very unhelpful within a deployment pipeline when you save the output file to be processed with scripts.

The use case I had was to read the output to understand if there were any outstanding changes, which would then determine what actions to take. I have seen some examples where people read the CLI output and parse that to find the information, but this didn't seem standard or best practice.

The Terraform CLI does have a command, 'terraform show', that can read a plan file, and with the '-json' flag it prints it as JSON output.

Details on the command can be found here > https://www.terraform.io/docs/cli/commands/show.html

You can then also read the JSON Schema here > https://www.terraform.io/docs/internals/json-format.html

This command will print the plan file as JSON, which you could process directly, but I also wanted it downloaded so I needed it as a file. You can push this to a file by appending ' > outputfile.json' to the command so it looks as per below:

terraform show -no-color -json output.tfplan > output.json

One very annoying part of this is that it still needs a connection to the state file the plan was made from. Therefore, even though we have the plan file locally and just want to read it, we still need to connect to the remote state. This makes it hard for testing, as I can download the tfplan from the pipeline, but then need to make sure I have the connection details for the state file in, for example, Azure Blob Storage.

Below is the PowerShell code I used to read the outstanding changes from the plan file. This reads the Resource Changes and checks each action to see what it is trying to do. This can then be used in other methods to create pipeline variables and add conditions to the next steps.

# Read the JSON plan and pull out the resource_changes collection
$planObj = Get-Content "output.json" | ConvertFrom-Json
$resourceChanges = $planObj.resource_changes

# Count each action type; wrap in @() so .Count works even with zero or one match
$add = @($resourceChanges | Where-Object {$_.change.actions -contains "create"}).Count
$change = @($resourceChanges | Where-Object {$_.change.actions -contains "update"}).Count
$remove = @($resourceChanges | Where-Object {$_.change.actions -contains "delete"}).Count
$totalChanges = $add + $change + $remove

Write-Host "There are $totalChanges changes ($add to add, $change to change, $remove to remove)"


DotNet User Secrets Feature

A little-known feature of dotnet is User Secrets. It is used to create local App Settings overrides for local usage. This can be a handy and powerful tool for keeping your local setup out of the code you check in.

You can find more details on the Microsoft Documentation here https://docs.microsoft.com/en-us/aspnet/core/security/app-secrets

The goal of this feature is to override your App Settings with local values. For example, if you have a connection string within your JSON, you are not going to want your local username/password stored in there and checked in. You also don't want to be pulling other people's settings down and having to keep changing them. Therefore, with this feature you set your values on your local machine in a different file and they get overridden at runtime, not overwritten on disk, so they will work for you but never be checked in.

Within your project location you can run the 'dotnet user-secrets init' command, which will create a new node in the project file. This is called 'UserSecretsId' and contains the ID for this project. When the application runs, this ID is used to match up with where the secrets are stored.
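For illustration, a minimal sketch of what this looks like; the GUID shown here is made up and yours will differ:

# Run from the directory containing the .csproj
dotnet user-secrets init

# The project file then gains a node along these lines:
# <PropertyGroup>
#   <UserSecretsId>79a3edd0-2092-40a2-a04d-dcb46d5ca9ed</UserSecretsId>
# </PropertyGroup>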

The secrets are stored in the folder 'C:/Users/[UserName]/AppData/Roaming/Microsoft/UserSecrets', in a directory with the ID as its name. Within that folder there is a file called 'secrets.json' where you store all the secrets in JSON format. You can get more detail on how to format the name properties of your App Settings in the documentation.
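As a quick sketch, you can either edit that file directly or use the 'dotnet user-secrets set' command from the project directory; the connection string key below is only an example name:

# Writes the value into this project's secrets.json
dotnet user-secrets set "ConnectionStrings:Default" "Server=localhost;User Id=me;Password=local-only"

# secrets.json then contains the flattened key/value:
# {
#   "ConnectionStrings:Default": "Server=localhost;User Id=me;Password=local-only"
# }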

When you run the 'init' command it doesn't create this directory and file for you, so I whipped together a script below to generate the User Secrets ID and also create the required directory/file. Before I talk about that, I will also show how to use User Secrets with Dotnet Core Console Apps.

This could be something I have done wrong, but when I create a Web Application and use the feature it just works with no extra effort. However, when I created a Console App it did not just work out of the box. I found I needed to do a few things to get it working, which Stafford Williams talks about here.

One part he missed was, when using Dependency Injection, telling the configuration builder where to find the User Secrets ID, as per:

var builder = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
    .AddJsonFile(envJson, optional: false, reloadOnChange: true)
    .AddEnvironmentVariables()
    .AddUserSecrets<Program>();

Create User Secrets

The code below accepts either a single project path or a directory containing many projects. It will find the project files and check whether they have the User Secrets ID in the project file.

If they don't, it will go to the project directory, run the 'init' command and then read the ID back.

From there it can check/create the folders and files for the User Secrets.

Param (
    [string]$projectPath
)
$projects = $null
$filesCount = 0
if ($projectPath.EndsWith('.csproj')) {
    $projects = Get-ChildItem -Path $projectPath
    $filesCount = 1
}
else {
    
    # Ensure the directory path ends with a separator before appending the search pattern
    if ($projectPath.EndsWith('/') -eq $false -and $projectPath.EndsWith('\') -eq $false) {
        $projectPath += "/";
    }
    $projects = Get-ChildItem -Path "$projectPath*.csproj" -Recurse -Force
    # Wrap in @() so a single match gives a count of 1 rather than the file size
    $filesCount = @($projects).Count
}
Write-Host("Files Found $filesCount")
if ($filesCount -gt 0) {
    $userSecretsPath = "$ENV:UserProfile/AppData/Roaming/Microsoft/UserSecrets"
    if (!(Test-Path $userSecretsPath)) { 
        Write-Host("Create User Secrets Path")
        New-Item -ItemType directory -Path $userSecretsPath
    }
    $currentDir = [System.IO.Path]::GetDirectoryName($myInvocation.MyCommand.Definition)
    foreach ($project in $projects) {
        Write-Host(" ")
        Write-Host("Current Project $project")
        [xml]$fileContents = Get-Content -Path $project
        if ($null -eq $fileContents.Project.PropertyGroup.UserSecretsId) { 
            Write-Host("User Secret ID node not found in project file")
            Set-Location $project.DirectoryName
            dotnet user-secrets init
            Set-Location $currentDir
            Write-Host("User Secret Create")
            [xml]$fileContents = Get-Content -Path $project
        }
        $userSecretId = $fileContents.Project.PropertyGroup.UserSecretsId
        Write-Host("User Secret ID $userSecretId")
        if ($userSecretId -ne ""){
            $userSecretPath = "$userSecretsPath/$userSecretId"
            if (!(Test-Path $userSecretPath)) { 
                New-Item -ItemType directory -Path $userSecretPath
                Write-Host("User Secret ID $userSecretId Path Created")
            }
            $secretFileName = "secrets.json"
            $secretPath = "$userSecretsPath/$userSecretId/$secretFileName"
            if (!(Test-Path $secretPath)) {   
                New-Item -path $userSecretPath -name $secretFileName -type "file" -value "{}"
                Write-Host("User Secret ID $userSecretId secrets file Created")
            }
            Write-Host("User Secrets path $secretPath")
        }
    }
}

Automatic Change Checking on Pipeline Stages

Azure DevOps has the ability to add multiple stage approval styles like human intervention, Azure Monitor checks and schedules. However, there is no built-in ability to evaluate whether a source location has changed before triggering a Stage. This can be done at the Pipeline level, but not at the Stage level, where it can sometimes be helpful. I have a PowerShell solution for how this can be done.

The problem to solve came when I had a Pipeline that built multiple NuGet packages. Without a human intervention approval button on each of the stages that publish a NuGet package, they would publish every time even if there were no changes. This would then cause the version number in the Artifacts to keep increasing for no reason.

Therefore, we wanted a method to automatically check whether there has been a change in a particular source location before allowing it to publish the package. This has been done using PowerShell to get the branch changes through Git. It can get the names of all the changed files, including their full paths, and then we can check what each one contains.

The code below gets the diff files, looping through each of them while checking if it matches the provided path. If it does, then it sets a variable to say there have been changes. I did add an option to break on the first found item, but you could leave it to keep going and log all the files changed. In my case, I just wanted to know whether or not there had been a change.

# Get the files changed between the last two commits
$files = $(git diff HEAD HEAD~ --name-only)
$changedFiles = $files -split ' '
$changedCount = $changedFiles.Length
Write-Host("Total changed $changedCount")
$hasBeenChanged = $false
For ($i = 0; $i -lt $changedCount; $i++) {
    $changedFile = $changedFiles[$i]
    # Check whether the changed file sits under the provided path
    if ($changedFile -like "${{parameters.projectPath}}") {
        Write-Host("CHANGED: $changedFile")
        $hasBeenChanged = $true
        if ('${{parameters.breakOnChange}}' -eq 'true') {
            break
        }
    }
}

You can then set the outcome as an output variable of the task for use later, as per this template below:

parameters:
  - name: 'projectPath'
    type: string
  - name: 'name'
    type: string
  - name: 'breakOnChange'
    type: string
    default: 'true'
steps:
  - task: PowerShell@2
    name: ${{parameters.name}}
    displayName: 'Code change check for ${{parameters.name}}'
    inputs:
      targetType: 'inline'
      script: |
        $files = $(git diff HEAD HEAD~ --name-only)
        $changedFiles = $files -split ' '
        $changedCount = $changedFiles.Length
        Write-Host("Total changed $changedCount")
        $hasBeenChanged = $false
        For ($i = 0; $i -lt $changedCount; $i++) {
            $changedFile = $changedFiles[$i]
            if ($changedFile -like "${{parameters.projectPath}}") {
                Write-Host("CHANGED: $changedFile")
                $hasBeenChanged = $true
                if ('${{parameters.breakOnChange}}' -eq 'true') {
                    break
                }
            }
        }
        if ($hasBeenChanged -eq $true) {
            Write-Host "##vso[task.setvariable variable=IsContainFile;isOutput=true]True"
            break
        }
        else {
            Write-Host "##vso[task.setvariable variable=IsContainFile;isOutput=true]False"
        }

Once you have that output variable, it can be used throughout your pipeline as a condition on Stages, Jobs and Tasks, plus more. Here I use it on a Stage to check before it is run:

- stage: '${{parameters.projectExtension}}Build'
  displayName: 'Nuget Stage'
  condition: eq(variables['${{parameters.name}}.IsContainFile'], 'true')

Auto Purge Azure Container Registry Images

When adding images into the Azure Container Registry you might start getting a backlog of images that need to be cleaned down. The Azure CLI has some features for this, but you might want more…

With the Azure CLI you can use the 'acr' commands, which include the 'purge' command. This can be combined with a tag filter to narrow down which images to remove, plus an 'ago' property to filter on how old the images need to be.

This can then be run to clear down the images in the Registry as per the example below and you can get more information from the GitHub Repository.

acr purge --ago 5d --filter 'myRegistry:.*' --untagged 

This can then be combined with the ACR command to set up an ACR Task. This can be a scheduled task run on the ACR to trigger the command, routinely cleaning down the repository. You can see this in the example below and read more detail in the Microsoft Documentation.

$azureContainerRegistryName=""
$PURGE_CMD="acr purge --ago 5d --filter 'myRegistry:.*' --untagged "
az acr task create --name myRegistry-WeeklyPurgeTask --cmd "$PURGE_CMD" --schedule "0 1 * * Sun" --registry $azureContainerRegistryName --context /dev/null

The purge method is really good, but unless you have good tag management, making sure your running images are tagged differently to the older images, it doesn't work. In my case it would keep clearing out all images, even ones that are in use.

Therefore, I created a PowerShell script to clean the images based on the number of them in each repository. With this we can always be certain there are at least X images left in each repository.

To make this more flexible and reusable, it works at the Azure Container Registry level instead of the repository level. First we set the ACR name and the maximum number of images we want left in each repository. With this we can use the Azure CLI to get a list of all the repositories.

$AcrName = "myAcr"
$maxImages = 5
$repositories = (az acr repository list -n $AcrName --output tsv)
foreach ($repository in $repositories) {

}

With this we can loop through each repository to check its image count and remove the excess images. To do this we use the CLI to get all the image tags in the repository in date/time order descending, so our newer images come first. This means when we loop through them we can keep a counter until we reach the maximum images variable set earlier. Only once we reach the set number do we start deleting images.

To delete we can call the CLI action ‘az acr repository delete’, which requires the full name of the image, including the repository name.

Below is the full PowerShell example:

$AcrName = "myAcr"
$maxImages = 5
$repositories = (az acr repository list -n $AcrName --output tsv)
foreach ($repository in $repositories) {
        
    Write-Host("repo: $repository")
    $images = az acr repository show-tags -n $AcrName --repository $repository --orderby time_desc  --output tsv
    Write-Host("image: $images")
            
    # Keep the newest $maxImages tags; delete anything older
    $imageCount = 0
    foreach ($image in $images) {
        if ($imageCount -ge $maxImages) {
            $imageFullName = "$repository`:$image"
            Write-Host("image: $imageFullName")
            az acr repository delete -n $AcrName --image $imageFullName
        }
        $imageCount++
    }
}

You could then put this code into a variable like the first example and use it in a scheduled ACR Task, or just create an automated schedule with other technology like Azure DevOps Pipelines, where you can also keep it in source control.

Setup Hyper-V Guest for SSH without IP Address

When setting up Hyper-V guest hosts, I found it a little tricky and hard to find documentation on how to easily set them up, so I thought I would share how I got them into a working configuration with the simplest process. With this setup you can also SSH into the guest host even if it does not have an IP address exposed on the guest network adaptor.

To make things even simpler I am using the pre-selected OS versions from the Hyper-V Quick Create options, but the steps should also work on other versions.

Linux Virtual Machine

In the steps below you will create a Linux Virtual Machine (VM) with the version 'Ubuntu 18.04.3 LTS'.

  1. Install and Open Hyper-V.
  2. Click Quick Create from the menu on the right.
  3. Select ‘Ubuntu 18.04.3 LTS‘ from the menu and create it.
  4. Follow all the details from the wizard as requested with your chosen details.
  5. Once completed, start and login to your machine.
  6. Open the Terminal within the VM.
  7. Run the following commands
    1. Update installs
      sudo apt-get update
    2. Install open ssh server
      sudo apt-get install openssh-server
    3. Install linux azure
      sudo apt-get install linux-azure
    4. Start the services by running the commands below, replacing SERVICE-NAME with each of sshd, ssh and hv-kvp-daemon.service
      sudo systemctl start SERVICE-NAME
      sudo systemctl status SERVICE-NAME
    5. Allow SSH through the firewall
      sudo ufw allow ssh

Windows Virtual Machine

In the steps below you will create a Windows Virtual Machine (VM) with the version 'Windows 10 dev environment'.

  1. Install and Open Hyper-V
  2. Click Quick Create from the menu on the right
  3. Select Windows 10 dev environment from the menu and create it
  4. Follow all the details from the wizard as requested.
  5. Once completed start and login to your machine
  6. Run these commands
    1. Install Open SSH
      Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0

SSH Keys

If you would like to log in to your Virtual Machine over SSH then you will need to install the SSH keys.

You can find out how to generate keys and what keys you need from the SSH website. (https://www.ssh.com/ssh/keygen/)
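For example, a minimal sketch using the standard OpenSSH tooling that ships with both Windows and Ubuntu (the key type and size are just common defaults; see the linked page for what suits your setup):

# Generates a key pair; the .pub file is the public key you copy to the virtual machine
ssh-keygen -t rsa -b 4096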

Here is some more information on where to store the Public Keys once generated.

Public Key Store

On Linux, you can store them in the user's home directory in .ssh/authorized_keys, for example /home/USERNAME/.ssh/authorized_keys

On Windows, unlike Linux, there are one of two places you will need to add the keys. If you are an admin, add them to C:\ProgramData\ssh\administrators_authorized_keys. If you are not an admin, add them to C:\Users\USERNAME\.ssh\authorized_keys
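As a rough sketch, assuming an admin user and that your public key sits in the default location, you can append it from PowerShell on the Windows guest like this:

# Append the public key to the admin authorized keys file
Add-Content -Path "C:\ProgramData\ssh\administrators_authorized_keys" -Value (Get-Content "$env:USERPROFILE\.ssh\id_rsa.pub")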

Check If Admin
  1. Run lusrmgr.msc
  2. Select Groups
  3. Select the Administrators group
  4. Check if you are in the group.

Once these tasks are completed you should be able to SSH into your Virtual Machines via the Hyper-V Console (HVC).
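As an illustration only (the exact commands are covered in the post linked below), connecting from the host with HVC looks something like this, where 'MyVm' is the name of the guest in Hyper-V Manager:

# SSH to the guest over the Hyper-V socket, no IP address required
hvc.exe ssh user@MyVm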

I have written about how to use this in a previous post ‘SSH to Hyper-V Virtual Machine using SSH.NET without IP Address‘. Although this targets SSH.NET, you can use the commands from it to SSH from the terminal.