Push Docker Image to ACR without Service Connection in Azure DevOps

If you are like me and use infrastructure as code to deploy your Azure infrastructure, then the Azure DevOps Docker task doesn't work for you. To use that task you need to know what your Azure Container Registry (ACR) is and have it configured as a service connection so you can push your Docker images to it, but if the registry is created dynamically you don't know that yet. Here I show how you can still use Azure DevOps to push your images to a dynamic ACR.

In my case I am using Terraform to create the Container Registry, passing in the name I want it to have. For example, 'prc-acr' will generate an ACR with the full login server name 'prc-acr.azurecr.io', which can then be used later to send the images to the correct registry.
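For context, a minimal sketch of what that Terraform could look like (the variable, resource group, location and SKU below are illustrative, not my exact code):

variable "acr_name" {
  type        = string
  description = "Name for the Azure Container Registry, e.g. prc-acr (note ACR names must be alphanumeric)"
}

resource "azurerm_resource_group" "acr" {
  name     = "${var.acr_name}-rg"   # illustrative resource group name
  location = "uksouth"
}

resource "azurerm_container_registry" "acr" {
  name                = var.acr_name
  resource_group_name = azurerm_resource_group.acr.name
  location            = azurerm_resource_group.acr.location
  sku                 = "Basic"
}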

When using the official Microsoft Docker task, the documentation asks that you have a Service Connection to your ACR. To create that connection you need the registry login server name, username and password, which, unless you keep the registry static, you will not know. Therefore, you can't create the connection to then push your images up. I did read about some potential methods to dynamically create this connection, but then you need to manage those connections so they do not get out of control.

To push the image we only need two things: a connection to Azure and somewhere to push the image. The first we can set up already, as we know the tenant and subscription we will be deploying to; the connection can be created by following this guide to connecting Azure to Azure DevOps. The other part, where to send the image, we covered earlier when we created the ACR in Terraform and called it 'prc-acr'.

With these details we can use the Azure CLI to push the image to the ACR. First you need to log in to the ACR using:

az acr login --name 'prc-acr'

This will connect you to the ACR that was created in Azure. From there you will need to tag your image with the ACR login server name, the registry name and the tag. For example:

docker tag prcImage:latest prc-acr.azurecr.io/prc-registry:latest

This tells Docker where to push the image while you are logged in to the Azure Container Registry, which means from there we simply need to push the image with that tag in the standard Docker way:

docker push prc-acr.azurecr.io/prc-registry:latest

Now this is very easy and simple, as we do not need a connection to the Container Registry, just a connection to the Azure environment. These details can then be used with the Azure CLI task as below, where I am passing in the following parameters.

Parameter Name               Example Value            Description
azureServiceConnection       AzureServiceConnection   Service connection name for Azure
azureContainerRegistryName   prc-acr                  Azure Container Registry name
dockerImage                  prcImage                 Docker image name
tagName                      latest                   Docker tag name
registryName                 prc-registry             ACR registry name
steps:
  - task: AzureCLI@2
    displayName: 'Push Docker Image to ACR'
    inputs:
      azureSubscription: ${{parameters.azureServiceConnection}}
      scriptType: 'ps'
      scriptLocation: 'inlineScript'
      inlineScript: |
        az acr login --name ${{parameters.azureContainerRegistryName}}
        docker tag ${{parameters.dockerImage}}:${{parameters.tagName}} ${{parameters.azureContainerRegistryName}}.azurecr.io/${{parameters.registryName}}:${{parameters.tagName}}
        docker push ${{parameters.azureContainerRegistryName}}.azurecr.io/${{parameters.registryName}}:${{parameters.tagName}}
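If you keep that task in a template file, calling it could look something like the sketch below (the template file name here is hypothetical; the parameter values match the table above):

steps:
- template: Tasks/PushDockerImageToAcr.yml   # hypothetical template file name
  parameters:
    azureServiceConnection: 'AzureServiceConnection'
    azureContainerRegistryName: 'prc-acr'
    dockerImage: 'prcImage'
    tagName: 'latest'
    registryName: 'prc-registry'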

Where to find Azure Tenant ID in Azure Portal?

Some of the documentation about Azure from Microsoft can be confusing or missing, including the answer to a question I get asked a lot: 'Where is the Tenant ID?'. Below I give three places in the portal where you can find the Tenant ID, and I have also added how to get the Tenant ID with the Azure CLI.

The Tenant is basically the Azure AD instance where you store and configure users, apps and other security permissions. It is also referred to as the Directory in some of the menu items and documentation. A Tenant contains only a single Azure AD instance, but it can have many Subscriptions associated with it. You can get further information here: https://docs.microsoft.com/en-us/microsoft-365/enterprise/subscriptions-licenses-accounts-and-tenants-for-microsoft-cloud-offerings?view=o365-worldwide

Azure Portal

Azure Active Directory

If you use the Portal menu once you are signed in, you can select the 'Azure Active Directory' option.

This will load the Overview page with a summary of your Directory, including the Tenant ID.

You can also go to this URL when signed in: https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview


Azure AD App Registrations

When configuring external applications or internal products to talk to Azure, you can use App Registrations, also known as Service Principal accounts. When using the REST API or the Azure SDK you will need the Tenant ID for authentication, and within the registered app you can also find the Tenant ID.
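For example, this is roughly how a registered app (service principal) signs in with the Azure CLI, which is where the Tenant ID comes in (the values are placeholders):

az login --service-principal --username "<app-id>" --password "<client-secret>" --tenant "<tenant-id>"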

When in Azure AD, select 'App registrations' from the side menu, then find or add your App and select it.

From the App Overview page you can then find the Tenant ID, also known here as the Directory ID.

Switch Directory

If you have multiple Tenants then you can switch between the Tenants you have access to by switching Directory.

You can do this by selecting your Avatar/Email from the top right of the Portal, which should open a dropdown with your details. There will be a link called 'Switch directory'; clicking this shows all the directories you have access to, what your default directory is, and lets you switch which one you are on.

As mentioned before, Directory is another word used by Azure for Tenant, so the ID you see in this view is not just the Directory ID but also the Tenant ID.


Azure CLI

From the Azure CLI you can get almost every bit of information that is in the Portal, depending on your permissions.

If you don’t have the CLI then you can install it here: https://docs.microsoft.com/en-us/cli/azure/install-azure-cli

You can sign into the CLI by running:

az login

More information on logging in can be found here: https://docs.microsoft.com/en-us/cli/azure/authenticate-azure-cli

Once you are signed in to the Azure CLI, you can use the command below to get a list of the Subscriptions you have access to, which in turn reports the Tenant ID for each. Remove everything from '--query' onwards to get the full details.

(https://docs.microsoft.com/en-us/cli/azure/account?view=azure-cli-latest#az_account_list)

 az account list --query '[].{TenantId:tenantId}'
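The output is a JSON array with one entry per subscription, along these lines (the value below is a placeholder):

[
  {
    "TenantId": "00000000-0000-0000-0000-000000000000"
  }
]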

You can also get the Tenant ID currently being used to authenticate to Azure by running the command below; again, remove everything from '--query' onwards to get the full information.

(https://docs.microsoft.com/en-us/cli/azure/account?view=azure-cli-latest#az_account_get_access_token)

 az account get-access-token --query tenant --output tsv

Terraform remote backend for cloud and local with Azure DevOps Terraform Task

When working with Terraform, you will do a lot of work/testing locally. At that point you do not want to store your state file in remote storage, you just want to store it locally. However, when you deploy you don't want to be converting the configuration at the last minute, and that can get messy when working with Azure DevOps. This is a solution that works for both local development and production deployment with the Azure DevOps Terraform task.

The official Terraform Task in Azure DevOps by Microsoft is https://marketplace.visualstudio.com/items?itemName=ms-devlabs.custom-terraform-tasks

When using this task you configure the cloud provider you will be using as a Backend service, such as Azure, Amazon Web Services (AWS) or Google Cloud Platform (GCP). These details are used to configure the Backend service that stores the state file, but the Terraform code still needs to declare that backend.

You can see all the different types here: https://www.terraform.io/docs/backends/types/index.html

For this walkthrough I will use the Azure Resource Manager backend, which uses an Azure Storage Account, as the example, but as mentioned this approach works with any provider.

https://www.terraform.io/docs/backends/types/azurerm.html

This would be the standard Terraform configuration you would need for setting up the Backend Service for Azure:
 

terraform {
  backend "azurerm" {
    resource_group_name  = "StorageAccount-ResourceGroup"
    storage_account_name = "abcd1234"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}

When working locally you don't want any of this in your main.tf Terraform file, otherwise Terraform will either error without much detail or push the state to the Azure Storage Account. Therefore, locally you will not add this block.
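Locally the terraform block can simply omit the backend, for example something like this (a sketch assuming Terraform 0.13 or later; the version constraint is illustrative):

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.0"   # illustrative constraint
    }
  }
}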

Instead, during deployment with Azure DevOps Pipelines we will inject the configuration. This will be done by inserting a backend.tf file using PowerShell. Within the file we only inject the backend declaration itself, as we don't need all the parameters because they are supplied by the Terraform task.

We will inject just:

terraform {
  backend "azurerm" {
  }
}

Which, as a single-line string, we will need to stringify to:

"terraform { `r`n backend ""azurerm"" {`r`n} `r`n }"

Using the PowerShell task we can check whether the file already exists and, if not, inject it into the same location as the main.tf file. When Terraform then runs, it processes the configuration with a Backend service, using the Azure details we have provided in the task.

- powershell: |
    $filename = "backend.tf"
    $path = "${{parameters.terraformPath}}"
    $pathandfile = "$path\$filename"
    if ((Test-Path -Path $pathandfile) -eq $false){
        New-Item -Path $path -Name $filename -ItemType "file" -Value "terraform { `r`n backend ""azurerm"" {`r`n} `r`n }"
    }
  failOnStderr: true
  displayName: 'Create Backend Azure'

- task: TerraformTaskV1@0
  inputs:
    provider: ${{parameters.provider}}
    command: 'init'
    workingDirectory: ${{parameters.terraformPath}}
    backendServiceArm: AzureServiceConnection
    backendAzureRmResourceGroupName: TerraformRg
    backendAzureRmStorageAccountName: TerraformStateAccount
    backendAzureRmContainerName: TerraformStateContainer
    backendAzureRmKey: ***
    environmentServiceNameAzureRM: AzureServiceConnection

With this solution you will be able to work locally with Terraform and also have a remote Backend service configured during deployment.

If you are using this in a template, I would suggest wrapping the PowerShell step in an IF statement in the pipeline YAML:

- ${{ if eq(parameters.provider, 'azurerm')  }}:
    - powershell: |
        $filename = "backend.tf"
        $path = "${{parameters.terraformPath}}"
        $pathandfile = "$path\$filename"
        if ((Test-Path -Path $pathandfile) -eq $false){
            New-Item -Path $path -Name $filename -ItemType "file" -Value "terraform { `r`n backend ""azurerm"" {`r`n} `r`n }"
        }
      failOnStderr: true
      displayName: 'Create Backend Azure'

Authenticate Terraform with Azure CLI

Sometimes there are no error messages at all, which is not helpful, and sometimes there are error messages that actually help you debug the issue, which is the best thing ever. Then again, an error message is only helpful if it points you to the correct problem to fix. I stumbled across an issue recently where I could not add a Secret to an Azure Key Vault via Terraform, and the error message did not help at all.

To paint the picture of where I was at: I had used Terraform to create a Resource Group, an Azure Container Instance and an Azure Key Vault. This had all deployed correctly, but the last part was to create a Secret in the Azure Key Vault. However, when doing this I was met with the error below:

Error: Error checking for presence of existing Secret "demo-container-registry-password" (Key Vault "https://demo-kv.vault.azure.net/"): keyvault.BaseClient#GetSecret: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="Forbidden" Message="The user, group or application 'appid=00000000-8ddb-461a-bbee-02f9e1bf7b46;oid=00000000-5015-4074-9780-4907e90957a8;numgroups=1;iss=https://sts.windows.net/00000000-a490-4728-9c9d-1d1446b68e5e/' does not have secrets get permission on key vault 'demo-kv;location=uksouth'. For help resolving this issue, please see https://go.microsoft.com/fwlink/?linkid=2125287" InnerError={"code":"AccessDenied"}
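For reference, the resource that triggered this was roughly the following (the names and references are illustrative, not my exact code):

variable "container_registry_password" {
  type = string
}

resource "azurerm_key_vault_secret" "container_registry_password" {
  name         = "demo-container-registry-password"
  value        = var.container_registry_password
  key_vault_id = azurerm_key_vault.demo.id   # the Key Vault created earlier in the same configuration
}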

Now you would think this is to do with permissions, but I was logged in with my own user, which has Owner permissions. Therefore, it couldn't be permissions, plus I had just created all of these resources in Azure correctly.

After some intense Googling, I found the issue wasn't whether I was authenticated but how I was authenticated. There is a particular method to authenticating while using the Azure CLI, and my issue was that the subscription I was using was not in my default directory. Therefore, I could not access the secret from the default subscription the CLI was using. I am not sure why all the other resources worked fine and this one didn't, but sometimes you just don't question the insanity.

Here are the details from Terraform on authenticating with the Azure CLI correctly: https://www.terraform.io/docs/providers/azurerm/guides/azure_cli.html

For a simple overview of what is said there, you can follow these steps:

Sign in to Azure CLI using the ‘az’ command

az login

Once you are logged in then you can get the subscription details by listing the available subscriptions

az account list

From the response you can see what you have access to, so you can copy the Subscription ID from the response and set the Subscription context.

az account set --subscription="SUBSCRIPTION_ID"

E.g.

az account set --subscription="00000000-0000-0000-0000-000000000000"

After this you should have no issue connecting and executing the Terraform for Azure.
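If you would rather not rely on whichever subscription the CLI currently has selected, the azurerm provider can also pin it explicitly, something like this sketch (the ID is a placeholder):

provider "azurerm" {
  features {}

  # Pin the subscription instead of relying on the CLI's current context
  subscription_id = "00000000-0000-0000-0000-000000000000"
}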

Azure DevOps Pipeline Templates and External Repositories

Working with Azure DevOps you can use YAML to create the build and deployment pipelines. To make this easier and more repeatable you can also use something called templates. However, if you want to use them across multiple repositories you don't want to repeat yourself, and there is a method to share them, as I will demo below.

When I lay out my folders for holding the YAML files, I like to mirror how pipelines were built in the UI editor on the Azure DevOps website: Tasks, like the DotNet CLI task, and Group Tasks, which are collections of Tasks that complete a job, like building a .NET Core application.

DevOps
  Tasks
    _DotNetCoreCLI.yml
  GroupTasks
    BuildDotnetApp.yml

In this structure the 'BuildDotnetApp.yml' group task calls the '_DotNetCoreCLI.yml' task template, and other group tasks can call it as well. This makes the tasks more reusable and dynamic, plus easier to upgrade if you need to change a task version or add a new parameter.

This would be the .NET Core CLI task template:

parameters:
  displayName: 'DotNetCoreCLI'
  projects: ''
  arguments: ''
  command: build
  customScript: ''
  continueOnError: false

steps:
- task: DotNetCoreCLI@2
  displayName: ${{parameters.displayName}}
  continueOnError: ${{parameters.continueOnError}}
  inputs:
    publishWebProjects: false
    command: ${{parameters.command}}
    projects: ${{parameters.projects}}
    arguments: ${{parameters.arguments}}
    zipAfterPublish: false
    custom: ${{parameters.customScript}}

And it can then be called as below. Remember that the folder path is relative to the file that is calling the template.

steps:
- template: ../Tasks/_DotNetCoreCLI.yml
  parameters:
    displayName: 'Restore .NetCore Projects'
    projects: '**/MicroServices/**/*.API.csproj'
    arguments: '--packages $(Build.SourcesDirectory)\packages'
    command: restore

- template: ../Tasks/_DotNetCoreCLI.yml
  parameters:
    displayName: 'Build .NetCore Projects'
    projects: '**/*.csproj'
    arguments: '--configuration $(BuildConfiguration) --output $(Build.SourcesDirectory)\bin\$(BuildConfiguration)'
    command: build

You can read more on using templates in the Azure DevOps Documentation.
https://docs.microsoft.com/en-us/azure/devops/pipelines/process/templates?view=azure-devops

Now that we have these great reusable templates, we don't want them sitting in multiple repositories and being maintained in multiple places.

The idea here would be to move these files into a single repository, for example 'deployment-files', which will contain all the template files to be referenced later.

The first thing we need to do is reference this new repository in the application's pipeline file. Below is a standard Azure Pipelines file for building the dotnet application. It has an array of stages, with the first stage being the CI Build, containing a single job that uses the default agent pool.

stages:
  - stage: 'CIBuild'
    displayName: 'CI  Service'
    jobs:
      - job: CI_Service
        displayName: CI Service
        continueOnError: false
        pool:
          displayName: "CI Service"
          name: Default
        workspace:
          clean: all
        timeoutInMinutes: 120
        cancelTimeoutInMinutes: 2
        steps:

To add a reference to another repository you will need to add the following to the top of the file.

This reference will have an alias name, the type of repository, the repository name and the git branch reference, as below.

resources:
  repositories:
    - repository: DeploymentTemplates #alias name
      type: git #type of repository
      name: deployment-files #repository name
      ref: 'refs/heads/main' #git branch reference

This makes a reference to another Azure DevOps repository in the same organisation, which will work for some setups, but others might have the templates in different repositories or with different vendors like GitHub. The alternative to the method above is to get the files from a Pipeline Artifact after a build, which you can also do by following the instructions in this documentation: https://docs.microsoft.com/en-us/azure/devops/pipelines/process/resources?view=azure-devops&tabs=schema

With this reference you have access to the repository, but as far as I could tell it doesn't do a git pull of the repository's contents. This might just be for repositories in the same system like Azure DevOps, but it does keep things simple as you're not downloading more resources when running the pipeline.
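Templates are resolved when the pipeline is compiled, so no checkout is needed just to use them; if you ever do need the repository's other files on the agent at runtime, a checkout step against the alias should do it, roughly like this:

steps:
- checkout: self                  # the application repository
- checkout: DeploymentTemplates   # also pulls the deployment-files repository onto the agent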

Now that you have access to the repository, you can call the templates in the same way as you normally would, with one slight change. You need to reference the file path relative to the root of the deployment-files repository, not the current application's repository. You also need to add '@' plus the alias name to the end of the path, so the pipeline knows which repository to get the file from. For our example it would look like this.

steps:
- template: DevOps/Tasks/_DotNetCoreCLI.yml@DeploymentTemplates
  parameters:
    displayName: 'Restore .NetCore Projects'
    projects: '**/MicroServices/**/*.API.csproj'
    arguments: '--packages $(Build.SourcesDirectory)\packages'
    command: restore

- template: DevOps/Tasks/_DotNetCoreCLI.yml@DeploymentTemplates
  parameters:
    displayName: 'Build .NetCore Projects'
    projects: '**/*.csproj'
    arguments: '--configuration $(BuildConfiguration) --output $(Build.SourcesDirectory)\bin\$(BuildConfiguration)'
    command: build

Notice I am not using the relative '../Tasks' path, but directly referencing the path 'DevOps/Tasks', and I have added '@DeploymentTemplates' to the end of the path.

Here is the full example.

Deployment Files Repository:
Location = ‘DevOps/Tasks’

parameters:
  displayName: 'DotNetCoreCLI'
  projects: ''
  arguments: ''
  command: build
  customScript: ''
  continueOnError: false

steps:
- task: DotNetCoreCLI@2
  displayName: ${{parameters.displayName}}
  continueOnError: ${{parameters.continueOnError}}
  inputs:
    publishWebProjects: false
    command: ${{parameters.command}}
    projects: ${{parameters.projects}}
    arguments: ${{parameters.arguments}}
    zipAfterPublish: false
    custom: ${{parameters.customScript}}



Application Repository:
Location = ‘azurepipeline.yml’

resources:
  repositories:
    - repository: DeploymentTemplates #alias name
      type: git #type of repository
      name: deployment-files #repository name
      ref: 'refs/heads/main' #git branch reference
stages:
  - stage: 'CIBuild'
    displayName: 'CI  Service'
    jobs:
      - job: CI_Service
        displayName: CI Service
        continueOnError: false
        pool:
          displayName: "CI Service"
          name: Default
        workspace:
          clean: all
        timeoutInMinutes: 120
        cancelTimeoutInMinutes: 2
        steps:
          - template: DevOps/Tasks/_DotNetCoreCLI.yml@DeploymentTemplates
            parameters:
              displayName: 'Restore .NetCore Projects'
              projects: '**/MicroServices/**/*.API.csproj'
              arguments: '--packages $(Build.SourcesDirectory)\packages'
              command: restore

          - template: DevOps/Tasks/_DotNetCoreCLI.yml@DeploymentTemplates
            parameters:
              displayName: 'Build .NetCore Projects'
              projects: '**/*.csproj'
              arguments: '--configuration $(BuildConfiguration) --output $(Build.SourcesDirectory)\bin\$(BuildConfiguration)'
              command: build