Automatic Change Checking on Pipeline Stages

Azure DevOps can add multiple stage approval styles, like human intervention, Azure Monitor checks and schedules. However, there is no built-in ability to evaluate whether a source path has changed before triggering a Stage. This can be done at the Pipeline level, but not at the Stage level, where it can sometimes be helpful. I have a PowerShell solution for how this can be done.

The problem to solve came when I had a Pipeline that built multiple NuGet Packages. Without a human intervention approval button on each of the stages that publish a NuGet Package, they would publish every time, even if there were no changes. This caused the version number in the Artifacts to keep increasing for no reason.

Therefore, we wanted a method to automatically check whether there has been a change in a particular source location before allowing it to publish the Package. This is done using PowerShell, which gets the requested branch changes through Git. Git can return the names of all the changed files, including their full paths, so we can then check what the change contains.

The code below gets the diffed files, looping through each of them while checking whether it matches the provided path. If it does, it sets a variable to record that there have been changes. I added an option to break on the first found item, but you could leave it to keep going and log all the files changed. In my case, I just wanted to know whether or not there had been a change.

# Get the files changed between the last two commits
$files = $(git diff HEAD HEAD~ --name-only)
$changedFiles = $files -split ' '
$changedCount = $changedFiles.Length
Write-Host("Total changed $changedCount")
$hasBeenChanged = $false
For ($i = 0; $i -lt $changedCount; $i++) {
    $changedFile = $changedFiles[$i]
    if ($changedFile -like "${{parameters.projectPath}}") {
        Write-Host("CHANGED: $changedFile")
        $hasBeenChanged = $true
        if ('${{parameters.breakOnChange}}' -eq 'true') {
            break
        }
    }
}

You can then set the outcome as an output variable of the task for later use, as per the Template below:

parameters:
  - name: 'projectPath'
    type: string
  - name: 'name'
    type: string
  - name: 'breakOnChange'
    type: string
    default: 'true'
steps:
  - task: PowerShell@2
    name: ${{parameters.name}}
    displayName: 'Code change check for ${{parameters.name}}'
    inputs:
      targetType: 'inline'
      script: |
        # Get the files changed between the last two commits
        $files = $(git diff HEAD HEAD~ --name-only)
        $changedFiles = $files -split ' '
        $changedCount = $changedFiles.Length
        Write-Host("Total changed $changedCount")
        $hasBeenChanged = $false
        For ($i = 0; $i -lt $changedCount; $i++) {
            $changedFile = $changedFiles[$i]
            if ($changedFile -like "${{parameters.projectPath}}") {
                Write-Host("CHANGED: $changedFile")
                $hasBeenChanged = $true
                if ('${{parameters.breakOnChange}}' -eq 'true') {
                    break
                }
            }
        }
        # Expose the result as an output variable for use in later stages/jobs
        if ($hasBeenChanged -eq $true) {
            Write-Host "##vso[task.setvariable variable=IsContainFile;isOutput=true]True"
        }
        else {
            Write-Host "##vso[task.setvariable variable=IsContainFile;isOutput=true]False"
        }

Once you have that output variable, it can be used throughout your pipeline as a condition on Stages, Jobs, Tasks and more. Here I use it on a Stage to check before it is run:

- stage: '${{parameters.projectExtension}}Build'
  displayName: 'Nuget Stage'
  condition: eq(variables['${{parameters.name}}.IsContainFile'], 'true')
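For completeness, here is a minimal sketch of how the template might be consumed from a pipeline, assuming it is saved as templates/change-check.yml (a hypothetical path and parameter values):

steps:
  - template: templates/change-check.yml  # assumed file location
    parameters:
      name: 'MyPackageCheck'
      projectPath: '*src/MyPackage/*'
      breakOnChange: 'true'

Note that when reading the output variable from a different stage, you may need the stage dependencies syntax (dependencies.<stage>.outputs[...]) rather than variables[...].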

Auto Purge Azure Container Registry Images

When adding images to the Azure Container Registry, you might start getting a backlog of images that need to be cleaned down. The Azure CLI has some features for this, but you might want more…

With the Azure CLI you can use the ‘acr’ commands, which include the ‘purge’ option. This can be combined with a repository/tag filter to narrow down which images to remove, plus an ‘ago’ property to filter how old the images need to be.

This can then be run to clear down the images in the Registry, as per the example below, and you can get more information from the GitHub Repository.

acr purge --ago 5d --filter 'myRegistry:.*' --untagged 

This can then be merged with the ‘az acr task create’ command to set up an ACR Task. This can be a scheduled task that runs on the ACR to trigger the command, routinely cleaning down the repository. You can see this in the example below and read more detail in the Microsoft Documentation.

$azureContainerRegistryName=""
$PURGE_CMD="acr purge --ago 5d --filter 'myRegistry:.*' --untagged "
az acr task create --name myRegistry-WeeklyPurgeTask --cmd "$PURGE_CMD" --schedule "0 1 * * Sun" --registry $azureContainerRegistryName --context /dev/null

The purge method is really good, but it only works if you have good tag management, making sure your running images are tagged differently from the older images. In my case it would keep clearing out all images, even ones that are in use.

Therefore, I created a PowerShell script to clean the images based on how many of them are in each repository. With this we can always be certain there are at least X images left in the repositories.

To make this more flexible and reusable, it works at the Azure Container Registry level instead of the individual repository level. First we set the ACR name and the maximum number of images we want left in each repository. With this we can use the Azure CLI to get a list of all the repositories.

$AcrName = "myAcr"
$maxImages = 5
$repositories = (az acr repository list -n $AcrName --output tsv)
foreach ($repository in $repositories) {

}

With this we can loop through each repository, check its image count and remove the excess images. To do this we use the CLI to get all the image tags in the repository in date/time order descending, so the newer images come first. This means that as we loop through them we can keep a counter until we reach the maximum-images variable set earlier. Only once we pass that number do we start deleting images.

To delete, we can call the CLI command ‘az acr repository delete’, which requires the full name of the image, including the repository name.

Below is the full PowerShell example:

$AcrName = "myAcr"
$maxImages = 5
$repositories = (az acr repository list -n $AcrName --output tsv)
foreach ($repository in $repositories) {
        
    Write-Host("repo: $repository")
    $images = az acr repository show-tags -n $AcrName --repository $repository --orderby time_desc  --output tsv
    Write-Host("image: $images")
            
    $imageCount = 0
    foreach ($image in $images) {
        if ($imageCount -gt $maxImages) {
            $imageFullName = "$repository`:$image"
            Write-Host("image: $imageFullName")
            az acr repository delete -n $AcrName --image $imageFullName
        }
        $imageCount++
    }
}

You could then put this code into a variable like the first example and use it in a scheduled ACR Task, or create an automated schedule with other technology like Azure DevOps Pipelines, where you can also keep the script in source control.
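As a minimal sketch of that last option, assuming the script above is saved in the repository as clean-acr.ps1 and you have an Azure service connection named my-azure-connection (both hypothetical names), a scheduled pipeline could look like this:

# Hypothetical scheduled pipeline running the cleanup script weekly
trigger: none

schedules:
  - cron: '0 1 * * Sun'
    displayName: 'Weekly ACR image cleanup'
    branches:
      include:
        - main
    always: true

steps:
  - task: AzureCLI@2
    displayName: 'Clean down ACR repositories'
    inputs:
      azureSubscription: 'my-azure-connection'  # assumed service connection name
      scriptType: 'pscore'
      scriptLocation: 'scriptPath'
      scriptPath: 'clean-acr.ps1'  # assumed script location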

Automate Security for Azure Container Registry

From March 2021, Azure is deprecating the Container Settings blade in Azure Web Apps, moving you to the new Deployment Center. This looks very nice, but there is a change that is going to force you to have weaker security. This change requires the Admin Credentials to be enabled, but there is something you can do to stay secure.

Before this change, the best-practice method was to turn Admin Credentials off in your Azure Container Registry (ACR). This is because the admin user is a single account, so you can’t tell different people’s interactions apart while they share it, and, as the name says, it has admin rights, meaning it can do anything.

To make this secure, you would disable the Admin Credentials, and then anything trying to connect to the registry would have a Service Principal set up or a role assigned with the correct access. In the post below I describe some of this, showing how you can set up the ACR credentials in the App Settings so the Azure Web App has access to pull the image. It uses Terraform, but it conveys the basic idea as well.
Use Terraform to connect ACR with Azure Web App
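For context, here is a minimal Terraform sketch of that idea, with hypothetical resource names; the three DOCKER_REGISTRY_SERVER_* App Settings are what give the Web App the credentials to pull the image:

# Hypothetical example: point an Azure Web App at an ACR via App Settings
resource "azurerm_app_service" "app" {
  name                = "my-web-app"                       # hypothetical name
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  app_service_plan_id = azurerm_app_service_plan.plan.id

  site_config {
    linux_fx_version = "DOCKER|myacr.azurecr.io/my-image:latest"
  }

  app_settings = {
    "DOCKER_REGISTRY_SERVER_URL"      = "https://myacr.azurecr.io"
    "DOCKER_REGISTRY_SERVER_USERNAME" = var.acr_username   # e.g. a service principal client ID
    "DOCKER_REGISTRY_SERVER_PASSWORD" = var.acr_password   # e.g. a service principal secret
  }
}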

Now, when you create a new Azure Web App or go to an existing one, you will be presented with an error like this:

[Screenshot: the Docker tab of the Azure Web App creation blade, showing the error:]

“Cannot perform credential operations for <registry> as admin user is disabled. Kindly enable admin user as per docs: https://docs.microsoft.com/en-us/azure/container-registry/container-registry-authentication#admin-account”

This is the message telling you to turn on the Admin Credentials, but what makes it confusing is that the documentation it points you to says:

“The admin account is designed for a single user to access the registry, mainly for testing purposes.”

ref: https://docs.microsoft.com/en-us/azure/container-registry/container-registry-authentication#admin-account

However, it seems we need to play by their conflicting rules, so we have to work with this and make it more secure. Turning on this setting can be insecure, but what we can do is rotate the keys. As you can tell from the UI, you don’t need to enter these credentials yourself, as the authentication is handled behind the scenes.

Therefore, we can regenerate the passwords without affecting the connection between the ACR and the Resource. This is not perfect, but it does mean that if anyone gets your password or uses it, it will expire very quickly, as often as you choose to rotate.

To do this we can use the Azure CLI, whose ‘acr credential’ commands can trigger a password regeneration.

az acr credential renew -n MyRegistry --password-name password
az acr credential renew -n MyRegistry --password-name password2

ref: https://docs.microsoft.com/en-us/cli/azure/acr/credential?view=azure-cli-latest#az_acr_credential_renew

We can then schedule this and tie it to the ACR by using ACR Tasks. These can run ACR commands on a repeating timer, triggering whenever you wish.
ref: https://docs.microsoft.com/en-us/cli/azure/acr/task?view=azure-cli-latest#az_acr_task_create

Unfortunately, the ACR Task ‘acr’ command set doesn’t contain the ‘credential’ command, and if you run the full ‘az’ CLI command inside a task, it says you need to log in.

You could put the commands into a Dockerfile and run them using the Azure CLI image, but this seems overkill. I would suggest using alternatives to run the commands, like setting up an automated Azure DevOps pipeline that runs them in a CLI task.
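For example, a minimal sketch of such a pipeline, assuming a service connection named my-azure-connection (a hypothetical name), could be:

# Hypothetical scheduled pipeline rotating the ACR admin passwords
trigger: none

schedules:
  - cron: '0 2 * * *'          # every night at 02:00
    displayName: 'Nightly ACR credential rotation'
    branches:
      include:
        - main
    always: true

steps:
  - task: AzureCLI@2
    displayName: 'Renew ACR admin credentials'
    inputs:
      azureSubscription: 'my-azure-connection'  # assumed service connection name
      scriptType: 'pscore'
      scriptLocation: 'inlineScript'
      inlineScript: |
        az acr credential renew -n MyRegistry --password-name password
        az acr credential renew -n MyRegistry --password-name password2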

Merge Azure DevOps Pipeline Templates

As mentioned in my previous post about Azure DevOps Local Pipeline Testing, the available method of testing with the Azure DevOps API doesn’t have a feature to merge the YAML Templates. Therefore, I have set out to try and solve this issue.

You can view the full PowerShell script on GitHub (https://github.com/PureRandom/AzureDevOpsYamlMerge-ps).

Please feel free to suggest more efficient coding or other use cases that need to be considered.

Below I will walk through what it currently does as of the date of this post. I have tried to consider most, if not all, of the scenarios that I have come across, but I am sure there are other cases that need to be handled.

To use the script, you simply need to pass it the root location where your YAML is stored and the name of the main YAML file. For example:

$yamlPath = "C:\Users\pateman.workspace\CodeRepos\"
$yamlName = "azurepipeline.yml"
$outputPath = processMainPipeline -pipelineYaml $yamlName -rootPath $yamlPath
Write-Host "Parsed into $outputPath"

The script reads through each line and rebuilds the YAML file. As it reads, if it finds a line that contains the template syntax, processing starts, but only if the template path does not contain the ‘@’ symbol, as that is assumed to be a path in a remote repository.

During processing it extracts the parameters that are being passed to the template. Then, after reading a copy of the template YAML into a variable, it starts reading that file and rebuilding it. It assumes the parameters are set at the top, so it extracts those first. If a parameter has already been set by the main YAML, it does nothing; otherwise it creates the entry, taking the value from the default property.

Once it has all the parameters, it can find and replace them as it goes through the file. Finally, it inserts this now-updated version of the template into the same slot where the template reference was in the main YAML.

This merged file is then saved in the root location, where you can use it with the pipeline testing API.
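To illustrate with a hypothetical example, say the main azurepipeline.yml contains:

steps:
  - template: templates/build.yml   # hypothetical local template
    parameters:
      buildConfiguration: 'Release'

and templates/build.yml contains:

parameters:
  - name: 'buildConfiguration'
    type: string
    default: 'Debug'
steps:
  - script: dotnet build --configuration ${{ parameters.buildConfiguration }}

then the merged output would replace the template reference inline, with the parameter value substituted:

steps:
  - script: dotnet build --configuration Release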

Create Identity in Google Cloud Platform and GSuite

Compared to some other cloud providers, creating an identity via code in GCP is a little fragmented if you’re using GSuite for your identity storage. The Google Cloud Platform holds your users’ identity references and permissions, while the other system, GSuite, holds the users’ authentication security. This can also make the documentation feel a little fragmented and not so easy to follow. Hence this post, sticking them together to show how I used C# .NET Core to create a Google Cloud Platform Identity using their SDK.

This part is standard for any SDK access to GCP: authenticate with a service account. For this you will need to create a Service Account in GCP, which needs to be associated with a project. You can create it against the project where you are deploying or, to keep things separate, like I would recommend, you can create a Service Management Project. This is just a standard project, but you can use it to keep all the SDK activity in one place while the usage activity happens on the other project.

Create a Project

  1. Go to the Manage resources page in the Cloud Console.
  2. On the Select organization drop-down list at the top of the page, select the organization in which you want to create a project. If you are a free trial user, skip this step, as this list does not appear.
  3. Click Create Project.
  4. In the New Project window that appears, enter a project name and select a billing account as applicable. A project name can contain only letters, numbers, single quotes, hyphens, spaces, or exclamation points, and must be between 4 and 30 characters.
  5. Enter the parent organization or folder in the Location box. That resource will be the hierarchical parent of the new project.
  6. When you’re finished entering new project details, click Create.

Reference: https://cloud.google.com/resource-manager/docs/creating-managing-projects#console
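If you prefer to script this step, a minimal gcloud sketch is below; my-service-mgmt-project and 123456789 are hypothetical placeholders for your own project ID and organization ID:

# Create the Service Management Project under your organization
gcloud projects create my-service-mgmt-project --name="Service Management" --organization=123456789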

Create a Service Account

  1. In the Cloud Console, go to the Service accounts page.
  2. Select a project (your new Service Management Project).
  3. Click Create service account.
  4. Enter a service account name to display in the Cloud Console.
    The Cloud Console generates a service account ID based on this name. Edit the ID if necessary. You cannot change the ID later.
  5. Optional: Enter a description of the service account.
  6. If you do not want to set access controls now, click Done to finish creating the service account.
    To set access controls now, click Create and continue to the next step.
  7. Optional: Choose one or more IAM roles to grant to the service account on the project.
  8. When you are done adding roles, click Continue.
  9. Optional: In the Service account users role field, add members that can impersonate the service account.
  10. Optional: In the Service account admins role field, add members that can manage the service account.
  11. Click Done to finish creating the service account.

Reference: https://cloud.google.com/iam/docs/creating-managing-service-accounts#iam-service-accounts-create-console

You could then get more specific with the Identity and Access Management (IAM) permissions, but to keep it simple you would just need to grant the Service Account the ‘Owner’ and ‘Project IAM Admin’ roles on the new Service Management Project. This will give the Service Account access to create the identities; for more detail on the permissions, you can use this link to look them up: https://cloud.google.com/iam/docs/permissions-reference
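As a sketch of those two steps in the gcloud CLI, again with hypothetical project and account names:

# Create the service account in the Service Management Project
gcloud iam service-accounts create identity-sdk --display-name="Identity SDK" --project=my-service-mgmt-project

# Grant the two roles on that project ('Owner' and 'Project IAM Admin')
gcloud projects add-iam-policy-binding my-service-mgmt-project --member="serviceAccount:identity-sdk@my-service-mgmt-project.iam.gserviceaccount.com" --role="roles/owner"
gcloud projects add-iam-policy-binding my-service-mgmt-project --member="serviceAccount:identity-sdk@my-service-mgmt-project.iam.gserviceaccount.com" --role="roles/resourcemanager.projectIamAdmin"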

Next, we need the Service Account to have access to create the identities in GSuite. The steps below set up the Service Account in GCP, ready for access to be granted in the GSuite Admin portal.

  1. Locate the newly-created service account in the table. Under Actions, click the three-dot More menu at the end, then Edit.
  2. In the service account details, expand Show domain-wide delegation, then ensure the Enable G Suite Domain-wide Delegation checkbox is checked.
  3. If you haven’t yet configured your app’s OAuth consent screen, you must do so before you can enable domain-wide delegation. Follow the on-screen instructions to configure the OAuth consent screen, then repeat the above steps and re-check the checkbox.
  4. Click Save to update the service account and return to the table of service accounts. A new column, Domain-wide delegation, can be seen. Click View Client ID to obtain and make a note of the client ID.

Reference: https://developers.google.com/admin-sdk/directory/v1/guides/delegation#create_the_service_account_and_credentials

Now we connect these together by giving the Service Account access in the GSuite Admin portal.

  1. From your Google Workspace domain’s Admin console, go to Main menu > Security > API controls.
  2. In the Domain wide delegation pane, select Manage Domain Wide Delegation.
  3. Click Add new.
  4. In the Client ID field, enter the client ID obtained from the service account creation steps above.
  5. In the OAuth Scopes field, enter a comma-delimited list of the scopes required for your application (for a list of possible scopes, see Authorize requests).
    For example, if you require domain-wide access to Users and Groups enter: https://www.googleapis.com/auth/admin.directory.user, https://www.googleapis.com/auth/admin.directory.group
  6. Click Authorize.

Reference: https://developers.google.com/admin-sdk/directory/v1/guides/delegation#delegate_domain-wide_authority_to_your_service_account

At this point our Service Account has access to the GCP Account/Project and also has the access needed in GSuite to create the identities. Therefore, we can start getting into the code to create these accounts.

To start with the SDK, we need the Service Account’s JSON Key, which you can get by:

  1. In the Cloud Console, go to the Service Accounts page.
  2. Click Select a project, choose a project, and click Open.
  3. Find the row of the service account that you want to create a key for. In that row, click the More button, and then click Create key.
  4. Select a Key type and click Create.

Reference: https://cloud.google.com/iam/docs/creating-managing-service-account-keys#iam-service-account-keys-create-console
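The same can be scripted with the gcloud CLI, reusing the hypothetical names from earlier:

# Download a JSON key for the service account
gcloud iam service-accounts keys create key.json --iam-account=identity-sdk@my-service-mgmt-project.iam.gserviceaccount.com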

Once you have downloaded the JSON file, we can move on to the authentication in C#.

You will need to install the Google.Apis.Auth NuGet package in your project. There are multiple different methods depending on how you are storing your JSON Key, but in my example we inject the JSON string straight into the method that builds the GoogleCredential. The method we need to call is:

GoogleCredential.FromJson(gcpAuthenticationJson);

With gcpAuthenticationJson being the JSON string from the downloaded file. We also need to add the scopes we are requesting access to, which we can chain on as below, with these scopes required:

GoogleCredential.FromJson(gcpAuthenticationJson)
    .CreateScoped(new List<string>
    {
        "https://www.googleapis.com/auth/admin.directory.user",
        "https://www.googleapis.com/auth/admin.directory.group",
        "https://www.googleapis.com/auth/admin.directory.user.security"
    });

Now, although we have given the Service Account all the permissions it requires to do the job, it needs to be executed by a GSuite Admin. We of course cannot have the admin logging in every time, so we need the code to act as the admin. We can do this by adding an additional call to the chain:

GoogleCredential.FromJson(gcpAuthenticationJson)
    .CreateScoped(new List<string>
    {
        "https://www.googleapis.com/auth/admin.directory.user",
        "https://www.googleapis.com/auth/admin.directory.group",
        "https://www.googleapis.com/auth/admin.directory.user.security"
    }).CreateWithUser(adminEmail);

We can of course make this a little more flexible, as it can be reused for other authentications, so this is the method I would recommend:

/// <summary>
/// Get the GCP Credential via the Service Account
/// https://cloud.google.com/docs/authentication/production
/// </summary>
/// <param name="authJson">Downloaded Authentication JSON</param>
/// <param name="apiScopes">Custom API Scopes</param>
/// <param name="adminEmail">User Email Address to Impersonate</param>
/// <returns>GCP Credentials</returns>
public GoogleCredential GetGcpCredential(string authJson, List<string> apiScopes = null, string adminEmail = "")
{
    var googleCredential = GoogleCredential.FromJson(authJson)
        .CreateScoped(apiScopes ?? new List<string>
        {
            "https://www.googleapis.com/auth/cloud-platform"
        });

    if (!string.IsNullOrEmpty(adminEmail))
        googleCredential = googleCredential.CreateWithUser(adminEmail);

    return googleCredential;
}

From this we can then create users with the SDK using this simple bit of code:

// Requires the Google.Apis.Admin.Directory.directory_v1 NuGet package
var directoryService = new DirectoryService(new BaseClientService.Initializer
{
    HttpClientInitializer = GetGcpCredential(authJson, apiScopes, userEmail)
});
try
{
    // Insert the new identity into the GSuite directory
    var request = directoryService.Users.Insert(userData);
    return await request.ExecuteAsync();
}
finally
{
    directoryService.Dispose();
}
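For completeness, here is a hedged sketch of what the userData object could look like, using the Directory API’s User type; the email, name and password values are of course placeholders:

using Google.Apis.Admin.Directory.directory_v1.Data;

// Hypothetical new identity to insert
var userData = new User
{
    PrimaryEmail = "jane.doe@example.com",   // placeholder address in your GSuite domain
    Name = new UserName
    {
        GivenName = "Jane",
        FamilyName = "Doe"
    },
    Password = "a-strong-initial-password",  // placeholder value
    ChangePasswordAtNextLogin = true         // force the user to set their own password
};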