Merge Azure DevOps Pipeline Templates

As mentioned in my previous post about Azure DevOps Local Pipeline Testing, the available method of testing with the Azure DevOps API doesn’t have a feature to merge the YAML templates. Therefore, I have set out to try and solve this issue.

You can view the full PowerShell script on GitHub: https://github.com/PureRandom/AzureDevOpsYamlMerge-ps

Please feel free to advise on more efficient coding and suggestions of other use cases that need to be considered.

Below I will walk through what it currently does as of the date of this post. I have tried to consider most, if not all, of the scenarios that I have come across, but I am sure there are other cases that still need to be solved.

To use the script you simply need to pass it the root location of where your YAML is stored and the name of the main YAML file. For Example:

$yamlPath = "C:\Users\pateman.workspace\CodeRepos\"
$yamlName = "azurepipeline.yml"
$outputPath = processMainPipeline -pipelineYaml $yamlName -rootPath $yamlPath
Write-Host "Parsed into $outputPath"

This will read through each line and rebuild the YAML file. As it reads through, if it finds a line that contains the template syntax, then processing starts, but only if the template path does not contain the ‘@’ symbol, as that is assumed to be a path in a remote repository.

In the processing it will extract the parameters that are being passed to the template. It then reads a copy of the template YAML into a variable and starts rebuilding it. First it assumes the parameters are declared at the top, so it extracts those. If a parameter found there has been set by the main YAML, then it does nothing; otherwise it creates the entry and takes the value from the default property.

Once it has all the parameters, it can find and replace them as it goes through the file. Finally, it inserts this now updated version of the template into the same slot where the template reference was in the main YAML.

This is then saved in the root location, where you can use this file in the pipeline testing API.
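The walkthrough above can be sketched roughly in Python (a simplified illustration of the merge logic, not the actual PowerShell script; template parameter defaults and nested templates are left out for brevity, and the file names and YAML content are made up):

```python
import re

def merge_template(main_lines, templates):
    """Splice referenced templates into the main pipeline, replacing
    ${{ parameters.x }} tokens with the values passed at the call site."""
    out, i = [], 0
    while i < len(main_lines):
        line = main_lines[i]
        m = re.match(r"\s*-\s*template:\s*(\S+)", line)
        # Paths containing '@' point at a remote repository, so leave them alone.
        if m and "@" not in m.group(1):
            i += 1
            params = {}
            # Collect the 'parameters:' block passed to the template, if any.
            if i < len(main_lines) and main_lines[i].strip() == "parameters:":
                i += 1
                while i < len(main_lines):
                    pm = re.match(r"\s+(\w+):\s*(.+)", main_lines[i])
                    if not pm:
                        break
                    params[pm.group(1)] = pm.group(2)
                    i += 1
            # Insert the template body with each parameter token substituted.
            for tline in templates[m.group(1)]:
                out.append(re.sub(
                    r"\$\{\{\s*parameters\.(\w+)\s*\}\}",
                    lambda t: params.get(t.group(1), t.group(0)),
                    tline))
        else:
            out.append(line)
            i += 1
    return out
```

In the real script the merged output is written back to a file in the root path; here the template bodies are passed in as an in-memory dictionary to keep the sketch self-contained.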

BadImageFormatException When Running 32/64 Bit Applications in JetBrains Rider

I posted before about the BadImageFormatException error and how it was associated with the processor settings. The fixes suggested were for Visual Studio only, and recently I have started working with JetBrains Rider, where I got the same issue but found the corresponding process to correct it.

If you have Visual Studio and have this issue, then you can read how to correct it in this post: BadImageFormatException When Running 32/64 Bit Applications in Visual Studio

If you are using JetBrains Rider, then you can follow these steps instead.

  1. Open up your .NET Framework project in JetBrains Rider.
  2. Select ‘Edit Configuration’ from the top right menu.
  3. Within the new window that opens, you can then change the IIS Express path. You can see it is currently using the ‘x86’ version, which is 32-bit; update the path to remove this and it will use the 64-bit version.

Azure REST API Scopes

When working with the Azure REST API you need to provide the scope in all API requests, so Azure knows where you are looking. However, throughout their documentation, although they ask for the scope, they do not explain or link to an explanation of what a scope is and what the formats are. Therefore, I have collected them here with a simple explanation for each.

As mentioned above, the scope is like a search filter and also part of the permissions. For example, if you were getting a list of Resources, you might use the Resource Group scope to get only those Resources, or you might go for the Subscription scope to get all Resources in the requested Subscription. This might also be driven by permissions, if the Service Principal account you are using doesn’t have access to the whole Tenant but does to specific Subscriptions.

Scopes

Subscription scope

subscriptions/{subscriptionId}

Example:

subscriptions/d7f90b53-af20-4061-8206-f05e31852a44

Resource Group scope

subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}

Example:

subscriptions/d7f90b53-af20-4061-8206-f05e31852a44/resourceGroups/my-rg-2020

Providers scope

These scopes can vary depending on what the scope is for. For example, this is the scope for the Billing Account:

providers/Microsoft.Billing/billingAccounts/{billingAccountId}

You can find all of the scopes by following https://docs.microsoft.com/en-us/azure/role-based-access-control/resource-provider-operations

Tenant scope

The Tenant scope is the easiest, as you just don’t put anything. For example, getting a list of Role Definitions:

The official URL is:

GET https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleDefinitions?api-version=2015-07-01

But to get the Role Definitions from the Tenant Level and below, you just remove the scope segment:

GET https://management.azure.com/providers/Microsoft.Authorization/roleDefinitions?api-version=2015-07-01
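To make the scope formats concrete, here is a small sketch (Python; the subscription and resource group IDs are the illustrative ones from above, and authentication is omitted) that builds the Role Definitions URL for each scope level:

```python
def role_definitions_url(scope=""):
    """Build the ARM Role Definitions URL for a given scope.
    An empty scope gives the tenant-level request (no scope segment at all)."""
    base = "https://management.azure.com"
    path = f"/{scope}" if scope else ""
    return f"{base}{path}/providers/Microsoft.Authorization/roleDefinitions?api-version=2015-07-01"

subscription_scope = "subscriptions/d7f90b53-af20-4061-8206-f05e31852a44"
resource_group_scope = subscription_scope + "/resourceGroups/my-rg-2020"

print(role_definitions_url(subscription_scope))    # subscription level
print(role_definitions_url(resource_group_scope))  # resource group level
print(role_definitions_url())                      # tenant level
```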

BadImageFormatException When Running 32/64 Bit Applications in Visual Studio

You might be about to run your application when suddenly you get a ‘System.BadImageFormatException’. This is what I got when running a new application that I didn’t build. It took me a little while to figure out what the issue was, but as described below, the culprit was found, along with some other interesting configuration that was required.

The problem I found was that the application was built as a .NET Core console application, but specifically targeting the x64 processor. Now this is not a problem in itself, of course, as you want it to use the best-performing processor mode it can, so you can deliver that performance to your end users. However, it seems the rest of Visual Studio was not ready for this, so running local IIS or running unit tests was causing a System.BadImageFormatException.

When I have built applications before, I have just defaulted to using the ‘AnyCPU’ configuration, which has worked perfectly with no conflicts or issues. What this actually does under the hood, though, is choose the lowest configuration it can depending on the requirements of the application, which in this case was 32-bit.

When an application can run fine either in 32-bit or 64-bit mode, the 32-bit mode tends to be a little faster. Larger pointers means more memory and cache consumption, and the number of bytes of CPU cache available is the same for both 32-bit and 64-bit processes. Of course the WOW layer does add some overhead, but the performance numbers I’ve seen indicate that in most real-world scenarios running in the WOW is faster than running as a native 64-bit process

Ref: Rick Byers – exMSFT June 9, 2009

Fixing This

If you still want to run specifically as 64-bit or 32-bit, then there are a few places this needs to be changed for both IIS Express and unit tests to work.

Build Configuration

  1. Open the Configuration Manager dialog box.
  2. In the Active solution configuration drop-down list, choose New.
    The New Solution Configuration dialog box opens.
  3. In the Name text box, enter a name for the new configuration.
  4. To use the settings from an existing solution configuration, in the Copy settings from drop-down list, choose a configuration.
  5. If you want to create project configurations at the same time, select the Create new project configurations check box.

You can get further instructions from the Microsoft official documentation for other setups and starting points.

IIS Express

IIS Express is what is used when running the application from Visual Studio. Each project has its own setting for this, so if you have multiple projects, then you will need to do the following for each of them.

  1. Right click the required project and select Properties; a new window will open up.
  2. Under the Build tab on the left you can then see the ‘Platform target’ selection.
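The ‘Platform target’ setting in the UI maps to the PlatformTarget property in the project file, so you can also set it there directly (a hand-written fragment, shown here forcing 64-bit):

```xml
<!-- In the .csproj, inside a PropertyGroup -->
<PropertyGroup>
  <PlatformTarget>x64</PlatformTarget>
</PropertyGroup>
```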

Unit Tests

Yes, even the tests can run under a different process, so we need to configure them as well. This is global to all test projects, so it will only need to be done once. You can configure them separately by doing the above section for each project.

In the ‘Test’ option at the top of the window, select ‘Processor Architecture for AnyCPU Projects’, then the desired processor setting.
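If you would rather keep this setting with the solution instead of in the IDE menu, the same choice can be expressed in a .runsettings file (a hand-written fragment; in recent Visual Studio versions you point the IDE at it via the Test menu’s run settings options):

```xml
<RunSettings>
  <RunConfiguration>
    <!-- Run the tests in a 64-bit host process -->
    <TargetPlatform>x64</TargetPlatform>
  </RunConfiguration>
</RunSettings>
```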


Microsoft Graph Client ClientCredentialProvider not Recognised

So you have downloaded the latest version of the Graph NuGet package and your .NET Core application is all ready to start building with the Microsoft Graph Client SDK. However, when you create the client as per the documentation, Visual Studio complains it can’t find the ClientCredentialProvider.

This is because it requires the Microsoft Graph Auth NuGet package, which is not production-ready yet. When you download the Graph NuGet package it does not pull this in, so you would need to install it separately, as the GitHub repository suggests.

However, if you would prefer not to add a non-production-ready package to your production code, then there is an alternative method.

This method uses the Microsoft Authentication Library for .NET (MSAL) to set up the Microsoft Graph Client using the app-only provider. In the example below I am following the Client Credentials Provider as per the Authentication Providers documentation.

First you will need the Tenant ID for the Azure AD tenant you wish to use the SDK with. This can be retrieved from:

// The Azure AD tenant ID  (e.g. tenantId.onmicrosoft.com)
var tenantId = "{tenant-id}";

Then you will also need the Applications Client ID and Secret. If you haven’t registered your application yet then you can follow this to get that setup and ready > https://docs.microsoft.com/en-us/graph/auth-register-app-v2. Make sure you have given the Application enough permissions on the Graph API to execute the required action for your project.

// The client ID of the app registered in Azure AD
var clientId = "{client-id}";


// Application Client Secret (recommended this is stored safely and not hardcoded)
var clientSecret = "{client-secret}";

With this information, we can now create the MSAL client credentials to authenticate the application to Azure:

var scopes = new string[] { "https://graph.microsoft.com/.default" };
var confidentialClient = ConfidentialClientApplicationBuilder
    .Create(clientId)
    .WithAuthority($"https://login.microsoftonline.com/{tenantId}/v2.0")
    .WithClientSecret(clientSecret)
    .Build();

Now we can create the Graph Client by passing the authentication provider as a delegate. In this we are getting the authentication bearer token from Azure for the application; once we have this, we can add it to the headers of each API request for authentication.

This means whenever you use the SDK it will add this token, or a new token, to every request to authenticate the API request.

GraphServiceClient graphServiceClient = new GraphServiceClient(
    new DelegateAuthenticationProvider(async (requestMessage) =>
    {
        // Retrieve an access token for Microsoft Graph (gets a fresh token if needed).
        var authResult = await confidentialClient.AcquireTokenForClient(scopes).ExecuteAsync();

        // Add the access token in the Authorization header of the API request.
        requestMessage.Headers.Authorization =
            new AuthenticationHeaderValue("Bearer", authResult.AccessToken);
    }));

From there you can use the Microsoft Graph SDK just as normal.

// Make a Microsoft Graph API query
var users = await graphServiceClient.Users.Request().GetAsync();

For more information on the Microsoft Graph SDK and API, you can read the GitHub repository.