Use Terraform to connect ACR with Azure Web App

You can connect an Azure Web App to Docker Hub, a private repository, or an Azure Container Registry (ACR). Using Terraform you can take it a step further and build your whole infrastructure environment at the same time as connecting these container registries. But how do you connect them together in Terraform?

I am going to focus on connecting an ACR, but you can follow the same method for the other providers.

The reason I am using this as an example is that the other methods are a simple URL, username and password, whereas the Azure Container Registry has its own user interface within the portal and connects natively in Azure. While I was learning to do this, my ACR kept connecting like a private repository instead of an actual ACR. The method below gives the desired outcome: in the Azure portal, the Web App shows it is connected to an ACR.

I will go through the general setup I have for a simple Web App connecting to an ACR, with all of the supporting elements. I am not showing the best practice of keeping variables and outputs in separate files, as that is not the point of this post, but I would encourage you to do so.

First we need to create the infrastructure to support the Web App, starting with the Azure Resource Manager provider in Terraform:

provider "azurerm" {
  version         = "=2.25.0"
  subscription_id = var.subscription_id
  features {}
}

This passes a ‘subscription_id’ variable to connect to the correct subscription. We then create the Resource Group to contain all the resources.

variable "resource_group_name" {
  type        = string
  description = "Azure Resource Group Name. "
}
variable "location" {
  type        = string
  description = "Azure Resource Region Location"
}

# Create a Resource Group
resource "azurerm_resource_group" "acr-rg" {
  name = var.resource_group_name
  location = var.location  
}

The next part is to create the Azure Container Registry, with your chosen name and the SKU for the service level you would like. For this example we have used 'Standard' to keep it cheap and simple, while using the same location as the Resource Group.

variable "container_registry_name" {
  type        = string
  description = "Azure Container Registry Name"
}

# Azure Container Registry
resource "azurerm_container_registry" "acr" {
  name                     = var.container_registry_name
  resource_group_name      = azurerm_resource_group.acr-rg.name
  location                 = azurerm_resource_group.acr-rg.location
  sku                      = "Standard"
  admin_enabled            = true
}

For the Web App we will need an App Service Plan to contain the Web App and set the SKU level. You can see this is the same as before, using the same location, and I am using Linux as the base operating system.

variable "app_plan_name" {
  type        = string
  description = "Azure App Service Plan Name"
}

# App Plan
resource "azurerm_app_service_plan" "service-plan" {
  name = var.app_plan_name
  location = azurerm_resource_group.acr-rg.location
  resource_group_name = azurerm_resource_group.acr-rg.name
  kind = "Linux"
  reserved = true  
  sku {
    tier = "Standard"
    size = "S1"
  }  
}

Now we declare the Web App itself, but first we create the three variables we will need: the Web App name, your registry name, and the tag assigned to your image.

variable "web_app_name" {
  type        = string
  description = "Azure Web App Name"
}
variable "registry_name" {
  type        = string
  description = "Azure Web App Name"
}
variable "tag_name" {
  type        = string
  description = "Azure Web App Name"
 default: 'latest'
}

To link to Docker registries you need three App Settings configured: 'DOCKER_REGISTRY_SERVER_URL', 'DOCKER_REGISTRY_SERVER_USERNAME', and 'DOCKER_REGISTRY_SERVER_PASSWORD'.

These are used to gain the correct access to the registries.

For the ACR, the URL is the 'Login Server', and the username/password are the admin username/password.

If your ACR is already created, these can be found in the portal under the registry's 'Access keys' blade.

For example:

    DOCKER_REGISTRY_SERVER_URL      = "https://myacr.azurecr.io"
    DOCKER_REGISTRY_SERVER_USERNAME = myacr
    DOCKER_REGISTRY_SERVER_PASSWORD = *********

A key point here is that the URL is prefixed with 'https', and it needs to be https, not http, as the connection needs to be secure.

Instead of getting these details manually, because we are using Terraform we can reference them directly from the Azure Container Registry we created:

    DOCKER_REGISTRY_SERVER_URL      = "https://${azurerm_container_registry.acr.login_server}"
    DOCKER_REGISTRY_SERVER_USERNAME = azurerm_container_registry.acr.admin_username
    DOCKER_REGISTRY_SERVER_PASSWORD = azurerm_container_registry.acr.admin_password

We now have a connection to the ACR, but we need to tell the Web App what registry and tag to look for. As we are using a Linux-based server, we configure the 'linux_fx_version' in the site config with the pattern below; for Windows you would use 'windows_fx_version'.

"DOCKER|[RegistryName]:[TagName]"

For an example with a registry name MyRegistry and a tag name MyTag:

"DOCKER|MyRegistry:MyTag"

Below is the full example of the Web App creation in Terraform. With all these parts together you should have a Resource Group containing an ACR, an App Service Plan and a Web App, all connected.

# Web App
resource "azurerm_app_service" "app-service" {
  name = var.web_app_name
  location = azurerm_resource_group.acr-rg.location
  resource_group_name = azurerm_resource_group.acr-rg.name
  app_service_plan_id = azurerm_app_service_plan.service-plan.id
  app_settings = {
    WEBSITES_ENABLE_APP_SERVICE_STORAGE = false
   
    # Settings for private Container Registries
    DOCKER_REGISTRY_SERVER_URL      = "https://${azurerm_container_registry.acr.login_server}"
    DOCKER_REGISTRY_SERVER_USERNAME = azurerm_container_registry.acr.admin_username
    DOCKER_REGISTRY_SERVER_PASSWORD = azurerm_container_registry.acr.admin_password
 
  }
  # Configure Docker Image to load on start
  site_config {
    linux_fx_version = "DOCKER|${var.registry_name}:${var.tag_name}"
    always_on        = "true"
  }
  identity {
    type = "SystemAssigned"
  }
}

## Outputs
output "app_service_name" {
  value = "${azurerm_app_service.app-service.name}"
}
output "app_service_default_hostname" {
  value = "https://${azurerm_app_service.app-service.default_site_hostname}"
}
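
To tie it together, the variables used throughout the post could be supplied in a terraform.tfvars file along these lines. These are illustrative values only, and it assumes a matching variable declaration for 'subscription_id', which is not shown above:

# terraform.tfvars (example values only)
subscription_id         = "00000000-0000-0000-0000-000000000000"
resource_group_name     = "acr-webapp-rg"
location                = "West Europe"
container_registry_name = "myacrregistry"
app_plan_name           = "acr-webapp-plan"
web_app_name            = "acr-webapp"
registry_name           = "MyRegistry"
tag_name                = "latest"

With this file in place, 'terraform init' and 'terraform apply' will pick the values up automatically and the Web App should come up already pointing at the ACR.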

Deleting Recovery Service Vault Backup Server (MABS)

This is one of those quick shares to try to help the world of Azure developers just a little bit.

While doing some Azure development using the Microsoft Azure REST API, I got to the point of working with the Azure Recovery Services Vault. I was trying to delete a Backup Server agent using the API, which, after a bit of a hunt, I found is actually called 'Registered Identities'.

This of course only has one endpoint to call, so it must just be that easy…
However, when calling this API I would just get the following:

{
  "error": {
    "code": "ServiceContainerNotEmptyWithBackendMessage",
    "message": "There are backup items associated with the server. Stop protection and delete backups for each of the backup items to delete the server's registration. If the issue persists, contact support. Refer to https://aka.ms/DeleteBackupItems for more info.",
    "target": null,
    "details": null,
    "innerError": null
  }
}

This was odd, as I couldn't find anything to explain the reason for this error. After investigating the network traffic while deleting the resource in the Azure Portal, I found a slight difference which also fixed the issue: the version of the API.

In the documentation it is stated as ‘2016-06-01’, but in the network traffic it had version ‘2019-05-13’. If you now change:

DELETE https://management.azure.com/Subscriptions/77777777-d41f-4550-9f70-7708a3a2283b/resourceGroups/BCDRIbzRG/providers/Microsoft.RecoveryServices/vaults/BCDRIbzVault/registeredIdentities/dpmcontainer01?api-version=2016-06-01

To

DELETE https://management.azure.com/Subscriptions/77777777-d41f-4550-9f70-7708a3a2283b/resourceGroups/BCDRIbzRG/providers/Microsoft.RecoveryServices/vaults/BCDRIbzVault/registeredIdentities/dpmcontainer01?api-version=2019-05-13

It will now all work.
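
For reference, the same call from PowerShell would look something like the sketch below. It assumes you already have a bearer token for the Azure management API (acquiring it is not shown here) and reuses the placeholder subscription, resource group, vault and container names from the URLs above:

## Delete the Registered Identity (Backup Server agent) using the newer API version
$token = "<bearer token for management.azure.com>" # obtain via Azure AD / your usual auth flow
$uri = "https://management.azure.com/Subscriptions/77777777-d41f-4550-9f70-7708a3a2283b/resourceGroups/BCDRIbzRG/providers/Microsoft.RecoveryServices/vaults/BCDRIbzVault/registeredIdentities/dpmcontainer01?api-version=2019-05-13"
Invoke-RestMethod -Method Delete -Uri $uri -Headers @{ Authorization = "Bearer $token" }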

How to authenticate with Fortify Security with PowerShell

Fortify, a security scanning tool for code, has some great features but also some limiting ones. Therefore I sought to use their open REST API to expand on its functionality and enhance how we are using it within the DevOps pipeline. Step one, of course, was to find out how to authenticate with Fortify so I could start making requests to its services.

Fortify does have a Swagger page listing the endpoints it offers, but it doesn't detail the authentication endpoint. It does have documentation on how to authenticate, but it is not detailed enough for easy use.

That is why I thought I would expand on the details to show others how to authenticate easily, using PowerShell as the chosen language.

Fortify Swagger

The API layer from Fortify provides the Swagger definitions. If you choose your data centre from the link below, you can then simply add '/swagger' to the end of its API URL to see the definitions, for example https://api.emea.fortify.com/swagger/ui/index

Data Centre URL: https://emea.fortify.com/Docs/en/Content/Additional_Services/API/API_About.htm

Authentication

As mentioned before, Fortify does document how to authenticate with the API here: https://emea.fortify.com/Docs/en/index.htm#Additional_Services/API/API_Auth.htm%3FTocPath%3DAPI%7C_____3

The first thing is to find out what details you require for the request, as mentioned in the documentation. We require the calling data centre URL, which you used above for the Swagger definitions, suffixed with '/oauth/token', e.g. 'https://api.emea.fortify.com/oauth/token'.

We then need the scope of what you would like to request. The available scopes are detailed in the documentation link above, and each Swagger definition specifies under its 'Implementation Notes' what scope is required for that request. This value needs to be entered in lowercase to be accepted.

The same goes for the grant type, which is a fixed value of 'client_credentials', all in lowercase.

The final details we need are the 'client_id' and the 'client_secret', but what I found is that these are really the API Key and API Secret managed in your Fortify portal. If you sign in to your portal (for the data centre and product I have access to), you can navigate to 'Administration', then 'Settings', and finally 'API'. From this section you can create the API details with the required set of permissions. Note that the permissions can be changed after setup, so you do not need to commit yet. You should then have all the details required for these two parameters, where client_id = API Key and client_secret = API Secret.

Your details in PowerShell should look like this:

$body = @{
scope = "api-tenant"
grant_type = "client_credentials"
client_id = "a1aa1111-11a1-1111-aaa1-aa1a11a1aaaa"
client_secret = "AAxAbAA1AAdrAA1AAAkyAAAwAAArA11uAzArA1A11"
}

From there we can do a simple 'Invoke-RestMethod' using PowerShell, with a key thing to note: the content type must be 'application/x-www-form-urlencoded'. Without this you will keep getting an error saying the 'Grant Type' is not valid. You will also notice that, as above, the body is not JSON but is formatted as parameters in the body of the request.

Below is the full example of the request using PowerShell. I have also included the lines to set the default proxy, so if you are requesting from behind a proxy this should still work.

## Set Proxy
[System.Net.WebRequest]::DefaultWebProxy = [System.Net.WebRequest]::GetSystemWebProxy()
[System.Net.WebRequest]::DefaultWebProxy.Credentials = [System.Net.CredentialCache]::DefaultNetworkCredentials

## Create Details
$uri = "https://api.emea.fortify.com/oauth/token"
$body = @{
scope = "api-tenant"
grant_type = "client_credentials"
client_id = "a1aa1111-11a1-1111-aaa1-aa1a11a1aaaa"
client_secret = "AAxAbAA1AAdrAA1AAAkyAAAwAAArA11uAzArA1A11"
}

## Request
$response = Invoke-RestMethod -ContentType "application/x-www-form-urlencoded" -Uri $uri -Method POST -Body $body -UseBasicParsing

## Response
Write-Host $response
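
The response contains the bearer token to use on subsequent requests. As a quick sketch of how it would be used, assuming the standard OAuth2 'access_token' field in the response and using the releases endpoint from the Swagger definitions purely as an example:

## Use the token on a follow-up request
$headers = @{ Authorization = "Bearer $($response.access_token)" }
$releases = Invoke-RestMethod -Uri "https://api.emea.fortify.com/api/v3/releases" -Method GET -Headers $headers -UseBasicParsing
Write-Host $releases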

How to set up AppDynamics for multiple .Net Core 2.0 applications

We have decided to go with AppDynamics to monitor our infrastructure and code, which is great, and even better, they have released support for .Net Core 2.0. However, when working with their product and consultant, we found an issue with monitoring multiple .Net Core instances on one server, plus console apps, but we found a way.

Currently their documentation, which is helpful, shows that you have to set up monitoring for a .Net Core application with environment variables. Following the directions in the AppDynamics documentation, you set an environment variable for the profiler's path, which sits within each of the applications, but of course we can't set the same environment variable to multiple values. Therefore we copied the profiler DLL to a central location and used that as the environment variable value, but quickly found out that it still didn't work: for the profiler to start tracking, it needs to point to the application's root folder for each application.

The consultant's investigation then led to looking at how we can set the environment variables per application, and we found they can be set in the web.config using the 'environmentVariables' node under the 'aspNetCore' node, as stated in the Microsoft documentation. Of course, the 'dotnet publish' command generates this web.config, so you can't just set this in the source code. Therefore, in the release of the code, I wrote some PowerShell to set these parameters.

In the below PowerShell, I get the XML content of the web.config, then create each of the environment variable nodes I want to insert. Once I have these I can then insert them into the correct ‘aspNetCore’ node of the XML variable, which I then use to overwrite the contents of the existing file.

Example PowerShell:

$configFile = "web.config"
$sourceDir = "D:\wwwroot"
$subFolderName = "ServiceOne" # example sub-folder of the application being configured

## Get XML
$doc = New-Object System.Xml.XmlDocument
$doc.Load("$sourceDir\$subFolderName\$configFile")
$environmentVariables = $doc.CreateElement("environmentVariables")

## Set 64 bit version
$Profiler64 = $doc.CreateElement("environmentVariable")
$Profiler64.SetAttribute("name", "CORECLR_PROFILER_PATH_64")
$Profiler64.SetAttribute("value", "$sourceDir\$subFolderName\AppDynamics.Profiler_x64.dll")
$environmentVariables.AppendChild($Profiler64)

## Set 32 bit version
$Profiler32 = $doc.CreateElement("environmentVariable")
$Profiler32.SetAttribute("name", "CORECLR_PROFILER_PATH_32")
$Profiler32.SetAttribute("value", "$sourceDir\$subFolderName\AppDynamics.Profiler_x86.dll")
$environmentVariables.AppendChild($Profiler32)

## Insert into the 'aspNetCore' node and overwrite the existing web.config
$doc.SelectSingleNode("configuration/system.webServer/aspNetCore").AppendChild($environmentVariables)
$doc.Save("$sourceDir\$subFolderName\$configFile")

Example Web.config result:

<configuration>
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
    </handlers>
    <aspNetCore processPath="dotnet" arguments=".\SecurityService.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout">
      <environmentVariables>
        <environmentVariable name="CORECLR_PROFILER_PATH_64" value="D:\IIS\ServiceOne\AppDynamics.Profiler_x64.dll" />
        <environmentVariable name="CORECLR_PROFILER_PATH_32" value="D:\IIS\ServiceTwo\AppDynamics.Profiler_x86.dll" />
      </environmentVariables>
    </aspNetCore>
  </system.webServer>
</configuration>

This will work for applications that have a web.config, but something like a console app doesn't have one, so what do we do?

The recommendation and solution is to create an organiser script. This script will set the environment variables, which will only affect the application triggered in that session. To do this you can use any scripting really, like PowerShell or the command line.

In this script you just need to set the environment variables and then run the exe afterwards.

For example, in PowerShell (the exe name below is a placeholder for your console app):

Param(
    [string] $TaskLocation,
    [string] $Arguments
)

# Set Environment Variables for AppDynamics
Write-Host "Set Environment Variables in path $TaskLocation"
$env:CORECLR_PROFILER_PATH_64 = "$TaskLocation\AppDynamics.Profiler_x64.dll"
$env:CORECLR_PROFILER_PATH_32 = "$TaskLocation\AppDynamics.Profiler_x86.dll"

# Run Exe (replace MyConsoleApp.exe with your application)
Write-Host "Start Script"
cmd.exe /c "$TaskLocation\MyConsoleApp.exe" $Arguments
exit

These two solutions mean you can use AppDynamics with both .Net Core web apps and console apps, with multiple applications on one box.

Resharper DotCover Analyse for Visual Studio Team Services

Do you use Visual Studio Team Services (VSTS) for Builds and/or Releases? Do you use Resharper DotCover? Do you want to use them together? Then boy do I have an extension for you!

That might be a corny introduction, but it is exactly what I have here.

In my current projects we use Resharper's (also known as JetBrains') DotCover to run code coverage on all our code. However, to run this in VSTS there is a bit of a process: install DotCover on the server and then write a batch command to execute it with settings. This isn't the most complex task, but it does give you a dependency on always installing it on a server, and on keeping the batch script in source control or in the definitions on VSTS. This can cause issues if you forget to get it installed or you need to update the script for every project.

Therefore I took all that magic of the program and crammed it into a pretty package for VSTS. This tool is not reinventing the wheel, but putting some grease on it to run faster. The Build/Release extension simply gives you all the input parameters the program normally offers and then runs them with the packaged version of DotCover that comes with the extension. See, simple.

There is, however, one extra bit of spirit fingers I added to the extension. When researching and running my own tests, I found that sometimes it is helpful to only run the coverage on certain projects, but to do this you need to specify every project path in the command. Now I don't know about you, but that sounds boring, so I added an extra field.

Instead of passing each project separately and manually in the Target Arguments, you can pass wildcards in the Project Pattern. If you pass anything in the Project Pattern parameter, the extension detects that you want to use this feature and uses the Target Working Directory as the base to recursively search for projects.

For Example: Project Pattern = “*Test.dll” and Target Working Directory = “/Source”

This will search for all DLLs that end with ‘Test’ in the ‘Source’ directory and then prepend them to any other arguments in the Target Arguments.

For Example: “/Source/MockTest.dll;/Source/UnitTest.dll”
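
To illustrate the idea behind that search, here is a rough PowerShell sketch of the behaviour (not the extension's actual source):

## Sketch of the Project Pattern behaviour (illustrative only)
$projectPattern = "*Test.dll"
$targetWorkingDirectory = "C:\Source"

# Recursively find matching assemblies under the working directory
$projects = Get-ChildItem -Path $targetWorkingDirectory -Filter $projectPattern -Recurse -File |
    Select-Object -ExpandProperty FullName

# Join them with ';' so they can be prepended to the other Target Arguments
$targetArguments = $projects -join ";"
Write-Host $targetArguments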

You can download the extension from the VSTS Marketplace.
Here is a helpful link for Resharper DotCover Analyse – JetBrains.
Then this is the GitHub repository for any issues or advancements you would like – Pure Random Code GitHub.

Update 20-07-2018

There was a recent issue raised on the GitHub repository that addressed a problem I have also seen before. When running DotCover from Visual Studio Team Services, an error appears as below:

Failed to verify x64 COM object registration: Empty path to COM object.

In the issue raised, the user had linked to a community article about “DotCover console runner fails when running as VSTS task“, in whose comments they discussed how to fix this.

To correct it, we simply add the following argument to the request, which specifies which profiled process bitness to use, as they say.

/CoreInstructionSet=[x86|x64]

Therefore the task has now been updated with this field and feature to accommodate this issue and fix. It has been run and tested by myself plus the user that raised the issue, so please enjoy.