Use Terraform to connect ACR with Azure Web App

You can connect an Azure Web App to Docker Hub, a private repository and also an Azure Container Registry (ACR). Using Terraform you can take it a step further and build your whole infrastructure environment at the same time as connecting these container registries. But how do you connect them together in Terraform?

I am going to focus on the connection of an ACR, but you can also follow the same method for the other providers.

The reason I am using this as an example is that the other providers are configured with a simple URL, username and password, whereas the Azure Container Registry has a different user interface within the portal, where it connects natively in Azure. While I was learning how to do this, I kept getting my ACR connected like a private repository instead of an actual ACR. The method below gives the desired outcome: in the Azure portal the Web App shows it is connected to an ACR.

I will go through the general setup I have for a simple Web App connecting to an ACR, with all of the supporting elements. I am not showing the best practice of having the variables and outputs in separate files, as this is not the point of the post, but I would encourage people to do that.

First we will need to create the infrastructure to support the Web App, by connecting to the Azure Resource Manager provider in Terraform:

provider "azurerm" {
  version         = "=2.25.0"
  subscription_id = var.subscription_id
  features {}
}

This passes a ‘subscription_id’ variable to connect to the correct subscription.
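Terraform also needs this variable declared; a minimal declaration in the same style as the other variables would look like this:

variable "subscription_id" {
  type        = string
  description = "Azure Subscription Id"
}

We then create the Resource Group to contain all the resources.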

variable "resource_group_name" {
  type        = string
  description = "Azure Resource Group Name"
}
variable "location" {
  type        = string
  description = "Azure Resource Region Location"
}

# Create a Resource Group
resource "azurerm_resource_group" "acr-rg" {
  name     = var.resource_group_name
  location = var.location
}

The next part is to create the Azure Container Registry with your chosen name and the SKU for the service level you would like. For this example we have used the ‘Standard’ SKU to keep it cheap and simple, while using the same location as the Resource Group. Note that ‘admin_enabled’ is set to true, as the Web App will use the registry's admin username and password to connect.

variable "container_registry_name" {
  type        = string
  description = "Azure Container Registry Name"
}

# Azure Container Registry
resource "azurerm_container_registry" "acr" {
  name                     = var.container_registry_name
  resource_group_name      = azurerm_resource_group.acr-rg.name
  location                 = azurerm_resource_group.acr-rg.location
  sku                      = "Standard"
  admin_enabled            = true
}

For the Web App we will need an App Service Plan to contain the Web App and set the SKU level. You can see this follows the same pattern as before, using the same location, and I am using Linux as the base operating system.

variable "app_plan_name" {
  type        = string
  description = "Azure App Service Plan Name"
}

# App Plan
resource "azurerm_app_service_plan" "service-plan" {
  name                = var.app_plan_name
  location            = azurerm_resource_group.acr-rg.location
  resource_group_name = azurerm_resource_group.acr-rg.name
  kind                = "Linux"
  reserved            = true

  sku {
    tier = "Standard"
    size = "S1"
  }
}

Now we declare the Web App itself, but first we create the three variables we will need: the Web App name, your registry name and the tag assigned to your image.

variable "web_app_name" {
  type        = string
  description = "Azure Web App Name"
}
variable "registry_name" {
  type        = string
  description = "Azure Container Registry Name"
}
variable "tag_name" {
  type        = string
  description = "Docker Image Tag Name"
  default     = "latest"
}

To link to Docker registries you need three App Settings configured: ‘DOCKER_REGISTRY_SERVER_URL’, ‘DOCKER_REGISTRY_SERVER_USERNAME’ and ‘DOCKER_REGISTRY_SERVER_PASSWORD’.

These are used to gain the correct access to the registries.

For the ACR, the URL is the ‘Login Server’ and then the username/password is the Admin Username/Password.

These can be found in the portal under the registry’s ‘Access keys’ section, if your ACR is already created.

For example:

    DOCKER_REGISTRY_SERVER_URL      = "https://myacr.azurecr.io"
    DOCKER_REGISTRY_SERVER_USERNAME = myacr
    DOCKER_REGISTRY_SERVER_PASSWORD = *********

A key part to note here is that the URL is prefixed with ‘https’, and it needs to be this and not ‘http’, as the connection needs to be secure.

Instead of getting these details manually, because we are using Terraform we can reference them directly from the Azure Container Registry we created above:

    DOCKER_REGISTRY_SERVER_URL      = "https://${azurerm_container_registry.acr.login_server}"
    DOCKER_REGISTRY_SERVER_USERNAME = azurerm_container_registry.acr.admin_username
    DOCKER_REGISTRY_SERVER_PASSWORD = azurerm_container_registry.acr.admin_password

We now have a connection to the ACR, but we need to tell the Web App what registry and tag to look for. As we are using a Linux-based server, we configure ‘linux_fx_version’ in the site config with the pattern below; for Windows you would use ‘windows_fx_version’.

"DOCKER|[RegistryName]:[TagName]"

For an example with a registry name MyRegistry and a tag name MyTag:

"DOCKER|MyRegistry:MyTag"

Below is the full example of the Web App in Terraform. With all these parts together you should have a Resource Group containing an ACR, an App Service Plan and a Web App, all connected.

# Web App
resource "azurerm_app_service" "app-service" {
  name                = var.web_app_name
  location            = azurerm_resource_group.acr-rg.location
  resource_group_name = azurerm_resource_group.acr-rg.name
  app_service_plan_id = azurerm_app_service_plan.service-plan.id

  app_settings = {
    WEBSITES_ENABLE_APP_SERVICE_STORAGE = false

    # Settings for private Container Registries
    DOCKER_REGISTRY_SERVER_URL      = "https://${azurerm_container_registry.acr.login_server}"
    DOCKER_REGISTRY_SERVER_USERNAME = azurerm_container_registry.acr.admin_username
    DOCKER_REGISTRY_SERVER_PASSWORD = azurerm_container_registry.acr.admin_password
  }

  # Configure Docker Image to load on start
  site_config {
    linux_fx_version = "DOCKER|${var.registry_name}:${var.tag_name}"
    always_on        = true
  }

  identity {
    type = "SystemAssigned"
  }
}

## Outputs
output "app_service_name" {
  value = azurerm_app_service.app-service.name
}
output "app_service_default_hostname" {
  value = "https://${azurerm_app_service.app-service.default_site_hostname}"
}
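
With all of this defined, the standard Terraform workflow can be run against it. Below is a quick sketch of what that might look like; the variable values are placeholders purely for illustration and would normally live in a .tfvars file:

terraform init
terraform apply \
  -var="subscription_id=00000000-0000-0000-0000-000000000000" \
  -var="resource_group_name=acr-webapp-rg" \
  -var="location=uksouth" \
  -var="container_registry_name=myacrexample" \
  -var="app_plan_name=acr-webapp-plan" \
  -var="web_app_name=acr-webapp" \
  -var="registry_name=mysampleimage" \
  -var="tag_name=latest"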

Is Home Working the New Normal?

Change was always coming for people to work from home rather than from company offices, but it looks like the Covid-19 events have forced this change to happen much faster. While some companies are struggling to get working practices in place, others are handling it very well and finding that this might be the new normal. What could normal look like after the pandemic has passed?

Trust.

Some employers seem to be averse to their employees working from home, and I think some of it is due to trust. When your boss can see you, they can see you doing work, which is what they want you to be doing on their time. However, this puts distrust in the middle, which doesn’t make it a fun place to work, but a duty you have to do. Giving your employees trust to get the job done at home or in the office means they will feel respected and more inclined to get the job done. From the employer’s view you still want to know that the work is getting done, which is to be expected, but this can still be done by putting Key Performance Indicators (KPIs) in place. As a developer my KPIs are measured by how many backlog tickets I complete to the expected standard. If these drop then my boss knows I might not be pulling my weight and can review why I am not performing. This can all be done remotely without having to have a Big Brother camera on my back.

Digital Office.

Another issue that is stopping people today in the pandemic is technical issues. I know from a friend working in a very large company that they were already trialling people working from home two days a week, but only a portion of people were doing that. This was because the company employs thousands of people around the whole globe, so they have a very large number of people to support on a global system that was never designed to support people on unsecure networks at home or even in coffee shops. Therefore they have had to make a large technical shift to get the infrastructure in place, which is no small task, but they are doing it. If you are an older company trying to shift, or even a new company starting out, there are hundreds of tools out there to support your company in getting set up technically at a reasonable cost. This doesn’t have to cost much, with free tools like Slack (https://slack.com/intl/en-gb/pricing) for online chat, Azure DevOps (https://azure.microsoft.com/en-gb/services/devops/) for coding, Google Drive (https://www.google.co.uk/drive/) for storing documents, and a lot of other options for email. These can make your shift cheap, but mainly only for small teams. When you are a larger company this cost can grow, but you would want the support for the larger teams.

Communication.

I have already talked about how communication and trust are part of keeping the team working while you can’t see them. This works well with employees that work on their own with some check-ups, like developers, but not all industries are like this, and even some aspects of development are not either. For instance, if you are trying to have a brainstorming session, I normally love getting the whiteboard into play for drawing designs and ideas. However, you shouldn’t really draw on your monitors… in pen, but you can get a whiteboard feature on most chat tools. This is another example of technology taking the physical tools in the office into the digital world.

Then there are the old-school managers that prefer the face-to-face value of conversations. When you have a problem you can stand up and walk over to your colleague’s desk for some assistance (at 2 metres at the moment). However, online people are more inclined to send a quick IM on their chat service, brush their hands together and say ‘well, at least I tried’ before going for their 100th coffee break. This is the hardest part to overcome, I think: the social side and closeness of being near your colleagues to collaborate. This is not because home working doesn’t work, though, but rather needs a change of mindset and working practices. I know I prefer to just message people, as it is easier to get it off my desk so I can focus on something else, but while working from home I am trying to change my mindset to call people via phone or video chat to get to the conclusion faster. This effort, like any change in a company, needs to come from the top down, so your boss needs to encourage it and stick to it as well, so the monkeys follow the banana. The other part of this, as mentioned, is using video chat where possible. This can break down the wall of just talking to an anonymous person by giving you the face-to-face connection of seeing facial expressions and context. As well as bringing more connection, it also means you are forced to put clothes on.

Cost.

I mentioned some of the costs of setting up a collaborative network of chat and working tools, which don’t always need to cost so much, but they are only some of the costs. There can be costs to your employer if they so choose, as some companies pay a contribution to their employees’ internet, electricity and heating. These are normal costs the company pays for at the office, so why not when you are in your home office? This is where I am on the employer’s side: as you are saving on travel and possibly lunch costs, I think it is only right you pay for the home costs. If companies do pay for these then it is a perk, as if you were out at a coffee shop or another location you would basically be getting free money. Working from home is meant to benefit both parties, so where they make a saving on your desk costs, you make a saving on travel and food. Where you will potentially need to increase your home costs, the company will be increasing their running costs for the digital office technology. This, as you can tell, is a give and take partnership between you and your company.

Home or Office.

Personally I think working from home is the future for most industries where possible. It saves on so much and brings more to both sides of the coin, but I do also think there is a need for a base. I enjoy the flexibility of working from home and I have been able to see my daughter grow up, which I might have missed being in the office. However, I do like to have separation from home life sometimes and from these four walls. Going into the office now and again gives me a change of scenery and is great for collaborative meetings. Therefore, I think home working 3-4 days a week and then a hot desk office 1-2 days is the perfect balance of life. Let’s see where it goes from here though…

BadImageFormatException When Running 32/64 Bit Applications in JetBrains Rider

I posted before about getting the BadImageFormatException error and how it was associated with the processor architecture settings. The fixes suggested were for Visual Studio only, and having recently started working with JetBrains Rider I got the same issue, but found the equivalent way to correct it.

If you do have Visual Studio and have this issue then you can read how to correct it in this post: BadImageFormatException When Running 32/64 Bit Applications in Visual Studio

If you are using JetBrains Rider then you can follow this instead.

  1. Open up your .NET Framework project in JetBrains Rider.
  2. Select ‘Edit Configuration’ from the top right menu:
  3. Within the window that opens, you can then change the IIS Express path, which you can see is currently using the ‘x86’ version, which is 32 bit. Update the path to remove the ‘x86’ part to configure the 64 bit version.
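
For example, on a typical 64 bit Windows installation the two IIS Express paths would look something like this (your install location may differ):

32 bit: C:\Program Files (x86)\IIS Express\iisexpress.exe
64 bit: C:\Program Files\IIS Express\iisexpress.exe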

AppDynamics grouping Database Custom Metric Queries

When you create Custom Database Metrics in AppDynamics, your first thought is to create a new row for each metric, but if you have a lot to report on this can become messy. Not only that, but in the metric view you will have a very long list of reports to go through. When we had a consultant down at work, we were shown how to group collections of metrics in one query, which then shows in the metric view as a sub-folder. We could not find this tactic anywhere on the internet, so I thought I would share this very handy insight for AppDynamics.

The standard method to add custom queries and metrics would be to go to the configuration view below in AppDynamics and add a new query for each of the metrics you wish to report on.

AppDynamics Databases

You can then go to the metric view and see the data coming in like below.

AppDynamics Metric Browser

However, like I said above, this list can grow fast, plus by default you are limited to only 20 of these queries, which can disappear even faster. This method therefore gives you more bang for your buck on custom metrics, plus better organisation of your data.

Instead of adding each query separately, what we can do is create a grouping of queries in sub-folders of the ‘Custom Metric’ folder, to look like this:

  • Before
    • Custom Metric
      • Queue 1
      • Queue 2
      • Queue 3
  • After
    • Custom Metric
      • MessagingQueueMonitoring
        • Queue 1
        • Queue 2
        • Queue 3

As we completed this at my company in Microsoft SQL Server, I will use that as the example, but I am confident it can be translated to other SQL dialects with the same outcome and some slight changes to syntax.

Say we start with the 3 queries that we would want to monitor and we will keep them simple:

SELECT count(id) FROM MessageQueueOne
SELECT count(id) FROM MessageQueueTwo
SELECT count(id) FROM MessageQueueThree

To create the top level folder, you simply create a single query item called ‘MessagingQueueMonitoring’. In this new custom metric query you need to add the above 3 SQL statements, but we need them to be a single query instead of 3. For this to work we will use the SQL command ‘UNION ALL’ to join them together:

SELECT count(id) FROM MessageQueueOne
UNION ALL
SELECT count(id) FROM MessageQueueTwo
UNION ALL
SELECT count(id) FROM MessageQueueThree

This will now create one table with 3 rows and their values, but for AppDynamics to recognise these in the metrics view we need to tell it what each of these rows means. To tell AppDynamics what the nodes underneath are called, you add a column to each query for the name; this column should be called ‘Type’. Then, for AppDynamics to know which column holds the value, you call that column ‘Total’.

You should end up with a query like below:

SELECT 'Message Queue One' as Type, count(id) as Total FROM MessageQueueOne
UNION ALL

SELECT 'Message Queue Two' as Type, count(id) as Total FROM MessageQueueTwo
UNION ALL

SELECT 'Message Queue Three' as Type, count(id) as Total FROM MessageQueueThree

Then this should result in a table like this:

Type                   Total
Message Queue One      4
Message Queue Two      2
Message Queue Three    56

What do you consider when building a new application?

When you’re starting a new project and thinking about what you’re going to use in your application, what factors do you consider? Sometimes this depends on what your role is: a developer might jump straight in, use X coding language and continue on their way, whereas others might want to play with whatever the newest technology is. Then there are people like myself who like to think about the whole picture, so here are some of the key factors I consider when building a new application.

 

Code Repository

This one should come hand in hand with your company, as there should already be a standard for where and how you store your code. However, there is a lot of ‘should’ in that sentence, as some young companies haven’t thought this through yet, or you could be doing it alone, or the company might have something in place but you are thinking of exploring new technologies and new ground.

The big factor to consider with a repository is the company that is holding that information. It starts with where the code will be held, for legal, security and access reasons. Now you might think access is a silly thing to think about here, as it is all just done over HTTPS on your computer, isn’t it? But you might need to consider whether you are going through a proxy, as security might lock you down unless it is a secure route. You also might put the repository on premises due to the value of the code you are storing, which might also be the reason for your choice of company to store your code. If you think that the company storing your code will be gone in two years, then you might want to think about either a different company or a good get-out plan just in case. These days there are a few big players that just make clear sense, so after this it comes down to the cost of that company’s services for the level you require.

The other factor is how the code is stored and retrieved from the repository, with tools like Git, as this is another technology that you will depend on. You will need to consider what learning curve others will have to undertake if they are to use this version control system and, as with the storage factor, whether it will still be around in a few years’ time.

Linked to this is what tools you are thinking of using later in the process for build, test and deployment, as these might make it harder for you to move code between locations and tools. For example, your repository might be on premises behind a firewall and security, but your build tool is in the cloud with one company and the test scripts are stored in another company’s repository.

 

Language

You might have an easy job choosing a language: if you are a pure Java house or PHP only, then that is what you will be using, as you can only do what you know. However, if you want to branch out, or you do have more possibilities, then the world can open up for you.

A bit higher level than choosing the language you want, but design patterns do come into this. I have seen someone choose a .NET MVC language for their back-end system, but then put an AngularJS front-end framework on top. What you are doing there is putting an MVC design on top of an MVC design, which causes all types of issues. Therefore you need to consider, if you are using more than one language, how they complement each other. For instance, in this circumstance you could either go for the AngularJS MVC with a microservice .NET back-end system, or have the .NET MVC application with a ReactJS front end to enrich the user’s experience.

As I said before, you might already know what languages you are going to use as that is your bread and butter, but if not then you need to think about the learning curve for yourself and other developers. If you are throwing new technologies into the mix then you need to be sure everyone can keep up with what you intend to use, or you will become the Single Point of Failure and cause support issues when someone is off.

As well as thinking about who will be developing with the technology, you need to think about who will be using it. This can either be the end user’s experience or the people controlling the data, like content editors, if it is that type of system. If you would like a fast and interactive application then you will want to push more of the features to the client-side technologies to improve the user’s experience, but you might not need to make it all singing and dancing if it is a console application running internally that just needs to do the job. Therefore the use case of the language has an importance to the choice.

 

Testing

Testing is another choice in itself, as once you know your language you know what testing tools are available to use, but they then carry all the same considerations as the coding language you chose, as you will still need to develop these tests and trust in their results.

I add this section in, though, as it is a consideration you need to have, along with how it factors into giving you, the developer, feedback on your test results. These might run as part of your check-in, or they might be part of a nightly build that reports back to you in the morning, so how they are reported to the developer determines how fast they can react to them.

As part of the tooling for the tests you will need to recognise what levels of testing they go down to, for example unit tests, integration tests, UI tests or even security testing. You then need to consider which tools you can integrate into your local build of the application to give you instant feedback, for example a linter for JavaScript, which will tell you instantly if there is a conflict or error. This will save you the time of checking in and waiting for a build result, which might clog up the pipeline for others checking in.

 

Continuous Integration (CI) and Continuous Delivery (CD)

This is a little removed from what application you are building, as another person in a DevOps role might be doing this and it should have no major impact on your code, as it is abstracted from what you are developing. However, the link can be made through how you run the application on your local machine. You could be using a task runner like Gulp in your application to build and deploy your code on your local machine, in which case it makes sense to use the same task runner in the CI/CD.

Therefore you need to think about what tooling can and will be used between your local machine and the CI/CD system to have a single method of build and deployment. You want to be able to mirror what the pipeline will be doing, so you can replicate any issue, and the other way round as well, as it will help that DevOps person build the pipeline for your application.

 

Monitoring and logging

Part of the journey of your code is not just what you are building and deploying, but also what your code is doing after that in the real world. The best things to help with this are logging, for reviewing past issues, and monitoring, to detect current or upcoming issues.

For your logging I would always encourage three levels: Information, Debug and Error, which are configurable to turn on or off in production. Information will help when trying to trace where an issue happens and what kind of data is being passed through. It is a medium level of output, so as not to fill up your drive fast, but giving you plenty of information to help with your investigation. Debug is then the full level down, giving you everything that is happening in the application and all the details, but be careful of printing GDPR data that will sit in the logs, and of filling your drives until they fall over. Errors are then what they say on the tin: they only get reported when there is an error in the application, and you should check them constantly to make sure you remove all potential issues with the code. The deciding factor here for your application is the technology and how it is implemented in your code. We recently changed logging technology, but how it was implemented made it a longer task than it should have been, which could have been made easier with abstraction.

Monitoring depends on what your application is doing, but can also expand past your code. If you have something like message queues you can monitor the queue levels, or you could be monitoring the errors in the logs folder remotely. These will help pre-warn you that something is going wrong before it hits the peak issue. However, the issue might not be coming from your code, so you should also be monitoring things like the machine it is sitting on and the network traffic in case there is an issue there. These have an impact on the code because some monitoring tools do not support some languages, like .NET Core, which we have found hard in some places.

 

Documentation

Document everything is the simple way to put it. Of course you need to do it in a sensible manner and format, but you should have documentation before even the first character of code is written, to give you and others the information you have decided on above. Then you will need to document any processes or changes during the build for others to see. If only you know exactly how it all works and someone else takes over while you are away, then you put that person in a rubbish position unless they have something to reference.

These documents need a common location that everyone has access to read, write and edit. A thought you could also try is automated documentation drawn from the code’s comments and formatting, so you would need to bear this in mind when writing out your folder structure and naming conventions.

You can go overboard by documenting too much, as some things, like the code or the CI/CD process, should be clear from the comments and naming. However, even if documentation for tools like Git has already been written, it is helpful to create a document saying what tooling you are using from a high level, why you are using it, and then reference its documentation. It gives the others on the project a single point of truth to get all the information they require, plus if the tooling changes you can update that one document to reference the new tooling, and everyone will already know where to find that new information.

 

DevOps

In the end what we have just gone through is the DevOps process of Design, Build, Test, Deploy, Report and Learn.

  • You are at the design point while deciding what languages and tools you would like to use.
  • We then use the chosen language to build the new feature or application.
  • There will be a few levels of testing through the process of building the new project.
  • The consideration of CI and CD gets our product deployed to new locations in a repeatable and easy way.
  • Between the logging and monitoring we are reporting information back to both developers and business owners, who can learn from the metrics to repeat the cycle again.

DevOps

Reference: https://medium.com/@neonrocket/devops-is-a-culture-not-a-role-be1bed149b0