Connect Azure MySQL to Private Endpoint with Terraform

To connect to an Azure MySQL Database, or other services in Azure, one of the most secure methods is a Private Endpoint. Microsoft documents the recommended architecture for an App Service connecting to a MySQL Server, which is good if you are using the Azure Portal, but there are some missing components if you are using Terraform.



Microsoft does give some simpler methods to allow access to the MySQL Server, by whitelisting IP addresses or by allowing Azure Services access. However, you can only whitelist with confidence if you have a fixed IP range, and allowing all access would open the server up to all services within all subscriptions. You can see these methods in the Microsoft documentation.

A better method is to lock the server off from the internet and allow access via a Private Endpoint in a Virtual Network. This design is referenced in the documentation. As you can see, it is very simple and has few components; however, when you use the Portal to create these, most are dynamically created through simple entries.

App Service
Web App
Virtual Network
Private Endpoint
DNS Zones

I have then depicted the connectivity of these in a slightly different view that shows all the components and how we are going to connect them via Terraform.

App Service Plan
App Service
Virtual Network
DNS Private Zone
Private Endpoint
Network Interface

Building with Terraform

In each section I will highlight any particular code, but the full example is at the end.

Virtual Network

This is the centre feature of the design, so we will create it first. You can also create the subnets at this time, but to break it down I am only going to create the VNet itself. With the VNet we are keeping it simple, so we are only entering the name and the DNS IP addresses, using some hardcoded values that you can always change.


address_space       = [""]
dns_servers         = ["", ""]
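For context, the two lines above sit inside an azurerm_virtual_network resource. Here is a minimal sketch of the full block; the resource label, variable names and address values are my assumptions rather than the original code:

```hcl
resource "azurerm_virtual_network" "vnet" {
  name                = var.vnet_name           # assumed variable name
  location            = var.location            # assumed variable name
  resource_group_name = var.resource_group_name # assumed variable name
  address_space       = ["10.0.0.0/16"]         # example value; the original left this blank
  dns_servers         = ["168.63.129.16"]       # example: the Azure-provided DNS address
}
```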

MySQL Server

You can probably do this in another order, but I followed the same process as creating them in the Portal. Within this I have included dynamically creating the MySQL password. A key part is the ‘mysql_server_sku’ variable, as this is required to be a General Purpose tier or above, which is what I have set the default to. You can also see where I have enforced the firewall to disallow public access with ‘public_network_access_enabled’.


variable "mysql_server_sku" {
  type        = string
  description = "MySQL Server SKU"
  default     = "GP_Gen5_2"
}

Dynamic Password

resource "random_password" "password" {
  length      = 20
  min_upper   = 2
  min_lower   = 2
  min_numeric = 2
  min_special = 2
}

# Within the azurerm_mysql_server resource:
administrator_login_password = var.mysql_server_password == "" ? random_password.password.result : var.mysql_server_password
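To show where these pieces land, here is a sketch of the server resource itself. The resource label and most variable names are assumptions; only the SKU, the password expression and the firewall/TLS settings come from the text:

```hcl
resource "azurerm_mysql_server" "mysql-server" { # label is an assumption
  name                = var.mysql_server_name
  location            = var.location
  resource_group_name = var.resource_group_name

  sku_name                     = var.mysql_server_sku # must be General Purpose or above
  administrator_login          = var.mysql_server_username
  administrator_login_password = var.mysql_server_password == "" ? random_password.password.result : var.mysql_server_password

  # Firewall/TLS settings driven by the mysql_server_settings object below
  public_network_access_enabled = var.mysql_server_settings.public_network_access_enabled
  ssl_enforcement_enabled       = var.mysql_server_settings.ssl_enforcement_enabled
}
```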

MySQL Firewall Settings

variable "mysql_server_settings" {
  type = object({
    auto_grow_enabled                 = bool
    backup_retention_days             = number
    geo_redundant_backup_enabled      = bool
    infrastructure_encryption_enabled = bool
    public_network_access_enabled     = bool
    ssl_enforcement_enabled           = bool
    ssl_minimal_tls_version_enforced  = string
  })
  description = "MySQL Server Configuration"
  default = {
    auto_grow_enabled                 = true
    backup_retention_days             = 7
    geo_redundant_backup_enabled      = false
    infrastructure_encryption_enabled = false
    public_network_access_enabled     = false
    ssl_enforcement_enabled           = true
    ssl_minimal_tls_version_enforced  = "TLS1_2"
  }
}


Subnets

As the subnet will be used more than once, I have made this a custom module. This can then be called to create the Private Endpoint subnet and the Application subnet. The Private Endpoint subnet requires there to be no delegation set, but the Application subnet does need it. Therefore, I have made that section of the module dynamic. In this example you can also see the subnet variables I am giving the Private Endpoint and Application.


dynamic "delegation" {
  for_each = var.subnet_delegation_name == "" ? [] : [1]
  content {
    name = var.subnet_delegation_name
    service_delegation {
      name    = var.subnet_delegation_type
      actions = var.subnet_delegation_actions
    }
  }
}


variable "private_endpoint_subnet" {
  type        = string
  description = "Azure Private Endpoint VNet Subnet Address"
  default     = ""
}

variable "app_subnet" {
  type        = string
  description = "Azure Application Subnet Address"
  default     = ""
}

Private Endpoint

As this contains a few components to link everything up, I have also put it into a custom module. You can see the order of creation below, to get an idea of how they are built. I try to keep things dynamic, so the connection to the MySQL Server is part of an array and you can add more resources to this endpoint. Something that caught me out was the DNS Zone name in Terraform. Almost every resource that has a ‘name’ can be called anything you like within the boundaries of validation, as it just gives a title to the resource, but the DNS Zone name must be a valid Private Link domain from the Azure documentation (for MySQL this is ‘privatelink.mysql.database.azure.com’), which is held in the Terraform variable below.

Creation Order

  1. Create Private DNS Zone
  2. Create Private Endpoint
  3. Create a Private Endpoint Connection
  4. Link the DNS Zone to the Virtual Network
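As a sketch of step 4, the zone-to-VNet link is its own resource. The link name and the virtual network input are assumptions; the DNS zone reference matches the resource shown later:

```hcl
resource "azurerm_private_dns_zone_virtual_network_link" "dns-zone-link" {
  name                  = "dns-zone-link" # assumed name
  resource_group_name   = var.resource_group_name
  private_dns_zone_name = azurerm_private_dns_zone.private-endpoint-dns-private-zone.name
  virtual_network_id    = var.virtual_network_id # assumed input
}
```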

Dynamic Resources

private_endpoint_service_connections = [
  {
    name                           = "${var.mysql_server_name}.privateEndpoint"
    private_connection_resource_id =
    subresource_names              = ["mysqlServer"]
    is_manual_connection           = false
  }
]

DNS Zone name

variable "dnszone_private_link" {
  type        = string
  description = "Validate Private Link URL"
}

resource "azurerm_private_dns_zone" "private-endpoint-dns-private-zone" {
  name                = var.dnszone_private_link
  resource_group_name = var.resource_group_name
}

dnszone_private_link = ""

App Service

We can now add the connecting App Service by creating an App Plan with an App Service connected to a subnet within the same VNet. This has all the standard values and settings, most of which have no particular requirements for this solution. Two factors that do are the App Plan SKU, which needs to be Standard or above, and the App Settings. Again keeping things dynamic, I allow a variable to pass in custom App Settings, but we also need the subnet settings added. These include ‘WEBSITE_VNET_ROUTE_ALL’ and ‘WEBSITE_DNS_SERVER’, which is set to the Azure DNS IP address (168.63.129.16) unless you have a private DNS server.

App Settings

locals {
  app_settings_subnet = {
    # Subnet settings described above; 168.63.129.16 is the Azure-provided DNS address
    WEBSITE_VNET_ROUTE_ALL = "1"
    WEBSITE_DNS_SERVER     = "168.63.129.16"
  }
}

app_settings = merge(var.webapp_app_settings, local.app_settings_subnet)

End-to-end Code

You can view the full code on my GitHub Repository PureRandom

There are also two sites where I drew a lot of knowledge from, so I thought they deserved a mention.

If you do find any issues with the code, please message me. This was taken from a larger project so I might have missed one or two things 🙂

Reference Sites:

Setup Hyper-V Guest for SSH without IP Address

When setting up Hyper-V guest hosts, I found it a little tricky, and hard to find documentation on how to easily set them up, so I thought I would share how I got them into a working configuration with the most simple process. With this setup you can also SSH into the guest host even if you do not have an IP address exposed on the guest network adaptor.

To make things even more simple, I am using the pre-selected OS versions from the Hyper-V Quick Create options, but the steps should also work on other versions.

Linux Virtual Machine

In the steps below you will create a Linux Virtual Machine (VM) with the version ‘Ubuntu 18.04.3 LTS’.

  1. Install and Open Hyper-V.
  2. Click Quick Create from the menu on the right.
  3. Select ‘Ubuntu 18.04.3 LTS‘ from the menu and create it.
  4. Follow all the details from the wizard as requested with your chosen details.
  5. Once completed, start and login to your machine.
  6. Open the Terminal within the VM.
  7. Run the following commands
    1. Update installs
      sudo apt-get update
    2. Install open ssh server
      sudo apt-get install openssh-server
    3. Install linux azure
      sudo apt-get install linux-azure
    4. Start the services by running the commands below, replacing SERVICE-NAME with each of these: sshd, ssh, hv-kvp-daemon.service
      sudo systemctl start SERVICE-NAME
      sudo systemctl status SERVICE-NAME
    5. Allow SSH through the firewall
      sudo ufw allow ssh

Windows Virtual Machine

In the steps below you will create a Windows Virtual Machine (VM) with the version ‘Windows 10 dev environment’.

  1. Install and Open Hyper-V
  2. Click Quick Create from the menu on the right
  3. Select Windows 10 dev environment from the menu and create it
  4. Follow all the details from the wizard as requested.
  5. Once completed start and login to your machine
  6. Run these commands
    1. Install the OpenSSH Server (this is what allows SSH into the machine)
      Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
    2. Start the SSH service
      Start-Service sshd

SSH Keys

If you would like to log in to your Virtual Machine with keys, then you will need to install your SSH keys.

You can find out how to generate keys, and what keys you need, from the SSH website.
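As a quick sketch of the generation step, the following creates an ed25519 key pair non-interactively; the file name and comment are arbitrary examples, not from the original post:

```shell
# Remove any previous example key, then generate a new ed25519 pair
# with no passphrase (-N "") into ./id_ed25519 and ./id_ed25519.pub.
rm -f ./id_ed25519 ./id_ed25519.pub
ssh-keygen -q -t ed25519 -N "" -C "hyperv-guest-example" -f ./id_ed25519

# The public half is what goes into the guest's authorized_keys file.
cat ./id_ed25519.pub
```

The private key stays on your machine; only the contents of the .pub file are copied to the locations described below.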

Here is some more information on where to store the Public Keys once generated.

Public Key Store

On Linux, you can store them in the user’s home directory in .ssh/authorized_keys, for example /home/USERNAME/.ssh/authorized_keys

On Windows, there are one of two places you will need to add the keys. If the user is an admin, add it to C:\ProgramData\ssh\administrators_authorized_keys. If not, add it to C:\Users\USERNAME\.ssh\authorized_keys

Check If Admin
  1. Run lusrmgr.msc
  2. Select Groups
  3. Select Administrators
  4. Check if you are in the group.

Once these tasks are completed you should be able to SSH into your Virtual Machines via the Hyper-V Console (HVC).

I have written about how to use this in a previous post, ‘SSH to Hyper-V Virtual Machine using SSH.NET without IP Address‘. Although that post targets SSH.NET, you can use the commands from it to SSH from the terminal.

SSH to Hyper-V Virtual Machine using SSH.NET without IP Address

I have used the Dotnet Core NuGet package SSH.NET to SSH into machines a few times; it is a very simple, slick and handy tool to have. However, you cannot SSH into a Virtual Machine (VM) in Hyper-V that easily without some extra fiddling to get an exposed IP address.

With your standard SSH command you can run the simple:

ssh User@Host

This can have many other attributes, but let’s keep it simple.

If your VM has an IP address assigned to the network adapter then this can still be very simple, using the user for the machine and the IP address as the host.

However, not every VM in every situation will have an IP address, and therefore you cannot connect to it like this.

You can, though, if you use the Hyper-V Console CLI (HVC). If installed, it can be located at ‘C:\Windows\System32\hvc.exe‘, and it is normally installed when enabling the Hyper-V feature in Windows. This tool enables you to communicate with your VM via the Hyper-V bus between your local machine and the VM.

To use this tool you can run the same SSH command but with the HVC prefix:

hvc ssh User@Host

However, instead of the host you can pass the Hyper-V VM name, which you can get from the Hyper-V Manager or with PowerShell in Administrator mode:

Get-VM | Select-Object Name
This is great to use in the terminal, but it doesn’t let you use standard SSH commands, which the SSH.NET tool uses. I have not come across a tool to do this execution via Dotnet Core yet, so I have come up with this solution.

What we can do to accomplish this is port forwarding, where we route traffic from a port on the local machine through to a port on the VM.

Below we are telling HVC to forward port 2222 on the local machine to port 22, the SSH standard port, on the VM, with the correct username and VM name.

hvc.exe ssh -L 2222:Localhost:22 User@VmName

Once this has been done you can then run the standard SSH command, but with the port parameter and ‘Localhost’ as the Host, the same as you SSH to your own local machine.

ssh user@Localhost -p 2222
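If you connect this way often, the same settings can be kept in an SSH config entry. A sketch, where the host alias ‘hyperv-vm’ and the username are made up for illustration:

```
# ~/.ssh/config
Host hyperv-vm
    HostName localhost
    Port 2222
    User user
```

With this in place, `ssh hyperv-vm` connects through the forwarded port without repeating the parameters.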

To get this working in C# I would recommend using SSH keys, to avoid the requirement for passwords as you would need interactive entry for those, and then using the PowerShell NuGet package to run the HVC command like below:

$SystemDirectory = [Environment]::SystemDirectory
cd $SystemDirectory
hvc.exe ssh -fN -L 2222:Localhost:22 User@VmName -i "KeyPath"

Testing Azure DevOps Pipelines Locally

I have tried and tried before, and also read a lot of other people asking for the ability to test Azure YAML Pipelines locally. Now, although it is not perfect and there is some room for improvement, Microsoft has created a way to test your pipeline with some local testing and remote testing, without having to check in your YAML.

The issue to solve is: you make a change to an Azure DevOps Pipeline YAML, and to test it you need to check it in, just to find out something small, like a value you passed is not correct, or a linked repository isn’t spelt correctly. This gets very annoying, as you need to keep going back and forward until it is finally able to run.

A quick tool you can use, which is completely local, is the Visual Studio Code (VS Code) extension ‘Azure Pipelines‘. This will validate your YAML’s formatting, and in the Output window it can show you the hierarchy flow of the YAML, which is the first step of validation.

Another handy tool is ‘Azure DevOps Snippets‘, which is an auto-complete library of the Azure DevOps Tasks. This can help in getting the correct names of parameters and tasks, instead of having to need a solid memory.

However, these tools will only help with formatting the local YAML, and they know nothing about the parameters you have set in the Azure DevOps Library. What we need is to test this remotely against the information in Azure DevOps, and for that Microsoft have provided an API.

This API can be used to run a pipeline remotely that is stored in Azure DevOps, so you can automate via the API to trigger builds. It also has two handy parameters. One is ‘previewRun’, a boolean to determine if this should run the pipeline for real or just validate the run. This will use all the information you have set in Azure DevOps and the pipeline, but it will not create a new build or run code. What it will do is use the correct parameters, link to the other real repositories and check all the data will work. It is also the setting used when you edit the pipeline in Azure DevOps and select ‘Validate’ from the options menu in the top right.

This is really good, but it would still require you to check in the YAML that you have edited, which is why there is the parameter ‘yamlOverride’ in the API. In this parameter you can add your newly edited YAML content, and this will replace what is stored in Azure DevOps without overwriting it.

Here is an example POST Body:
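A minimal sketch of such a body; the two field names come from the text above, while the YAML content itself is an arbitrary example:

```json
{
  "previewRun": true,
  "yamlOverride": "trigger: none\nsteps:\n- script: echo Hello"
}
```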


To make life a little easier, if you are using PowerShell there is a VSTeam module you can install to run this command through PowerShell. It takes the ID of the pipeline you want to run against, which you can get from the pipeline URL in Azure DevOps. It then takes the file path to the local YAML file and the project name from Azure DevOps.

Test-VSTeamYamlPipeline -PipelineId 12 -FilePath .\azure-pipelines.yml -ProjectName ProjectName

These are great tools and solve a lot of the issues, but they do not solve all of them. You would still need an internet connection to run these, as they run on the remote DevOps instance. Also, I have a repository purely containing groups of YAML tasks for better reusability; you cannot run tests on these, as they are not complete pipelines, only templates, so I would still need to do the check-in-and-run process. Finally, it does not merge local YAML files. If you have a main file with templates linked locally, it will not merge them; you need to create a single YAML to run this.

However, I don’t think you can solve all of these, as it would require you to run Azure DevOps locally as an exact copy of the live instance you have. If you were using Azure DevOps Server then you might be able to take a copy and run it in a container on your local machine, but it could be a longer process compared to the current one.

If you have any other methods to make developing Azure DevOps YAML Pipelines easier then please share.

Use Terraform to connect ACR with Azure Web App

You can connect an Azure Web App to Docker Hub, a private repository, and also an Azure Container Registry (ACR). Using Terraform you can take it a step further and build your whole infrastructure environment at the same time as connecting these container registries. However, how do you connect them together in Terraform?

I am going to focus on the connection of an ACR, but you can also follow the same method for the other providers.

The reason I am using this as an example is that the other methods are configured with a simple URL, username and password, but the Azure Container Registry within the portal has a different user interface, where it connects natively to Azure. While I was learning to do this, I kept getting my ACR connected like a private repository instead of an actual ACR. Therefore, the method below will have the desired outcome: within the Azure portal, the Web App shows it is connected to an ACR.

I will go through the general setup I have for a simple Web App connecting to an ACR, with all of the supporting elements. I am not showing the best practice of having the variables and outputs in separate files, as this is not the point of the post, but I would encourage people to do that.

First we will need to create the infrastructure to support the Web App, by connecting to the Azure Resource Manager provider in Terraform:

provider "azurerm" {
  version         = "=2.25.0"
  subscription_id = var.subscription_id
  features {}
}

This passes a ‘subscription_id’ variable to connect to the correct subscription. We then create the Resource Group to contain all the resources.

variable "resource_group_name" {
  type        = string
  description = "Azure Resource Group Name"
}

variable "location" {
  type        = string
  description = "Azure Resource Region Location"
}

# Create a Resource Group
resource "azurerm_resource_group" "acr-rg" {
  name     = var.resource_group_name
  location = var.location
}

The next part is to create the Azure Container Registry with your chosen name and the SKU for the service level you would like. For this example we use ‘Standard’ to keep it cheap and simple, while using the same location as the Resource Group.

variable "container_registry_name" {
  type        = string
  description = "Azure Container Registry Name"
}

# Azure Container Registry
resource "azurerm_container_registry" "acr" {
  name                = var.container_registry_name
  resource_group_name = azurerm_resource_group.acr-rg.name
  location            = azurerm_resource_group.acr-rg.location
  sku                 = "Standard"
  admin_enabled       = true
}

For the Web App we will need an App Service Plan to contain the Web App and set the SKU level. You can see this is the same as before, using the same location, and I am using Linux as the base operating system.

variable "app_plan_name" {
  type        = string
  description = "Azure App Service Plan Name"
}

# App Plan
resource "azurerm_app_service_plan" "service-plan" {
  name                = var.app_plan_name
  location            = azurerm_resource_group.acr-rg.location
  resource_group_name = azurerm_resource_group.acr-rg.name
  kind                = "Linux"
  reserved            = true
  sku {
    tier = "Standard"
    size = "S1"
  }
}
Now is where we declare the Web App itself, but first we create the 3 variables we will need: the Web App name, your registry image name and the tag assigned to your image.

variable "web_app_name" {
  type        = string
  description = "Azure Web App Name"
}

variable "registry_name" {
  type        = string
  description = "Container Image Name in the Registry"
}

variable "tag_name" {
  type        = string
  description = "Container Image Tag Name"
  default     = "latest"
}

To link to Docker registries you need 3 App Settings configured: ‘DOCKER_REGISTRY_SERVER_URL’, ‘DOCKER_REGISTRY_SERVER_USERNAME’ and ‘DOCKER_REGISTRY_SERVER_PASSWORD’.

These are used to gain the correct access to the registries.

For the ACR, the URL is the ‘Login Server’ and then the username/password is the Admin Username/Password.

These can be found in the portal, if your ACR is already created.

A key part to see is that the URL is prefixed with ‘https’, and it needs to be this, not http, as the connection needs to be secure.

Instead of getting these details manually, we are using Terraform so we have access to these details from the created Azure Container Registry that we can use:

    DOCKER_REGISTRY_SERVER_URL      = "https://${azurerm_container_registry.acr.login_server}"
    DOCKER_REGISTRY_SERVER_USERNAME = azurerm_container_registry.acr.admin_username
    DOCKER_REGISTRY_SERVER_PASSWORD = azurerm_container_registry.acr.admin_password

We now have a connection to the ACR, but we need to tell the Web App what image and tag to look for. As we are using a Linux based server, we configure the ‘linux_fx_version’ in the site config with the pattern below, but for Windows you would use ‘windows_fx_version’.

linux_fx_version = "DOCKER|${var.registry_name}:${var.tag_name}"
For an example with a registry image name MyRegistry and a tag name MyTag, this resolves to:

linux_fx_version = "DOCKER|MyRegistry:MyTag"
Below is the full example of the Web App generation in Terraform. With all these parts together you should have a Resource Group containing an ACR, an App Service Plan and a Web App, all connected.

# Web App
resource "azurerm_app_service" "app-service" {
  name                = var.web_app_name
  location            = azurerm_resource_group.acr-rg.location
  resource_group_name = azurerm_resource_group.acr-rg.name
  app_service_plan_id = azurerm_app_service_plan.service-plan.id
  app_settings = {
    # Settings for private Container Registries
    DOCKER_REGISTRY_SERVER_URL      = "https://${azurerm_container_registry.acr.login_server}"
    DOCKER_REGISTRY_SERVER_USERNAME = azurerm_container_registry.acr.admin_username
    DOCKER_REGISTRY_SERVER_PASSWORD = azurerm_container_registry.acr.admin_password
  }
  # Configure Docker Image to load on start
  site_config {
    linux_fx_version = "DOCKER|${var.registry_name}:${var.tag_name}"
    always_on        = "true"
  }
  identity {
    type = "SystemAssigned"
  }
}

## Outputs
output "app_service_name" {
  value = azurerm_app_service.app-service.name
}

output "app_service_default_hostname" {
  value = "https://${azurerm_app_service.app-service.default_site_hostname}"
}