SSH to Hyper-V Virtual Machine using SSH.NET without IP Address

I have used the .NET Core NuGet package SSH.NET to SSH into machines a few times; it is a very simple, slick and handy tool to have. However, you cannot SSH into a Virtual Machine (VM) in Hyper-V that easily without some extra fiddling to expose an IP address.

With a standard SSH client you can run this simple command:

ssh User@Host

This command can take many other arguments, but let’s keep it simple.

If your VM has an IP address assigned to its network adapter then connecting is still very simple: use the machine’s user as the user and the IP address as the host.

However, not every VM will have an IP address in every situation, and in that case you cannot connect to it like this.

You can, though, if you use the Hyper-V Console CLI (HVC). It is located at ‘C:\Windows\System32\hvc.exe’ and is normally installed when you enable the Hyper-V feature in Windows. This tool enables you to communicate with your VM over the Hyper-V bus between your local machine and the VM.

To use this tool you can run the same SSH command but with the HVC prefix:

hvc ssh User@Host

However, instead of the host you pass the Hyper-V VM name, which you can get from Hyper-V Manager or with PowerShell in Administrator mode:

Get-VM

This is great to use in the terminal, but it doesn’t let you use standard SSH commands, which is what the SSH.NET library relies on. I have not come across a tool that does this via .NET Core yet, so I have come up with the following solution.

What we can do to accomplish this is port forwarding, where we route traffic from a port on the local machine through to a port on the VM.

Below we are forwarding port 2222 on the local machine to port 22 on the VM, which is the standard SSH port, using the correct username and VM name.

hvc.exe ssh -L 2222:Localhost:22 User@VmName

Once this has been done you can run the standard SSH command, but with the port parameter and ‘Localhost’ as the host, the same as if you were SSHing to your own local machine.

ssh User@Localhost -p 2222

To get this working in C#, I would recommend using SSH keys to avoid the need for passwords, as those would require manual entry, and then using the PowerShell NuGet package to run the HVC command like below:

$SystemDirectory = [Environment]::SystemDirectory
cd $SystemDirectory
hvc.exe ssh -L 2222:Localhost:22 User@VmName -i "KeyPath" -fN
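
Putting this together, below is a minimal C# sketch of the approach rather than a drop-in implementation. It assumes the Microsoft.PowerShell.SDK package (for System.Management.Automation) to start the hvc port forward, the SSH.NET package (Renci.SshNet) for the connection, and hypothetical values for the user, VM name and key path.

using System;
using System.Management.Automation;   // from the Microsoft.PowerShell.SDK NuGet package
using Renci.SshNet;                   // from the SSH.NET NuGet package

class HvcSshExample
{
    static void Main()
    {
        // Hypothetical values: replace with your own user, VM name and key location.
        const string user = "User";
        const string vmName = "MyVm";
        const string keyPath = @"C:\Keys\id_rsa";

        // 1. Start the hvc port forward (local port 2222 -> VM port 22) in the background.
        using (var ps = PowerShell.Create())
        {
            ps.AddScript(
                "cd ([Environment]::SystemDirectory); " +
                $"hvc.exe ssh -L 2222:Localhost:22 {user}@{vmName} -i \"{keyPath}\" -fN");
            ps.Invoke();
        }

        // 2. Connect with SSH.NET through the forwarded local port, as if SSHing to the local machine.
        var key = new PrivateKeyFile(keyPath);
        using (var client = new SshClient("Localhost", 2222, user, key))
        {
            client.Connect();
            var result = client.RunCommand("uname -a");
            Console.WriteLine(result.Result);
            client.Disconnect();
        }
    }
}

If the forward does not stay running cleanly in the background on your machine, launching hvc.exe with System.Diagnostics.Process and keeping it alive for the lifetime of the SSH session is another option.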

Testing Azure DevOps Pipelines Locally

I have tried and tried before, and I have also read a lot of other people asking for the ability to test Azure YAML Pipelines locally. Although it is not perfect and there is room for improvement, Microsoft has created a way to test your pipeline with a mix of local and remote testing without having to check in your YAML.

The issue to solve is this: you make a change to an Azure DevOps Pipelines YAML file, and to test it you have to check it in, just to find out something small like a value you passed is not correct or a linked repository name is misspelt. This gets very annoying as you keep going back and forth until the pipeline is finally able to run.

A quick tool you can use, which is completely local, is the Visual Studio Code (VS Code) extension ‘Azure Pipelines’. This will validate your YAML’s formatting and, in the output window to the left, show you the hierarchical flow of the YAML, which is the first validation step.

Another handy tool is ‘Azure DevOps Snippets’, an auto-complete library of the Azure DevOps tasks. This helps you get the correct names of tasks and parameters, instead of relying on a solid memory.

However, these tools only help with formatting the local YAML, without the parameters you have set in the Azure DevOps Library. What we need is to test the pipeline remotely against the information in Azure DevOps, and for that Microsoft has provided an API.

This API can be used to run a pipeline stored in Azure DevOps remotely, so you can trigger builds through it. It also has two handy parameters. The first is ‘previewRun’, a boolean that determines whether this should run the pipeline for real or just validate the run. A preview run uses all the information you have set in Azure DevOps and in the pipeline, but it will not create a new build or run any code. What it will do is use the correct parameters, link to the real repositories and check that all the data will work. It is also the setting used when you edit the pipeline in Azure DevOps and select ‘Validate’ from the options menu in the top right.

This is really good, but it would still require you to check in the YAML that you have edited, which is why the API has the second parameter, ‘yamlOverride’. In this parameter you can pass your newly edited YAML content and it will be used in place of what is stored in Azure DevOps, without overwriting it.

Here is an example POST Body:

POST https://dev.azure.com/MyOrg/7e8cdd97-0000-0000-a9ed-8eb0e5c748f5/_apis/pipelines/12/runs

{
   "resources":{
      "pipelines":{},
      "repositories":{
         "self":{
            "refName":"azure-pipelines"
         }
      },
      "builds":{},
      "containers":{},
      "packages":{}
   },
   "templateParameters":{},
   "previewRun":true,
   "yamlOverride":"MyYAML"
}
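
If you want to call this API from code, below is a minimal C# sketch of the same request. It is an illustration only: it assumes a personal access token in an AZDO_PAT environment variable, api-version 6.0-preview.1, and placeholder organisation, project and pipeline ID values.

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class PipelinePreview
{
    static async Task Main()
    {
        // Placeholder organisation, project and pipeline ID: replace with your own.
        var url = "https://dev.azure.com/MyOrg/MyProject/_apis/pipelines/12/runs?api-version=6.0-preview.1";

        // Assumes a personal access token is available in an environment variable.
        var pat = Environment.GetEnvironmentVariable("AZDO_PAT");

        // previewRun validates the pipeline without queuing a real build;
        // yamlOverride sends the local YAML in place of the checked-in version.
        var body = JsonSerializer.Serialize(new
        {
            previewRun = true,
            yamlOverride = await File.ReadAllTextAsync("azure-pipelines.yml")
        });

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes($":{pat}")));

        var response = await client.PostAsync(url, new StringContent(body, Encoding.UTF8, "application/json"));
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}

If the preview succeeds, the response should include the fully expanded YAML; if not, it returns the validation errors, all without queuing a build.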

To make life a little easier, if you are using PowerShell there is the VSTeam module you can install to run this command through PowerShell. It takes the ID of the pipeline you want to run against, which you can get from the pipeline URL in Azure DevOps, along with the file path to the local YAML file and the project name from Azure DevOps.

Test-VSTeamYamlPipeline -PipelineId 12 -FilePath .\azure-pipelines.yml -ProjectName ProjectName


These are great tools and solve a lot of the issues, but they do not solve all of them. You still need an internet connection, as the validation runs on the remote DevOps instance. Also, I have a repository purely containing groups of YAML tasks for better reusability, which you cannot run tests against because they are not complete pipelines, only templates, so for those I would still need to go through the check-in-and-run process. Finally, it does not merge local YAML files: if you have a main file with locally linked templates, it will not merge them, so you need to create a single YAML file to run this.

However, I don’t think you can solve all of these, as it would require running an exact copy of your live Azure DevOps instance locally. If you were using Azure DevOps Server then you might be able to take copies and run them in a container locally, but that could be a longer process compared to the current one.

If you have any other methods to make developing Azure DevOps YAML Pipelines easier then please share.

Use Terraform to connect ACR with Azure Web App

You can connect an Azure Web App to Docker Hub, a private repository or an Azure Container Registry (ACR). Using Terraform you can take it a step further and build your whole infrastructure environment at the same time as connecting these container registries. But how do you connect them together in Terraform?

I am going to focus on connecting an ACR, but you can follow the same method for the other providers.

The reason I am using this as an example is that when connecting the other methods you just supply a simple URL, username and password, but the Azure Container Registry has a different user interface in the portal, where it connects natively within Azure. While I was learning to do this, I kept ending up with my ACR connected like a private repository instead of an actual ACR. Therefore, the method below gives the desired outcome: within the Azure portal the Web App shows it is connected to an ACR.

I will go through the general setup I have for a simple Web App connecting to an ACR, with all of the supporting elements. I am not showing the best practice of keeping the variables and outputs in separate files, as that is not the point of this post, but I would encourage people to do that.

First we need to create the infrastructure to support the Web App, starting by configuring the Azure Resource Manager provider in Terraform:

provider "azurerm" {
  version         = "=2.25.0"
  subscription_id = var.subscription_id
  features {}
}

This passes a ‘subscription_id’ variable to connect to the correct subscription. We then create the Resource Group to contain all the resources.

variable "resource_group_name" {
  type        = string
  description = "Azure Resource Group Name. "
}
variable "location" {
  type        = string
  description = "Azure Resource Region Location"
}

# Create a Resource Group
resource "azurerm_resource_group" "acr-rg" {
  name = var.resource_group_name
  location = var.location  
}

The next part is to create the Azure Container Registry with your chosen name and the SKU for the service level you would like. For this example we have used the ‘Standard’ SKU to keep it cheap and simple, while using the same location as the Resource Group.

variable "container_registry_name" {
  type        = string
  description = "Azure Container Registry Name"
}

# Azure Container Registry
resource "azurerm_container_registry" "acr" {
  name                     = var.container_registry_name
  resource_group_name      = azurerm_resource_group.acr-rg.name
  location                 = azurerm_resource_group.acr-rg.location
  sku                      = "Standard"
  admin_enabled            = true
}

For the Web App we need an App Service Plan to contain the Web App and set the SKU level. You can see this is the same as before, using the same location, and I am using Linux as the base operating system.

variable "app_plan_name" {
  type        = string
  description = "Azure App Service Plan Name"
}

# App Plan
resource "azurerm_app_service_plan" "service-plan" {
  name = var.app_plan_name
  location = azurerm_resource_group.acr-rg.location
  resource_group_name = azurerm_resource_group.acr-rg.name
  kind = "Linux"
  reserved = true  
  sku {
    tier = "Standard"
    size = "S1"
  }  
}

Now we declare the Web App itself, but first we create the three variables we will need: the Web App name, your registry name and the tag assigned to your image.

variable "web_app_name" {
  type        = string
  description = "Azure Web App Name"
}
variable "registry_name" {
  type        = string
  description = "Azure Web App Name"
}
variable "tag_name" {
  type        = string
  description = "Azure Web App Name"
 default: 'latest'
}

To link to Docker registries you need three app settings configured: ‘DOCKER_REGISTRY_SERVER_URL’, ‘DOCKER_REGISTRY_SERVER_USERNAME’ and ‘DOCKER_REGISTRY_SERVER_PASSWORD’.

These are used to gain the correct access to the registries.

For the ACR, the URL is the ‘Login Server’ and the username/password is the admin username/password.

These can be found in the portal on the ACR’s ‘Access keys’ blade, if your ACR is already created.

For example:

    DOCKER_REGISTRY_SERVER_URL      = "https://myacr.azurecr.io"
    DOCKER_REGISTRY_SERVER_USERNAME = myacr
    DOCKER_REGISTRY_SERVER_PASSWORD = *********

A key part to note here is that the URL is prefixed with ‘https’, and it needs to be this, not ‘http’, as the connection needs to be secure.

Instead of getting these details manually, since we are using Terraform we can take them straight from the Azure Container Registry resource we created:

    DOCKER_REGISTRY_SERVER_URL              = "https://${azurerm_container_registry.acr.login_server}"
    DOCKER_REGISTRY_SERVER_USERNAME = azurerm_container_registry.acr.admin_username
    DOCKER_REGISTRY_SERVER_PASSWORD = azurerm_container_registry.acr.admin_password

We now have a connection to the ACR, but we need to tell the Web App which registry and tag to look for. As we are using a Linux-based server, we configure the ‘linux_fx_version’ in the site config with the pattern below; for Windows you would use ‘windows_fx_version’.

"DOCKER|[RegistryName]:[TagName]"

For an example with a registry name MyRegistry and a tag name MyTag:

"DOCKER|MyRegistry:MyTag"

Below is the full example of the Web App resource in Terraform. With all these parts together you should have a Resource Group containing an ACR, an App Service Plan and a Web App, all connected.

# Web App
resource "azurerm_app_service" "app-service" {
  name = var.web_app_name
  location = azurerm_resource_group.acr-rg.location
  resource_group_name = azurerm_resource_group.acr-rg.name
  app_service_plan_id = azurerm_app_service_plan.service-plan.id
  app_settings = {
    WEBSITES_ENABLE_APP_SERVICE_STORAGE = false
   
    # Settings for private Container Registries
    DOCKER_REGISTRY_SERVER_URL      = "https://${azurerm_container_registry.acr.login_server}"
    DOCKER_REGISTRY_SERVER_USERNAME = azurerm_container_registry.acr.admin_username
    DOCKER_REGISTRY_SERVER_PASSWORD = azurerm_container_registry.acr.admin_password
 
  }
  # Configure Docker Image to load on start
  site_config {
    linux_fx_version = "DOCKER|${var.registry_name}:${var.tag_name}"
    always_on        = "true"
  }
  identity {
    type = "SystemAssigned"
  }
}

## Outputs
output "app_service_name" {
  value = "${azurerm_app_service.app-service.name}"
}
output "app_service_default_hostname" {
  value = "https://${azurerm_app_service.app-service.default_site_hostname}"
}