DotNet User Secrets Feature

A little-known feature of dotnet is User Secrets. It is used to create local App Settings overrides for local usage. This can be a handy and powerful tool for keeping your local setup out of checked-in code.

You can find more details in the Microsoft Documentation here: https://docs.microsoft.com/en-us/aspnet/core/security/app-secrets

The goal of this feature is to override your App Settings with local values. For example, if you have a connection string within your JSON, you are not going to want your local username/password stored in there and checked in. You also don’t want to be pulling other people’s settings down and having to keep changing them. Therefore, with this feature you set your values on your local machine in a different file and they get overridden, not overwritten, so they will work for you but never be checked in.

Within your project location you can run the ‘init’ command, which will create a new node in the project file. This is called ‘UserSecretsId’ and contains the ID for this project. When the application runs, it will use this ID to match up with where the secrets are stored.
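As a sketch, the dotnet CLI can create and manage these secrets from the project directory (the key and value below are just example names):

```
dotnet user-secrets init
dotnet user-secrets set "ConnectionStrings:Default" "Server=localhost;User Id=me;Password=local-secret"
dotnet user-secrets list
```

The ‘set’ and ‘list’ commands read and write the secrets file for the project’s ID, so you rarely need to edit it by hand.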

The secrets are stored in the folder ‘C:/Users/[UserName]/AppData/Roaming/Microsoft/UserSecrets’ then in a directory with the ID as the directory name. Within this folder there is then a file called ‘secrets.json’ where you will store all the secrets in a json format. You can get more detail on how to format the name properties of your App Settings on the documentation.
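For example, a secrets.json might look like the following, where nested App Settings keys are flattened with a ‘:’ separator (the key names here are just illustrative):

```json
{
  "ConnectionStrings:Default": "Server=localhost;User Id=me;Password=local-secret",
  "MyApi:ApiKey": "local-api-key"
}
```

Nested JSON objects are also accepted; both forms map onto the same configuration keys.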

When you run the ‘init’ command it doesn’t create this directory and file for you, so I whipped together a script below to generate the User Secrets ID and to also create the required directory/file. Before I talk about that, I will also show how to use User Secrets with Dotnet Core Console Apps.

This could be something I have done wrong, but when I create a Web Application and use the feature, it just works with no extra effort. However, when I created a Console App it did not work out of the box. I found I needed to do a few things to get it working, which Stafford Williams talks about here.

One part he missed was, when using Dependency Injection, telling the ConfigurationBuilder where to find the User Secrets ID, as per:

var builder = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
    .AddJsonFile(envJson, optional: false, reloadOnChange: true)
    .AddEnvironmentVariables()
    .AddUserSecrets<Program>();
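Once the builder has run, the values can be read back from the resulting configuration; a minimal sketch, where the key name is just an example:

```csharp
IConfiguration config = builder.Build();

// User Secrets override matching keys from the JSON files
var connectionString = config["ConnectionStrings:Default"];
```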

Create User Secrets

The code below accepts either a single project path or a directory containing many projects. It will find the project files and check whether they have the User Secrets ID in the project.

If they don’t, it will go to the project’s directory, run the ‘init’ command and then read the generated ID.

From there it can check/create the folders and files for the User Secrets.

Param (
    [string]$projectPath
)
$projects = $null
$filesCount = 0
if ($projectPath.EndsWith('.csproj')) {
    $projects = Get-ChildItem -Path $projectPath
    $filesCount = 1
}
else {
    
    # append a separator only if the path does not already end with one
    if (!$projectPath.EndsWith('/') -and !$projectPath.EndsWith('\')) {
        $projectPath += "/";
    }
    $projects = Get-ChildItem -Path "$projectPath*.csproj" -Recurse -Force
    $filesCount = $projects.Length
}
Write-Host("Files Found $filesCount")
if ($filesCount -gt 0) {
    $userSecretsPath = "$ENV:UserProfile/AppData/Roaming/Microsoft/UserSecrets"
    if (!(Test-Path $userSecretsPath)) { 
        Write-Host("Create User Secrets Path")
        New-Item -ItemType directory -Path $userSecretsPath
    }
    $currentDir = [System.IO.Path]::GetDirectoryName($myInvocation.MyCommand.Definition)
    foreach ($project in $projects) {
        Write-Host(" ")
        Write-Host("Current Project $project")
        [xml]$fileContents = Get-Content -Path $project
        if ($null -eq $fileContents.Project.PropertyGroup.UserSecretsId) { 
            Write-Host("User Secret ID node not found in project file")
            Set-Location $project.DirectoryName
            dotnet user-secrets init
            Set-Location $currentDir
            Write-Host("User Secrets ID created")
            [xml]$fileContents = Get-Content -Path $project
        }
        $userSecretId = $fileContents.Project.PropertyGroup.UserSecretsId
        Write-Host("User Secret ID $userSecretId")
        if ($userSecretId -ne ""){
            $userSecretPath = "$userSecretsPath/$userSecretId"
            if (!(Test-Path $userSecretPath)) { 
                New-Item -ItemType directory -Path $userSecretPath
                Write-Host("User Secret ID $userSecretId Path Created")
            }
            $secretFileName = "secrets.json"
            $secretPath = "$userSecretsPath/$userSecretId/$secretFileName"
            if (!(Test-Path $secretPath)) {   
                New-Item -path $userSecretPath -name $secretFileName -type "file" -value "{}"
                Write-Host("User Secret ID $userSecretId secrets file Created")
            }
            Write-Host("User Secrets path $secretPath")
        }
    }
}

Automatic Change Checking on Pipeline Stages

Azure DevOps has the ability to add multiple stage approval styles like human intervention, Azure Monitoring and schedules. However, there is no built-in ability to evaluate whether a source path has changed before triggering a Stage. This can be done at the Pipeline level, but not at the Stage level, where it can sometimes be helpful. I have a PowerShell solution for how this can be done.

The problem to solve came when I had a Pipeline that built multiple Nuget Packages. Without a human intervention approval button on each of the stages that publish a Nuget Package, they would publish each time even if there were no changes. This would then cause the version number in the Artifacts to keep increasing for no reason.

Therefore, we wanted a method to automatically check whether there has been a change in a particular source location before allowing it to publish the Package. This has been done using PowerShell, getting the requested branch changes through Git. It can get the names of all the changed files, which include the full path, so we can then check whether any match a given location.

The code below gets the diff files, looping through each of them while checking if it matches the provided path. If it does, it sets the global variable to say there have been changes. I did add an option to break on the first found item, but you could leave it to keep going and log all the files changed. In my case, I just wanted to know whether or not there had been a change.

$files = $(git diff HEAD HEAD~ --name-only)
# git outputs one file name per line, so the result is already an array
$changedFiles = @($files)
$changedCount = $changedFiles.Length
Write-Host("Total changed $changedCount")
$hasBeenChanged = $false
For ($i = 0; $i -lt $changedCount; $i++) {
    $changedFile = $changedFiles[$i]
    if ($changedFile -like "${{parameters.projectPath}}") {
        Write-Host("CHANGED: $changedFile")
        $hasBeenChanged = $true
        if ('${{parameters.breakOnChange}}' -eq 'true') {
            break
        }
    }
}

You can then set the outcome as an output variable of the task for usage later as per this Template below:

parameters:
  - name: 'projectPath'
    type: string
  - name: 'name'
    type: string
  - name: 'breakOnChange'
    type: string
    default: 'true'
steps:
  - task: PowerShell@2
    name: ${{parameters.name}}
    displayName: 'Code change check for ${{parameters.name}}'
    inputs:
      targetType: 'inline'
      script: |
        $files = $(git diff HEAD HEAD~ --name-only)
        # git outputs one file name per line, so the result is already an array
        $changedFiles = @($files)
        $changedCount = $changedFiles.Length
        Write-Host("Total changed $changedCount")
        $hasBeenChanged = $false
        For ($i = 0; $i -lt $changedCount; $i++) {
            $changedFile = $changedFiles[$i]
            if ($changedFile -like "${{parameters.projectPath}}") {
                Write-Host("CHANGED: $changedFile")
                $hasBeenChanged = $true
                if ('${{parameters.breakOnChange}}' -eq 'true') {
                    break
                }
            }
        }
        if ($hasBeenChanged -eq $true) {
            Write-Host "##vso[task.setvariable variable=IsContainFile;isOutput=true]True"
        }
        else {
            Write-Host "##vso[task.setvariable variable=IsContainFile;isOutput=true]False"
        }

Once you have that output variable, it can be used throughout your pipeline as a condition on Stages, Jobs, Tasks and more. Here I use it on a Stage to check before it is run:

- stage: '${{parameters.projectExtension}}Build'
  displayName: 'Nuget Stage'
  condition: eq(variables['${{parameters.name}}.IsContainFile'], 'true')

Auto Purge Azure Container Registry Images

When adding images into the Azure Container Registry you might start getting a backlog of images that need to be cleaned down. Azure CLI has some features, but you might want more…

With the Azure CLI you can use the ‘ACR’ commands, which contain the option of ‘purge’. This can be combined with a tag filter to narrow what images to remove, plus an ‘ago’ property to filter how old the images need to be.

This can then be run to clear down the images in the Registry as per the example below and you can get more information from the GitHub Repository.

acr purge --ago 5d --filter 'myRegistry:.*' --untagged 
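Note that ‘acr purge’ is not a local CLI command but a container command; one way to run it on demand is to send it to the registry with ‘az acr run’. A sketch, assuming a registry named ‘myRegistry’:

```
# Wrap the purge command and run it once against the registry
PURGE_CMD="acr purge --ago 5d --filter 'myRegistry:.*' --untagged"
az acr run --registry myRegistry --cmd "$PURGE_CMD" /dev/null
```

The ‘/dev/null’ argument tells ‘az acr run’ that no source context is needed for the command.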

This can then be combined with the ACR command to set up an ACR Task. This can be a scheduled task run on the ACR to trigger the command, routinely cleaning down the repository. You can see this in the example below and read more detail in the Microsoft Documentation.

$azureContainerRegistryName=""
$PURGE_CMD="acr purge --ago 5d --filter 'myRegistry:.*' --untagged "
az acr task create --name myRegistry-WeeklyPurgeTask --cmd "$PURGE_CMD" --schedule "0 1 * * Sun" --registry $azureContainerRegistryName --context /dev/null

The purge method is really good, but unless you have good tag management, making sure your running images are tagged differently from the older images, it doesn’t work. In my case it would keep clearing out all images, even ones that are in use.

Therefore, I created a PowerShell script to clean down the images by the number of them in each repository. With this we can always be certain there are at least X images left.

To make this more flexible and reusable, it works at the Azure Container Registry level instead of the repository level. First we set the ACR name and the maximum number of images we want left in each repository. With this we can use the Azure CLI to get a list of all the repositories.

$AcrName = "myAcr"
$maxImages = 5
$repositories = (az acr repository list -n $AcrName --output tsv)
foreach ($repository in $repositories) {

}

With this we can loop through each repository to check its image count and remove the older images. To do this we use the CLI to get all the image tags in the repository in date/time order descending, so the newer images come first. This means when we loop through them we can keep a counter until we reach the maximum images variable set earlier. Only once we pass the set number do we start deleting images.

To delete we can call the CLI action ‘az acr repository delete’, which requires the full name of the image, including the repository name.

Below is the full PowerShell example:

$AcrName = "myAcr"
$maxImages = 5
$repositories = (az acr repository list -n $AcrName --output tsv)
foreach ($repository in $repositories) {
        
    Write-Host("repo: $repository")
    $images = az acr repository show-tags -n $AcrName --repository $repository --orderby time_desc  --output tsv
    Write-Host("image: $images")
            
    $imageCount = 0
    foreach ($image in $images) {
        # use -ge so that only the newest $maxImages images are kept
        if ($imageCount -ge $maxImages) {
            $imageFullName = "$repository`:$image"
            Write-Host("image: $imageFullName")
            az acr repository delete -n $AcrName --image $imageFullName
        }
        $imageCount++
    }
}

You could then put this code into a variable like the first example, to use it in a scheduled ACR Task, or create an automated schedule with other technology like Azure DevOps Pipelines, where you can keep it in source control.

Setup Hyper-V Guest for SSH without IP Address

When setting up Hyper-V Guest hosts, I found it a little tricky, with documentation hard to find on how to set them up easily, so I thought I would share the simplest process I found. With this setup you can also SSH into the Guest host even if it does not have an IP address exposed on its network adapter.

To make things even simpler I am using the pre-selected OS versions from the Hyper-V Quick Create options, but the steps should also work on other versions.

Linux Virtual Machine

In the steps below you will create a Linux Virtual Machine (VM) with the version ‘Ubuntu 18.04.3 LTS’

  1. Install and Open Hyper-V.
  2. Click Quick Create from the menu on the right.
  3. Select ‘Ubuntu 18.04.3 LTS‘ from the menu and create it.
  4. Follow all the details from the wizard as requested with your chosen details.
  5. Once completed, start and log in to your machine.
  6. Open the Terminal within the VM.
  7. Run the following commands
    1. Update installs
      sudo apt-get update
    2. Install open ssh server
      sudo apt-get install openssh-server
    3. Install linux azure
      sudo apt-get install linux-azure
    4. Start the services by running the commands below, replacing SERVICE-NAME with each of sshd, ssh and hv-kvp-daemon.service
      sudo systemctl start SERVICE-NAME
      sudo systemctl status SERVICE-NAME
    5. Allow SSH through the firewall
      sudo ufw allow ssh

Windows Virtual Machine

In the steps below you will create a Windows Virtual Machine (VM) with the version ‘Windows 10 dev environment’

  1. Install and Open Hyper-V
  2. Click Quick Create from the menu on the right
  3. Select Windows 10 dev environment from the menu and create it
  4. Follow all the details from the wizard as requested.
  5. Once completed, start and log in to your machine
  6. Run these commands
    1. Install Open SSH
      Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0

SSH Keys

If you would like to log in to your Virtual Machine then you will need to install SSH keys.

You can find out how to generate keys and what keys you need from the SSH website. (https://www.ssh.com/ssh/keygen/)

Here is some more information on where to store the Public Keys once generated.

Public Key Store

On Linux, you can store them in the user’s home directory in .ssh/authorized_keys, for example /home/USERNAME/.ssh/authorized_keys

On Windows, unlike Linux, there are two possible places you may need to add the keys. If you are an admin, add them to C:\ProgramData\ssh\administrators_authorized_keys. If you are not an admin, add them to C:\Users\USERNAME\.ssh\authorized_keys

Check If Admin
  1. Run lusrmgr.msc
  2. Select Groups
  3. Select the Administrators group
  4. Check if you are in the group.

Once these tasks are completed you should be able to SSH into your Virtual Machines via the Hyper-V Console (HVC).

I have written about how to use this in a previous post ‘SSH to Hyper-V Virtual Machine using SSH.NET without IP Address‘. Although this targets the SSH.NET, you can use the commands from it to SSH from the Terminal.

SSH to Hyper-V Virtual Machine using SSH.NET without IP Address

I have used the Dotnet Core Nuget package SSH.NET to SSH into machines a few times; it is a very simple, slick and handy tool to have. However, you cannot SSH into a Virtual Machine (VM) in Hyper-V that easily without some extra fiddling to get an exposed IP address.

With your standard SSH command you can run the simple:

ssh User@Host

This can have many other attributes, but let’s keep it simple.

If your VM has an IP Address assigned to the Network Adapter then this can still be very simple, with using the user for the machine and the IP Address as the host.

However, not every VM in every situation will have an IP address, and therefore you cannot always connect to it like this.

You can, though, if you use the Hyper-V Console CLI (HVC). If installed, it can be located at ‘C:\Windows\System32\hvc.exe‘, and it is normally installed when enabling the Hyper-V feature in Windows. This tool enables you to communicate with your VM via the Hyper-V bus between your local machine and the VM.

To use this tool you can run the same SSH command but with the HVC prefix:

hvc ssh User@Host

However, instead of the host you pass the Hyper-V VM name, which you can get from Hyper-V Manager or with PowerShell in Administrator mode:

Get-VM

This is great to use in the Terminal, but doesn’t let you use standard SSH commands, which the SSH.NET tool relies on. I have not come across a tool to do this via Dotnet Core yet, so I have come up with this solution.

What we can do to accomplish this is port forwarding, where we tell the VM to route traffic from one port on the VM to another port on the local machine.

Below we are telling the VM to push port 22 traffic, which is the SSH standard port, to port 2222 on the local machine with the correct Username and VM Name.

hvc.exe ssh -L 2222:Localhost:22 User@VmName

Once this has been done you can then run the standard SSH command, but with the port parameter and ‘Localhost’ as the Host, the same as you SSH to your own local machine.

ssh user@Localhost -p 2222

To get this working in C#, I would recommend using SSH keys to avoid password prompts, as those would need interactive entry, and then using the PowerShell Nuget package to run the HVC command like below:

$SystemDirectory = [Environment]::SystemDirectory
cd $SystemDirectory
hvc.exe ssh -L 2222:Localhost:22 User@VmName -i "KeyPath" -fN