DotNet User Secrets Feature

A little-known feature of dotnet is User Secrets. It is used to create local App Settings overrides for values you only want on your own machine. This can be a handy and powerful tool for keeping your local setup out of the code you check in.

You can find more details in the Microsoft documentation here: https://docs.microsoft.com/en-us/aspnet/core/security/app-secrets

The goal of this feature is to override your App Settings with local values. For example, if you have a connection string in your JSON, you do not want your local username/password stored in there and checked in. You also don’t want to be pulling other people’s settings down and having to keep changing them. With this feature you set your values on your local machine in a separate file; they override the App Settings at runtime rather than overwriting the file, so they work for you but never get checked in.

Within your project location you can run the ‘init’ command, which creates a new node in the project file called ‘UserSecretsId’, containing the ID for this project. When the application runs, this ID is used to match up with where the secrets are stored.

The secrets are stored under the folder ‘C:/Users/[UserName]/AppData/Roaming/Microsoft/UserSecrets’, in a directory with the ID as the directory name. Within this folder there is a file called ‘secrets.json’ where you store all the secrets in JSON format. You can get more detail on how to format the property names of your App Settings in the documentation.
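As a rough sketch, once a project has been through ‘init’ you can also manage values from the dotnet CLI instead of editing secrets.json by hand; the key names are just your App Settings paths flattened with ‘:’ separators (the connection string name and value below are made up purely for illustration):

# Run from the project directory; 'init' adds the UserSecretsId node to the .csproj
dotnet user-secrets init
# Set a value (hypothetical key/value) - stored in that project's secrets.json
dotnet user-secrets set "ConnectionStrings:Default" "Server=localhost;User Id=me;Password=local-only"
# List everything currently stored for the project
dotnet user-secrets list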

When you run the ‘init’ command it doesn’t create this directory and file for you, so I whipped together a script below to generate the User Secrets ID and also create the required directory/file. Before I talk about that, I will also show how to use User Secrets with Dotnet Core Console Apps.

This could be something I have done wrong, but when I create a Web Application and use the feature it just works with no extra effort. However, when I created a Console App it did not just work out of the box. I found I needed to do a few things to get it working, which Stafford Williams talks about here.

One part he missed was that, when using Dependency Injection, you need to tell the ConfigurationBuilder where to find the User Secrets ID, as per:

var builder = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
    .AddJsonFile(envJson, optional: false, reloadOnChange: true)
    .AddEnvironmentVariables()
    .AddUserSecrets<Program>();

Create User Secrets

The code below accepts either a single project path or a directory containing many projects. It finds the project files and checks whether they have a User Secrets ID in the project file.

If it doesn’t, it will go to that project’s directory, run the ‘init’ command and then read back the generated ID.

From there it can check/create the folders and files for the User Secrets.

Param (
    [string]$projectPath
)
$projects;
$filesCount = 0
if ($projectPath.EndsWith('.csproj')) {
    $projects = Get-ChildItem -Path $projectPath
    $filesCount = 1
}
else {
    
    if (($projectPath.EndsWith('/') -eq $false) -and ($projectPath.EndsWith('\') -eq $false)) {
        $projectPath += "/";
    }
    $projects = Get-ChildItem -Path "$projectPath*.csproj" -Recurse -Force
    $filesCount = $projects.Length
}
Write-Host("Files Found $filesCount")
if ($filesCount -gt 0) {
    $userSecretsPath = "$ENV:UserProfile/AppData/Roaming/Microsoft/UserSecrets"
    if (!(Test-Path $userSecretsPath)) { 
        Write-Host("Create User Secrets Path")
        New-Item -ItemType directory -Path $userSecretsPath
    }
    $currentDir = [System.IO.Path]::GetDirectoryName($myInvocation.MyCommand.Definition)
    foreach ($project in $projects) {
        Write-Host(" ")
        Write-Host("Current Project $project")
        [xml]$fileContents = Get-Content -Path $project
        if ($null -eq $fileContents.Project.PropertyGroup.UserSecretsId) { 
            Write-Host("User Secret ID node not found in project file")
            Set-Location $project.DirectoryName
            dotnet user-secrets init
            Set-Location $currentDir
            Write-Host("User Secret Create")
            [xml]$fileContents = Get-Content -Path $project
        }
        $userSecretId = $fileContents.Project.PropertyGroup.UserSecretsId
        Write-Host("User Secret ID $userSecretId")
        if ($userSecretId -ne ""){
            $userSecretPath = "$userSecretsPath/$userSecretId"
            if (!(Test-Path $userSecretPath)) { 
                New-Item -ItemType directory -Path $userSecretPath
                Write-Host("User Secret ID $userSecretId Path Created")
            }
            $secretFileName = "secrets.json"
            $secretPath = "$userSecretsPath/$userSecretId/$secretFileName"
            if (!(Test-Path $secretPath)) {   
                New-Item -path $userSecretPath -name $secretFileName -type "file" -value "{}"
                Write-Host("User Secret ID $userSecretId secrets file Created")
            }
            Write-Host("User Secrets path $secretPath")
        }
    }
}

Automatic Change Checking on Pipeline Stages

Azure DevOps has the ability to add multiple stage approval styles such as human intervention, Azure Monitor checks and schedules. However, there is no built-in ability to evaluate whether the source has changed before triggering a Stage. This can be done at the Pipeline level, but not at the Stage level, where it can sometimes be helpful. I have a PowerShell solution for how this can be done.

The problem to solve came when I had a Pipeline that built multiple Nuget Packages. Without a human intervention approval on each of the stages that publish a Nuget Package, they would publish each time even if there were no changes. This would then cause the version number in the Artifacts to keep increasing for no reason.

Therefore, we wanted a method to automatically check whether there has been a change in a particular source location before allowing it to publish the Package. This is done using PowerShell to get the branch changes through Git. It can get the names of all the changed files, including their full paths, which we can then check against the path we care about.

The code below gets the diff files, looping through each of them and checking whether it matches the provided path. If it does, it sets a variable to say there have been changes. I added an option to break on the first match, but you could let it keep going and log all the files changed. In my case, I just wanted to know whether or not there had been a change.

$files = $(git diff HEAD HEAD~ --name-only)
$changedFiles = $files -split ' '
$changedCount = $changedFiles.Length
Write-Host("Total changed $changedCount")
$hasBeenChanged = $false
For ($i = 0; $i -lt $changedCount; $i++) {
    $changedFile = $changedFiles[$i]
    if ($changedFile -like "${{parameters.projectPath}}") {
        Write-Host("CHANGED: $changedFile")
        $hasBeenChanged = $true
        if ('${{parameters.breakOnChange}}' -eq 'true') {
            break
        }
    }
}

You can then set the outcome as an output variable of the task for use later, as per the template below:

parameters:
  - name: 'projectPath'
    type: string
  - name: 'name'
    type: string
  - name: 'breakOnChange'
    type: string
    default: 'true'
steps:
  - task: PowerShell@2
    name: ${{parameters.name}}
    displayName: 'Code change check for ${{parameters.name}}'
    inputs:
      targetType: 'inline'
      script: |
        $files = $(git diff HEAD HEAD~ --name-only)
        $changedFiles = $files -split ' '
        $changedCount = $changedFiles.Length
        Write-Host("Total changed $changedCount")
        $hasBeenChanged = $false
        For ($i = 0; $i -lt $changedCount; $i++) {
            $changedFile = $changedFiles[$i]
            if ($changedFile -like "${{parameters.projectPath}}") {
                Write-Host("CHANGED: $changedFile")
                $hasBeenChanged = $true
                if ('${{parameters.breakOnChange}}' -eq 'true') {
                    break
                }
            }
        }
        if ($hasBeenChanged -eq $true) {
            Write-Host "##vso[task.setvariable variable=IsContainFile;isOutput=true]True"
        }
        else {
            Write-Host "##vso[task.setvariable variable=IsContainFile;isOutput=true]False"
        }

Once you have that output variable, it can be used throughout your pipeline as a condition on Stages, Jobs and Tasks, plus more. Here I use it on a Stage to check before it runs:

- stage: '${{parameters.projectExtension}}Build'
  displayName: 'Nuget Stage'
  condition: eq(variables['${{parameters.name}}.IsContainFile'], 'true')

Auto Purge Azure Container Registry Images

When adding images into the Azure Container Registry you might start getting a backlog of images that needs cleaning up. The Azure CLI has some features for this, but you might want more…

With the Azure CLI you can use the ‘acr’ commands, which include the ‘purge’ command. This can be combined with a tag filter to narrow down which images to remove, plus an ‘ago’ property to filter how old the images need to be.

This can then be run to clear down the images in the Registry as per the example below and you can get more information from the GitHub Repository.

acr purge --ago 5d --filter 'myRegistry:.*' --untagged 
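If I remember correctly, the purge command also supports a dry run flag, which is handy for previewing what would be removed before actually deleting anything:

acr purge --ago 5d --filter 'myRegistry:.*' --untagged --dry-run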

This can then be combined with the ‘az acr task create’ command to set up an ACR Task. This is a scheduled task run on the ACR to trigger the command, routinely cleaning down the registry. You can see the example below and read more detail in the Microsoft documentation.

$azureContainerRegistryName = ""
$PURGE_CMD = "acr purge --ago 5d --filter 'myRegistry:.*' --untagged"
az acr task create --name myRegistry-WeeklyPurgeTask --cmd "$PURGE_CMD" --schedule "0 1 * * Sun" --registry $azureContainerRegistryName --context /dev/null

The purge method is really good, but unless you have good tag management, making sure your running images are tagged differently to the older ones, it doesn’t work. In my case it would keep clearing out all images, even ones that are in use.

Therefore, I created a PowerShell script to clean the images based on how many there are in each repository. With this we can always be certain there are at least X images left.

To make this more flexible and reusable, it works at the Azure Container Registry level instead of on a single repository. First we set the ACR name and the maximum number of images we want left in each repository. With this we can use the Azure CLI to get a list of all the repositories.

$AcrName = "myAcr"
$maxImages = 5
$repositories = (az acr repository list -n $AcrName --output tsv)
foreach ($repository in $repositories) {

}

With this we can loop through each repository to check its image count and remove the excess images. To do this we use the CLI to get all the image tags in the repository in descending date/time order, so our newer images come first. As we loop through them we keep a counter, and only once we pass the maximum images variable set earlier do we start deleting images.

To delete we can call the CLI action ‘az acr repository delete’, which requires the full name of the image, including the repository name.

Below is the full PowerShell example:

$AcrName = "myAcr"
$maxImages = 5
$repositories = (az acr repository list -n $AcrName --output tsv)
foreach ($repository in $repositories) {
        
    Write-Host("repo: $repository")
    $images = az acr repository show-tags -n $AcrName --repository $repository --orderby time_desc  --output tsv
    Write-Host("image: $images")
            
    $imageCount = 0
    foreach ($image in $images) {
        if ($imageCount -ge $maxImages) {
            $imageFullName = "$repository`:$image"
            Write-Host("deleting image: $imageFullName")
            az acr repository delete -n $AcrName --image $imageFullName --yes
        }
        $imageCount++
    }
}

You could then put this code into a variable like the first example and run it as a scheduled ACR Task, or just create an automated schedule with other technology like Azure DevOps Pipelines, where you can also keep it in source control.

Setup Hyper-V Guest for SSH without IP Address

When setting up Hyper-V guest hosts, I found it a little tricky and hard to find documentation on how to set these up easily, so I thought I would share the simplest process I found. With this setup you can also SSH into the guest host even if you do not have an IP address exposed on the guest network adaptor.

To make things even simpler I am using the pre-selected OS versions from the Hyper-V Quick Create options, but the steps should also work on other versions.

Linux Virtual Machine

In the steps below you will create a Linux Virtual Machine (VM) with the version ‘Ubuntu 18.04.3 LTS’.

  1. Install and Open Hyper-V.
  2. Click Quick Create from the menu on the right.
  3. Select ‘Ubuntu 18.04.3 LTS‘ from the menu and create it.
  4. Follow all the details from the wizard as requested with your chosen details.
  5. Once completed, start and login to your machine.
  6. Open the Terminal within the VM.
  7. Run the following commands
    1. Update installs
      sudo apt-get update
    2. Install open ssh server
      sudo apt-get install openssh-server
    3. Install linux azure
      sudo apt-get install linux-azure
    4. Start the services by running the commands below, replacing SERVICE-NAME with each of: sshd, ssh, hv-kvp-daemon.service
      sudo systemctl start SERVICE-NAME
      sudo systemctl status SERVICE-NAME
    5. Allow SSH through the firewall
      sudo ufw allow ssh

Windows Virtual Machine

In the steps below you will create a Windows Virtual Machine (VM) with the version ‘Windows 10 dev environment’.

  1. Install and Open Hyper-V
  2. Click Quick Create from the menu on the right
  3. Select Windows 10 dev environment from the menu and create it
  4. Follow all the details from the wizard as requested.
  5. Once completed start and login to your machine
  6. Run these commands
    1. Install the OpenSSH client (see the note after these steps)
      Add-WindowsCapability -Online -Name OpenSSH.Client~~~~0.0.1.0
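A note on the step above: that command installs the OpenSSH client. To accept inbound SSH connections on the Windows guest, you will most likely also need the OpenSSH Server capability installed and the sshd service running. A rough sketch, run from an elevated PowerShell inside the VM:

# Install the OpenSSH Server capability, then start sshd and set it to start automatically
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
Start-Service sshd
Set-Service -Name sshd -StartupType 'Automatic'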

SSH Keys

If you would like to log in to your Virtual Machine with keys, then you will need to set up the SSH keys.

You can find out how to generate keys and what keys you need from the SSH website. (https://www.ssh.com/ssh/keygen/)

Here is some more information on where to store the Public Keys once generated.

Public Key Store

On Linux, you can store them in the user’s home directory in .ssh/authorized_keys, for example /home/USERNAME/.ssh/authorized_keys

On Windows, unlike Linux, there are one of two places you will need to add the keys. If you are an admin, add it to C:\ProgramData\ssh\administrators_authorized_keys. If you are not an admin, add it to C:\Users\USERNAME\.ssh\authorized_keys.

Check If Admin
  1. Run lusrmgr.msc
  2. Select Groups
  3. Select Administrators
  4. Check if you are in the group.

Once these tasks are completed you should be able to SSH into your Virtual Machines via the Hyper-V Console(HVC).

I have written about how to use this in a previous post ‘SSH to Hyper-V Virtual Machine using SSH.NET without IP Address‘. Although that post targets SSH.NET, you can use the commands from it to SSH from the terminal.
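For reference, and assuming a guest named ‘MyVM’ with a user called ‘username’ (both made-up names here), the hvc command to connect over the VM bus looks something like this:

hvc ssh username@MyVM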

Microsoft Upgrade Assistant

Dotnet Core has been out for a long time now and everyone jumps on the cutting edge of the SDK technology when it is released. However, there has been some assistance missing in helping projects, especially .Net Framework projects, upgrade to the next version. Now, in 2021, they have brought the tooling out! And by they, I mean Microsoft.

The funny part of this is that although Microsoft built Dotnet Core, with the community’s help, it was AWS that came to the rescue of developers first. In late 2020 AWS released a porting assistant tool for moving from .Net Framework to Dotnet Core.

This tool brings a nice UI to download and review the actions required to upgrade, though being more of a command line developer I prefer the CLI. The report is well detailed and does a good job of bringing the old technology up to the latest version. I think it is great that this was around to help companies get onto Dotnet Core. However, it doesn’t assist with changing between Dotnet Core SDK versions, which, although simple, is somewhere a tool that makes sure your app is properly upgraded without an all-nighter is nice to have.

There is also a migration document for Dotnet Core from Microsoft that goes through each of the steps required to action the upgrade. I followed this guide and it was very simple, but it doesn’t cater for more complex and large solutions. It is also specifically for a Dotnet Core migration from 3.1 to 5, whereas some developers might be coming from other versions. View the documentation here.

Finally we come to the new toy on the block, the Microsoft Upgrade Assistant. The reason this is such a good tool is that it covers where the other two methods fall short. It is made to run with the Dotnet Core SDK, so it is something you are, or will become, familiar with. It can assist in upgrading both .Net Framework and Dotnet Core projects or solutions. It is well automated, doing the work for you with a simple numbering system to choose which steps to run, skip or explain.

Microsoft Dotnet Core Upgrading Assistance Tool

The guide to installation and use is very simple to follow, as it is just a couple of dotnet tool installations and then running the command.
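As a rough sketch of what that looked like at the time of writing (check the GitHub repository for the current package name and options; ‘MySolution.sln’ below is a made-up path), the flow is along these lines:

# Install the Upgrade Assistant as a global dotnet tool
dotnet tool install -g upgrade-assistant
# Run the interactive upgrade against a solution or project
upgrade-assistant upgrade ./MySolution.sln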

A handy part is that it offers the ability to back up your existing project, so you will not lose your work, but you should also be using source control like Git to manage your project anyway, so you will see the changes made.

If you get into any trouble with the installation or the upgrade, then, like many Microsoft projects, it is open source on GitHub. I found one or two issues installing and got the help I needed extremely fast.

The reason these tools and methods are so important now is that the upgrade cadence is speeding up. Before, you would have a set version with some patches and would not look to upgrade as often. Now, with Dotnet Core and the general speed of development, the changes bring a lot of benefits with little impact to upgrade, that is if you keep up with the latest. Already they have released .Net 5 as production ready and are releasing .Net 6 previews, so you can see the speed of change.

The upgrades are getting as simple as updating a Nuget package, so why would you not? You will of course still need to test and validate the upgrade, and you are restricted by the compatibility of other resources you use, such as external Nuget Packages.