DotNet User Secrets Feature

A little-known feature of dotnet is User Secrets. It is used to create local App Setting overrides for local usage. This can be a handy and powerful tool for keeping your local setup out of the code you check in.

You can find more details in the Microsoft documentation here: https://docs.microsoft.com/en-us/aspnet/core/security/app-secrets

The goal of this feature is to override your App Settings with local values. For example, if you have a connection string within your JSON, you do not want your local Username/Password stored in there and checked in. You also don’t want to be pulling other people’s settings down and having to keep changing them. Therefore, with this feature you set your values on your local machine in a separate file and they override your App Settings at runtime, rather than overwriting the file, so they work for you but are never checked in.

Within your project location you can run the ‘init’ command, which will create a new node in the project file. This node is called ‘UserSecretsId’ and contains the ID for this project. When the application runs, it uses the ID to match up with where the secrets are stored.
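
For example, running ‘dotnet user-secrets init’ from the project directory adds a node like this to the .csproj (the GUID below is purely illustrative):

<PropertyGroup>
  <UserSecretsId>79a3edd0-2092-40a2-a04d-dcb46d5ca9ed</UserSecretsId>
</PropertyGroup>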

The secrets are stored in the folder ‘C:/Users/[UserName]/AppData/Roaming/Microsoft/UserSecrets’, then in a directory with the ID as the directory name. Within this folder there is a file called ‘secrets.json’ where you store all the secrets in JSON format. You can get more detail on how to format the name properties of your App Settings in the documentation.
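
As an illustration, nested App Settings are flattened with a colon separator in this file, so a hypothetical connection string override in secrets.json could look like:

{
  "ConnectionStrings:Default": "Server=localhost;User Id=me;Password=local-only"
}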

When you run the ‘init’ command it doesn’t create this directory and file for you, so I whipped together a script below to generate the User Secrets ID and also create the required directory/file. Before I talk about that, I will also show how to use User Secrets with Dotnet Core Console Apps.

This could be something I have done wrong, but when I create a Web Application and use the feature it just works with no extra effort. However, when I created a Console App it did not work out of the box. I found I needed to do a few things to get it working, which Stafford Williams talks about here

One part he missed was, when using Dependency Injection, telling the ConfigurationBuilder where to find the User Secrets, as per:

var builder = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
    .AddJsonFile(envJson, optional: false, reloadOnChange: true)
    .AddEnvironmentVariables()
    .AddUserSecrets<Program>();
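
Once built, the User Secrets values take precedence over the JSON files for matching keys. A minimal sketch of reading one, assuming the hypothetical key from the secrets.json example above:

var configuration = builder.Build();
// The User Secrets value wins over appsettings.json for the same key
var connectionString = configuration["ConnectionStrings:Default"];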

Create User Secrets

The code below accepts either a single project path or a directory containing many projects. It will find the project files and check whether they have the User Secrets ID in the project.

If it doesn’t then it will go to the directory, run the ‘init’ command and then get the ID as well.

From there it can check/create the folders and files for the User Secrets.

Param (
    [string]$projectPath
)
$projects = $null
$filesCount = 0
if ($projectPath.EndsWith('.csproj')) {
    $projects = Get-ChildItem -Path $projectPath
    $filesCount = 1
}
else {
    # Ensure the path ends with a separator so the wildcard search below works
    if (-not ($projectPath.EndsWith('/') -or $projectPath.EndsWith('\'))) {
        $projectPath += "/"
    }
    $projects = Get-ChildItem -Path "$projectPath*.csproj" -Recurse -Force
    $filesCount = $projects.Length
}
Write-Host("Files Found $filesCount")
if ($filesCount -gt 0) {
    $userSecretsPath = "$ENV:UserProfile/AppData/Roaming/Microsoft/UserSecrets"
    if (!(Test-Path $userSecretsPath)) { 
        Write-Host("Create User Secrets Path")
        New-Item -ItemType directory -Path $userSecretsPath
    }
    $currentDir = [System.IO.Path]::GetDirectoryName($myInvocation.MyCommand.Definition)
    foreach ($project in $projects) {
        Write-Host(" ")
        Write-Host("Current Project $project")
        [xml]$fileContents = Get-Content -Path $project
        if ($null -eq $fileContents.Project.PropertyGroup.UserSecretsId) { 
            Write-Host("User Secret ID node not found in project file")
            Set-Location $project.DirectoryName
            dotnet user-secrets init
            Set-Location $currentDir
            Write-Host("User Secret Create")
            [xml]$fileContents = Get-Content -Path $project
        }
        $userSecretId = $fileContents.Project.PropertyGroup.UserSecretsId
        Write-Host("User Secret ID $userSecretId")
        if ($userSecretId -ne ""){
            $userSecretPath = "$userSecretsPath/$userSecretId"
            if (!(Test-Path $userSecretPath)) { 
                New-Item -ItemType directory -Path $userSecretPath
                Write-Host("User Secret ID $userSecretId Path Created")
            }
            $secretFileName = "secrets.json"
            $secretPath = "$userSecretsPath/$userSecretId/$secretFileName"
            if (!(Test-Path $secretPath)) {   
                New-Item -path $userSecretPath -name $secretFileName -type "file" -value "{}"
                Write-Host("User Secret ID $userSecretId secrets file Created")
            }
            Write-Host("User Secrets path $secretPath")
        }
    }
}
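
To run it, save the script (the filename below is just my choice) and pass either a single project file or a root directory; the paths here are hypothetical:

./Create-UserSecrets.ps1 -projectPath ./src/MyApp/MyApp.csproj
./Create-UserSecrets.ps1 -projectPath ./src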

Microsoft Upgrade Assistant

Dotnet Core has been out for a long time now, and everyone jumps on the cutting edge of the SDK as soon as it is released. However, there has been some assistance missing in helping projects, especially .Net Framework projects, upgrade to the next version. Now, in 2021, they have brought the tooling out! And by they I mean Microsoft.

The funny part of this is that although Microsoft built Dotnet Core, with the community’s help, it was AWS that came to the rescue of developers first. In late 2020 AWS released a porting assistance tool for moving from .Net Framework to Dotnet Core.

This tool brings a nice UI to download and review the actions required to upgrade, but being more of a developer I prefer the command line. The report is well detailed and does a good job of bringing the old technology up to the latest version. I think it is great that this was around to help companies get onto Dotnet Core. However, it doesn’t assist with a change in Dotnet Core SDK versioning; although that is simple, having something that makes sure your app is properly upgraded without any all-nighters is nice to have.

There is also a document on migration in Dotnet Core from Microsoft that goes through each of the steps required to action the upgrade. I did follow this guide and it was very simple, but it doesn’t cater for more complex and large solutions. It is also specifically for a Dotnet Core migration from 3.1 to 5, while some developers might be coming from other versions. View the documentation here.

Finally we come to the new toy on the block, the Microsoft Upgrade Assistant. The reason this is such a good tool is that it covers where the other two methods fall short. It is made to run with the Dotnet Core SDK, so it is something you are, or will get to be, familiar with. It can assist in upgrading both .Net Framework and Dotnet Core projects or solutions. It is very well automated, doing the work for you with a simple numbering system to choose which steps to run, skip or explain.

Microsoft Dotnet Core Upgrading Assistance Tool

The guide to installation and use is very simple to follow, as it is a few dotnet tool installations and then running the command.
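
As a rough sketch of what that looks like (the exact commands can differ between versions of the tool, so check its GitHub repository for the current syntax):

dotnet tool install -g upgrade-assistant
upgrade-assistant upgrade ./MyApp.csproj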

A handy part is that it offers the ability to back up your existing project, so you will not lose your work; but you should also be using source control like Git to manage your project anyway, so you will see the changes made.

If you get into any trouble with the installation or the upgrade, like many of Microsoft’s projects it is open source on GitHub. I found one or two issues installing and got the help I needed extremely fast.

The reason these tools and methods are so important now is that the upgrade cadence is speeding up. Before, you would have a set version with some patches, but you would not look to upgrade as often. Now, with Dotnet Core and the general speed of development, the changes bring a lot of benefits with little upgrade impact, provided you keep up with the latest. Already .Net 5 has been released production ready and .Net 6 previews are coming out, so you can see the speed of change.

The upgrades are getting as simple as updating a Nuget package, so why would you not? You will of course still need to test and validate the upgrade, and you are restricted by the compatibility of the other resources you use, like external Nuget packages.

Create Identity in Google Cloud Platform and GSuite

Compared to some other cloud providers, creating an identity via code in GCP is a little fragmented if you’re using GSuite for your identity storage. The Google Cloud Platform holds your users’ identity references and permissions, while the other system, GSuite, holds the security of the users’ authentication. This can also make the documentation feel a little fragmented and not so easy to follow. Hence this post sticking them together, showing how I used C# Dotnet Core to create a Google Cloud Platform Identity using their SDK.

This part is standard for any SDK access to GCP, which is to have a service account for authentication. For this you will need to create a Service Account in GCP, which needs to be associated with a project. You can create it against the project where you are deploying or, to keep things separate, as I would recommend, you can create a Service Management Project. This is just a standard project, but you can use it to keep all the SDK activity on this project while the usage activity happens on the other project.

Create a Project

  1. Go to the Manage resources page in the Cloud Console.
  2. On the Select organization drop-down list at the top of the page, select the organization in which you want to create a project. If you are a free trial user, skip this step, as this list does not appear.
  3. Click Create Project.
  4. In the New Project window that appears, enter a project name and select a billing account as applicable. A project name can contain only letters, numbers, single quotes, hyphens, spaces, or exclamation points, and must be between 4 and 30 characters.
  5. Enter the parent organization or folder in the Location box. That resource will be the hierarchical parent of the new project.
  6. When you’re finished entering new project details, click Create.

Reference: https://cloud.google.com/resource-manager/docs/creating-managing-projects#console
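
If you prefer the command line over the console, the gcloud CLI can create the project too; a sketch with placeholder IDs:

gcloud projects create my-service-mgmt-project --organization=123456789012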

Create a Service Account

  1. In the Cloud Console, go to the Service accounts page.
  2. Select a project (your new Service Management Project).
  3. Click Create service account.
  4. Enter a service account name to display in the Cloud Console.
    The Cloud Console generates a service account ID based on this name. Edit the ID if necessary. You cannot change the ID later.
  5. Optional: Enter a description of the service account.
  6. If you do not want to set access controls now, click Done to finish creating the service account.
    To set access controls now, click Create and continue to the next step.
  7. Optional: Choose one or more IAM roles to grant to the service account on the project.
  8. When you are done adding roles, click Continue.
  9. Optional: In the Service account users role field, add members that can impersonate the service account.
  10. Optional: In the Service account admins role field, add members that can manage the service account.
  11. Click Done to finish creating the service account.

Reference: https://cloud.google.com/iam/docs/creating-managing-service-accounts#iam-service-accounts-create-console
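
Again, the gcloud equivalent is a one-liner (the names are placeholders):

gcloud iam service-accounts create sdk-identity --display-name="SDK Identity" --project=my-service-mgmt-project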

You could then get more specific with the Identity and Access Management (IAM) permissions, but to keep it simple you can just apply the ‘Owner’ and ‘Project IAM Admin’ roles to the Service Account on the new Service Management Project. This will give the Service Account access to create the identities; for more detail on the permissions, you can use this link to look them up: https://cloud.google.com/iam/docs/permissions-reference
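
As a sketch, granting those two roles from the command line would look like the below, reusing the placeholder names from above:

gcloud projects add-iam-policy-binding my-service-mgmt-project --member="serviceAccount:sdk-identity@my-service-mgmt-project.iam.gserviceaccount.com" --role="roles/owner"
gcloud projects add-iam-policy-binding my-service-mgmt-project --member="serviceAccount:sdk-identity@my-service-mgmt-project.iam.gserviceaccount.com" --role="roles/resourcemanager.projectIamAdmin"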

Next we need the Service Account to have access to create the identities in GSuite. The steps below get the Service Account in GCP ready to be given access in the Admin portal of GSuite.

  1. Locate the newly-created service account in the table. Under Actions, click the three-dot More menu at the end, then Edit.
  2. In the service account details, click the down arrow on Show domain-wide delegation, then ensure the Enable G Suite Domain-wide Delegation checkbox is checked.
  3. If you haven’t yet configured your app’s OAuth consent screen, you must do so before you can enable domain-wide delegation. Follow the on-screen instructions to configure the OAuth consent screen, then repeat the above steps and re-check the checkbox.
  4. Click Save to update the service account, and return to the table of service accounts. A new column, Domain-wide delegation, can be seen. Click View Client ID, to obtain and make a note of the client ID.

Reference: https://developers.google.com/admin-sdk/directory/v1/guides/delegation#create_the_service_account_and_credentials

Now we connect these together by giving the Service Account access in the GSuite Admin Portal.

  1. From your Google Workspace domain’s Admin console, go to Main menu > Security > API controls.
  2. In the Domain wide delegation pane, select Manage Domain Wide Delegation.
  3. Click Add new.
  4. In the Client ID field, enter the client ID obtained from the service account creation steps above.
  5. In the OAuth Scopes field, enter a comma-delimited list of the scopes required for your application (for a list of possible scopes, see Authorize requests).
    For example, if you require domain-wide access to Users and Groups enter: https://www.googleapis.com/auth/admin.directory.user, https://www.googleapis.com/auth/admin.directory.group
  6. Click Authorize.

Reference: https://developers.google.com/admin-sdk/directory/v1/guides/delegation#delegate_domain-wide_authority_to_your_service_account

At this point our Service Account has access to the GCP Account/Project and also the access needed in GSuite to create the identities. Therefore, we can start getting into the code to create these accounts.

To start with the SDK we need the Service Account’s JSON Key, which you can get by:

  1. In the Cloud Console, go to the Service Accounts page.
  2. Click Select a project, choose a project, and click Open.
  3. Find the row of the service account that you want to create a key for. In that row, click the More button, and then click Create key.
  4. Select a Key type and click Create.

Reference: https://cloud.google.com/iam/docs/creating-managing-service-account-keys#iam-service-account-keys-create-console
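
Or from the command line (the key file name and account are placeholders):

gcloud iam service-accounts keys create gcp-auth.json --iam-account=sdk-identity@my-service-mgmt-project.iam.gserviceaccount.com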

Once you have downloaded the JSON file, we can move on to the authentication in C#.
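
You will need the Google.Apis.Auth Nuget package in your project, plus Google.Apis.Admin.Directory.directory_v1 later for the DirectoryService; adding them from the command line is simply:

dotnet add package Google.Apis.Auth
dotnet add package Google.Apis.Admin.Directory.directory_v1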

There are multiple different methods depending on how you are storing your JSON Key, but for my example we are injecting the JSON string straight into the method that gives us the GoogleCredential we need. The method we call is:

GoogleCredential.FromJson(gcpAuthenticationJson);

With gcpAuthenticationJson being the JSON string from the downloaded file. We also need to add scopes to the access request, which we can chain on as below, with these scopes required:

GoogleCredential.FromJson(gcpAuthenticationJson)
    .CreateScoped(new List<string>
    {
        "https://www.googleapis.com/auth/admin.directory.user",
        "https://www.googleapis.com/auth/admin.directory.group",
        "https://www.googleapis.com/auth/admin.directory.user.security"
    });

Now although we have given the Service Account all the permissions it requires to do the job, it needs to be executed by a GSuite Admin. We of course cannot have the admin logging in every time, so we need the code to act as the admin. We can do this by adding an additional call to the chain:

GoogleCredential.FromJson(gcpAuthenticationJson)
    .CreateScoped(new List<string>
    {
        "https://www.googleapis.com/auth/admin.directory.user",
        "https://www.googleapis.com/auth/admin.directory.group",
        "https://www.googleapis.com/auth/admin.directory.user.security"
    })
    .CreateWithUser(adminEmail);

We can of course make this a little more flexible so it can be reused for other authentications, so this is the method I would recommend:

/// <summary>
/// Get the GCP Credential via the Service Account
/// https://cloud.google.com/docs/authentication/production
/// </summary>
/// <param name="authJson">Downloaded Authentication JSON</param>
/// <param name="apiScopes">Custom API Scopes</param>
/// <param name="adminEmail">User Email Address to Impersonate</param>
/// <returns>GCP Credentials</returns>
public GoogleCredential GetGcpCredential(string authJson, List<string> apiScopes = null, string adminEmail = "")
{
    var googleCredential = GoogleCredential.FromJson(authJson)
        .CreateScoped(apiScopes ?? new List<string>
        {
            "https://www.googleapis.com/auth/cloud-platform"
        });

    if (!string.IsNullOrEmpty(adminEmail))
        googleCredential = googleCredential.CreateWithUser(adminEmail);

    return googleCredential;
}

From this we can then create users with the SDK using this simple bit of code:

var directoryService = new DirectoryService(new BaseClientService.Initializer
{
    HttpClientInitializer = GetGcpCredential(authJson, apiScopes, userEmail)
});
try
{
    var request = directoryService.Users.Insert(userData);
    return await request.ExecuteAsync();
}
finally
{
    directoryService.Dispose();
}
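
The userData passed to Users.Insert is the Directory API’s User object (from Google.Apis.Admin.Directory.directory_v1.Data); a minimal sketch of building one, with purely hypothetical values:

var userData = new User
{
    PrimaryEmail = "jane.doe@example.com", // hypothetical identity in your GSuite domain
    Name = new UserName
    {
        GivenName = "Jane",
        FamilyName = "Doe"
    },
    Password = "S0me-Initial-Pass!",       // placeholder only
    ChangePasswordAtNextLogin = true       // force a reset on first login
};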

Setting Bearer tokens in PowerShell

This is a quick post on how to set a Bearer Token using PowerShell for Invoke-RestMethod and Invoke-WebRequest, as it was something I could not find clearly explained in any post.

It is simply set in the headers of the request as below, where ‘$bearer_token’ is the variable holding the token. I have put it in the ‘$headers’ variable, which is then used in the Invoke-RestMethod.

$headers = @{Authorization = "Bearer $bearer_token"}
$response = Invoke-RestMethod -ContentType "$contentType" -Uri $url -Method $method -Headers $headers -UseBasicParsing
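
The same headers hashtable works with Invoke-WebRequest as well, for example:

$response = Invoke-WebRequest -Uri $url -Method $method -Headers $headers -UseBasicParsing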

AppDynamics grouping Database Custom Metric Queries

When you create Custom Database Metrics in AppDynamics, your first thought is to create a new row for each metric, but if you have a lot to report on this can become messy. Not only that, but in the metric view you will have a very long list of reports to go through. When we had a consultant down at work, we were shown how to group collections of metrics in one query, which then shows in the metric view as a sub folder. We could not find this tactic anywhere on the internet, so I thought I would share this very handy insight for AppDynamics.

Your standard method to add custom queries and metrics would be to go to the configuration view below in AppDynamics and add a new query for each of the metrics you wish to report on.

AppDynamics Databases

You can then go to the metric view and see the data coming in like below.

AppDynamics Metric Browser

However, like I said above, this list can grow fast, plus by default you are limited to only 20 of these queries, which can disappear even faster. This method therefore gives you more bang for your buck on custom metrics, plus better organisation of your data.

Instead of adding each query separately, what we can do is create a grouping of queries in sub folders of the ‘Custom Metric’ folder, to look like this:

Before:
  • Custom Metric
    • Queue 1
    • Queue 2
    • Queue 3

After:
  • Custom Metric
    • MessagingQueueMonitoring
      • Queue 1
      • Queue 2
      • Queue 3

As we completed this at my company in Microsoft SQL, I will use that as the example, but I am confident it can be translated to other query languages with the same outcome and some slight changes to syntax.

Say we start with the 3 queries that we would want to monitor and we will keep them simple:

SELECT count(id) FROM MessageQueueOne
SELECT count(id) FROM MessageQueueTwo
SELECT count(id) FROM MessageQueueThree

To create the top-level folder, you simply create a single query item called ‘MessagingQueueMonitoring’. In this new custom metric query you need to add the above 3 SQL statements, but we need them to be a single query instead of 3. For this to work we will use the SQL operator ‘UNION ALL’ to join them together:

SELECT count(id) FROM MessageQueueOne
UNION ALL
SELECT count(id) FROM MessageQueueTwo
UNION ALL
SELECT count(id) FROM MessageQueueThree

This will now produce one table with 3 rows and their values, but for AppDynamics to recognise these in the metrics view we need to tell it what each of these rows means. To tell AppDynamics what the nodes underneath are called, you add a column to each query for the name; this column should be called ‘Type’. Then, for AppDynamics to know which column holds the value, you call that column ‘Total’.

You should end up with a query like below:

SELECT 'Message Queue One' as Type, count(id) as Total FROM MessageQueueOne
UNION ALL
SELECT 'Message Queue Two' as Type, count(id) as Total FROM MessageQueueTwo
UNION ALL
SELECT 'Message Queue Three' as Type, count(id) as Total FROM MessageQueueThree

Then this should result in a table like this:

Type                   Total
Message Queue One      4
Message Queue Two      2
Message Queue Three    56