Deleting a Recovery Services Vault Backup Server (MABS)

This is one of those quick shares that will hopefully help the world of Azure developers just a little bit.

While doing some Azure development using the Microsoft Azure REST API, I got to the point of working with the Azure Recovery Services Vault. I was trying to delete a Backup Server agent using the API, which, after a bit of a hunt, I found is actually handled through the 'Registered Identities' endpoint.

This of course only has one endpoint to call, so it should be just that easy…
However, when calling this API I would just get the following:

{
  "error": {
    "code": "ServiceContainerNotEmptyWithBackendMessage",
    "message": "There are backup items associated with the server. Stop protection and delete backups for each of the backup items to delete the server's registration. If the issue persists, contact support. Refer to https://aka.ms/DeleteBackupItems for more info.",
    "target": null,
    "details": null,
    "innerError": null
  }
}

This was odd, as I couldn't find anything to suggest a reason for the error. After investigating the network traffic while deleting the resource through the Azure Portal, I found a slight difference that also fixed the issue: the version of the API.

In the documentation it is stated as '2016-06-01', but the network traffic used version '2019-05-13'. If you change:

DELETE https://management.azure.com/Subscriptions/77777777-d41f-4550-9f70-7708a3a2283b/resourceGroups/BCDRIbzRG/providers/Microsoft.RecoveryServices/vaults/BCDRIbzVault/registeredIdentities/dpmcontainer01?api-version=2016-06-01

To

DELETE https://management.azure.com/Subscriptions/77777777-d41f-4550-9f70-7708a3a2283b/resourceGroups/BCDRIbzRG/providers/Microsoft.RecoveryServices/vaults/BCDRIbzVault/registeredIdentities/dpmcontainer01?api-version=2019-05-13

It will now all work.
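
For reference, here is a minimal PowerShell sketch of the corrected call. It assumes you already have an Azure AD bearer token in $token, and it reuses the same (anonymised) subscription, resource group, vault and container names from the URL above.

## Minimal sketch of the corrected DELETE call (assumes $token holds a valid Azure AD bearer token)
$uri = "https://management.azure.com/Subscriptions/77777777-d41f-4550-9f70-7708a3a2283b" +
       "/resourceGroups/BCDRIbzRG/providers/Microsoft.RecoveryServices/vaults/BCDRIbzVault" +
       "/registeredIdentities/dpmcontainer01?api-version=2019-05-13"

Invoke-RestMethod -Method Delete -Uri $uri -Headers @{ Authorization = "Bearer $token" }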

How to authenticate with Fortify Security using PowerShell

Fortify, a security scanning tool for code, has some great features but also some limitations. Therefore I sought to use its open REST API to expand on its functionality and enhance how we use it within the DevOps pipeline. Step one, of course, was to find out how to authenticate with Fortify before making requests to its services.

Fortify does have a Swagger page listing the endpoints it offers, but it doesn't detail the authentication endpoint. There is documentation on how to authenticate, but it isn't laid out for easy use.

That is why I thought I would expand on the details and show others how to authenticate easily, using PowerShell as the chosen language.

Fortify Swagger

The API layer from Fortify provides the Swagger definitions. If you choose your provided Data Centre from the link below, you can then simply add '/swagger' to the end of the API URL to see the definitions, for example https://api.emea.fortify.com/swagger/ui/index

Data Centre URL: https://emea.fortify.com/Docs/en/Content/Additional_Services/API/API_About.htm

Authentication

As mentioned before, Fortify does document how to authenticate with the API here: https://emea.fortify.com/Docs/en/index.htm#Additional_Services/API/API_Auth.htm%3FTocPath%3DAPI%7C_____3

The first thing is to find out what details you require for the request, as mentioned in the documentation. We require the calling Data Centre URL, which you used above for the Swagger definitions, suffixed with '/oauth/token', e.g. 'https://api.emea.fortify.com/oauth/token'.

We then need the scope of what you would like to request. The available scopes are detailed in the documentation link above, and each Swagger definition specifies under its 'Implementation Notes' which scope is required for that request. This value needs to be entered in lowercase to be accepted.

The same goes for the Grant Type, which is a fixed value of 'client_credentials', all in lowercase.

The final details we need are the 'client_id' and the 'client_secret'. What I found is that these are really the API Key and the API Secret managed in your Fortify portal. If you sign in to your portal (for the Data Centre and product I have access to), you can navigate to 'Administration', then 'Settings' and finally 'API'. From this section you can create the API details with the required set of permissions. Note that the permissions can be changed after setting this up, so you do not need to commit yet. You should then have all the details required for these two parameters, where client_id = API Key and client_secret = API Secret.

Your details in PowerShell should look like this:

$body = @{
scope = "api-tenant"
grant_type = "client_credentials"
client_id = "a1aa1111-11a1-1111-aaa1-aa1a11a1aaaa"
client_secret = "AAxAbAA1AAdrAA1AAAkyAAAwAAArA11uAzArA1A11"
}

From there we can do a simple 'Invoke-RestMethod' using PowerShell, with a key thing to note: the content type must be 'application/x-www-form-urlencoded'. Without this you will keep getting an error saying the 'Grant Type' is not valid. You will also notice, as above, that the body is not JSON but is formatted as parameters in the body of the request.

Below is the full example of the request using PowerShell. I have also included the lines to set the default proxy, so if you are requesting from behind a proxy this should still work.

## Set Proxy
[System.Net.WebRequest]::DefaultWebProxy = [System.Net.WebRequest]::GetSystemWebProxy()
[System.Net.WebRequest]::DefaultWebProxy.Credentials = [System.Net.CredentialCache]::DefaultNetworkCredentials

## Create Details
$uri = "https://api.emea.fortify.com/oauth/token"
$body = @{
    scope         = "api-tenant"
    grant_type    = "client_credentials"
    client_id     = "a1aa1111-11a1-1111-aaa1-aa1a11a1aaaa"
    client_secret = "AAxAbAA1AAdrAA1AAAkyAAAwAAArA11uAzArA1A11"
}

## Request
$response = Invoke-RestMethod -ContentType "application/x-www-form-urlencoded" -Uri $uri -Method POST -Body $body -UseBasicParsing

## Response
Write-Host $response
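
Once authenticated, the response should contain the token in the standard OAuth2 shape (an 'access_token' field), which can then be passed as a Bearer token on later calls. The endpoint path below is purely illustrative; use the paths listed in your Data Centre's Swagger definitions.

## Use the returned token on a subsequent request (the endpoint path is illustrative only)
$headers = @{ Authorization = "Bearer $($response.access_token)" }
$releases = Invoke-RestMethod -Uri "https://api.emea.fortify.com/api/v3/releases" -Method GET -Headers $headers
Write-Host $releases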

How to set up AppDynamics for multiple .NET Core 2.0 applications

We decided to go with AppDynamics for monitoring our infrastructure and code, which is great, and even better, they have released support for .NET Core 2.0. However, when working with their product and consultant, we found an issue with monitoring multiple .NET Core instances on one server, plus console apps, but we found a way.

Currently their documentation, which is helpful, shows that you have to set up monitoring for a .NET Core application with environment variables. The AppDynamics documentation says to set an environment variable for the profiler's path, which sits inside each application, but of course we can't set multiple machine-level values for the same environment variable. Therefore we copied the profiler DLL to a central location and used that as the environment variable value, but quickly found out that it still didn't work: for the profiler to start tracking, the variable needs to point to the application's root folder for each application.

The consultant's investigation then led to looking at how we could set the environment variables per application, and we found they can be set in the web.config using the 'environmentVariables' node under the 'aspNetCore' node, as stated in the Microsoft documentation. Of course the 'dotnet publish' command generates this web.config, so you can't just set this in the code. Therefore, in the release of the code, I wrote some PowerShell to set these parameters.

In the PowerShell below, I get the XML content of the web.config, then create each of the environment variable nodes I want to insert. Once I have these, I insert them into the correct 'aspNetCore' node of the XML document, which I then use to overwrite the existing file.

Example PowerShell:

$configFile = "web.config"
$sourceDir = "D:\wwwroot"
$subFolderName = "SecurityService"   # the application's folder under $sourceDir (adjust per application)
$configPath = "$sourceDir\$subFolderName\$configFile"

## Get XML
$doc = New-Object System.Xml.XmlDocument
$doc.Load($configPath)
$environmentVariables = $doc.CreateElement("environmentVariables")

## Set 64 bit version
$Profiler64 = $doc.CreateElement("environmentVariable")
$Profiler64.SetAttribute("name", "CORECLR_PROFILER_PATH_64")
$Profiler64.SetAttribute("value", "$sourceDir\$subFolderName\AppDynamics.Profiler_x64.dll")
$environmentVariables.AppendChild($Profiler64)

## Set 32 bit version
$Profiler32 = $doc.CreateElement("environmentVariable")
$Profiler32.SetAttribute("name", "CORECLR_PROFILER_PATH_32")
$Profiler32.SetAttribute("value", "$sourceDir\$subFolderName\AppDynamics.Profiler_x86.dll")
$environmentVariables.AppendChild($Profiler32)

## Insert into the aspNetCore node and overwrite the file
$doc.SelectSingleNode("configuration/system.webServer/aspNetCore").AppendChild($environmentVariables)
$doc.Save($configPath)

Example Web.config result:

<configuration>
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
    </handlers>
    <aspNetCore processPath="dotnet" arguments=".\SecurityService.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout">
      <environmentVariables>
        <environmentVariable name="CORECLR_PROFILER_PATH_64" value="D:\IIS\ServiceOne\AppDynamics.Profiler_x64.dll" />
        <environmentVariable name="CORECLR_PROFILER_PATH_32" value="D:\IIS\ServiceTwo\AppDynamics.Profiler_x86.dll" />
      </environmentVariables>
    </aspNetCore>
  </system.webServer>
</configuration>

This will work for applications that have a web.config, but something like a console app doesn't have one, so what do we do?

The recommendation and solution is to create an organiser script. This script sets the environment variables, which will only affect the application triggered in that session. To do this you can use pretty much any scripting method, like PowerShell or the command line.

In this script you just need to set the environment variables and then run the exe afterwards.

For example in PowerShell:

Param(
    [string] $TaskLocation,
    [string] $Arguments
)

# Set Environment Variables for AppDynamics (these only affect this session)
Write-Host "Set Environment Variables in path $TaskLocation"
$env:CORECLR_PROFILER_PATH_64 = "$TaskLocation\AppDynamics.Profiler_x64.dll"
$env:CORECLR_PROFILER_PATH_32 = "$TaskLocation\AppDynamics.Profiler_x86.dll"

# Run Exe (YourConsoleApp.exe is a placeholder; replace with your own executable name)
Write-Host "Start Script"
cmd.exe /c "$TaskLocation\YourConsoleApp.exe $Arguments"
exit

These two solutions mean you can use AppDynamics with both .NET Core web apps and console apps, with multiple applications on one box.

Resharper DotCover Analyse for Visual Studio Team Services

Do you use Visual Studio Team Services (VSTS) for Builds and/or Releases? Do you use Resharper DotCover? Do you want to use them together? Then boy do I have an extension for you!

That might be a corny introduction, but it is exactly what I have here.

In my current projects we use ReSharper's, also known as JetBrains', DotCover to run code coverage on all our code. However, to run this in VSTS there is a bit of a process: install DotCover on the build server and then write a batch command to execute it with the right settings. This isn't the most complex task, but it does give you a dependency on always installing DotCover on a server, and on keeping the batch script in source control or in the definitions on VSTS. This can cause issues if you forget to get it installed, or when you need to update the script for every project.

Therefore I took all that magic of the program and crammed it into a pretty package for VSTS. This tool is not reinventing the wheel, just putting some grease on it so it runs faster. The Build/Release extension simply gives you all the input parameters the program normally offers and then runs them with the packaged version of DotCover that comes with the extension. Simple.

There is, however, one extra bit of spirit fingers I added to the extension. When researching and running my own tests, I found that sometimes it is helpful to only run the coverage on certain projects, but to do this you need to specify every project path in the command. Now I don't know about you, but that sounds boring, so I added an extra field.

Instead of passing each project separately and manually in the Target Arguments, you can pass wildcards in the Project Pattern. If you pass anything in the Project Pattern parameter, the task detects that you want to use this feature. It then uses the Target Working Directory as the base to recursively search for projects.

For Example: Project Pattern = “*Test.dll” and Target Working Directory = “/Source”

This will search for all DLLs ending with 'Test' in the 'Source' directory and then prepend them to any other arguments in the Target Arguments.

For Example: “/Source/MockTest.dll;/Source/UnitTest.dll”
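
To give an idea of the logic, here is a simplified PowerShell sketch of the discovery (not the extension's actual code): a recursive file search on the pattern, joined into the semicolon-separated list that gets prepended to the Target Arguments.

## Simplified sketch of the project discovery logic (not the extension's actual code)
$targetWorkingDirectory = "/Source"
$projectPattern = "*Test.dll"

$projects = Get-ChildItem -Path $targetWorkingDirectory -Recurse -Filter $projectPattern |
    Select-Object -ExpandProperty FullName

# Join into the semicolon-separated list, e.g. /Source/MockTest.dll;/Source/UnitTest.dll
$projectList = $projects -join ";"
Write-Host $projectList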

You can download the extension from the VSTS Marketplace.
Here is a helpful link for ReSharper DotCover Analyse – JetBrains.
And this is the GitHub repository for any issues or enhancements you would like – Pure Random Code GitHub.

Update 20-07-2018

There was a recent issue raised on the GitHub repository that described a problem I have also seen before. When running DotCover from Visual Studio Team Services, an error appears as below:

Failed to verify x64 COM object registration: Empty path to COM object.

In the issue raised, the user had linked to a community article, “DotCover console runner fails when running as VSTS task“, where the comments discussed how to fix this.

To correct it, we simply add the following argument to the command, which specifies what profiled process bitness to use, as they say.

/CoreInstructionSet=[x86|x64]
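
As a rough illustration of where the switch sits, a console runner call might look like the following; the parameter names follow the DotCover 'cover' command, but treat the exact values as an example rather than the task's generated command.

## Illustrative DotCover console runner call with the bitness switch (values are examples only)
& "dotCover.exe" cover `
    /TargetExecutable="vstest.console.exe" `
    /TargetArguments="MockTest.dll UnitTest.dll" `
    /Output="coverage.snapshot" `
    /CoreInstructionSet=x64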

Therefore the task has now been updated with this field and feature to accommodate the issue and fix. It has been run and tested by myself plus the user that raised the issue, so please enjoy.

What is in a project's builds and releases

While working with other companies I have seen many builds and releases, and I have also been reading books like 'Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation'. Through this I have learnt more and more about what should really be in the builds and releases of code applications. I would like to describe how I think they should both be used to create a scalable, reliable and repeatable process that brings confidence to your projects.

In showing these I will be using Visual Studio Team Services (VSTS) and C#/.NET as examples. These are the tools I use day to day, and they best represent what I would like to demonstrate.

Continuous Integration Build

A Continuous Integration Build, also known as a CI Build, is the first build that your code should see. In the normal process I follow, you create a feature branch of your code, which is where you write the new feature. Once you are happy with the change, it can be checked in to the development branch, where the current development code is held before releasing. The CI build sits in between this process to review what you are about to check in.

The goal of the CI Build is to protect the development branch and, in turn, all other developers who want to clone the code. Therefore we want to process the code in every way it will be processed in the release, but without deploying it anywhere. The general actions would be to get all assets, compile the code and then run the unit tests.

By getting all the assets, I mean not just getting the code from the repository, but also other required assets, like NuGet packages for ASP.NET projects. This could also be things like Node Package Manager (NPM) packages, and processing tasks like Grunt that manipulate the code before compiling. This is basically the set-up process for the code to be built.

We then compile the code into the state it will be used in, and run the unit tests. These unit tests check whether there are any errors in the code and should be testing your new change against the current state of the code, but this build needs to balance reliability with speed. You and others will be checking into this branch multiple times during the day, so you don't want to be waiting all day to find out if your code is OK. If your tests are taking a long time, it might be an idea to run just enough unit tests for you to be confident with the merge, and then run all the longer and more in-depth tests overnight.

Nightly Build

This build is optional, depending on how well your daily CI build behaves. If you feel the CI is taking too long and you want to run some extensive tests on the project, then this is the build you will need. However, not all projects are as large and detailed, so some might not need this.

The Nightly Build follows the same process as the CI; as with Continuous Integration it should be a repeatable process, so it gets the resources and compiles the code (if required) in exactly the same way as the CI Build. At this point you can run all the same CI unit tests, just as a confidence check that they still pass. You wouldn't want to run the whole build and find out something failed in the small set of tests you skipped.

You can now run any lengthy unit tests, but it would also be good to run the integration tests. These tests stop using the stubbed versions of services and databases and use the real thing instead. The purpose of these tests is to make sure that everything still works when working with the real endpoints. When you use stubs for unit tests, you are effectively configuring the endpoints to work as you would like. Even though you should be configuring them to behave like the real deal, you can never be 100% sure they work the same unless you use the real thing. Just to be clear, though: when we say the real endpoints, we do not mean the production ones, but the development versions instead.

After the build is complete, you should be confident that the code compiles fine, works correctly by itself and works fine with the real systems as well. With this confidence there should be no hesitation in merging this into the next stage of the branching.

Release Build

At this point you have compiled the code, tested the code, tested the integration and had human testers check the system. There is now 100% confidence that the project will work when it gets to its destination, so we move to packaging up the project and moving it to its destination.

However, we don't want to just trust that what was checked a few days ago will be OK. What we do want is to trust that what we are packaging up at this point is the working, tested and complete code. Therefore we repeat the process: get the resources, compile the code and test as much as gives you confidence, but at minimum the unit tests. This now gives you the product that you should be happy to put on a server. It is also the same product you were happy with at the CI stage and the Nightly Build stage, so it is what you have practised with throughout the process.

You can then package the resulting product as required for the language and/or framework, and it will be placed on the build server with a version number, ready for the release. It is important that the package is accessible to the release, for the obvious reason of picking the package up, but the version number is also very important. When the release picks up the package, we want to make sure it is the exact one we happily built, configured and tested. Most build tools, like Visual Studio Team Services, will automatically add this build ID to the package and manage the collection of it.

Release

We now have a deployable package we are confident in, so there is no more building required, but there is still some configuration. When building an application that will be going to multiple locations, you don't want to use the same credentials for things like databases. This would be insecure, as if one of the servers were compromised then all of them would be. There are also things like the database location, which would be different for each environment. There shouldn't be one central system for all the environments, as this can cause issues when that system goes down. If it is the development environment, then all its systems should apply to development only. There is nothing worse than testers bugging you because your development work took down their testing.

What we need to do is update the code to use the environment-specific variables. These should be stored in the code base, so if the same application were deployed to multiple development environments there would be minimal to no set-up. Another example is a load-balanced system where you want to deploy the same configuration to all servers. The way to do this will depend on the language, framework and system you are deploying to, but for a .NET Core project one of the best ways is to have an 'appsettings.config' file for each environment. The configuration is then converted on deployment for its specific environment, so the settings in 'appsettings.development.config' would be merged in and the settings in 'appsettings.production.config' would not be touched until required.
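
As a simplified sketch of one way a release step could apply this (it swaps in the whole environment file rather than merging individual settings, and assumes the file naming described above):

## Simplified sketch: swap in the environment-specific settings file during the release
param(
    [string] $DeployPath,     # folder the package was deployed to
    [string] $Environment     # e.g. "development" or "production"
)

$envFile  = Join-Path $DeployPath "appsettings.$Environment.config"
$baseFile = Join-Path $DeployPath "appsettings.config"

if (Test-Path $envFile) {
    Copy-Item -Path $envFile -Destination $baseFile -Force
    Write-Host "Applied settings for $Environment"
}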

Now the code is ready for the environment, but is the environment ready for the code? Part of the DevOps movement is Infrastructure as Code, where you not only configure the code for each environment, but also configure the environment itself. In a perfect cloud environment you would have the server's image, with all the set-up instructions, saved in the code base to keep all required assets in the same location. With this image you can target a server, install the image, configure anything required for the environment (for example an environment variable), and finally deploy the code. This method means we could create and deploy any of the environments at will; for instance, if the development server went down or was corrupted, you would just point and fire, and the result would be a perfect set-up. An example of this would be using Azure with its JSON configuration details.

However, we don't all live in a perfect world and our infrastructure is not always perfect, but we can still make it as good as we can. For instance, I have worked on a managed on-premise server that was created to a basic specification including the Windows operating system, user accounts and other basic details. This gives me a base to start with and a certain level of confidence that if I asked for another server to be created, it would be in the same format. Now I need to make sure it is fit for what I require it for, so we can use PowerShell, run on the target machine, to install things like IIS. This can be a script stored in the code base, with the environment variables pulled in from another file or from the release configuration. This gives a level of Infrastructure as Code, with the requirements of the project being installed in each environment. This process could also check everything is in working order, so before you put your project on the server you are happy it is ready for it.
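
As an example, a minimal sketch of that kind of set-up script might be no more than installing the required Windows features and checking they are present (the feature list here is only an example):

## Minimal sketch of an environment set-up script (the feature list is an example)
# Requires Windows Server and an elevated session
Install-WindowsFeature -Name Web-Server, Web-Asp-Net45 -IncludeManagementTools

# Fail early if anything required is still missing, before the project is deployed
Get-WindowsFeature -Name Web-Server, Web-Asp-Net45 |
    Where-Object { -not $_.Installed } |
    ForEach-Object { throw "Feature $($_.Name) is not installed" }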

We should now be all set to put the code and the server together with a deployment, but once we have done that we have lost some confidence. As with the integration tests, we know the package is OK on its own and we know the server is OK on its own, but how do we know they are going to work together? At this point there should be some small (so as not to increase the release time) but required tests to make sure that everything has been installed correctly. These can depend on the project type, the environment and so on, but should give you a certain level of confidence that everything will be OK. For example, you could have a URL endpoint that, once called, responds with the new code's version number. If the correct version is installed and set up on IIS, then it should be able to do this. There is now confidence that it is in the correct place on the server, with the correct build version, and working correctly with the environment's set-up. Of course this doesn't test that every endpoint of the project is working with no errors, but you would need to take some of that confidence from all the previous builds and testing.
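
A sketch of that kind of post-deployment check might look like the following, assuming a hypothetical '/version' endpoint that returns the build number:

## Post-deployment smoke test (assumes a hypothetical /version endpoint returning the build number)
param(
    [string] $SiteUrl,          # e.g. https://dev-server/service
    [string] $ExpectedVersion   # the build/package version the release deployed
)

$deployedVersion = Invoke-RestMethod -Uri "$SiteUrl/version" -Method GET

if ($deployedVersion -ne $ExpectedVersion) {
    throw "Deployed version '$deployedVersion' does not match expected version '$ExpectedVersion'"
}
Write-Host "Version check passed: $deployedVersion"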

Result

With the CI Build on every commit, the Nightly Build every night and the Release Build before all releases, then the configuration in each environment for both the server and the code, we end up with a secure, resilient and well-established product. This should result in you and your team being happy to fire off a build or release and not worry about whether it will work. An example of this confidence: a developer's code base was once showing errors after a merge and they didn't know where the issue was. However, because we had confidence in our CI build, we knew it would not be the base version but something on their machine, which narrowed down where the problem could be. In this instance it removed the question of whether the base version was stable and so sped up the process of finding the error.

I strongly suggest following this process, or one relevant to your project, as although it might take some time to set up and get developers comfortable with it, the time and assurance gained are well worth it.

Feel free to share any extra processes you do in your projects to create the safest process.