Where to find Azure Tenant ID in Azure Portal?

Some of Microsoft's Azure documentation can be confusing or missing, and one question I get asked is 'Where is the Tenant ID?'. Below I give three locations (there are probably more) where you can find the Tenant ID in the portal. I have also added how to get the Tenant ID with the Azure CLI.

The Tenant is basically the Azure AD instance where you store and configure users, apps and other security permissions. It is also referred to as the Directory in some of the menu items and documentation. A Tenant can only contain a single Azure AD instance, but it can have many Subscriptions associated with it. You can get further information from here https://docs.microsoft.com/en-us/microsoft-365/enterprise/subscriptions-licenses-accounts-and-tenants-for-microsoft-cloud-offerings?view=o365-worldwide

Azure Portal

Azure Active Directory

Once signed in, open the Portal menu and select the 'Azure Active Directory' option.

This loads the Overview page with a summary of your Directory, including the Tenant ID.

You can also go to this URL when signed in: https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview


Azure AD App Registrations

When configuring external applications or internal products to talk to each other, you can use App Registrations, also known as Service Principal accounts. When using the REST API or the Azure SDK you will need the Tenant ID for authentication, so the registered app also shows you the Tenant ID.

When in Azure AD, select 'App registrations' from the side menu, then find or add your App and select it.

From the App's Overview page you can then find the Tenant ID, also known here as the Directory ID.
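As a quick illustration of why you need it, below is a minimal sketch of authenticating with a registered app from .NET. This assumes the Azure.Identity NuGet package, and the IDs and secret are hypothetical placeholders rather than values from the portal screens above.

using Azure.Core;
using Azure.Identity;

class TenantAuthExample
{
    static void Main()
    {
        // Placeholder values - substitute your own Tenant ID, App (client) ID and client secret
        var credential = new ClientSecretCredential(
            "00000000-0000-0000-0000-000000000000", // Tenant (Directory) ID
            "<application-client-id>",
            "<client-secret>");

        // Request a token for Azure Resource Manager to prove the credential works
        AccessToken token = credential.GetToken(
            new TokenRequestContext(new[] { "https://management.azure.com/.default" }));

        System.Console.WriteLine(token.ExpiresOn);
    }
}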

Switch Directory

If you have access to multiple Tenants, you can switch between them by switching Directory.

You can do this by selecting your Avatar/Email at the top right of the Portal, which opens a dropdown with your details. There will be a link called 'Switch directory'; clicking this shows all the directories you have access to and which is your default, and lets you switch between them.

As mentioned before, Directory is another word Azure uses for Tenant, so the ID you see in this view is not just the Directory ID but also the Tenant ID.


Azure CLI

From the Azure CLI you can get almost every bit of information that is in the Portal, depending on your permissions.

If you don’t have the CLI then you can install it here: https://docs.microsoft.com/en-us/cli/azure/install-azure-cli

You can sign into the CLI by running:

az login

More information on logging in can be found here: https://docs.microsoft.com/en-us/cli/azure/authenticate-azure-cli

Once you are signed in to the Azure CLI, you can use the command below to get a list of the Subscriptions you have access to, which in turn reports back the Tenant ID. Remove everything after '--query' to get the full details.

(https://docs.microsoft.com/en-us/cli/azure/account?view=azure-cli-latest#az_account_list)

 az account list --query '[].{TenantId:tenantId}'

You can also get the Tenant ID currently used to authenticate to Azure by running the command below; again, remove everything after '--query' to get the full information.

(https://docs.microsoft.com/en-us/cli/azure/account?view=azure-cli-latest#az_account_get_access_token)

 az account get-access-token --query tenant --output tsv
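
If you only need the tenant for the account you are currently working against, then, assuming a reasonably recent version of the CLI, az account show can also return it directly:

 az account show --query tenantId --output tsv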

What do you consider when building a new application?

When you're starting a new project and thinking about what you're going to use in your application, what factors do you consider? Sometimes this depends on your role: a developer might jump straight in with 'just use X language' and continue on their way, whereas others might want to play with whatever the newest technology is. Then there are people like myself who like to think about the whole picture, so here are some of the key factors I consider when building a new application.

 

Code Repository

This one should come hand in hand with your company, as there should already be a standard for where and how you store your code. However, there's a lot of 'should' in that sentence: some younger companies haven't thought this through yet, you could be working alone, or the company might have something in place but you are thinking of exploring new technologies and new ground.

The big factor to consider with a repository is the company holding that information. It starts with where the code will be held, for legal, security and access reasons. You might think access is a silly thing to worry about, as it is all done over HTTPS from your computer, isn't it? But you may need to consider whether you are going through a proxy, as security might lock you down unless it is over a secure route. You might also put the repository on premise due to the value of the code you are storing, which might also drive your choice of company to store your code with. If you think the company storing your code might be gone in two years, then you might want to think about either a different company or a good exit plan just in case. These days there are a few big players that make clear sense, so after this it comes down to the cost of that company's services for the level you require.

The other factor is how code is stored in and retrieved from the repository, with things like Git, as this is another technology you will depend on. You will need to consider what learning curve others will face if they are to use this version control system and, like the storage factor, whether it will still be around in a few years' time.

Linked to this is what tools you are thinking of using later on for build, test and deployment, as it might be harder work to move code between locations and tools; for example, if your repository is on premise behind a firewall and security, but your build tool is in the cloud with one company and your test scripts are stored in another company's repository.

 

Language

Choosing a language might be an easy job: if you are a pure Java house or PHP only, then that is what you will be using, as you can only do what you know. However, if you want to branch out or you have more possibilities, then the world opens up for you.

A bit higher level than choosing the language itself, design patterns also come into this. I have seen someone choose .NET MVC for their back-end system, then put an AngularJS front-end framework on top. What you are doing there is putting an MVC design on top of an MVC design, which causes all kinds of issues. Therefore, if you are using more than one language, you need to consider how they complement each other. In this case you could either go for AngularJS MVC with a microservice .NET back end, or keep the .NET MVC application and add a ReactJS front end to enrich the user's experience.

As I said before, you might already know what languages you are going to use as that is your bread and butter, but if not then you need to think about the learning curve for yourself and other developers. If you are throwing new technologies into the mix, then you need to be sure everyone can keep up with what you intend to use, or you will become the single point of failure and cause support issues whenever you are off.

As well as thinking about who will be developing with the technology, you need to think about who will be using it. This can be from the end user's perspective, or even the people controlling the data, like content editors, if it is that type of system. If you would like a fast and interactive application, you will want to push more features to client-side technologies to improve the user's experience; but you might not need to make it all singing and dancing if it is a console application running internally that just needs to do the job. Therefore the use case also has an important bearing on the choice of language.

 

Testing

Testing is another choice in itself: once you know your language, you know what testing tools are available, but they come with all the same considerations as the coding language, as you will still need to develop these tests and trust their results.

I include this section though, as it is a consideration you need to make, along with how it factors into giving you, the developer, feedback on your test results. Tests might run as part of your check-in, or they might be part of a nightly build that reports back to you in the morning; how quickly the results are reported to the developer determines how fast they can react to them.

As part of the tooling for tests, you will need to recognise what levels of testing they cover, for example unit tests, integration tests, UI tests or even security testing. You then need to consider which tools you can integrate into your local build of the application to give you instant feedback, for example a linter for JavaScript which will tell you instantly if there is a conflict or error. This will save you the time of checking in and waiting for a build result, which might clog up the pipeline for others checking in.

 

Continuous Integration(CI) and Continuous Delivery(CD)

This is a little removed from what application you are building, as another person in a DevOps role might be doing it and it should have no major impact on your code, since it is abstracted from what you are developing. However, the link can be made through how you run the application on your local machine. You could be using a task runner like Gulp to build and deploy your code locally, in which case it makes sense to use the same task runner in the CI/CD.

Therefore you need to think about what tooling can and will be shared between your local machine and the CI/CD system, to have a single method of build and deployment. You want to be able to mirror what the pipeline will be doing so you can replicate any issue, and, the other way round, it will help that DevOps person build the pipeline for your application.

 

Monitoring and logging

Part of the journey of your code is not just what you are building and deploying, but also what your code is doing afterwards in the real world. The best things to help with this are logging, for reviewing past issues, and monitoring, for detecting current or upcoming issues.

For your logging I would always encourage three levels: Information, Debug and Error, each configurable to turn on or off in production. Information will help when trying to trace where an issue happens and what kind of data is being passed through; it is a medium level of output, so it does not fill up your drive too fast but still gives you plenty of information to help with your investigation. Debug is the full level down, giving you everything that is happening in the application and all the details, but be careful of printing GDPR-sensitive data that will sit in the logs, and of filling your drives until they crash. Errors are what they say on the tin: they only get reported when there is an error in the application, and you should check them constantly to make sure you remove all potential issues with the code. The factor to consider for your application is the technology and how it is implemented in your code. We recently changed logging technology, but how it was implemented made it a longer task than it should have been, something that can be made easier with abstraction.

Monitoring depends on what your application is doing, but it can also expand beyond monitoring your code. If you have something like message queues you can monitor the queue levels, or you could be monitoring the errors in the log folder remotely. These will pre-warn you that something is going wrong before it reaches its peak. However, the issue might not be coming from your code, so you should also be monitoring things like the machine it is sitting on and the network traffic in case there is an issue there. These choices have an impact on the code, because some monitoring tools do not support some languages, like .NET Core, which we have found difficult in some places.

 

Documentation

'Document everything' is the simple way to put it. Of course you need to do it in a sensible manner and format, but you should have documentation before even the first character of code is written, to give you and others the information behind the decisions above. Then you will need to document any processes or changes during the build for others to see. If you know exactly how it all works and someone else takes over while you are away, you put that person in a rubbish position unless they have something to reference.

The documents need a common location that everyone has access to read, write and edit. A thought you could also try is automated documentation drawn from the code's comments and formatting, so you would need to bear this in mind when deciding your folder structure and naming conventions.

You can go overboard by documenting too much; some things, like the code or the CI/CD process, should be clear from the comments and naming. However, even if documentation for tools like Git has already been written, it is helpful to create a document saying at a high level what tooling you are using and why, and then reference the tools' own documentation. It gives others on the project a single source of truth for all the information they require, plus if the tooling changes you can update that one document to reference the new tooling, and everyone will already know where to find the new information.

 

DevOps

In the end what we have just gone through is the DevOps process of Design, Build, Test, Deploy, Report and Learn.

  • You are at the design stage when looking at what languages and tools you would like to use.
  • The chosen language is then used to build the new feature or application.
  • There will be a few levels of testing through the process of building the new project.
  • The consideration of CI and CD gets our product deployed to new locations in a repeatable and easy method.
  • Between the Logging and Monitoring we are both reporting information back to developers and business owners, who can learn from the metrics to repeat the cycle again.


Reference: https://medium.com/@neonrocket/devops-is-a-culture-not-a-role-be1bed149b0

Resharper DotCover Analyse for Visual Studio Team Services

Do you use Visual Studio Team Services (VSTS) for Builds and/or Releases? Do you use Resharper DotCover? Do you want to use them together? Then boy do I have an extension for you!

That might be a corny introduction, but it is exactly what I have here.

In my current projects we use Resharper's (also known as JetBrains') DotCover to run code coverage on all our code. However, to run this in VSTS there is a bit of a process: install DotCover on the build server, then write a batch command to execute it with the right settings. This isn't the most complex task, but it does give you a dependency on always installing it on the server and keeping the batch script in source control or in the definitions on VSTS. This can cause issues if you forget to install it, or if you need to update the script for every project.

Therefore I took all the magic of the program and crammed it into a pretty package for VSTS. This tool is not reinventing the wheel, just putting some grease on it to run faster. The Build/Release extension simply gives you all the input parameters the program normally offers and then runs them with the packaged version of DotCover that comes with the extension. Simple.

There is, however, one extra bit of spirit fingers I added to the extension. When researching and running my own tests, I found that sometimes it is helpful to only run coverage on certain projects, but to do this you need to specify every project path in the command. Now I don't know about you, but that sounds boring, so I added an extra field.

Instead of passing each project separately and manually in the Target Arguments, you can pass wildcards in the Project Pattern. If you pass anything in the Project Pattern parameter, the task detects that you want to use this feature. It then uses the Target Working Directory as the base to recursively search for projects.

For Example: Project Pattern = “*Test.dll” and Target Working Directory = “/Source”

This will search for all DLLs ending with 'Test' in the 'Source' directory and then prepend them to any other arguments in the Target Arguments.

For Example: “/Source/MockTest.dll;/Source/UnitTest.dll”

You can download the extension from the VSTS Marketplace.
Here is a helpful link for Resharper DotCover Analyse – JetBrains.
This is the GitHub Repository for any issues or enhancements you would like – Pure Random Code GitHub.

Update 20-07-2018

There was a recent issue raised on the GitHub Repository that addressed a problem I have also seen before. When running DotCover from Visual Studio Team Services, an error appears as below:

Failed to verify x64 COM object registration: Empty path to COM object.

In the issue raised, the user linked to a Community Article about "DotCover console runner fails when running as VSTS task", where the comments discuss how to fix this.

To correct it, we simply add the following argument to the request, which specifies which profiled process bitness to use, as they say.

/CoreInstructionSet=[x86|x64]

Therefore the task has now been updated with this field and feature to accommodate the issue and its fix. It has been run and tested by myself and by the user who raised the issue, so please enjoy.

How to merge multiple images into one with C#

Due to a requirement, we needed to layer multiple images into one image. This needed to be fast and efficient, and we didn't want to use any third-party software as that would increase maintenance. Through some fun research and testing I found a neat and effective method to get the required outcome using only C# and .NET 4.6.

So the simple result was to use the C# class ‘Graphics’ to collect the images as Bitmaps and layer them, then produce a single resulting Bitmap.

As you can see below, we first create the final Bitmap by constructing a new instance with the width and height of the resulting image passed in. Using that Bitmap we create a Graphics instance, which we use in a loop over each image. Each image is drawn onto the graphic with starting X/Y co-ordinates of 0.

This solves the requirement I had, as the images all needed to be layered from the top left corner, but you could also get imaginative with the settings to place the layers in different positions, or even use the Bitmaps' widths to create a full-length banner.

// merge images
var bitmap = new Bitmap(width, height);
using (var g = Graphics.FromImage(bitmap))
{
    foreach (var image in enumerable)
    {
        g.DrawImage(image, 0, 0);
    }
}

This is of course handy and simple, so to share and help I thought I would create a full class to handle the processing. With the class below you do not need to create an instance, as it is static, so it can be used as a utility as it is.

You can find the full code on my Github at https://github.com/PureRandom/CSharpImageMerger

The aim of this class, which can be expanded, is to layer an array of images into one. You can do this by passing an array of links, an array of bitmaps, or a single folder directory.

When you pass the array of links, you also have the option of providing proxy settings, depending on what your security is like. It then uses an internal method to loop over each link, attempt to download it, and return the results as a bitmap list.

private static List<Bitmap> ConvertUrlsToBitmaps(List<string> imageUrls, WebProxy proxy = null)
{
    List<Bitmap> bitmapList = new List<Bitmap>();
    // Loop URLs
    foreach (string imgUrl in imageUrls)
    {
        try
        {
            WebClient wc = new WebClient();
            // If a proxy has been supplied then use it
            if (proxy != null)
                wc.Proxy = proxy;
            // Download the image into memory
            byte[] bytes = wc.DownloadData(imgUrl);
            MemoryStream ms = new MemoryStream(bytes);
            Image img = Image.FromStream(ms);
            bitmapList.Add((Bitmap)img);
        }
        catch (Exception ex)
        {
            Console.Write(ex.Message);
        }
    }
    return bitmapList;
}

When you pass the array of bitmaps it is the same as the above, but it doesn’t have to download anything.

Finally, the file system method can be used by passing the folder directory you wish it to search and the image extension type. So if you were looking to merge all PNGs in the directory 'src/images/png', then that is what you pass.

private static List<Bitmap> ConvertUrlsToBitmaps(string folderPath, ImageFormat imageFormat)
{
    List<Bitmap> bitmapList = new List<Bitmap>();
    List<string> imagesFromFolder = Directory.GetFiles(folderPath, "*." + imageFormat, SearchOption.AllDirectories).ToList();
    // Loop Files
    foreach (string imgPath in imagesFromFolder)
    {
        try
        {
            var bmp = (Bitmap) Image.FromFile(imgPath);
            bitmapList.Add(bmp);
        }
        catch (Exception ex)
        {
            Console.Write(ex.Message);
        }
    }
    return bitmapList;
}

With all of these, it then uses the common method below to loop over each item in the array of bitmaps to find the biggest width and height, so the images don't over- or under-run the result's size. As explained above, each bitmap is then drawn onto the top left of the result Bitmap to create the final image.

private static Bitmap Merge(IEnumerable<Bitmap> images)
{
    var enumerable = images as IList<Bitmap> ?? images.ToList();
    var width = 0;
    var height = 0;
    // Get the max width and height across the images
    foreach (var image in enumerable)
    {
        width = image.Width > width ? image.Width : width;
        height = image.Height > height ? image.Height : height;
    }
    // Merge the images onto one bitmap
    var bitmap = new Bitmap(width, height);
    using (var g = Graphics.FromImage(bitmap))
    {
        foreach (var image in enumerable)
        {
            g.DrawImage(image, 0, 0);
        }
    }
    return bitmap;
}
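
To show how the pieces fit together, here is a minimal usage sketch. It assumes the Merge method is exposed publicly, and the file paths are made up for illustration; check the GitHub repository for the actual public API.

// Hypothetical usage - merge two local images and save the result
var images = new List<Bitmap>
{
    (Bitmap) Image.FromFile(@"src\images\png\background.png"),
    (Bitmap) Image.FromFile(@"src\images\png\logo.png")
};

Bitmap merged = Merge(images);
merged.Save(@"src\images\png\merged.png", ImageFormat.Png);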

Feel free to comment, expand and share this code to help others.

https://github.com/PureRandom/CSharpImageMerger

Azure Container with PowerShell

When I was trying to use PowerShell to action some Azure functionality, I found the information very scattered and hard to pin down to one answer, so here I give you the golden goose for adding, removing and emptying an Azure Container, and copying files to one, using PowerShell.

The small print is, of course, that there are probably other methods of doing the same thing, but this is how it worked for me. Also, this is not a demo of all the options and parameters the PowerShell commands support, just what we need them to do. These scripts are set up to run with parameters passed in, but I have also put comments in there so you can run them hard-coded.

How to add an Azure Container?

The parameters required for this script are the Resource Group Name and Storage Account Name of the already created account, plus the new Container's name. You can see below where we pass in the parameters; however, in the static version we also need to log in to the required account and pass in the Subscription ID as well.

You can get the Subscription ID by following the steps on this post.

## Get Parameters
Param(
    [string] $ResourceGroupName,
    [string] $StorageAccountName,
    [string] $StorageContainerName
)

## Static Parameters
#Login-AzureRmAccount
#Set-AzureRmContext -SubscriptionID 11111111-1111-1111-1111-111111111111
#$ResourceGroupName = "GroupName"
#$StorageAccountName = "AccountName"
#$StorageContainerName = "ContainerName"

Now we have all the details, we can get the storage details for the account with the code below. This gets the Storage Account Key to access the account, then creates the Storage Context from it.

    $Keys = Get-AzureRmStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName;

    $StorageContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $Keys[0].Value;

You need the Storage Context for the future calls to create the container. Before we create the new container, it is best to check whether it already exists. In my circumstance I only wanted a warning flag: if it was already there then great, I don't need to create it, but I do flag that detail to the console.

The first part is an IF statement that attempts to get the Container; if it does find one then it falls into the else and writes a warning to the console. If it doesn't, then we use the parameters passed in to create the new Container. Also note the 'Permission' argument, which I have set to 'Container'; this can be set to the other options instead, or turned into another parameter that is passed in.

if (!(Get-AzureStorageContainer -Context $StorageContext | Where-Object { $_.Name -eq $StorageContainerName })) {
    New-AzureStorageContainer -Context $StorageContext -Name $StorageContainerName -Permission Container;
}
else {
    Write-Warning "Container $StorageContainerName already exists."
}

This is then all you need for creating a new Azure Container, and for the full example you can go here.

How to copy files to an Azure Container?

Following the life cycle, after you create an Azure Container you will want to get files into it. So we start as before with all the required parameters. The additional one here is the 'ArtifactStagingDirectory', which is the directory containing the content to upload.

## Get Parameters
Param(
    [string] $ResourceGroupName,
    [string] $StorageAccountName,
    [string] $StorageContainerName,
    [string] $ArtifactStagingDirectory
)

Again we get the Storage Account context for future commands and then also get the paths for the files from the passed in directory.

$storageAccount = ( Get-AzureRmStorageAccount | Where-Object{$_.StorageAccountName -eq $StorageAccountName} )

$ArtifactFilePaths = Get-ChildItem -Path "$ArtifactStagingDirectory\**" -Recurse -File | ForEach-Object -Process {$_.FullName}

With the file paths we can then loop through each location and add it to the Container. Within each loop we set up the source path and pass it in; you might notice we are using the 'Force' argument, as we do not want a confirmation prompt popping up, especially if we are automating.

foreach ($SourcePath in $ArtifactFilePaths) {
    # Write the source path and the relative blob name to the console for visibility
    $SourcePath
    $SourcePath.Substring($ArtifactStagingDirectory.length)
    Set-AzureStorageBlobContent -File $SourcePath -Blob $SourcePath.Substring($ArtifactStagingDirectory.length) `
        -Container $StorageContainerName -Context $StorageAccount.Context -Force
}

This will get all the found files and folders into the Azure Container you have created. If you want to see the full version of how to copy files to an Azure Container go here.

How to empty an Azure Container?

As in most cases, if in doubt then restart, so this is a script to do just that by emptying the Container of its contents. The setup for this has one difference: the Containers are passed as a comma-separated string of names instead. This is so you can empty one or many Containers at the same time, for example if you are cleaning out a whole deployment pipeline.

## Get Parameters
Param(
    [string] $ResourceGroupName,
    [string] $StorageAccountName,
    [string] $StorageContainerNames
)

As usual we get the Azure Storage Accounts context for later commands.

    $Keys = Get-AzureRmStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName;

    $StorageContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $Keys[0].Value;

For this one I am going to break it down by line instead of by statement. To get the full picture click on the link at the bottom to see the full version of this code.

We kick it off by looping each of the Container names:

$StorageContainerNames.Split(",") | ForEach {

We then need to check if the Container exists, or else we will try to delete content from a non-existent Container.

if ((Get-AzureStorageContainer -Context $StorageContext | Where-Object { $_.Name -eq $currentContainer })){

If there is a Container, then we also need to check whether there are any Blobs to delete the content from.

$blobs = Get-AzureStorageBlob -Container $currentContainer -Context $StorageContext

if ($blobs -ne $null)
{

If all of these checks pass, then we have the go-ahead to delete the contents; we loop through each of the Blobs in the array and remove each Blob item.

foreach ($blob in $blobs) {
    Write-Output ("Removing Blob: {0}" -f $blob.Name)
    Remove-AzureStorageBlob -Blob $blob.Name -Container $currentContainer -Context $StorageContext
}
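
Putting those snippets together, a rough sketch of the whole loop would look like the below. Note the $currentContainer = $_ assignment and the closing braces, which are implied by the later snippets rather than shown above.

$StorageContainerNames.Split(",") | ForEach {
    $currentContainer = $_
    # Only continue if the Container exists
    if ((Get-AzureStorageContainer -Context $StorageContext | Where-Object { $_.Name -eq $currentContainer })) {
        # Get all Blobs in the Container
        $blobs = Get-AzureStorageBlob -Container $currentContainer -Context $StorageContext
        if ($blobs -ne $null) {
            # Remove each Blob in turn
            foreach ($blob in $blobs) {
                Write-Output ("Removing Blob: {0}" -f $blob.Name)
                Remove-AzureStorageBlob -Blob $blob.Name -Container $currentContainer -Context $StorageContext
            }
        }
    }
}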

The result of this is that all the contents of the named Containers are cleared out. As said before, these are just snippets, but the full version of Emptying the Azure Container is here.

How to remove an Azure Container?

Just like the previous script, we have the same parameters as the rest, one of which is a comma-separated string of Container names. With these parameters we are looking to clear the whole thing out by deleting the Azure Containers themselves.

We start with the parameters, get the Storage Account context and loop through the Containers.

## Get Parameters
Param(
    [string] $ResourceGroupName,
    [string] $StorageAccountName,
    [string] $StorageContainerNames
)

$Keys = Get-AzureRmStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName;

$StorageContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $Keys[0].Value;

$StorageContainerNames.Split(",") | ForEach {

For each Container, you check that it exists before deleting it; the full check is sketched below. Then comes the final command to delete the Container, where you will notice we again use the 'Force' argument to prevent the confirmation prompt from showing and get the Container deleted.

Remove-AzureStorageContainer -Context $StorageContext -Name $currentContainer -Force;
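
A rough sketch of that existence check wrapped around the removal, assuming the same $currentContainer = $_ assignment as in the emptying script, might look like this:

$StorageContainerNames.Split(",") | ForEach {
    $currentContainer = $_
    # Only remove the Container if it actually exists
    if ((Get-AzureStorageContainer -Context $StorageContext | Where-Object { $_.Name -eq $currentContainer })) {
        Remove-AzureStorageContainer -Context $StorageContext -Name $currentContainer -Force;
    }
    else {
        Write-Warning "Container $currentContainer does not exist."
    }
}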

The full layout of removing an Azure Container can be seen here.