What is in a project's builds and releases

While working with different companies I have seen many builds and releases, and I have also read books like 'Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation'. Through this I have learnt more and more about what should really be in the builds and releases of code applications. I would like to describe how I think they should both be used to create a scalable, reliable and repeatable process that brings confidence to your projects.

To demonstrate these I will use Visual Studio Team Services (VSTS) and C#.NET as examples, as these are the tools I use day to day and know best.

Continuous Integration Build

A Continuous Integration Build, also known as a CI Build, is the first build your code should see. In the normal process I follow, you create a feature branch of your code, which is where you write the new feature. Once you are happy with it, it can be checked in to the development branch, where the current development code is held before release. The CI Build sits in the middle of this process to review what you are about to check in.

The goal of the CI Build is to protect the development branch and, in turn, all the other developers who want to clone the code. Therefore we want to process the code in every way that it will be processed in the release, but without deploying it anywhere. The general actions are to get all the assets, compile the code and then run the unit tests.

By getting all the assets, I mean not just getting the code from the repository, but also the other required assets, like NuGet packages for ASP.NET projects. This could also mean things like Node Package Manager (NPM) packages and processing tasks like Grunt, which manipulate the code before compiling. This is basically the setting-up process for the code to be built.

We then compile the code into the state it will be used in, then run the unit tests. These unit tests check whether there are any errors in the code, and should be testing your new change against the current state of the code; however, this build has to balance speed against thoroughness. You and others will be checking into this branch multiple times during the day, so you don't want to be waiting all day to find out if your code is OK. If your tests are taking a long time, it might be an idea to run just enough unit tests to be confident in the merge, and then run all the longer and more in-depth tests overnight.

Nightly Build

Whether you need this build depends on how well your daily CI Build behaves. If you feel the CI is taking too long and you want to run some extensive tests on the project, then this is the build you will need. However, not all projects are as large and detailed, so yours might not need it.

The Nightly Build is the same process as the CI Build; as with Continuous Integration it should be a repeatable process, so it gets the resources and compiles the code, if required, in exactly the same way as the CI Build. At this point you can run all the same CI unit tests, just as a confidence check that they still pass. You wouldn't want to run the whole build and then find out something failed in the small number of tests you skipped.

You can now run any lengthy unit tests, but this is also a good place to run the integration tests. These tests stop using the stubbed versions of services and databases and use the real thing. Their purpose is to make sure that everything still works when talking to the real endpoints. When you use stubs for unit tests, you are practically configuring the endpoints to work as you would like; even though you should configure them to behave like the real thing, you can never be 100% sure they do unless you use the real thing. To be clear, though, by the real endpoints we do not mean the production ones, but the development versions.
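As a minimal sketch of that difference in C#, using a hypothetical 'IPriceService' dependency (none of these names come from a real project):

public interface IPriceService
{
    decimal GetPrice(string productId);
}

// Hand-rolled stub: the unit test decides exactly what the 'endpoint' returns
public class StubPriceService : IPriceService
{
    public decimal GetPrice(string productId) => 9.99m;
}

public class Basket
{
    private readonly IPriceService _prices;
    public Basket(IPriceService prices) { _prices = prices; }
    public decimal Total { get; private set; }
    public void Add(string productId) { Total += _prices.GetPrice(productId); }
}

// Unit test (xUnit syntax) runs fast and never touches a real service
public class BasketTests
{
    [Xunit.Fact]
    public void Total_comes_from_the_price_service()
    {
        var basket = new Basket(new StubPriceService());
        basket.Add("item-1");
        Xunit.Assert.Equal(9.99m, basket.Total);
    }
}

An integration test would instead construct 'Basket' with the real implementation pointing at the development endpoint, rather than the stub.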

After the build is complete, you should be confident that the code compiles fine, works correctly by itself and works with the real systems as well. With this confidence there should be no hesitation in merging it into the next stage of the branching.

Release Build

At this point you have compiled the code, tested the code, tested the integration and had human testers check the system. There is now every confidence that the project will work when it gets to its destination, so we move on to packaging up the project and moving it there.

However, we don't want to just trust that what was checked a few days ago will still be OK. What we do want to trust is that what we package up at this point is the working, tested and complete code. Therefore we repeat the process: get the resources, compile the code, and run as much testing as gives you confidence, but at minimum the unit tests. This gives you the product you should be happy to put on a server. It is also the same product you were happy with at the CI stage and the Nightly Build stage, so it is what you have practised with throughout the process.

With the resulting product you can package it as required for the language and/or framework, and place it on the build server with a version number ready for the release. It is important that the package is accessible to the release, for the obvious reason that it needs to pick the package up, but the version number is also very important: when the release picks up the package, we want to be sure it is the exact one we happily built, configured and tested. Most build tools, like Visual Studio Team Services, will automatically add the build ID to the package and manage collecting it.

Release

We now have a deployable package we are confident in, so there is no more building required, but there is still some configuration. When building an application that will be going to multiple locations, you don't want to use the same credentials for things like databases. That would be insecure, as if one of the servers were compromised then all of them would be. There are also settings like the database location, which will be different for each environment. There shouldn't be one central system for all the environments, as that causes issues when that system goes down. If it is the development environment, then all its systems should apply only to development. Nothing is worse than testers bugging you because your development work took down their testing.

What we need to do is update the code to use environment-specific settings. These should be stored in the code base, so if the same application is deployed to multiple development environments there is minimal to no set-up. Another example is a load-balanced system where you want to deploy the same configuration to all servers. How you do this depends on the language, framework and system you are deploying to, but for a .NET Core project one of the best ways is to have an 'appsettings.json' file for each environment. These are then resolved on deployment for the specific environment, so the settings in 'appsettings.development.json' are merged in, and the settings in 'appsettings.production.json' are not touched until required.
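As a minimal sketch of how that layering might be wired up in a .NET Core application (the setting names here are illustrative):

using System;
using System.IO;
using Microsoft.Extensions.Configuration;

public static class AppConfiguration
{
    public static IConfiguration Build()
    {
        // e.g. "Development" or "Production", set per environment by the release
        var environment = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") ?? "Production";

        return new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
            .AddJsonFile("appsettings.json", optional: false)               // shared defaults
            .AddJsonFile($"appsettings.{environment}.json", optional: true) // environment overrides
            .Build();
    }
}

// Usage: settings such as the database location now resolve per environment
// var connectionString = AppConfiguration.Build()["Database:ConnectionString"];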

Now the code is ready for the environment, but is the environment ready for the code? Part of the DevOps movement is Infrastructure as Code, where you not only configure the code for each environment, but also configure the environment itself. In a perfect cloud environment you would have the server image, with all the setting-up instructions, saved in the code base to keep all required assets in one location. With this image you can target a server, install the image, configure anything required for the environment (for example an environment variable) and finally deploy the code. This method means we can create and deploy any of the environments at will; for instance, if the development server went down or was corrupted, you would point and fire, and the result is a perfect set-up. An example of this would be using Azure with its JSON (Resource Manager template) configuration details.
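As a sketch, deploying such a saved template with the AzureRM PowerShell module might look like this (the group and file names are hypothetical):

# Create or update the environment from the JSON template kept in the code base
New-AzureRmResourceGroupDeployment -ResourceGroupName "MyProject-Dev" `
    -TemplateFile ".\infrastructure\environment.json"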

However, we don't all live in a perfect world and our infrastructure is not always perfect, but we can still make it as good as we can. For instance, I have worked on a managed on-premise server that was created to a basic specification: Windows operating system, user accounts and other basic details. This gives me a base to start with and a certain level of confidence that, if I asked for another server to be created, it would be in the same format. I then need to make sure it is fit for what I require it for, so we can use a PowerShell script that runs on the target machine to install things like IIS. This can be a script stored in the code base, with the environment variables pulled in from another file or from the release configuration. This gives a level of Infrastructure as Code, with the requirements of the project being installed on each environment. The process can also check everything is in working order, so before you put your project on the server you are happy it is ready for it.
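A minimal sketch of such a script, assuming a Windows Server target (Install-WindowsFeature is the server cmdlet for this):

# Install IIS with its management tools on the target machine
Install-WindowsFeature -Name Web-Server -IncludeManagementTools

# Check the web service is up before the project is deployed to it
if ((Get-Service -Name W3SVC).Status -ne 'Running') {
    Start-Service -Name W3SVC
}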

We should be all set to put the code and the server together now with a deployment, but once we have done that, we have lost some confidence. Like with the integration tests, we know the package is OK on its own and we know the server is OK on its own, but how do we know they will work together? At this point there should be some small (so as not to increase the release time) but required tests to make sure everything has been installed correctly. These depend on the project type, the environment and so on, but should give you a certain level of confidence that everything will be OK. For example, you could have a URL endpoint that, when called, responds with the new code's version number. If the correct version is installed and set up on IIS, it should be able to do this. There is then confidence that it is in the correct place on the server, with the correct build version, and working correctly with the environment's set-up. Of course this doesn't test that every endpoint of the project works with no errors, but you take some of that confidence from all the previous builds and testing.
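A minimal sketch of such an endpoint in ASP.NET Core (the route and controller name are illustrative choices):

using System.Reflection;
using Microsoft.AspNetCore.Mvc;

[Route("api/version")]
public class VersionController : Controller
{
    // Responds with the deployed assembly's version number so the release
    // can verify the right build is installed and IIS is serving it
    [HttpGet]
    public IActionResult Get()
    {
        return Ok(Assembly.GetExecutingAssembly().GetName().Version.ToString());
    }
}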

Result

With the CI Build on every commit, the Nightly Build every night and the Release Build before every release, plus the configuration at each environment for both the server and the code, we end up with a secure, resilient and well-established product. This should result in you and your team being happy to fire off a build or release without worrying about whether it will work. An example of this confidence: a developer's code base was showing errors after a merge, and they didn't know where the issue was. Because we had confidence in our CI Build, we knew it would not be the base version but something on their machine, which narrowed down where the problem could be. In this instance it removed the question of whether the base version was stable, and so sped up finding the error.

I strongly suggest following this process, or one adapted to your project, as although it might take some time to set up and to get developers comfortable with it, the time and assurance gained are well worth it.

Feel free to share any extra processes you use in your own projects to make the process even safer.

How to build Azure Service Bus Relay Sender and Listener?

This is one of those "I tried to do it, found it hard, so here is how I did it" posts. I was assigned to look into how to build a sender and listener using the Azure Service Bus Relay, so we could send data from Azure to on-premise securely. There might be debates about whether this is secure compared to other methods, but that is not what I was asked and not what this post is about.

Therefore I will demo how to create the NetTcp Relay in Azure, the code for a listener and the code for a sender in C#.NET. Remember, this is what worked for me, and there are other protocols, technologies and languages this can be done with.

How to build the Service Bus Relay

First you need to get to the Azure Portal at 'https://portal.azure.com'. This will take you to the dashboard, or to the login page, which will then take you there. You can create a new dashboard to put all your resources in one place, which is advised for organisation.

Click on the 'New' button in the side navigation, then search for 'Relay'. The results should then show the Relay service with the blue icon. Click 'Create' on this and you will be prompted for the details of the relay.

Add in the Azure name for the relay; this is the base URL for the service. Select your preferred Subscription, Resource group and Location as you see fit. Once the details are in and the fields have a green tick for being OK, press the 'Create' button. If you want this pinned on your dashboard, don't forget to check the 'Pin to dashboard' box.

Once this is created, you can go to the Relay and you will see the Overview page of the new Relay.

The method I used to create this was the 'WCF Relay' with the 'NetTcp' type. To do this, click on the 'WCF Relay' menu item in the side navigation below 'Overview'. This loads the list view of all the WCF Relays you have, which is none yet. Click on the 'WCF Relay' button at the top with the big plus symbol next to it.

Enter the name of the Relay; remember that you can have many of these, so it doesn't have to be too generic. The other details I left as they were, and you will notice that 'NetTcp' is selected for 'Relay Type'. Click 'Create' and you now have a Relay.

Note that if you can't see the Relay after pressing the button, reload the screen and it will load this time.

Now you can move on to the code.


How to build a Relay Sender in C#.Net

A key part of making the two code segments work together is that the interface they both use must match, or the data will not be sent and received.
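As a minimal sketch, the shared contract might look like the below; 'IMyService' and 'CallMyService' are the illustrative names used in the snippets that follow:

using System.ServiceModel;

[ServiceContract]
public interface IMyService
{
    // Both the sender and the listener are compiled against this exact contract
    [OperationContract]
    void CallMyService();
}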

We start by creating the three variables that are needed for each Relay account: the Service Bus key, the namespace and the Relay name.

To get the Service Bus key, go to the Relay account page; under 'Properties' in the side navigation there should be 'Shared access policies', so click on this. You will know you are on the correct page as there will already be a 'RootManageSharedAccessKey'. New keys can be made to separate security concerns, but for this POC I just used this one.
If you click on it you will see the keys associated with the policy. You need the 'Primary key', which you can copy and put into the variable below:

private string _serviceBusKey = "[RootManageSharedAccessKey-PrimaryKey]";

The other two you can get from the WCF Relay Overview page. The namespace is the name of the Relay account, and the Relay name is what the WCF Relay is called. These can also be taken from the 'WCF Relay Url' on the Overview page.

http://[NAMESPACE].servicebus.windows.net/[WCF RELAY NAME]

private string _namespace = "[Namespace]";
private string _relayName = "[WcfRelayName]";

Next we create the variables for the connection to the Relay, by creating a new NetTcp binding and the endpoint. The scheme I used was 'sb', but this again can be changed.

var binding = new NetTcpRelayBinding();
var endpoint = new EndpointAddress(ServiceBusEnvironment.CreateServiceUri("sb", _namespace, _relayName));

Visual Studio should help you import the correct namespaces, but if not, you will need the following:
• Microsoft.ServiceBus (for NetTcpRelayBinding, ServiceBusEnvironment and TokenProvider)
• System.ServiceModel (for EndpointAddress and ChannelFactory)

Now we connect these to the interface that is shared with the listener and create the channel between them.

// Factory
var factory = new ChannelFactory<IMyService>(binding, endpoint);
factory.Endpoint.Behaviors.Add(
    new TransportClientEndpointBehavior
    {
        TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
            "RootManageSharedAccessKey", _serviceBusKey)
    });

IMyService client = factory.CreateChannel();

From now on, when you want to call a method on the listener, you use 'client' followed by a dot and the method or property, for example:

client.CallMyService();

How to build a Relay Listener in C#.Net

Getting this side working is very simple, as it is all managed from the web configuration file (Web.config).

Step 1 is under 'Configuration > system.serviceModel > behaviors > endpointBehaviors'.
In this node add a new behavior called 'ServiceBusBehavior', and inside this you need a 'transportClientEndpointBehavior' with a sub-node of 'tokenProvider'. In this you will have the 'sharedAccessSignature', which uses the 'RootManageSharedAccessKey' mentioned before.

You get the key the same way as for the sender: on the Relay account page, under 'Shared access policies', click the 'RootManageSharedAccessKey' policy and copy its 'Primary key' into the configuration below:

<endpointBehaviors>
  <behavior name="ServiceBusBehavior">
    <transportClientEndpointBehavior>
      <tokenProvider>
        <sharedAccessSignature keyName="RootManageSharedAccessKey" key="[PRIMARY KEY]"/>
      </tokenProvider>
    </transportClientEndpointBehavior>
  </behavior>
</endpointBehaviors>

Step 2 is to create the new binding for the NetTcp connection under 'Configuration > system.serviceModel > bindings'. Add a 'netTcpRelayBinding' node here, with a 'binding' node inside. The name of this one is 'ServiceBusBinding', but it can be customised if you would like.

<bindings>
  <basicHttpBinding>
    <binding name="DefaultBinding" />
  </basicHttpBinding>
  <netTcpRelayBinding>
    <binding name="ServiceBusBinding" />
  </netTcpRelayBinding>
</bindings>

Step 3 is the connection settings for the Service Bus. Again you will need the 'RootManageSharedAccessKey', and also the Relay namespace from the URL. The below sits under 'Configuration > appSettings', replacing the items in [] with the correct values.

<appSettings>
  <!-- Service Bus specific app settings for messaging connections -->
  <add key="Microsoft.ServiceBus.ConnectionString"
       value="Endpoint=sb://[Namespace].servicebus.windows.net;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=[PrimaryKey]"/>
</appSettings>

Step 4 is the last one, ish. This binds the service to all the configuration we just made. To complete this step you need to have created the WCF service and the above bindings. Under 'Configuration > system.serviceModel > services' add a new service node as below:

<service name="[WCF_Service]">
  <endpoint address="" binding="basicHttpBinding" bindingConfiguration="DefaultBinding" contract="[WCF_Interface]"/>
  <endpoint address="sb://[Namespace].servicebus.windows.net/[WCF_Relay]" binding="netTcpRelayBinding"
            bindingConfiguration="ServiceBusBinding" behaviorConfiguration="ServiceBusBehavior"
            contract="[WCF_Interface]"/>
</service>

Replace the above variables as follows:
• [WCF_Service] = the WCF service class
• [WCF_Interface] = the WCF service interface
• [Namespace] = the Relay account (namespace) name
• [WCF_Relay] = the WCF Relay name
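
For reference, a minimal sketch of the service class that '[WCF_Service]' points at, implementing the same contract the sender uses (the names are illustrative):

public class MyService : IMyService
{
    // Invoked over the relay when the sender calls client.CallMyService()
    public void CallMyService()
    {
        // Handle the relayed request here
    }
}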

This one is an optional step, or more of a "put it in if you want all the functionality". I would advise that unless you know what you are playing with, you don't touch it. In the 'Configuration > system.serviceModel > extensions' node you need to add the below, which is all the Service Bus extensions.

<extensions>
<!-- In this extension section we are introducing all known service bus extensions. User can remove the ones they don't need. -->
<behaviorExtensions>
<add name="connectionStatusBehavior"
type="Microsoft.ServiceBus.Configuration.ConnectionStatusElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="transportClientEndpointBehavior"
type="Microsoft.ServiceBus.Configuration.TransportClientEndpointBehaviorElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="serviceRegistrySettings"
type="Microsoft.ServiceBus.Configuration.ServiceRegistrySettingsElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
</behaviorExtensions>
<bindingElementExtensions>
<add name="netMessagingTransport"
type="Microsoft.ServiceBus.Messaging.Configuration.NetMessagingTransportExtensionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="tcpRelayTransport"
type="Microsoft.ServiceBus.Configuration.TcpRelayTransportElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="httpRelayTransport"
type="Microsoft.ServiceBus.Configuration.HttpRelayTransportElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="httpsRelayTransport"
type="Microsoft.ServiceBus.Configuration.HttpsRelayTransportElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="onewayRelayTransport"
type="Microsoft.ServiceBus.Configuration.RelayedOnewayTransportElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
</bindingElementExtensions>
<bindingExtensions>
<add name="basicHttpRelayBinding"
type="Microsoft.ServiceBus.Configuration.BasicHttpRelayBindingCollectionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="webHttpRelayBinding"
type="Microsoft.ServiceBus.Configuration.WebHttpRelayBindingCollectionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="ws2007HttpRelayBinding"
type="Microsoft.ServiceBus.Configuration.WS2007HttpRelayBindingCollectionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="netTcpRelayBinding"
type="Microsoft.ServiceBus.Configuration.NetTcpRelayBindingCollectionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="netOnewayRelayBinding"
type="Microsoft.ServiceBus.Configuration.NetOnewayRelayBindingCollectionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="netEventRelayBinding"
type="Microsoft.ServiceBus.Configuration.NetEventRelayBindingCollectionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="netMessagingBinding"
type="Microsoft.ServiceBus.Messaging.Configuration.NetMessagingBindingCollectionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
</bindingExtensions>
</extensions>

Side Note

To pass custom classes between the two, you need to decorate the class with the data contract attributes, and also make sure you have a unique namespace on the contract. It doesn't have to be a valid running domain, just as long as they match and are unique.

[DataContract(Name = "CarClass", Namespace = "http://MyDomain.com/namespace/CarClass")]
public class CarClass
{
    [DataMember]
    public string CarName { get; set; }

    [DataMember]
    public string CarType { get; set; }

    [DataMember]
    public string CarSize { get; set; }
}


Azure Container with PowerShell

When I was trying to use PowerShell to action some Azure functionality, I found the information very scattered and it was hard to get one answer. So here I give you the golden goose for adding, removing, emptying and copying files to an Azure Container using PowerShell.

The small print: there are of course probably other methods of doing the same thing, but this is what worked for me. Also, this is not a demo of all the options and parameters the PowerShell commands can take, just what we need them to do. These scripts are set up to run with parameters passed in, but I have also put comments in so you can run them hardcoded.

How to add an Azure Container?

The parameters required for this script are the Resource Group Name and Storage Account Name of the already-built account, plus the new Container's name. You can see below where we pass in the parameters; however, in the static version we also need to log in to the required account and pass in the Subscription ID for the account as well.

You can get the Subscription ID by following the steps in this post.

## Get Parameters
Param(
    [string] $ResourceGroupName,
    [string] $StorageAccountName,
    [string] $StorageContainerName
)

## Static Parameters
#Login-AzureRmAccount
#Set-AzureRmContext -SubscriptionID 11111111-1111-1111-1111-111111111111
#$ResourceGroupName = "GroupName"
#$StorageAccountName = "AccountName"
#$StorageContainerName = "ContainerName"

Now we have all the details, we can get the storage context for the account using the code below. This gets the storage key to access the account, then creates the storage context from it.

$Keys = Get-AzureRmStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName;

$StorageContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $Keys[0].Value;

You need the storage context for the later calls that create the container. Before we create the new container, it is best to check if it already exists. In the circumstance I was in, I only wanted a warning flag: if it was already there then great, I don't need to create it, but that detail gets flagged to the console.

The first part is an IF statement that attempts to get the container. If nothing comes back, we use the parameters passed in to create the new container; otherwise we fall into the else and write a warning to the console. Also note the 'Permission' argument, which I have set to 'Container', but it can be set to the other options instead, or passed in as a new parameter.

if (!(Get-AzureStorageContainer -Context $StorageContext | Where-Object { $_.Name -eq $StorageContainerName })) {
    New-AzureStorageContainer -Context $StorageContext -Name $StorageContainerName -Permission Container;
}
else {
    Write-Warning "Container $StorageContainerName already exists."
}

This is all you need to create a new Azure Container; for the full example you can go here.
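For example, if the script were saved as 'New-AzureContainer.ps1' (a hypothetical file name), it could be called like this:

# Pass the account details and the name of the container to create
.\New-AzureContainer.ps1 -ResourceGroupName "GroupName" -StorageAccountName "accountname" -StorageContainerName "containername"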

How to copy files to an Azure Container?

Following the life cycle, after you create an Azure Container you will want to get files into it. We start as before, with all the parameters that are required. The additional one here is 'ArtifactStagingDirectory', which is the directory containing the content to upload.

## Get Parameters
Param(
    [string] $ResourceGroupName,
    [string] $StorageAccountName,
    [string] $StorageContainerName,
    [string] $ArtifactStagingDirectory
)

Again we get the Storage Account context for future commands, and we also get the paths of the files under the passed-in directory.

$storageAccount = ( Get-AzureRmStorageAccount | Where-Object{$_.StorageAccountName -eq $StorageAccountName} )

$ArtifactFilePaths = Get-ChildItem -Path "$ArtifactStagingDirectory\**" -Recurse -File | ForEach-Object -Process {$_.FullName}

With the file paths we can then loop through each location and add it to the Container. Within each loop we set up the source path and pass it in; you might notice we use the 'Force' argument, as we do not want a permission dialog box popping up, especially if we are automating.

foreach ($SourcePath in $ArtifactFilePaths) {
    # The blob name is the file's path relative to the staging directory
    Set-AzureStorageBlobContent -File $SourcePath -Blob $SourcePath.Substring($ArtifactStagingDirectory.length) `
        -Container $StorageContainerName -Context $storageAccount.Context -Force
}

This gets all the found files and folders into the Azure Container you have created. If you want to see the full version of how to copy files to an Azure Container, go here.

How to empty an Azure Container?

Like in most cases, if in doubt then restart, so this is a script to do just that by emptying the Container of its contents. The set-up for this has one difference: the Containers are passed as a comma-separated string of names instead. This is so you can empty one or many Containers at the same time, for example if you are cleaning out a whole deployment pipeline.

## Get Parameters
Param(
    [string] $ResourceGroupName,
    [string] $StorageAccountName,
    [string] $StorageContainerNames
)

As usual, we get the Azure Storage Account context for later commands.

$Keys = Get-AzureRmStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName;

$StorageContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $Keys[0].Value;

For this one I am going to break it down line by line instead of statement by statement. To get the full picture, click the link at the bottom to see the full version of this code.

We kick it off by looping through each of the Container names:

$StorageContainerNames.Split(",") | ForEach-Object {
    $currentContainer = $_  # Track the current Container name for the commands below

We then need to check the Container exists, else we would be trying to delete content from a non-existent Container.

if ((Get-AzureStorageContainer -Context $StorageContext | Where-Object { $_.Name -eq $currentContainer })){

If there is a Container, then we also need to check whether there are any Blobs to delete content from.

$blobs = Get-AzureStorageBlob -Container $currentContainer -Context $StorageContext

if ($blobs -ne $null)
{

If all of these checks come through, then we get the go-ahead to delete the contents; however, we need to loop through each of the Blobs in the array to clear each Blob item.

foreach ($blob in $blobs) {
    Write-Output ("Removing Blob: {0}" -f $blob.Name)
    Remove-AzureStorageBlob -Blob $blob.Name -Container $currentContainer -Context $StorageContext
}

The result of this is that all the contents of the named Containers are cleared out. As said before, these are just snippets; the full version of emptying the Azure Container is here.

How to remove an Azure Container?

Just like the previous script, we have the same parameters as the rest, one of which is a comma-separated string of Container names. With these parameters we are looking to clear the whole thing out by deleting the Azure Containers themselves.

We start with the parameters, get the Storage Account context and loop through the Containers.

## Get Parameters
Param(
    [string] $ResourceGroupName,
    [string] $StorageAccountName,
    [string] $StorageContainerNames
)

$Keys = Get-AzureRmStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName;

$StorageContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $Keys[0].Value;

$StorageContainerNames.Split(",") | ForEach-Object {
    $currentContainer = $_  # Track the current Container name for the commands below

For each Container, you check it exists before deleting it. Then comes the final command to delete the Container; you will notice we again use the 'Force' argument, to prevent the authorisation pop-up showing, and the Container gets deleted.

Remove-AzureStorageContainer -Context $StorageContext -Name $currentContainer -Force;

The full layout of removing an Azure Container can be seen here. 

Find Fully Qualified Azure SQL Database Name

This is a step-by-step guide on how to find your fully qualified Azure SQL database name in the new Azure Portal. This can be used for the Scale Azure SQL Database with PowerShell program below.

1) Browse to https://portal.azure.com and sign in with your credentials.

2) On the left menu click 'Browse', then 'SQL Servers'.

3) In the first panel, click on the server you want the name for. This brings up the next panel, where you should click the 'Settings' tile. Finally, click the 'Properties' button in the last tile.

4) The tile should now change, and you can find the fully qualified server name under 'SERVER NAME'.

Scale Azure SQL Database with PowerShell

In the pursuit of moving to Azure, we have had the need to scale the SQL databases. Your company may only have high traffic, or any traffic at all, during certain times or even certain seasons. We needed the best solution during working hours, so that is what I have based this on.

The program takes the database you want to scale, and then you can set it to whichever tier you require. So let's begin…

The first step is to install the necessary programs and plugins. These are:

  • Azure PowerShell
  • the Azure command-line tools

These can be downloaded from the Azure Downloads section.

Now we get into the PowerShell scripting. The first section gets the credentials for the database server. There are two methods for this that I have found.

The simplest and most secure method is to use 'Get-Credential'. This will prompt you for the username and the password for the database, which are used later for authentication.
# Variables
$DBname = "protech"
$creds = Get-Credential

The other method is less secure, as you will have your username and password stored in the file unencrypted. However, this is the best method if you want it automated without the need for user interaction.
# Variables
$DBname = "database"
$username = "username"
$password = "password"


# Credentials
$secstr = New-Object -TypeName System.Security.SecureString
$password.ToCharArray() | ForEach-Object { $secstr.AppendChar($_) }
$creds = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $secstr

This section then gets the connection to the database server. This is where it uses your 'creds' variable and also the fully qualified name of the server. You can get this by following the step-by-step guide above on how to find the fully qualified Azure SQL database name. In the example below I have put 'DBname.database.windows.net'.
$serverContext = New-AzureSqlDatabaseServerContext -Credential $creds -FullyQualifiedServerName DBname.database.windows.net
Other methods to get the server context are on the Microsoft website: https://msdn.microsoft.com/en-us/library/dn546736.aspx

Finally you can connect to the database using your 'serverContext' and the chosen 'DBname'.
$db = Get-AzureSqlDatabase $serverContext -DatabaseName $DBname
Below is how to change to the different Azure SQL Database tiers. I have added how to change to each one, so you just have to choose which you would like.


Scale Azure SQL Database to Basic

$b = Get-AzureSqlDatabaseServiceObjective $serverContext -ServiceObjectiveName "Basic"
Set-AzureSqlDatabase $serverContext -Database $db -ServiceObjective $b -Edition Basic


Scale Azure SQL Database to Standard 1

$S1 = Get-AzureSqlDatabaseServiceObjective $serverContext -ServiceObjectiveName "S1"
Set-AzureSqlDatabase $serverContext -Database $db -ServiceObjective $S1 -Edition Standard


Scale Azure SQL Database to Standard 2

$S2 = Get-AzureSqlDatabaseServiceObjective $serverContext -ServiceObjectiveName "S2"
Set-AzureSqlDatabase $serverContext -Database $db -ServiceObjective $S2 -Edition Standard


Scale Azure SQL Database to Standard 3

$S3 = Get-AzureSqlDatabaseServiceObjective $serverContext -ServiceObjectiveName "S3"
Set-AzureSqlDatabase $serverContext -Database $db -ServiceObjective $S3 -Edition Standard


Scale Azure SQL Database to Premium 1

$P1 = Get-AzureSqlDatabaseServiceObjective $serverContext -ServiceObjectiveName "P1"
Set-AzureSqlDatabase $serverContext -Database $db -ServiceObjective $P1 -Edition Premium


Scale Azure SQL Database to Premium 2

$P2 = Get-AzureSqlDatabaseServiceObjective $serverContext -ServiceObjectiveName "P2"
Set-AzureSqlDatabase $serverContext -Database $db -ServiceObjective $P2 -Edition Premium


Scale Azure SQL Database to Premium 3

$P3 = Get-AzureSqlDatabaseServiceObjective $serverContext -ServiceObjectiveName "P3"
Set-AzureSqlDatabase $serverContext -Database $db -ServiceObjective $P3 -Edition Premium


An issue I found was that once I had saved the PowerShell file, it wouldn't run. There was a permissions issue running the program, so to get around this, run the below script first and it will give you the permissions to run the code.

Error:

Set-ExecutionPolicy : Access to the registry key
'HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell'
is denied.

Solution:

Set-ExecutionPolicy Unrestricted