How to build an Azure Service Bus Relay Sender and Listener?

This is one of those 'I tried to do it, found it hard, so here is how I did it' posts. I was assigned to look into how to build a Sender and Listener using the Azure Service Bus Relay, so we could send data from Azure to On-Premise securely. There may be debate over whether this is secure compared to other methods, but that is not what I was asked to look at and not what this post is about.

Therefore I will demo how to create the Net TCP Relay in Azure, the code for a listener and the code for a sender in C#.NET. Remember, though, that this is what worked for me; there are other protocols, technologies and languages this can be done in.

How to build the Service Bus Relay

First you need to get to the Azure Portal at 'https://portal.azure.com'. This will take you to the dashboard, or to the login page which will then take you there. You can create a new dashboard to put all your resources in one place, which is advisable for organisation.

Click on the 'New' button in the side navigation, then search for 'Relay'. The results should then show the Relay service with the blue icon. Click 'Create' on this and you will be prompted for the details of the relay.


Add in the Azure name for the relay; this becomes the base URL for the service. Select your preferred Subscription, Resource Group and Location as you see fit. Once the details are in and the fields have a green tick for being OK, press the 'Create' button. If you want this pinned on your dashboard, don't forget to check the 'Pin to dashboard' box.


Once this is created you can go to the Relay, where you will see the Overview page of the new Relay.


The method I used to create this was the 'WCF Relay', the 'NetTcp' version. To do this, click on the 'WCF Relay' menu item in the side navigation below 'Overview'. This will load the list view of all the WCF Relays you have, which is none yet. Click on the 'WCF Relay' button at the top with the big plus symbol next to it.

Enter the name of the Relay; remember that you can have many of these, so it doesn't have to be too generic. The other details I left as they were, and you will notice that 'NetTcp' is selected for 'Relay Type'. Click 'Create' and now you have a Relay.


Note that if you can't see the Relay after pressing the button, reload the screen and it will appear.


Now you can move on to the code.

 

How to build a Relay Sender in C#.Net

A key part of making the two code segments work together is that the interface they both use must match, otherwise the data will not get sent or received.
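
For example, both projects might share a contract like the below. This is a minimal sketch using the 'IMyService' and 'CallMyService' names that appear in the snippets later in this post:

using System.ServiceModel;

// This contract must be identical on the sender and the listener.
[ServiceContract]
public interface IMyService
{
    [OperationContract]
    void CallMyService();
}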

We start by creating the three variables that are needed for each Relay account: the Service Bus Key, the Namespace and the Relay name.

To get the Service Bus Key, go to the Relay namespace page and click 'Shared access policies' under 'Properties' in the side navigation. You will know you are on the correct page as there will already be a 'RootManageSharedAccessKey'; new keys can be made to separate security concerns, but for this POC I just used this one.
If you click on it you will see the keys associated with the policy. You need the 'Primary key', which you can copy and put into the variable below:

private string _serviceBusKey = "[RootManageShareAccessKey-PrimaryKey]";

The other two you can get from the WCF Relay Overview page. The Namespace is the name of the Relay namespace and the Relay name is what the WCF Relay is called. These can also be taken from the 'WCF Relay Url' on the overview page:

http://[Namespace].servicebus.windows.net/[WcfRelayName]

private string _namespace = "[Namespace]";
private string _relayName = "[WcfRelayName]";

Next we create the variables for the connection to the Relay, by creating a new Net TCP binding and the Endpoint. The scheme I used was 'sb', but this again can be changed.

var binding = new NetTcpRelayBinding();
var endpoint = new EndpointAddress(
    ServiceBusEnvironment.CreateServiceUri("sb", _namespace, _relayName));

Visual Studio should help you import the correct namespaces, but if not, you need the following:
• NetTcpRelayBinding, ServiceBusEnvironment and TokenProvider come from the Microsoft.ServiceBus namespace
• EndpointAddress and ChannelFactory come from System.ServiceModel

Now we connect these to the interface, which must be the same as the Listener's, and create the tunnel between them.

// Factory
var factory = new ChannelFactory<IMyService>(binding, endpoint);
factory.Endpoint.Behaviors.Add(
    new TransportClientEndpointBehavior
    {
        TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
            "RootManageSharedAccessKey", _serviceBusKey)
    });

IMyService client = factory.CreateChannel();

From then on, when you want to call a method on the listener, you use 'client' dot the method or variable. For example:

client.CallMyService();
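
Putting the fragments together, a minimal sender could look something like the below. This is just a sketch of how I assembled it; the closing of the channel and factory at the end is my own addition, not something from the snippets above.

using System.ServiceModel;
using Microsoft.ServiceBus;

class Program
{
    static void Main()
    {
        var serviceBusKey = "[RootManageSharedAccessKey-PrimaryKey]";
        var ns = "[Namespace]";
        var relayName = "[WcfRelayName]";

        var binding = new NetTcpRelayBinding();
        var endpoint = new EndpointAddress(
            ServiceBusEnvironment.CreateServiceUri("sb", ns, relayName));

        var factory = new ChannelFactory<IMyService>(binding, endpoint);
        factory.Endpoint.Behaviors.Add(
            new TransportClientEndpointBehavior
            {
                TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
                    "RootManageSharedAccessKey", serviceBusKey)
            });

        IMyService client = factory.CreateChannel();
        client.CallMyService();

        // Tidy up the channel and factory once done.
        ((IClientChannel)client).Close();
        factory.Close();
    }
}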

How to build a Relay Listener in C#.Net

Getting this side working is very simple, as it is all managed from the Web configuration file (Web.config).

Step 1 is under 'Configuration > system.serviceModel > behaviors > endpointBehaviors'.
In this node add a new behavior called 'ServiceBusBehavior', and inside it you need a 'transportClientEndpointBehavior' with a sub-node of 'tokenProvider'. In this you will have the 'sharedAccessSignature', which uses the 'RootManageSharedAccessKey' mentioned before.

You can get the key the same way as for the sender: on the Relay namespace page, click 'Shared access policies' under 'Properties', open 'RootManageSharedAccessKey' and copy the 'Primary key' into the config below:

<endpointBehaviors>
  <behavior name="ServiceBusBehavior">
    <transportClientEndpointBehavior>
      <tokenProvider>
        <sharedAccessSignature keyName="RootManageSharedAccessKey" key="PRIMARY KEY"/>
      </tokenProvider>
    </transportClientEndpointBehavior>
  </behavior>
</endpointBehaviors>

Step 2 is to create the new binding for the Net TCP connection under 'Configuration > system.serviceModel > bindings'. Add in this a 'netTcpRelayBinding' node, with a 'binding' node inside. I named this one 'ServiceBusBinding', but it can be custom if you would like.

<bindings>
  <basicHttpBinding>
    <binding name="DefaultBinding" />
  </basicHttpBinding>
  <netTcpRelayBinding>
    <binding name="ServiceBusBinding" />
  </netTcpRelayBinding>
</bindings>

Step 3 is the connection settings for the Service Bus. Again you will need the 'RootManageSharedAccessKey' and also the Relay namespace from the URL. The below sits under 'Configuration > appSettings', replacing the items in [] with the correct values.

<appSettings>
  <!-- Service Bus specific app settings for messaging connections -->
  <add key="Microsoft.ServiceBus.ConnectionString"
       value="Endpoint=sb://[Namespace].servicebus.windows.net;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=[PrimaryKey]"/>
</appSettings>

Step 4 is the last one, ish. This binds the service to all the configuration we just made. To complete this step you will need to have created the WCF service and the above bindings. Under 'Configuration > system.serviceModel > services' add a new service node as below:

<service name="[WCF_Service]">
  <endpoint address="" binding="basicHttpBinding" bindingConfiguration="DefaultBinding"
            contract="[WCF_Interface]"/>
  <endpoint address="sb://[Namespace].servicebus.windows.net/[WCF_Relay]" binding="netTcpRelayBinding"
            behaviorConfiguration="ServiceBusBehavior" contract="[WCF_Interface]"/>
</service>

Replace the above placeholders as follows:
• [WCF_Service] = the WCF service class
• [WCF_Interface] = the WCF service interface
• [Namespace] = the Relay namespace
• [WCF_Relay] = the WCF Relay name
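
For reference, here is a minimal sketch of what the service class behind [WCF_Service] could look like, implementing the same IMyService contract as the sender (the class name and body are hypothetical):

using System.ServiceModel;

// The contract must match the sender's IMyService exactly.
public class MyService : IMyService
{
    public void CallMyService()
    {
        // Handle the data sent over the relay here.
    }
}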

This last one is an optional step, or more of a 'put it in if you want all the functionality'. I would advise that unless you know what you are playing with, you don't touch it. In the 'Configuration > system.serviceModel > extensions' node you need to add the below, which are all the Service Bus extensions.

<extensions>
<!-- In this extension section we are introducing all known service bus extensions. User can remove the ones they don't need. -->
<behaviorExtensions>
<add name="connectionStatusBehavior"
type="Microsoft.ServiceBus.Configuration.ConnectionStatusElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="transportClientEndpointBehavior"
type="Microsoft.ServiceBus.Configuration.TransportClientEndpointBehaviorElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="serviceRegistrySettings"
type="Microsoft.ServiceBus.Configuration.ServiceRegistrySettingsElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
</behaviorExtensions>
<bindingElementExtensions>
<add name="netMessagingTransport"
type="Microsoft.ServiceBus.Messaging.Configuration.NetMessagingTransportExtensionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="tcpRelayTransport"
type="Microsoft.ServiceBus.Configuration.TcpRelayTransportElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="httpRelayTransport"
type="Microsoft.ServiceBus.Configuration.HttpRelayTransportElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="httpsRelayTransport"
type="Microsoft.ServiceBus.Configuration.HttpsRelayTransportElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="onewayRelayTransport"
type="Microsoft.ServiceBus.Configuration.RelayedOnewayTransportElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
</bindingElementExtensions>
<bindingExtensions>
<add name="basicHttpRelayBinding"
type="Microsoft.ServiceBus.Configuration.BasicHttpRelayBindingCollectionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="webHttpRelayBinding"
type="Microsoft.ServiceBus.Configuration.WebHttpRelayBindingCollectionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="ws2007HttpRelayBinding"
type="Microsoft.ServiceBus.Configuration.WS2007HttpRelayBindingCollectionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="netTcpRelayBinding"
type="Microsoft.ServiceBus.Configuration.NetTcpRelayBindingCollectionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="netOnewayRelayBinding"
type="Microsoft.ServiceBus.Configuration.NetOnewayRelayBindingCollectionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="netEventRelayBinding"
type="Microsoft.ServiceBus.Configuration.NetEventRelayBindingCollectionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="netMessagingBinding"
type="Microsoft.ServiceBus.Messaging.Configuration.NetMessagingBindingCollectionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
</bindingExtensions>
</extensions>

Side Note

To pass custom classes between the two, you need to decorate the class with the data attributes, and also make sure you have a unique namespace on the contract. It doesn't have to be a valid running namespace, just as long as the two sides match and it is unique.

[DataContract(Name = "CarClass", Namespace = "http://MyDomain.com/namespace/CarClass")]
public class CarClass
{
    [DataMember]
    public string CarName { get; set; }

    [DataMember]
    public string CarType { get; set; }

    [DataMember]
    public string CarSize { get; set; }
}

 

Azure Container with PowerShell

When I was trying to use PowerShell to action some Azure functionality, I found the information very scattered and it was hard to get one answer. So here I give you the golden goose for adding, removing, emptying and copying files to an Azure Container using PowerShell.

The small print: there are of course probably more methods of doing the same thing, but this is how it worked for me. Also, this is not a demo of all the options and parameters the PowerShell commands can take, just what we need them to do. These scripts are set up to run with parameters passed in, but I have also put comments in so you can run them hardcoded.

How to add an Azure Container?

The parameters required for this script are the Resource Group Name and Storage Account Name for the already-built account, plus the new Container's name. You can see below where we pass in the parameters; however, in the static version we also need to log in to the required account and pass in the Subscription ID for the account as well.

You can get the Subscription ID by following the steps on this post.

## Get Parameters
Param(
    [string] $ResourceGroupName,
    [string] $StorageAccountName,
    [string] $StorageContainerName
)

## Static Parameters
#Login-AzureRmAccount
#Set-AzureRmContext -SubscriptionID 11111111-1111-1111-1111-111111111111
#$ResourceGroupName = "GroupName"
#$StorageAccountName = "AccountName"
#$StorageContainerName = "ContainerName"

Now we have all the details, we can get the storage details for the account with the code below. This gets the storage key used to access the account, then uses it to create the storage context.

    $Keys = Get-AzureRmStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName;

    $StorageContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $Keys[0].Value;

You need the Storage Context for the later calls that create the container. Before we create the new container, it is best to check whether it already exists. In the circumstance I was in, I only wanted a warning flag: if it was already there then great, I don't need to create it, but just flag that detail to the console.

The first part is an IF statement that attempts to get the Container. If it doesn't find anything, we use the parameters passed in to create the new Container; note the 'Permission' argument, which I have set to 'Container', but this can be set to the other options instead or created as a new parameter passed in. If it does find something, it falls into the else and writes a warning to the console.

if (!(Get-AzureStorageContainer -Context $StorageContext | Where-Object { $_.Name -eq $StorageContainerName })) {
    New-AzureStorageContainer -Context $StorageContext -Name $StorageContainerName -Permission Container;
}
else {
    Write-Warning "Container $StorageContainerName already exists."
}

This is all you need to create a new Azure Container; for the full example you can go here.

How to copy files to an Azure Container?

Following the life cycle, after you create an Azure Container you will want files in it. So we start as before with all the parameters that are required. The additional one here is 'ArtifactStagingDirectory', which is the directory containing the contents to upload.

## Get Parameters
Param(
    [string] $ResourceGroupName,
    [string] $StorageAccountName,
    [string] $StorageContainerName,
    [string] $ArtifactStagingDirectory
)

Again we get the Storage Account context for future commands, and then also get the paths of the files in the passed-in directory.

$storageAccount = ( Get-AzureRmStorageAccount | Where-Object{$_.StorageAccountName -eq $StorageAccountName} )

$ArtifactFilePaths = Get-ChildItem -Path "$ArtifactStagingDirectory\**" -Recurse -File | ForEach-Object -Process {$_.FullName}

With the file paths we can then loop through each file location to add it to the Container. Within each loop we set up the source path and pass it in; you might notice we use the 'Force' argument, as we do not want a permission dialog box popping up, especially if we are automating.

foreach ($SourcePath in $ArtifactFilePaths) {
    # Echo the source path and its container-relative blob name to the console.
    $SourcePath
    $SourcePath.Substring($ArtifactStagingDirectory.length)

    Set-AzureStorageBlobContent -File $SourcePath -Blob $SourcePath.Substring($ArtifactStagingDirectory.length) `
        -Container $StorageContainerName -Context $StorageAccount.Context -Force
}

This will get all the found files and folders into the Azure Container you have created. If you want to see the full version of how to copy files to an Azure Container go here.

How to empty an Azure Container?

Like in most cases, if in doubt then restart, so this is a script to do just that by emptying the Container of its contents. The set-up for this has one difference: the Containers are passed as a comma-separated string of names instead. This is so you can empty one or many Containers at the same time, for example if you are cleaning out a whole deployment pipeline.

## Get Parameters
Param(
    [string] $ResourceGroupName,
    [string] $StorageAccountName,
    [string] $StorageContainerNames
)

As usual, we get the Azure Storage Account's context for the later commands.

    $Keys = Get-AzureRmStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName;

    $StorageContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $Keys[0].Value;

For this one I am going to break it down line by line instead of by statement. To get the full picture, click on the link at the bottom to see the full version of this code.

We kick it off by looping over each of the Container names. The later snippets use '$currentContainer', so I am assuming the full version captures the current name at the top of the loop:

$StorageContainerNames.Split(",") | ForEach {
    $currentContainer = $_

We then need to check that the Container exists, else we would be trying to delete content from a non-existent Container.

if ((Get-AzureStorageContainer -Context $StorageContext | Where-Object { $_.Name -eq $currentContainer })){

If there is a Container, then we also need to check whether there are any Blobs to delete content from.

$blobs = Get-AzureStorageBlob -Container $currentContainer -Context $StorageContext

if ($blobs -ne $null)
{

If all of these checks come through, we get the go-ahead to delete the contents; however, we need to loop through each of the Blobs in the array to clear each Blob item.

foreach ($blob in $blobs) {
    Write-Output ("Removing Blob: {0}" -f $blob.Name)
    Remove-AzureStorageBlob -Blob $blob.Name -Container $currentContainer -Context $StorageContext
}

The result of this is that all the contents of the named Containers will be cleared out. As said before, these are just snippets; the full version of emptying the Azure Container is here.
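
For convenience, here is a sketch of the fragments assembled into one script. The '$currentContainer = $_' assignment and the warning for a missing Container are my assumptions of how the full version hangs together:

## Get Parameters
Param(
    [string] $ResourceGroupName,
    [string] $StorageAccountName,
    [string] $StorageContainerNames
)

$Keys = Get-AzureRmStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName;
$StorageContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $Keys[0].Value;

$StorageContainerNames.Split(",") | ForEach {
    # Capture the current Container name for the nested pipelines below.
    $currentContainer = $_
    if ((Get-AzureStorageContainer -Context $StorageContext | Where-Object { $_.Name -eq $currentContainer })) {
        $blobs = Get-AzureStorageBlob -Container $currentContainer -Context $StorageContext
        if ($blobs -ne $null) {
            foreach ($blob in $blobs) {
                Write-Output ("Removing Blob: {0}" -f $blob.Name)
                Remove-AzureStorageBlob -Blob $blob.Name -Container $currentContainer -Context $StorageContext
            }
        }
    }
    else {
        Write-Warning "Container $currentContainer does not exist."
    }
}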

How to remove an Azure Container?

Just like the previous script, we have the same parameters as the rest, one of which contains a comma-separated string of Container names. With these parameters we are looking to clear the whole thing out by deleting the Azure Containers.

We start with the parameters, get the Storage Account context and loop through the Containers, capturing each name in '$currentContainer' as in the emptying script.

## Get Parameters
Param(
    [string] $ResourceGroupName,
    [string] $StorageAccountName,
    [string] $StorageContainerNames
)

$Keys = Get-AzureRmStorageAccountKey -ResourceGroupName $ResourceGroupName -Name $StorageAccountName;

$StorageContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $Keys[0].Value;

$StorageContainerNames.Split(",") | ForEach {
    $currentContainer = $_

For each Container, you check that it exists before deleting it. Then comes the final command to delete the Container; you will notice we again use the 'Force' argument, to prevent the authorisation pop-up showing and get the Container deleted.

Remove-AzureStorageContainer -Context $StorageContext -Name $currentContainer -Force;
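
Wrapped in the existence check just described, following the same pattern as the emptying script, the loop body might look like this (the Write-Warning branch is my own addition):

if ((Get-AzureStorageContainer -Context $StorageContext | Where-Object { $_.Name -eq $currentContainer })) {
    Remove-AzureStorageContainer -Context $StorageContext -Name $currentContainer -Force;
}
else {
    Write-Warning "Container $currentContainer does not exist."
}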

The full layout of removing an Azure Container can be seen here. 

Kentico, the mule of development

I have talked before about how Content Management Systems (CMS) are getting so good that they do more of the work so you don't have to. Instead of doing the donkey work, you can be improving the whole application and building the most advanced features. However, over my time working with one such CMS, Kentico, I have found some pros and cons of this mode of working, so I wanted to share my thoughts on whether it is a favourable idea to be working on these CMSs.

Kentico’s power

First off, an overview of what Kentico is, in case you have not heard of it. Kentico is a C#.NET Swiss army knife of a CMS: it can do Content Management, E-commerce, Online Marketing and basically most of the work for a developer.

Some of the features are:

Kentico uses the name 'Page Types' for its content templates. In these you describe the format in which the content should be held, like a database table. You tell it what the field is called in the database, the size, the data format and things like the caption the Content Editor sees. When the Content Editor then adds new content and chooses that Page Type, they are presented with a form with these fields to enter the content. What it means for the developer is a standard format for pulling different content from the CMS, rather than just a blob of data. (Find out more: http://www.kentico.com/product/all-features/web-content-management/custom-pages)

As well as getting well-formatted content, you can use 'Web Parts'. Web Parts are functional modules that get used on a template. These can be things like a navigation bar that appears on each page, and you can have different types on each template. They can also pull content from the database using a Page Type, like a news feed or a list of blog posts. (Find out more: http://www.kentico.com/product/all-features/web-content-management/webparts)

However, Web Parts are added by the developer and are only a single instance. What we really want is for the Content Editor to be able to choose which pages have which modules, and for this there are 'Widgets'. These are instances of Web Parts: you create a Web Part and the Widget references it. When the Content Editor uses the Widget, it takes a copy of the Web Part that it stores on the page. The control this gives the Content Editor is to decide which module shows where, when and how. These can get very complex to give more control to the Content Editor, or the Developer can keep some control by limiting the functionality of the module. (Find out more: http://www.kentico.com/product/all-features/web-content-management/widgets)

The other great Content Editor control is building forms, yes, full forms. With the CMS you can use a WYSIWYG editor to construct a form with various types of fields and have submissions saved to the database. These can also be customised to send emails to the user or the administrator and to create A/B split tests of forms, and the editor can even customise the layout of the form. This will spare some hours building custom forms each time with the same validation, same submission pattern and same design. (Find out more: http://www.kentico.com/product/all-features/web-content-management/on-line-forms)

You can read more about all the features in depth and download a demo from the Kentico website. [http://www.kentico.com/product/all-features]

Tell me why?

Other than just showing you the brochure, I wanted to explain what a fully customisable CMS like Kentico brings.

In my opinion, DevOps is all about empowering the next person to do more with less work; for example, an Operations Engineer could make it easier for a Developer to spin up a development environment. This means the Operations Engineer can keep doing other work and the Developer can get on with their work faster. It is the same with Kentico and Content Editors: the more generic, reusable Web Parts you make, the more the Content Editor can do without the assistance of the Developer, which leaves the Developer to get on with other work like improving the systems.

When you have a bespoke website where all the changes have to be made in code, the Developer needs to do all the leg work for even the smallest change. If the Content Editor wants a box moved to another location on the page, that's a boring task for the Developer. With a CMS like Kentico, however, the Content Editor can move it themselves.

I would rather have this kind of work pattern for both Front End and Back End development, as I want to be working on the next great and latest thing, while also looking to improve my code and testing. This work pattern removes those small tasks that interrupt your work; plus, if you work in Scrum like myself, those tasks take up your sprint points in the more Developer-heavy pattern.

As mentioned above, it's not just moving Widgets and custom Web Parts that make this CMS great; it is also the fact that Content Editors can create their own forms. I remember having to build every simple form from scratch each time, but this puts the work in their hands, and in a simple way. I say simple forms, but it is as simple as you, the Developer, want to make it. As you can customise or custom-build the Form Controls that make up the form, and the Form Web Part that loads and saves it, you can make them as complex as you want. If you want the form in a different style, build different Form Widgets. If you want multiple fields that result in a single field of content, like an Address, build a custom Form Control. The ideas are only limited by you, the Developer, or the Content Editor's ideas.

The downsides I have seen are where the Content Editors have a lot of great simple ideas. I have been given tasks of adding a new type of button or adding a new content area to a Widget. Although we are empowering them, we still need to provide the tools, which aren't always the most inventive work. There is also a balance between empowering them and handing them the code. You could expose all the customisable features of a Web Part, say a button's colour, size, wording and features, but then it's a huge amount of work for one button. This would put them off using it; however, closing it down too far puts more tasks back on you.

Another challenge is what you can customise versus what you should. Kentico's recommendation when customising anything that is default to Kentico is to clone it and customise the clone. This is so that if or when you need to upgrade the CMS, you haven't broken anything the upgrade will use; plus your changes could get overwritten when the upgrade takes place. Even though Kentico is fully customisable, the method by which it performs some tasks might not be to your liking, and as best practice you need to leave those as they are.

Final thoughts?

Although there are downsides to using a CMS like Kentico, I think any method of using a CMS will have its downsides. I feel that with this set-up I am more focused on improving the code, myself and the product, rather than doing the same task each time.

What CMS do you use, and do you think it helps you become a better developer? Comment below.

Galen Framework Hover in JavaScript

A bit of a small audience for this one, but a project I was put on wanted us to use the Galen Framework to test their UI. For people who do not know what this framework is, it is a method of writing acceptance tests that are matched against the output in the browser window. It can be coded to work for all different devices, sizes and URLs. One hard patch I came to was actioning the hover state of an element and then testing it.

If you would like to read up more about the Galen Framework first, you can find all the information you need on the website.

To test, you can either write the Galen syntax and run it in the Command Prompt, or you can run it using JavaScript. I opted for the Galen syntax for no real reason, but there was a method to action the hover in the JavaScript version.

Ivan Shubin, the creator of the Galen Framework, pointed me here for the example.

load("init.js");

load("pages/WelcomePage.js");

 

testOnAllDevices("Welcome page", "/", function (driver, device) {

new WelcomePage(driver).waitForIt();

checkLayout(driver, "specs/welcomePage.gspec", device.tags);

});

 

testOnDevice($galen.devices.desktop, "Menu Highlight", "/", function (driver, device) {

var welcomePage = new WelcomePage(driver).waitForIt();

logged("Checking color for menu item", function () {

checkLayout(driver, "specs/menuHighlight.gspec", ["usual"]);

})

 

logged("Checking color for highlighted menu item", function () {

welcomePage.hoverFirstMenuItem();

checkLayout(driver, "specs/menuHighlight.gspec", ["hovered"]);

});

});

You can see the 'hoverFirstMenuItem' method that actions the hover; under the covers this uses the 'hover' action that comes with the Framework's JavaScript API. You can find more about the API and the 'hover' method in the Reference Guide for JavaScript.

However, I was trying to use the Galen Framework in the command prompt, which can inject JavaScript into the page that runs once the page has loaded. The first idea I had was to trigger a 'hover' action; because the site I was running it on has jQuery loaded, I could use it like below:

$(document).ready(function () {
    // hover links
    $('a').trigger('mouseenter');
    $('a').trigger('mouseover');
    $('a').trigger('hover');
    $('a').hover();
});

As you can see, I went a bit overboard trying these all out, as none of them seemed to work. I knew the jQuery was working, as I was able to run other methods to do click events and other things. After some searching, I found a method that did.

I found a way to get all the style sheets and then search through them. Once I could search them, I could find what the hovered styling was for that element, extract it, and inject it onto the element, so its standard style becomes its hovered style.

In the method below I use 'document.styleSheets' to loop through each style sheet, then get each rule and loop through those. If the class I am looking for is found, I use 'cssText' to get all the styling from the rule. This comes back in the form '{background-color:blue}', so the response text is stripped down to just the CSS.

function getStyle(className) {
    var c = '';
    for (var i = 0; i < document.styleSheets.length; i++) {
        try {
            var classes = document.styleSheets[i].rules || document.styleSheets[i].cssRules;
            if (classes) {
                for (var x = 0; x < classes.length; x++) {
                    try {
                        if (classes[x].selectorText.indexOf(className) > -1) {
                            var s = (classes[x].cssText) ? classes[x].cssText : classes[x].style.cssText;
                            s = s.substring(s.indexOf("{") + 1);
                            c += s.substring(0, s.indexOf("}"));
                        }
                    } catch (ex) {
                        // Rule has no selectorText; skip it.
                    }
                }
            }
        } catch (ex) {
            // Style sheet is not accessible; skip it.
        }
    }
    return c;
}

You will notice there are a few try/catches in the loops; these are there because some of the style sheets it picks up are not accessible, so they throw an error in the JavaScript. The try/catches mean it can keep looping through the ones it can access instead of breaking out. Another fix is where it uses 'rules' or 'cssRules' and '.cssText' or '.style.cssText'. This is due to differing browser standards: some browsers use one set and some use the other, so this prepares for either case when your tests run.

Below is how the full code fits together and can be used. You can see I invoke the method by passing 'a:hover', as it is the hover state of links I need access to.

$(document).ready(function () {

    function getStyle(className) {
        var c = '';
        for (var i = 0; i < document.styleSheets.length; i++) {
            try {
                var classes = document.styleSheets[i].rules || document.styleSheets[i].cssRules;
                if (classes) {
                    for (var x = 0; x < classes.length; x++) {
                        try {
                            if (classes[x].selectorText.indexOf(className) > -1) {
                                var s = (classes[x].cssText) ? classes[x].cssText : classes[x].style.cssText;
                                s = s.substring(s.indexOf("{") + 1);
                                c += s.substring(0, s.indexOf("}"));
                            }
                        } catch (ex) {
                            // Rule has no selectorText; skip it.
                        }
                    }
                }
            } catch (ex) {
                // Style sheet is not accessible; skip it.
            }
        }
        return c;
    }

    // Apply the hover styling to all links.
    $('a').attr('style', getStyle('a:hover'));
});

How to force clear the cache

As all developers know, our best friend and worst enemy is the cache. You can make all your changes and be confident they will work, and then nothing changes just when you want it to. It is great that you can clear your own cache, but how can you refresh all the users' caches? Here are some of the methods I know of and have used.

A bit of an FYI: this is focused on JavaScript on a Windows server running IIS, but some of these methods can be used across multiple languages.

Query string

First up is the query string method. When the browser caches a file, it caches the URL as its location, and this includes the query string. The URL acts like the key to the value, so if you change the key then the browser needs to get a new value.

For Example:
http://www.myurl.com/myjavascript.js?v=1.2

The query string can be anything, so you could even just put the number.

For Example:
http://www.myurl.com/myjavascript.js?1.2

You can then make this dynamic by appending the build number of your deployment. This, however, is dependent on who, what and where your file is. Unfortunately, some browsers don't use the whole URL either; they cut the query string off the key part of the cache, which means this method doesn't work in those browsers.
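
As a rough sketch of the dynamic approach, you could build the script URL server-side from the assembly version; the file path and the choice of assembly version here are my own illustration:

using System.Reflection;

class CacheBusting
{
    // Builds a script URL with the deployed assembly's version as the query string.
    public static string GetScriptUrl()
    {
        var version = Assembly.GetExecutingAssembly().GetName().Version.ToString();
        return string.Format("/scripts/myjavascript.js?v={0}", version);
    }
}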

IIS URL Rewrite

This can be done in IIS, and also using the .htaccess file on an Apache server. For this you will need to install the IIS URL Rewrite tool from Microsoft.

The idea is to rewrite part of the URL's directory path instead, so unlike the previous method it works in every browser. The only issue I have found is that I don't know how to add a version number without physically changing the number for each deployment. You can, however, also use this in the web.config of the application, where you might be able to pick up the build number dynamically.

Here is how to use the URL Rewriter:
http://www.iis.net/learn/extensions/url-rewrite-module/creating-rewrite-rules-for-the-url-rewrite-module
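
As a rough example of the directory approach, a rule like the below in web.config rewrites a versioned folder back to the real file, so '/scripts/v12/myjavascript.js' serves '/scripts/myjavascript.js'. The rule name and URL pattern are my own illustration:

<system.webServer>
  <rewrite>
    <rules>
      <rule name="StripVersionFolder">
        <!-- Match URLs like scripts/v12/myjavascript.js -->
        <match url="^(.*)/v\d+/(.*)$" />
        <action type="Rewrite" url="{R:1}/{R:2}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>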

Restart Application Pool

This is a more indirect method. You can force users to get a new copy of the site if you restart your application pool in IIS, but this clears the whole site, so if you just changed one JavaScript file they will be getting more than they needed. This can be worse for users on their mobile phones, as it then takes more data to get the whole site.

If you are OK with this, you could also write a PowerShell script to restart the applications at will. This can be done by running iisreset from an elevated (on Vista/Win7/Win2008) command prompt, which will restart IIS and in turn restart all the application pools.
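
If you would rather recycle a single application pool than the whole of IIS, here is a sketch using the WebAdministration module (the pool name is hypothetical):

# Run from an elevated prompt; WebAdministration ships with IIS 7 and above.
Import-Module WebAdministration
Restart-WebAppPool -Name "MyAppPool"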

These are just some methods that can be used, and there are other ways to apply the same principles, perhaps in ways that suit your application better. If you have any other methods, please comment below and share the knowledge.