AppDynamics grouping Database Custom Metric Queries

When you create Custom Database Metrics in AppDynamics, your first thought is to create a new row for each metric, but if you have a lot to report on this can become messy. Not only that, but in the metric view you will have a very long list of reports to go through. When we had a consultant down at work, we were shown how to group collections of metrics in one query, which then shows in the metric view as a sub folder. We could not find this tactic anywhere on the internet, so I thought I would share this very handy insight for AppDynamics.

The standard method to add custom queries and metrics is to go to the Databases configuration view in AppDynamics, shown below, and add a new query for each of the metrics you wish to report on.

AppDynamics Databases

You can then go to the metric view and see the data coming in like below.

AppDynamics Metric Browser

However, like I said above, this list can grow fast, plus by default you are limited to only 20 of these queries, which can disappear even faster. This method therefore gives you more bang for your buck on custom metrics, as well as better organisation of your data.

Instead of adding each query separately, what we can do is group a collection of queries into a sub folder of the ‘Custom Metric’ folder, to look like this.

  • Before
    • Custom Metric
      • Queue 1
      • Queue 2
      • Queue 3
  • After
    • Custom Metric
      • MessagingQueueMonitoring
        • Queue 1
        • Queue 2
        • Queue 3

As we completed this at my company in Microsoft SQL Server, I will use that as the example, but I am confident it can be translated to other database languages with the same outcome and some slight changes to syntax.

Say we start with the 3 queries we want to monitor, keeping them simple:

SELECT count(id) FROM MessageQueueOne
SELECT count(id) FROM MessageQueueTwo
SELECT count(id) FROM MessageQueueThree

To create the top level folder, you simply create a single query item called ‘MessagingQueueMonitoring’. In this new custom metric query you need to add the above 3 SQL statements, but we need them to be a single query instead of 3. For this to work we will use the SQL command ‘UNION ALL’ to join them together:

SELECT count(id) FROM MessageQueueOne
UNION ALL
SELECT count(id) FROM MessageQueueTwo
UNION ALL
SELECT count(id) FROM MessageQueueThree

This will now create one table with 3 rows and their values, but for AppDynamics to recognise these in the metric view we need to tell it what each of these rows means. To tell AppDynamics what the nodes underneath are called, you add a column to each query for the name; this column should be called ‘Type’. Then, for AppDynamics to know which column holds the value, you call that column ‘Total’.

You should end up with a query like below:

SELECT 'Message Queue One' as Type, count(id) as Total FROM MessageQueueOne
UNION ALL

SELECT 'Message Queue Two' as Type, count(id) as Total FROM MessageQueueTwo
UNION ALL

SELECT 'Message Queue Three' as Type, count(id) as Total FROM MessageQueueThree

Then this should result in a table like this:

Type                  Total
Message Queue One     4
Message Queue Two     2
Message Queue Three   56

What do you consider when building a new application?

When you’re starting a new project and thinking about what you’re going to use in your application, what factors do you consider? Sometimes this depends on your role: a developer might jump straight in with ‘just use X coding language’ and continue on their way, whereas others might want to play with whatever the newest technology is. Then there are people like myself, who like to think about the whole picture, so here are some of the key factors I consider when building a new application.

 

Code Repository

This one should just come hand in hand with your company, as there should already be a standard of where and how you store your code. However, there’s a lot of ‘should’ in that sentence: some younger companies don’t have this thought through yet, you could be working alone, or the company might have something in place but you are thinking of exploring new technologies and new ground.

The big factor to consider with a repository is the company that is holding that information. It starts with where the code will be held, for legal reasons, security and access. Now you might think access is a silly thing to think about here, as it is all just done over HTTPS on your computer, isn’t it? But you might need to consider whether you are going through a proxy, so security might lock you down unless it is a secure route. You also might put the repository on premise due to the value of the code you are storing, which might also drive your choice of company to store your code. If you think the company storing your code will be gone in 2 years, then you might want to think about either a different company or a good exit plan just in case. These days there are a few big players that just make clear sense, so after this it comes down to the cost of that company’s services at the level you require.

The other factor is how the code is stored and retrieved from the repository with tools like Git, as this is another technology that you will depend on. You will need to consider what learning curve others will need to undertake if they are to use this version control system and, like the storage factor, whether it will still be around in a few years’ time.

Linked to this is what tools you are thinking of using later in the thought process for build, test and deployment, as these might make it harder work to move code between locations and tools. For example, your repository could be on premise behind a firewall and security, while your build tool is in the cloud with one company and the test scripts are stored in another company’s repository.

 

Language

You might have an easy job choosing a language: if you are a pure Java house, or PHP only, then that is what you will be using, as you can only do what you know. However, if you want to branch out, or you do have more possibilities, then the world can open up for you.

A bit higher level than choosing the language you want, but design patterns do come into this. I have seen someone choose a .NET MVC framework for their back end system, but then put an AngularJS front end framework on top. What you are doing here is putting an MVC design on top of an MVC design, which causes all types of issues. Therefore you need to consider, if you are using more than one language, how they complement each other. For instance, in this circumstance you could either go for the AngularJS MVC front end with a microservice .NET back end, or have the .NET MVC application with a ReactJS front end to enrich the user’s experience.

As I said before, you might already know what languages you are going to use as that is your bread and butter, but if not then you need to think about the learning curve for yourself and other developers. If you are throwing new technologies into the mix then you need to be sure everyone can keep up with what you intend to use, or you will become the single point of failure and cause support issues when someone is off.

As well as thinking about who will be developing the technology, you need to think about who will be using it. This can be the end user’s experience, or even the people controlling the data, like content editors, if it is that type of system. If you would like a fast and interactive application then you will want to push more of the features to the client side technologies to improve the user’s experience, but you might not need to make it all singing and dancing if it is a console application running internally that just needs to do the job. Therefore the use case of the language has an importance to the choice.

 

Testing

Testing is another choice in itself. Once you know your language you know what testing tools are available to use, but they then carry all the same considerations as the coding language itself, as you will still need to develop these tests and trust in their results.

I add this section in though, as it is a consideration you need to have, and also because of how it factors into giving you, the developer, feedback on your test results. Tests might run as part of your check-in, or they might be part of a nightly build that reports back to you in the morning, so how quickly results are reported to the developer determines how fast they can react to them.

As part of the tooling for the tests you will need to recognise what levels of testing they go down to, for example unit tests, integration tests, UI tests or even security testing. You then need to consider which of these tools you can integrate into your local build of the application to give you instant feedback, for example a linter for JavaScript which will tell you instantly if there is a conflict or error. This will save you the time of checking in and waiting for a build result, which might clog up the pipeline for others checking in. A small local pre-check script can tie these checks together, as shown in the sketch below.
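For instance, a small local pre-check script (a rough sketch only, assuming ESLint is the JavaScript linter and the solution’s tests run with ‘dotnet test’) could run the same checks before you check in:

# precheck.ps1 - hypothetical local pre-check script run before checking code in
Write-Host "Running JavaScript linting..."
npx eslint .
if ($LASTEXITCODE -ne 0) { Write-Error "Linting failed, fix the issues before checking in."; exit 1 }

Write-Host "Running unit tests..."
dotnet test
if ($LASTEXITCODE -ne 0) { Write-Error "Unit tests failed, fix the issues before checking in."; exit 1 }

Write-Host "All local checks passed."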

 

Continuous Integration (CI) and Continuous Delivery (CD)

This is a little removed from what application you are building, as another person in a DevOps role might be doing this, and it should have no major impact on your code since it is abstracted from what you are developing. However, the link can be made through how you are running the application on your local machine. You could be using a task runner like Gulp to build and deploy your code on your local machine, in which case it makes sense to use the same task runner in the CI/CD.

Therefore you need to think about what tooling can and will be used between your local machine and the CI/CD system, so there is a single method of build and deployment. You want to be able to mirror what the pipeline will be doing, so you can replicate any issue, and the same applies the other way round, as it will help that DevOps person build the pipeline for your application. One way to keep the two in step is a shared build script, as sketched below.
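As a rough sketch (assuming a .NET project and a hypothetical script name of build.ps1), a single script that both you and the pipeline call might look like this:

# build.ps1 - hypothetical shared build script, run locally and by the CI/CD pipeline
param(
    [string] $Configuration = "Release",
    [string] $OutputPath = ".\artifacts"
)

# The same restore, build, test and publish steps run everywhere, so local runs mirror the pipeline
dotnet restore
dotnet build --configuration $Configuration
dotnet test --configuration $Configuration
dotnet publish --configuration $Configuration --output $OutputPath

Your local task runner and the CI/CD definition then both call this one script, so any change to the build process only happens in one place.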

 

Monitoring and logging

Part of the journey of your code is not just what you are building and deploying, but also what your code is doing after that in the real world. The best things to help with this are logging, for reviewing past issues, and monitoring, to detect current or upcoming issues.

For your logging I would always encourage 3 levels of logging, Information, Debug and Error, which are configurable to turn on or off in production. Information will help when trying to trace where an issue happens and what kind of data is being passed through; it should be a medium level of output, so as not to fill up your drive fast but still give you plenty of information to help your investigation. Debug is then the full level down, giving you everything that is happening with the application and all the details, but be careful not to print GDPR data that will sit in the logs, and not to crash your drives from over-filling. Errors are then what they say on the tin: they only get reported when there is an error in the application, and you should check them constantly to make sure you remove all potential issues with the code. The deciding factor here for your application is the technology and how it is implemented in your code. We recently changed logging technology, but how the old one was implemented made it a longer task than it should have been, which could have been made easier with abstraction.

Monitoring depends on what your application is doing, but it can also expand past your code. If you have something like message queues you can monitor their levels, or you could be monitoring the errors in the logs folder remotely; these will help pre-warn you that something is going wrong before it hits the peak issue. However, the issue might not be coming from your code, so you should also be monitoring things like the machine it is sitting on and the network traffic, in case there is an issue there. These choices have an impact on the code, because some monitoring tools do not support some languages, like .NET Core, which we have found hard in some places. As a simple illustration, see the sketch below for scanning a logs folder for recent errors.
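The sketch below is only an illustration of the idea (the share name, file filter and time window are all hypothetical): it scans a remote logs folder for error entries written in the last few minutes and raises a warning before they become the peak issue.

# monitor-logs.ps1 - hypothetical sketch of remotely watching a logs folder for errors
param(
    [string] $LogPath = "\\AppServer01\Logs",   # hypothetical remote share holding the application logs
    [int] $Minutes = 15
)

# Find error entries written to the logs in the last $Minutes minutes
$recentErrors = Get-ChildItem -Path $LogPath -Filter "*.log" |
    Where-Object { $_.LastWriteTime -gt (Get-Date).AddMinutes(-$Minutes) } |
    Select-String -Pattern "ERROR"

if ($recentErrors.Count -gt 0) {
    Write-Warning "$($recentErrors.Count) error entries logged in the last $Minutes minutes."
}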

 

Documentation

Document everything is the simple way to put it. Of course you need to do it in a sensible manner and format, but you should have documentation before even the first character of code is written, to give you and others the information you have decided above. Then you will need to document any processes or changes during the build for others to see. If you know exactly how it all works and then someone else takes over while you are away, you put that person in a rubbish position unless they have something to reference.

These documents need a common location that everyone has access to read, write and edit. A thought you could also try is automated documentation drawn from the code’s comments and formatting, so you would need to bear this in mind when writing out your folder structure and naming conventions.

You can go overboard by documenting too much, as some things, like the code or the CI/CD process, should be clear from the comments and naming. However, even if documentation for tools like Git has already been written, it is helpful to create a document saying at a high level what tooling you are using, why you are using it, and then referencing their documentation. It gives the others on the project a single point of truth to get all the information they require, plus if the tooling changes you can update that one document to reference the new tooling, and everyone will already know where to find that new information.

 

DevOps

In the end what we have just gone through is the DevOps process of Design, Build, Test, Deploy, Report and Learn.

  • You are at the design point while looking at what languages and tools you would like to use.
  • We then use the chosen language to build the new feature or application.
  • There will be a few levels of testing through the process of building the new project.
  • The consideration of CI and CD gets our product deployed to new locations in a repeatable and easy way.
  • Between the logging and monitoring, we report information back to developers and business owners, who can learn from the metrics to repeat the cycle again.

DevOps

Reference: https://medium.com/@neonrocket/devops-is-a-culture-not-a-role-be1bed149b0

How to setup AppDynamics for multiple .Net Core 2.0 applications

We decided to go with AppDynamics to monitor our infrastructure and code, which is great, and even better they have released support for .NET Core 2.0. However, when working with their product and consultant we found an issue with monitoring multiple .NET Core instances on one server, plus console apps, but we found a way.

Currently their documentation, which is helpful, shows how to set up monitoring for a .NET Core application with environment variables. Following the directions in the AppDynamics documentation, you set an environment variable for the profiler’s path, which sits inside each application, but of course we can’t set multiple values for the same machine-level environment variable. Therefore we copied the profiler DLL to a central source and used that as the environment variable, but quickly found out that it still didn’t work: for the profiler to start tracking, it needs to point to the application’s root folder for each application.

The consultant’s investigation then led to looking at how we could set the environment variables per application, and we found they can be set in the web.config using the ‘environmentVariables’ node under the ‘aspNetCore’ node, as stated in the Microsoft documentation. Of course the ‘dotnet publish’ command generates this web.config, so you can’t just set this in the source. Therefore in the release of the code I wrote some PowerShell to set these parameters.

In the below PowerShell, I get the XML content of the web.config, then create each of the environment variable nodes I want to insert. Once I have these I can then insert them into the correct ‘aspNetCore’ node of the XML variable, which I then use to overwrite the contents of the existing file.

Example PowerShell:

$configFile = "web.config";
$sourceDir = "D:\wwwroot";
$subFolderName = "ServiceOne"; ## example sub folder of the application being updated
$configPath = "$sourceDir\$subFolderName\$configFile";

## Get XML
$doc = New-Object System.Xml.XmlDocument
$doc.Load($configPath)
$environmentVariables = $doc.CreateElement("environmentVariables")

## Set 64 bit version
$Profiler64 = $doc.CreateElement("environmentVariable")
$Profiler64.SetAttribute("name", "CORECLR_PROFILER_PATH_64")
$Profiler64.SetAttribute("value", "$sourceDir\$subFolderName\AppDynamics.Profiler_x64.dll")
$environmentVariables.AppendChild($Profiler64)

## Set 32 bit version
$Profiler32 = $doc.CreateElement("environmentVariable")
$Profiler32.SetAttribute("name", "CORECLR_PROFILER_PATH_32")
$Profiler32.SetAttribute("value", "$sourceDir\$subFolderName\AppDynamics.Profiler_x86.dll")
$environmentVariables.AppendChild($Profiler32)

$doc.SelectSingleNode("configuration/system.webServer/aspNetCore").AppendChild($environmentVariables)

$doc.Save($configPath)

Example Web.config result:

<configuration>
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
    </handlers>
    <aspNetCore processPath="dotnet" arguments=".\SecurityService.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout">
      <environmentVariables>
        <environmentVariable name="CORECLR_PROFILER_PATH_64" value="D:\IIS\ServiceOne\AppDynamics.Profiler_x64.dll" />
        <environmentVariable name="CORECLR_PROFILER_PATH_32" value="D:\IIS\ServiceOne\AppDynamics.Profiler_x86.dll" />
      </environmentVariables>
    </aspNetCore>
  </system.webServer>
</configuration>

This will work for applications that have a web.config, but something like a console app doesn’t have one, so what do we do?

The recommendation and solution is to create an organiser script. This script sets the environment variables so that they only affect the application triggered in that session. To do this you can use any script really, like PowerShell or the command line.

In this script you just need to set the environment variables and then run the Exe after.

For example in PowerShell:

Param(
    [string] $TaskLocation,
    [string] $Arguments
)

# Set Environment Variables for AppDynamics
Write-Host "Set Environment Variables in path $TaskLocation"
$env:CORECLR_PROFILER_PATH_64 = "$TaskLocation\AppDynamics.Profiler_x64.dll"
$env:CORECLR_PROFILER_PATH_32 = "$TaskLocation\AppDynamics.Profiler_x86.dll"

# Run Exe ($Arguments is assumed to hold the executable path and its parameters)
Write-Host "Start Script"
cmd.exe /c $Arguments
exit
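You can then point the scheduled task or service at this wrapper script instead of the executable itself, for example (the script name, paths and arguments here are all hypothetical):

powershell.exe -File .\StartConsoleAppWithAppDynamics.ps1 -TaskLocation "D:\Tasks\MyConsoleApp" -Arguments "D:\Tasks\MyConsoleApp\MyConsoleApp.exe --daily"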

These two solutions mean you can use AppDynamics with both .NET Core web apps and console apps, with multiple applications on one box.

How to get a PDF from Server Side to Client Side?

So you generate a PDF in C# or any back end code, but need to send it back to the JavaScript that made the AJAX request. The trouble is how you return a file back to the client side, which is the same problem I faced. There are loads of methods for doing this that I have seen online, but these are the ways I found worked consistently with C#.NET and JavaScript.

Returning the File

The first method I found was very simple and the most direct route to the user. In this scenario you make an AJAX request from the JavaScript straight to the back end that is generating the PDF file, and return that file in the response.

First get the file from the file system into a ‘FileStream’:

var filesFullPath = @"C:\documents\pdfDocument.pdf";
var fileOptions = FileOptions.None;
var fileStream = new FileStream(filesFullPath, FileMode.Open, FileAccess.Read, FileShare.Delete, 4096, fileOptions);

Then use the ‘File’ method from ‘ControllerBase’, part of the inherited Controller class, to convert the ‘FileStream’ into a ‘FileResult’ with a PDF content type:

var fileResult = File(fileStream, "application/pdf");

The ‘File’ helper comes from ‘ControllerBase’, which the ‘Controller’ class in ‘Microsoft.AspNetCore.Mvc’ derives from, so it is available inside your controller actions.

You can then return this object back to the requesting JavaScript. On the client side it is just a standard ‘XMLHttpRequest’, but the key part is that the ‘responseType’ is set to ‘arraybuffer’.

This means when the response comes back in that format we can create a new ‘Blob’ from the object with the correct content type. This Blob is then converted to a URL and opened in a new window.

var url = "http://www.GetMyPdf.com/GetPdf";
var data = {}; // JSON payload expected by the back end generating the PDF

// Send Request
var xhr = new XMLHttpRequest();
xhr.open('POST', url, true);
xhr.responseType = 'arraybuffer';
xhr.setRequestHeader("Content-Type", "application/json");
xhr.onload = function (e) {

    // Check status
    if (this.status == 200) {

        // Convert response to Blob, then create a URL from the blob.
        var blob = new Blob([this.response], { type: 'application/pdf' }),
            fileURL = URL.createObjectURL(blob);

        // Open the blob URL in a new window
        window.open(fileURL, '_blank');

    } else {

        console.error('response: ', this);

    }
};

xhr.send(JSON.stringify(data));

Now this works great in the scenario above, but a situation we came up against was that the back end serving the client side was not doing the generating; there was a middle tier, and it handled the responses from all APIs as JSON, which is a string type. Therefore we couldn’t return the file directly, nor in an array buffer format.

A better solution would be to store the PDF file somewhere the client side can request it from after it has been generated. That would save sending a large packet of data back and reduce the worry of data loss, but let us continue on this path anyway.

The solution to this awkward requirement is to use Base64 as the string based response. In the C# code we can convert the file into bytes, which can then be converted to a Base64 string:

var fileFullPath = @"C:\documents\MyPdf.pdf";

// Convert to byte array
byte[] pdfBytes = File.ReadAllBytes(fileFullPath);

// Convert to Base64
var base64Str = Convert.ToBase64String(pdfBytes);

This Base64 string is then returned to the middle tier and passed back to the client side.

We now have a string response that we can convert from Base64 into a PDF file for the client, which can work in either of these two ways.

If you first prefix the Base64 string with the PDF data URI scheme as below:

var base64Str = "****";
var base64DataUri = 'data:application/pdf;base64,' + base64Str;

You can then open the Data URI in a window:

// open new URL
window.open(base64DataUri , '_blank');

However I didn’t find this worked consistently, so instead I do the same as before with the ‘XMLHttpRequest’, but this time the URL is the data URI.

var url = base64DataUri;

// Send Request
var xhr = new XMLHttpRequest();
xhr.open('POST', url, true);
xhr.responseType = 'arraybuffer';
xhr.setRequestHeader("Content-Type", "application/json");
xhr.onload = function (e) {

    // Check status
    if (this.status == 200) {

        // Convert response to Blob, then create a URL from the blob.
        var blob = new Blob([this.response], { type: 'application/pdf' }),
            fileURL = URL.createObjectURL(blob);

        // Open the blob URL in a new window
        window.open(fileURL, '_blank');

    } else {

        console.error('response: ', this);

    }
};

xhr.send();

This uses the Base64 data URI as the URL of the web request, so the PDF comes back as an array buffer; then, just like before, it is converted to a Blob, then to a URL, and opened in a new window.

So far this has worked consistently, so try it out and give me any feedback.

Resharper DotCover Analyse for Visual Studio Team Services

Do you use Visual Studio Team Services (VSTS) for Builds and/or Releases? Do you use Resharper DotCover? Do you want to use them together? Then boy do I have an extension for you!

That might be a corny introduction, but it is exactly what I have here.

In my current projects we use Resharper’s, also known as JetBrains’, DotCover to run code coverage on all our code. However, to run this in VSTS there is a bit of a process: you have to install DotCover on the server and then write a batch command to execute it with your settings. This isn’t the most complex task, but it does give you a dependency on always installing it on the server, and on keeping the batch script in source control or in the definitions on VSTS. This can cause issues if you forget to get it installed, or you need to update the script for every project.

Therefore I took all the magic of the program and crammed it into a pretty package for VSTS. This tool is not reinventing the wheel, but putting some grease on it to run faster. The Build/Release extension simply gives you all the input parameters the program normally offers and then runs them with the packaged version of DotCover that comes with the extension. See, simple.

There is however one extra bit of spirit fingers I added into the extension. When researching and running my own tests, I found that sometimes it is helpful to only run the coverage on certain projects, but to do this you need to specify every project path in the command. Now I don’t know about you, but that sounds boring, so I added an extra field.

Instead of passing each project separately and manually in the Target Arguments, you can pass wildcards in the Project Pattern. If you pass anything in the Project Pattern parameter, the extension detects that you want to use this feature. It then uses the Target Working Directory as the base to recursively search for projects.

For Example: Project Pattern = “*Test.dll” and Target Working Directory = “/Source”

This will search for all DLLs that end with ‘Test’ in the ‘Source’ directory and then prepend them to any other arguments in the Target Arguments.

For Example: “/Source/MockTest.dll;/Source/UnitTest.dll”
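Under the hood the idea is straightforward; here is a rough PowerShell sketch of that search (illustrative only, not the extension’s exact code):

# Illustrative sketch of how a Project Pattern could be resolved into Target Arguments
$projectPattern = "*Test.dll"
$targetWorkingDirectory = "/Source"
$otherTargetArguments = ""   # any other Target Arguments you have supplied

# Recursively find matching assemblies under the working directory
$projects = Get-ChildItem -Path $targetWorkingDirectory -Filter $projectPattern -Recurse |
    ForEach-Object { $_.FullName }

# Join them with ';' and prepend to the remaining arguments
$targetArguments = (($projects -join ";") + " " + $otherTargetArguments).Trim()
Write-Host "Resolved Target Arguments: $targetArguments"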

You can download the extension from the VSTS Marketplace.
Here is a helpful link for Resharper DotCover Analyse – JetBrains.
And this is the GitHub repository for any issues or enhancements you would like – Pure Random Code GitHub.

Update 20-07-2018

There was a recent issue raised on the GitHub repository that addressed a problem I have also seen before. When running DotCover from Visual Studio Team Services, an error appears as below:

Failed to verify x64 COM object registration: Empty path to COM object.

From the issue raised, the user had linked to a community article about “DotCover console runner fails when running as VSTS task”, where the comments discuss how to fix this.

To correct it we simply add the following argument to the request, which specifies what profiled process bitness to use, as they say:

/CoreInstructionSet=[x86|x64]

Therefore the task has now been updated with this field and feature to accommodate this issue and fix. It has been run and tested by myself plus the user that raised the issue, so please enjoy.