Setting Bearer tokens in PowerShell

This is a quick post on how to set a Bearer Token in PowerShell for Invoke-RestMethod and Invoke-WebRequest, as it was something I could not find clearly explained anywhere else.

The token is simply set in the headers of the request, as below, where ‘$bearer_token’ is the variable holding the token. I have put the header in the ‘$headers’ variable, which is then passed to Invoke-RestMethod.

$headers = @{Authorization = "Bearer $bearer_token"}
$response = Invoke-RestMethod -ContentType "$contentType" -Uri $url -Method $method -Headers $headers -UseBasicParsing
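
To show it in context, here is a minimal runnable sketch of a full call; the token value, URL, content type and method are placeholder assumptions for the example.

$bearer_token = "eyJhbGciOi..."                 # assumed: token retrieved from your auth provider
$contentType = "application/json"               # assumed content type
$url = "https://api.example.com/items"          # assumed example endpoint
$method = "Get"

$headers = @{Authorization = "Bearer $bearer_token"}
$response = Invoke-RestMethod -ContentType $contentType -Uri $url -Method $method -Headers $headers -UseBasicParsing
$response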

AppDynamics grouping Database Custom Metric Queries

When you create Custom Database Metrics in AppDynamics, your first thought is to create a new row for each metric, but if you have a lot to report on this can become messy. Not only that, but in the metric view you will end up with a very long list of reports to go through. When we had a consultant down at work, we were shown how to group a collection of metrics in one query, which then shows in the metric view as a sub folder. We could not find this tactic anywhere on the internet, so I thought I would share this very handy insight for AppDynamics.

Your standard method to add custom queries and metrics would be to go to the configuration view in AppDynamics, shown below, and add a new query for each of the metrics you wish to report on.

AppDynamics Databases

You can then go to the metric view and see the data coming in like below.

AppDynamics Metric Browser

However, like I said above, this list can grow fast, plus by default you are limited to only 20 of these queries, which can disappear even faster. This method therefore gives you more bang for your buck on custom metrics, plus better organisation of your data.

Instead of adding each query separately, what we can do is group queries into sub folders of the ‘Custom Metric’ folder, to look like this.

  • Before
    • Custom Metric
      • Queue 1
      • Queue 2
      • Queue 3
  • After
    • Custom Metric
      • MessagingQueueMonitoring
        • Queue 1
        • Queue 2
        • Queue 3

As we completed this at my company in Microsoft SQL Server, I will use that as the example, but I am confident it can be translated to other database languages with the same outcome and some slight changes to syntax.

Say we start with the 3 queries that we want to monitor, and we will keep them simple:

SELECT count(id) FROM MessageQueueOne
SELECT count(id) FROM MessageQueueTwo
SELECT count(id) FROM MessageQueueThree

To create the top level folder, you simply create a single query item called ‘MessagingQueueMonitoring’. In this new custom metric query you need to add the above 3 SQL statements, but we need them to be a single query instead of 3. For this to work we will use the SQL operator ‘UNION ALL’ to join them together:

SELECT count(id) FROM MessageQueueOne
UNION ALL
SELECT count(id) FROM MessageQueueTwo
UNION ALL
SELECT count(id) FROM MessageQueueThree

This will now produce one table with 3 rows and their values, but for AppDynamics to recognise these in the metrics view we need to tell it what each of these rows means. To tell AppDynamics what the nodes underneath the folder are called, you add a column to each query for the name; this column should be called ‘Type’. Then, for AppDynamics to know which column holds the value, you call that column ‘Total’.

You should end up with a query like below:

SELECT 'Message Queue One' as Type, count(id) as Total FROM MessageQueueOne
UNION ALL
SELECT 'Message Queue Two' as Type, count(id) as Total FROM MessageQueueTwo
UNION ALL
SELECT 'Message Queue Three' as Type, count(id) as Total FROM MessageQueueThree

Then this should result in a table like this:

Type                    Total
Message Queue One       4
Message Queue Two       2
Message Queue Three     56

What do you consider when building a new application?

When you’re starting a new project and thinking about what you’re going to use in your application, what factors do you consider? Sometimes this depends on what your role is: a developer might jump straight in with the coding language they know and continue on their way, whereas others might want to play with whatever the newest technology is. Then there are people like myself who like to think about the whole picture, so here are some of the key factors I consider when building a new application.

 

Code Repository

This one should come hand in hand with your company, as there should already be a standard for where and how you store your code. However, there is a lot of ‘should’ in that sentence: some younger companies don’t have this thought through yet, you could be working alone, or the company might have something in place but you are thinking of exploring new technologies and new ground.

The big factor to consider with a repository is the company that is holding that information. It starts with where the code will be held, for legal, security and access reasons. Now you might think access is a silly thing to worry about, as it is all just done over HTTPS from your computer, isn’t it? But you might need to consider whether you are going through a proxy, as security might lock you down unless it is a secure route. You might also put the repository on premise due to the value of the code you are storing, which might likewise drive your choice of company to store your code. If you think the company storing your code could be gone in 2 years, then you might want to consider either a different company or a good exit plan just in case. These days there are a few big players that make clear sense, so after this it comes down to the cost of that company’s services for the level you require.

The other factor is how the code is stored and retrieved from the repository, with things like Git, as this is another technology that you will depend on. You will need to consider what learning curve others will need to undertake if they are to use this version control system, and, like the storage factor, whether it will still be around in a few years’ time.

Linked to this is what tools you are thinking of using later in the process for build, test and deployment, as some choices might make it harder to move code between locations and tools. For example, your repository might be on premise behind a firewall and security, but your build tool in the cloud with one company and your test scripts stored in another company’s repository.

 

Language

You might have an easy job choosing a language: if you are a pure Java house or PHP only, then that is what you will be using, as you can only do what you know. However, if you want to branch out, or you do have more possibilities, then the world can open up for you.

A bit higher level than choosing the language itself, but design patterns do come into this. I have seen someone choose a .NET MVC framework for their back end system, but then put an AngularJS front end framework on top. What you are doing there is putting an MVC design on top of another MVC design, which causes all kinds of issues. Therefore, if you are using more than one language, you need to consider how they complement each other. In that circumstance you could either go for the AngularJS MVC front end with a .NET micro service back end, or have the .NET MVC application with a ReactJS front end to enrich the user’s experience.

As I said before, you might already know what languages you are going to use as that is your bread and butter, but if not then you need to think about the learning curve for yourself and other developers. If you are throwing new technologies into the mix then you need to be sure everyone can keep up with what you intend to use, or you will become the single point of failure and cause support issues when you are away.

As well as thinking about who will be developing with the technology, you need to think about who will be using it. This can be the end user’s experience, or the people controlling the data, like content editors, if it is that type of system. If you would like a fast and interactive application then you will want to push more of the features to client side technologies to improve the user’s experience, but you might not need to make it all singing and dancing if it is a console application running internally that just needs to do the job. Therefore the use case has an influence on the choice of language.

 

Testing

Testing is another choice in itself. Once you know your language you know what testing tools are available, but they then carry the same considerations as the coding language, as you will still need to develop these tests and trust in their results.

I add this section, though, as it is a consideration you need to have, including how it factors into giving you, the developer, feedback on your test results. The tests might run as part of your check in, or they might be part of a nightly build that reports back to you in the morning, so how the results reach the developer determines how fast they can react to them.

As part of the tooling for the tests you will need to recognise what level of testing they go down to, for example unit tests, integration tests, UI tests or even security testing. You also need to consider what tools you can integrate into your local build of the application to give you instant feedback, for example a linter for JavaScript, which will tell you instantly if there is a conflict or error. This will save you the time of checking in and waiting for a build result, which might clog up the pipeline for others checking in.

 

Continuous Integration (CI) and Continuous Delivery (CD)

This is a little removed from what application you are building, as another person in a DevOps role might be doing this and it should have no major impact on your code, as it is abstracted from what you are developing. However, the link can be made through how you run the application on your local machine. You could be using a task runner like Gulp to build and deploy your code locally, in which case it makes sense to use the same task runner in the CI/CD.

Therefore you need to think about what tooling can and will be shared between your local machine and the CI/CD system, to have a single method of build and deployment. You want to be able to mirror what the pipeline will be doing, so you can replicate any issue, and the reverse also helps the DevOps person build the pipeline for your application.

 

Monitoring and logging

Part of the journey of your code is not just what you are building and deploying, but also what your code is doing after that in the real world. The best things to help with this are logging, for reviewing past issues, and monitoring, to detect current or upcoming issues.

For your logging I would always encourage 3 levels: Information, Debug and Error, each configurable to turn on or off in production. Information will help when trying to trace where an issue happens and what kind of data is being passed through; it should be a medium level of output, so as not to fill up your drive fast, while still giving you plenty of information to help with your investigation. Debug is then the full level down, giving you everything that is happening in the application and all the details, but be careful not to print GDPR-sensitive data that will sit in the logs, and not to crash your drives by over filling them. Errors are then what they say on the tin: they only get reported when there is an error in the application, and you should check them constantly to make sure you remove all potential issues in the code. The deciding factor for your application is the technology and how it is implemented in your code. We recently changed logging technology, and the way it had been implemented made it a longer task than it should have been, which could have been made easier with abstraction.
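
As a rough sketch of the idea (not the implementation we used), a minimal PowerShell logger with switchable levels could look like the below; the function name, configuration and log path are assumptions for illustration.

## Hypothetical example: a simple logger where each level can be turned on or off via configuration
$logConfig = @{ Information = $true; Debug = $false; Error = $true }   # e.g. Debug switched off in production
$logFile   = "D:\Logs\MyApp.log"                                       # assumed log location

function Write-Log {
    param(
        [ValidateSet("Information", "Debug", "Error")]
        [string] $Level,
        [string] $Message
    )

    # Only write entries for levels that are enabled in the configuration
    if ($logConfig[$Level]) {
        Add-Content -Path $logFile -Value "$(Get-Date -Format o) [$Level] $Message"
    }
}

Write-Log -Level Information -Message "Order received"
Write-Log -Level Debug -Message "Payload details..."          # suppressed while Debug is off
Write-Log -Level Error -Message "Payment provider timed out"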

Monitoring depends on what your application is doing, but can also expand past your code. If you have something like message queues you can monitor their levels, or you could be monitoring the errors in the logs folder remotely. These will pre-warn you that something is going wrong before it becomes a peak issue. However, the issue might not be coming from your code, so you should also be monitoring things like the machine it is sitting on and the network traffic in case there is an issue there. These have an impact on the code because some monitoring tools do not support some languages, like .Net Core, which we have found hard in some places.

 

Documentation

Document everything is the simple way to put it. Of course you need to do it in a sensible manner and format, but you should have documentation before even the first character of code is written, to give you and others the information you have decided on above. Then you will need to document any processes or changes during the build for others to see. If you know exactly how it all works, but someone else takes over while you are away, you put that person in a rubbish position unless they have something to refer to.

These documents need a common location that everyone has access to read, write and edit. A thought you could try is using automated documentation drawn from the code’s comments and formatting, in which case you would need to bear this in mind when deciding your folder structure and naming conventions.

You can go overboard by documenting too much, as some things, like the code or the CI/CD process, should be clear from the comments and naming. However, even if documentation for tools like Git has already been written, it is helpful to create a document saying, at a high level, what tooling you are using and why, and then referencing the tools’ own documentation. It gives the others on the project a single point of truth to get all the information they require, plus if the tooling changes you can update that one document to reference the new tooling, and everyone will already know where to find the new information.

 

DevOps

In the end what we have just gone through is the DevOps process of Design, Build, Test, Deploy, Report and Learn.

  • You are at the design stage while deciding what languages and tools you would like to use.
  • We then choose a language and build the new feature or application.
  • There will be a few levels of testing throughout the process of building the new project.
  • The consideration of CI and CD gets our product deployed to new locations in a repeatable and easy way.
  • Between the logging and monitoring we are reporting information back to developers and business owners, who can learn from the metrics to repeat the cycle again.


Reference: https://medium.com/@neonrocket/devops-is-a-culture-not-a-role-be1bed149b0

How to setup AppDynamics for multiple .Net Core 2.0 applications

We have decided to go with AppDynamics to monitor our infrastructure and code, which is great, and even better they have released support for .Net Core 2.0. However, when working with their product and consultant we found an issue with monitoring multiple .Net Core instances on one server, plus console apps, but we found a way.

Their documentation, which is helpful, shows you how to set up monitoring for a .Net Core application with environment variables. Following the directions in the AppDynamics documentation, you set the environment variable for the profiler’s path, which sits in each application’s folder, but of course we can’t set one environment variable to multiple values. Therefore we copied the profiler DLL to a central location and used that as the environment variable, but quickly found out that it still didn’t work: for the profiler to start tracking, the variable needs to point to the application’s own root folder for each application.

The consultant’s investigation then led to looking at how we could set the environment variables per application, and we found they can be set in the web.config using the ‘environmentVariables’ node under the ‘aspNetCore’ node, as stated in the Microsoft Documentation. Of course, the ‘dotnet publish’ command generates this web.config, so you can’t just set this in the source code. Therefore, in the release of the code I wrote some PowerShell to set these parameters.

In the PowerShell below, I load the XML content of the web.config, then create each of the environment variable nodes I want to insert. Once I have these I insert them into the correct ‘aspNetCore’ node of the XML document, which I then save back over the existing file.

Example PowerShell:

$sourceDir = "D:\wwwroot";
$subFolderName = "MyApplication";   ## Example application folder name under $sourceDir
$configFile = Get-Item "$sourceDir\$subFolderName\web.config";

## Load the XML content of the web.config
$doc = New-Object System.Xml.XmlDocument
$doc.Load($configFile.FullName)
$environmentVariables = $doc.CreateElement("environmentVariables")

## Set 64 bit version
$Profiler64 = $doc.CreateElement("environmentVariable")
$Profiler64.SetAttribute("name", "CORECLR_PROFILER_PATH_64")
$Profiler64.SetAttribute("value", "$sourceDir\$subFolderName\AppDynamics.Profiler_x64.dll")
$environmentVariables.AppendChild($Profiler64)

## Set 32 bit version
$Profiler32 = $doc.CreateElement("environmentVariable")
$Profiler32.SetAttribute("name", "CORECLR_PROFILER_PATH_32")
$Profiler32.SetAttribute("value", "$sourceDir\$subFolderName\AppDynamics.Profiler_x86.dll")
$environmentVariables.AppendChild($Profiler32)

## Insert the new nodes under the 'aspNetCore' node and save the file
$doc.SelectSingleNode("configuration/system.webServer/aspNetCore").AppendChild($environmentVariables)
$doc.Save($configFile.FullName)
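
Since the profiler path has to point at each application’s own folder, in a release you would run this once per application. A rough sketch of how that could be wrapped in a loop, assuming every sub folder of the web root is an application with its own web.config, might be:

## Assumed layout: each sub folder of D:\wwwroot is one application containing a web.config
Get-ChildItem -Path "D:\wwwroot" -Directory | ForEach-Object {
    $subFolderName = $_.Name
    $configFile = Get-Item (Join-Path $_.FullName "web.config")
    Write-Host "Updating $($configFile.FullName) with profiler paths for $subFolderName"

    ## ...then run the node creation and save logic above against this $configFile...
}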

Example Web.config result:

<configuration>
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
    </handlers>
    <aspNetCore processPath="dotnet" arguments=".\SecurityService.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout">
      <environmentVariables>
        <environmentVariable name="CORECLR_PROFILER_PATH_64" value="D:\IIS\ServiceOne\AppDynamics.Profiler_x64.dll" />
        <environmentVariable name="CORECLR_PROFILER_PATH_32" value="D:\IIS\ServiceOne\AppDynamics.Profiler_x86.dll" />
      </environmentVariables>
    </aspNetCore>
  </system.webServer>
</configuration>

This will work for applications that have a web.config, but something like a console app doesn’t have one, so what do we do?

The recommendation and solution is to create an organiser script. This script sets the environment variables, which then only affect the application launched in that session. You can use pretty much any scripting method for this, like PowerShell or the command line.

In this script you just need to set the environment variables and then run the exe afterwards.

For example in PowerShell:

Param(
    [string] $TaskLocation,
    [string] $Arguments
)

# Set Environment Variables for AppDynamics (only for this session)
Write-Host "Set Environment Variables in path $TaskLocation"
$env:CORECLR_PROFILER_PATH_64 = "$TaskLocation\AppDynamics.Profiler_x64.dll"
$env:CORECLR_PROFILER_PATH_32 = "$TaskLocation\AppDynamics.Profiler_x86.dll"

# Run the exe (MyConsoleApp.exe is a placeholder name for your console application)
Write-Host "Start Script"
& "$TaskLocation\MyConsoleApp.exe" $Arguments
exit
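
You would then launch the console app through this wrapper rather than directly, for example from a scheduled task or release step; the script and folder names here are assumptions:

## Hypothetical call from a scheduled task or release step
.\Start-ConsoleApp.ps1 -TaskLocation "D:\Tasks\MyConsoleApp" -Arguments "-runOnce"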

These two solutions mean you can use AppDynamics with both .Net Core web apps and console apps, with multiple applications on one box.

Quick Tips for the Sitecore Package Deployer

There is a package for Sitecore called the Sitecore Package Deployer, which updates the Sitecore Content Management System (CMS) with packages from Team Development for Sitecore (TDS). While working with this package extension I have been told and shown two tips that can help with your development and deployment.

Admin Update Installation Wizard

With this tool you can analyse as well as install new or existing update packages on the Sitecore instance. If you browse to ‘sitecore/admin/UpdateInstallationWizard.aspx’ on your Sitecore instance, you should be presented with a login page.

If you sign in with your admin credentials you should be presented with the welcome page.

 


Once you click ‘Select a package >’ you will go to a page to select a new package, which should be an update file. When you have selected your package you can press the ‘Package Information >’ button.

 


 

On the next page you can see the package details and go on to ‘analyse the package >’. This will display the analyse page, and if you select the ‘Analyse’ button it will, as it says, analyse the package to identify potential conflicts. Once you have reviewed these you can then install the package safely and securely.

Sitecore Package Deployer URL

So back to the TDS side: once you have put your packages in the ‘SitecoreDeployerPackages’ folder you want them to install, and you want that to happen now.

In our release process, after putting the files in that location we don’t want to wait for the timer to trigger, so one method is to request this URL:

[YourSite]/sitecore/admin/StartSitecorePackageDeployer.aspx

This will trigger the deployer to start processing the update files, but it doesn’t stop there. While doing this we started to get failed processes, as the deployer was busy. This is caused by a clash between the timer and the request, so there is a way to force it by adding the query string name/value of ‘force=1’.

This makes the URL look like this:

[YourSite]/sitecore/admin/StartSitecorePackageDeployer.aspx?force=1
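
In a release pipeline you could make this request with a small piece of PowerShell, for example; the site URL below is a placeholder assumption:

## Hypothetical release step: trigger the Sitecore Package Deployer immediately
$siteUrl = "https://my-sitecore-site.local"   # assumed site URL
Invoke-WebRequest -Uri "$siteUrl/sitecore/admin/StartSitecorePackageDeployer.aspx?force=1" -UseBasicParsing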

 

Anymore?

If you have any tips on using the Sitecore Package Deployer, or any other Sitecore related tips, then please share.