How to authenticate with Fortify Security using PowerShell

Fortify, a security scanning tool for code, has some great features but also some limitations. I therefore looked to its open REST API to extend its functionality and improve how we use it within our DevOps pipeline. Step one, of course, was finding out how to authenticate with Fortify so I could start making requests to its services.

Fortify does have a Swagger page listing the endpoints it offers, but it doesn't detail the authentication endpoint. There is separate documentation on how to authenticate, but it isn't laid out for easy use.

That is why I thought I would expand on the details and show others how to authenticate easily, using PowerShell as the chosen language.

Fortify Swagger

The API layer from Fortify provides the Swagger definitions. If you choose your Data Centre from the link below, you can simply add '/swagger' to the end of its API URL to see the definitions, for example https://api.emea.fortify.com/swagger/ui/index

Data Centre URL: https://emea.fortify.com/Docs/en/Content/Additional_Services/API/API_About.htm

Authentication

As mentioned before, Fortify does document how to authenticate with the API here: https://emea.fortify.com/Docs/en/index.htm#Additional_Services/API/API_Auth.htm%3FTocPath%3DAPI%7C_____3

The first thing is to find out what details you require for the request, as mentioned in the documentation. We need the Data Centre API URL, the same one you used above for the Swagger definitions, suffixed with '/oauth/token', e.g. 'https://api.emea.fortify.com/oauth/token'.

We then need the scope of what you would like to request. The available scopes are detailed in the documentation link above, and each Swagger definition also specifies, under its 'Implementation Notes', what scope is required for that request. This value needs to be entered in lowercase to be accepted.

The same goes for the grant type, which is a fixed value of 'client_credentials', all in lowercase.

The final details we need are the 'client_id' and the 'client_secret', but what I found is that these are really the API Key and API Secret managed in your Fortify portal. If you sign in to your portal (for the Data Centre and product I have access to), you can navigate to 'Administration', then 'Settings' and finally 'API'. From this section you can create the API details with the required set of permissions. Note that the permissions can be changed after setup, so you do not need to commit yet. You should then have everything required for these two parameters, where client_id = API Key and client_secret = API Secret.

Your details in PowerShell should look like this:

$body = @{
    scope         = "api-tenant"
    grant_type    = "client_credentials"
    client_id     = "a1aa1111-11a1-1111-aaa1-aa1a11a1aaaa"
    client_secret = "AAxAbAA1AAdrAA1AAAkyAAAwAAArA11uAzArA1A11"
}

From there we can do a simple 'Invoke-RestMethod' using PowerShell, with one key thing to note: the content type must be 'application/x-www-form-urlencoded'. Without it you will keep getting an error saying the grant type is not valid. You will also notice from the above that the body is not JSON; the values are sent as form parameters in the body of the request.

Below is the full example of the request using PowerShell. I have also included the lines that set the default proxy, so if you are making the request from behind a proxy it should still work.

## Set Proxy
[System.Net.WebRequest]::DefaultWebProxy = [System.Net.WebRequest]::GetSystemWebProxy()
[System.Net.WebRequest]::DefaultWebProxy.Credentials = [System.Net.CredentialCache]::DefaultNetworkCredentials

## Create Details
$uri = "https://api.emea.fortify.com/oauth/token"
$body = @{
    scope         = "api-tenant"
    grant_type    = "client_credentials"
    client_id     = "a1aa1111-11a1-1111-aaa1-aa1a11a1aaaa"
    client_secret = "AAxAbAA1AAdrAA1AAAkyAAAwAAArA11uAzArA1A11"
}

## Request
$response = Invoke-RestMethod -ContentType "application/x-www-form-urlencoded" -Uri $uri -Method POST -Body $body -UseBasicParsing

## Response
Write-Host $response
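
A successful call returns the token payload, so you can lift the bearer token straight off the response object. The property names below assume the standard OAuth2 token response ('access_token', 'token_type', 'expires_in'); check your own response if yours differ.

## Grab the token from the response (assumes the standard OAuth2 property names)
$bearer_token = $response.access_token
Write-Host "Token type: $($response.token_type), expires in $($response.expires_in) seconds"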

Setting Bearer tokens in PowerShell

This is a quick post on how to set a Bearer Token using PowerShell for Invoke-RestMethod and Invoke-WebRequest, as it was something I could not find a clear post explaining.

The token is simply set in the headers of the request as below, where '$bearer_token' is the variable holding the token. I have put the header in the '$headers' variable, which is then passed to Invoke-RestMethod.

$headers = @{Authorization = "Bearer $bearer_token"}
$response = Invoke-RestMethod -ContentType "$contentType" -Uri $url -Method $method -Headers $headers -UseBasicParsing
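
Tying this back to the Fortify example above, a full call would look something like the sketch below. The endpoint path here is purely an assumption for illustration; check the Swagger definitions for the real routes and the scope each one needs.

## Illustrative only: the endpoint path is an assumption, not taken from the Fortify docs
$headers = @{ Authorization = "Bearer $bearer_token" }
$response = Invoke-RestMethod -ContentType "application/json" -Uri "https://api.emea.fortify.com/api/v3/releases" -Method GET -Headers $headers -UseBasicParsing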

Resharper DotCover Analyse for Visual Studio Team Services

Do you use Visual Studio Team Services (VSTS) for Builds and/or Releases? Do you use Resharper DotCover? Do you want to use them together? Then boy do I have an extension for you!

That might be a corny introduction, but it is exactly what I have here.

In my current projects we use ReSharper's (also known as JetBrains') DotCover to run code coverage on all our code. However, to run this in VSTS there is a bit of a process: install DotCover on the build server and then write a batch command to execute it with the right settings. This isn't the most complex task, but it does give you a dependency on always having it installed on the server, and on keeping the batch script in source control or in the definitions on VSTS. This can cause issues if you forget to install it, or if you need to update the script for every project.

Therefore I took all that magic of the program and crammed it into a pretty package for VSTS. This tool is not reinventing the wheel, but putting some grease on it to run faster. The Build/Release extension simply gives you all the input parameters the program normally offers and then runs them with the packaged version of DotCover that comes with the extension. Simple.

There is, however, one extra bit of spirit fingers I added to the extension. When researching and running my own tests, I found that sometimes it is helpful to only run the coverage on certain projects, but to do this you need to specify every project path in the command. Now I don't know about you, but that sounds boring, so I added an extra field.

Instead of manually passing each project separately in the Target Arguments, you can pass a wildcard in the Project Pattern. If you pass anything in the Project Pattern parameter, the task detects that you want to use this feature and uses the Target Working Directory as the base to recursively search for projects.

For example: Project Pattern = "*Test.dll" and Target Working Directory = "/Source"

This will search for all DLLs ending with 'Test' in the 'Source' directory and then prepend the resulting list to any other arguments in the Target Arguments.

For example: "/Source/MockTest.dll;/Source/UnitTest.dll"
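
Under the hood the idea is roughly the following. This is a minimal PowerShell sketch of the behaviour, not the extension's actual source, and '$otherTargetArguments' is just a placeholder for whatever else you pass.

## Minimal sketch of the wildcard search (not the extension's actual source)
$projectPattern = "*Test.dll"
$workingDirectory = "/Source"

# Recursively find matching assemblies under the working directory
$targets = Get-ChildItem -Path $workingDirectory -Filter $projectPattern -Recurse |
    ForEach-Object { $_.FullName }

# Join them with ';' and prepend to the remaining Target Arguments ($otherTargetArguments is a placeholder)
$targetArguments = ($targets -join ";") + " " + $otherTargetArguments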

You can download the extension from the VSTS Marketplace.
Here is a helpful link for ReSharper DotCover Analyse – JetBrains.
And this is the GitHub repository for any issues or enhancements you would like – Pure Random Code GitHub.

Update 20-07-2018

There was a recent issue raised on the GitHub repository that addressed a problem I have also seen before. When running DotCover from Visual Studio Team Services, an error appears as below:

Failed to verify x64 COM object registration: Empty path to COM object.

In the issue raised, the user linked to a community article, "DotCover console runner fails when running as VSTS task", whose comments discuss how to fix this.

To correct it, we simply add the following switch to the command, which specifies, as they say, the bitness of the profiled process to use:

/CoreInstructionSet=[x86|x64]
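
For context, a console-runner call with the new switch would look roughly like the sketch below; the paths are placeholders and the exact switches can vary between DotCover versions, so treat it as illustrative rather than the task's exact command.

## Sketch of a DotCover analyse call with the bitness switch (paths are placeholders)
& "C:\Tools\dotCover\dotCover.exe" analyse `
    /TargetExecutable="C:\Tools\vstest\vstest.console.exe" `
    /TargetArguments="/Source/MockTest.dll;/Source/UnitTest.dll" `
    /TargetWorkingDir="/Source" `
    /ReportType=HTML `
    /Output="CoverageReport.html" `
    /CoreInstructionSet=x64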

Therefore the task has now been updated with this field and feature to accommodate the issue and fix. It has been run and tested by myself and by the user who raised the issue, so please enjoy.

How to Self-Authenticate in .NET MVC C#

When you build a website application with a sign-in section, most people use their own cookies to store the user information and validate that the user is logged in, but with .NET MVC you can use the built-in authentication to sign users in, sign them out and control their role.

This feature enables you to neatly store the user's information to validate them without all the fuss of managing your own cookies, and better yet it is also encrypted for you. With MVC you can then put the Authorize attribute on your action results to block users from reaching those pages without being logged in. You can read more about this attribute and how to use the authorization feature on the Microsoft website here. This is all amazing and great, but it is built into MVC and so a bit restricted to its standard setup using the details and roles of the current machine.

However, after a bit of hunting and testing, I found out how to use their authentication at your own will. After the postback with the user's information, and after you have done your checks to make sure they are an existing user etc., you can sign them in with the piece of code below:

using System.Web.Security;

FormsAuthentication.SetAuthCookie(userName, false);

userName is of course the username of the person signing in. That is all you need to do, with no hassle. You can then check whether the user is valid in your views and other pages via this:

User.Identity.IsAuthenticated

This returns a boolean indicating whether the user is signed in.

You can then also sign them out with the below:

FormsAuthentication.SignOut();

The problem I then found was how to see what role the user has, as I am not using the built-in authentication. I hunted again and found the way…

userName = the user name
userRole = the role you would like to assign

// Sign the user in (as before)
FormsAuthentication.SetAuthCookie(userName, false);

// Create a ticket that carries the user's role in the UserData field
FormsAuthenticationTicket authTicket = new FormsAuthenticationTicket(1, userName, DateTime.Now, DateTime.Now.AddDays(1), false, userRole, "/");

// Encrypt the ticket and store it in the forms authentication cookie
HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, FormsAuthentication.Encrypt(authTicket));
HttpContext.Current.Response.Cookies.Add(cookie);

This creates a FormsAuthenticationTicket, which is basically a collection of information about the user. It is stored in the same place as where your standard encrypted username would be stored, and it can hold the string of the user's role, plus any other information you would like to store.

Getting the role back out is a bit longer, though, as you need to check the cookie and then decrypt it like below:

string userRole = "";
var httpCookie = Request.Cookies[FormsAuthentication.FormsCookieName];
if (httpCookie != null)
{
    // Decrypt the cookie back into the ticket and read the role from UserData
    FormsAuthenticationTicket formsAuthenticationTicket = FormsAuthentication.Decrypt(httpCookie.Value);
    if (formsAuthenticationTicket != null)
    {
        userRole = formsAuthenticationTicket.UserData;
    }
}

After checking the cookie exists, you can then get the FormsAuthenticationTicket by decrypting the cookie value. This puts it back into the class object from which you can read the data.

Don't be a fool and use the backup tools

Backing up your data is the most vital thing any business or individual can do. It is the biggest must, but people still don't think about it. Due to recent events, I think it would be good to share my views and tips on the best solutions.

People don't think they need backups these days, I believe, because of companies like Dropbox, Apple and Google. They promote the idea that all your data is backed up and that you can never be forgotten. The view they seem to push is that your data is eternal and can never be lost, but this is of course wrong. The worst thing is that the majority of these people are the least educated about technology and about the procedures you should take to safeguard your data.

There is only so much you can do, of course, as there is always a flaw in something, but you should always try your best. I have a backup from my host and I take a daily backup as well, but as it happens the server I am on gave out before either of them took place. This resulted in the loss of 24 hours of data, which is not acceptable by anyone's standards. Even though I had things in place to protect myself, there were many loopholes that left us exposed.

So what should you do, you ask? Well, I have been looking to upgrade my own setup, so here are my solutions for every kind of person.

The less technical

The best thing for less technical people would be an external disk. I would assume that if you are not that technical then you won't have very high-level data on your PC, so I would say you only really need a local copy of your backups. You can purchase a decent external drive from here, and all you would have to do is plug it in every day or so and move over all the data you want. This will make sure that if your PC dies or crashes for any reason, you have the fail-safe of all your data backed up.

Samsung M3 1TB USB 3.0 Slimline Portable Hard Drive – Black 

More than enough storage for any person and all their data, with the latest USB version and high-speed transfer.

This suits the simplest of technical people, but before you get to the most technical of things you can also move a step on from that. You can get external drives, like the one below, that sit on the network. This way you don't have to plug in the disk, and it will back up all your system data. The only thing is it needs to be set up first, which for most is not an easy set-up, nor is the maintenance afterwards, plus you then need to make sure you have connectivity to the device or it is useless.

WD 4TB My Cloud Personal Cloud Storage – NAS 

Bigger and better storage, plus it is from one of the top brands delivering this kind of product.

Getting technical

From there we get to paid services as well as local copies. The worst thing would be to have all your data locally and then your house is robbed or burns down. That would of course mean both your data and the backup have been destroyed or taken. So the next level would be a paid service like KNOWHOW.

KNOWHOW Cloud Storage

Reliable and secure data storage, one of the best in the business.

With this kind of service, your data is backed up automatically to their storage facilities. That means your house could blow away like in The Wizard of Oz, but your data is safe somewhere else. Though, as I have said about my recent event, I would still keep a local backup just in case their storage gets blown away before yours. That said, you should feel safe, as these centres are like bomb bunkers, and if you choose the right one they will always safeguard your data.

The technical way

This is when you are getting into the minor big leagues. I have never been higher than this, so I am sure there are more robust solutions than the following, which would be for the likes of Microsoft. For this approach I would suggest you run your whole system on a VM. If you run your machine on a VM, you can take snapshot backups of it. This can be done locally by yourself, by the system that is hosting it, and, one step further, by a paid service as well. This way you have multiple backups in multiple places and they are all fast to restore, so you lose nothing. This of course is a bit over the top, but you can be sure your data is not lost. For smaller-scale users you can also do snapshots with third-party tools; Apple, for example, has Time Machine, which takes snapshots of your Mac.

If we were getting to the higher leagues, I would guess they don't lose data. They would have your information on hundreds of disks, in lots of different countries, so not even a war could get rid of your data.

So don't forget to do your backups, or you too could lose data and, with it, memories.