Password Regex with Optional Special Character Symbols

After a number of failed attempts, I have arrived at a regex that validates a password with optional symbols. This example validates that the password has at least 1 lowercase letter, 1 uppercase letter and 1 number; the symbols are optional, but any symbols included must come from an accepted set.

For example, the password ‘Password1!’ would pass, as it has 1 of each and the symbol is valid. However, ‘Password1£’ would fail.

The Guide to Success

I have broken down the completed regex into what each part does, so you can further understand the mechanics.

First, validate that the password is between 8 and 12 characters. A positive lookahead (‘?=’) asserts that, from the beginning of the string (‘^’), there are between 8 and 12 characters of any kind (‘.{8,12}’) up to the end of the string (‘$’):

(?=^.{8,12}$)


This will then check that the string contains a number. A positive lookahead (‘?=’) asserts that any number of characters (‘.*’) are followed by a digit (‘\d’) somewhere in the string:

(?=.*\d)


This will then check that the string contains a lowercase letter. A positive lookahead (‘?=’) asserts that any number of characters (‘.*’) are followed by a lowercase letter from a-z (‘[a-z]’) somewhere in the string:

(?=.*[a-z])


This will then check that the string contains an uppercase letter. A positive lookahead (‘?=’) asserts that any number of characters (‘.*’) are followed by an uppercase letter from A-Z (‘[A-Z]’) somewhere in the string:

(?=.*[A-Z])


Finally, it will check there are no whitespaces in the string either. A negative lookahead (‘?!’) asserts that the string does not contain any characters (‘.*’) followed by a whitespace character (‘\s’):

(?!.*\s)


The last part then lists everything that is valid in the string you are passing: the ranges 0-9, a-z and A-Z, and finally the valid symbols you can use. These are all backslash-escaped, just in case. The set here is ‘! * ^ ? ] [ + - _ @ # $ % &’, but you can remove or add any you wish:

[0-9a-zA-Z!\*\^\?\]\[\+\-_@#\$%&]*


The Completed Stage
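The completed regex was originally shown as an image, so treat this as a reconstruction assembled from the parts described above rather than the exact original. In JavaScript it would look like:

```javascript
// Reconstructed from the parts broken down above; the exact original
// pattern was shown as an image, so this is an assumption.
const passwordRegex =
  /(?=^.{8,12}$)(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?!.*\s)^[0-9a-zA-Z!\*\^\?\]\[\+\-_@#\$%&]*$/;

console.log(passwordRegex.test('Password1!')); // true  - one of each, valid symbol
console.log(passwordRegex.test('Password1£')); // false - '£' is not an allowed symbol
console.log(passwordRegex.test('password1!')); // false - no uppercase letter
```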


Visual Studio Team Services Gulp Task Setup

While working on a project I needed to use the Visual Studio Team Services Gulp task. This is a free pre-built task in the VSTS build and release section that runs, yes you guessed it, Gulp tasks. As good as it is, and as good as the directions on use are, there is one part I found hard: how to pass arguments into the task. So let me show you the approach I found works.

You can find more notes on the Microsoft page for the task.


Above is a screenshot of what the current task variables/values look like. From top to bottom:

  • Display name – The name you want to call your task for example ‘Build Solution’
  • Gulp File Path – The directory location to your ‘gulpfile.js’ which can be in the build directory or located on the receiving server.
  • Gulp Task(s) – The name of the task or tasks to run. You enter these just like you would on the command line, e.g. ‘buildSolution’, or for multiple tasks ‘getPackages buildSolution’
  • Arguments – These are the arguments you wish to pass to the gulp tasks.


So at first you might just enter your arguments straight in, which I found didn’t work. When I did this I got the below result:

[command]C:\Program Files\nodejs\node.exe C:\agent\_work\1\s\Source\node_modules\gulp\bin\gulp.js buildSolution --gulpfile arg1 arg2 arg3 arg4

However this resulted in:

Task 'C:\agent\_work\1' is not in your gulpfile

Weird, I know, and I am sure better Gulp experts might be able to identify what I did wrong, but besides this error there is another issue. When you retrieve these variables you use ‘process.argv’, which returns all the arguments as an array. However, this includes everything on the command line, so you end up with:

process.argv[0] = C:\Program Files\nodejs\node.exe
process.argv[1] = C:\agent\_work\1\s\Source\node_modules\gulp\bin\gulp.js
process.argv[2] = buildSolution
process.argv[3] = --gulpfile
process.argv[4] = arg1
process.argv[5] = arg2
process.argv[6] = arg3
process.argv[7] = arg4

So in your Gulp script you’re going to need to know the exact location in the array or know the value, which isn’t very helpful or easy. However, I read this great post on SitePoint: How to Pass Command Line Parameters to Gulp Tasks.

In it they explain, and show the code for, how you can pass the arguments in a name/value manner, for example ‘--arg1 argVal’. When the JavaScript file loads, the code parses ‘process.argv’ into an object by treating values prefixed with ‘--’ as names and taking the next value as the parameter.

Therefore turning this:

[command]C:\Program Files\nodejs\node.exe C:\agent\_work\1\s\Source\node_modules\gulp\bin\gulp.js buildSolution --gulpfile  --arg1 arg1Val --arg2 arg2Val --arg3 arg3Val

Into this in the file:

var args = {'arg1' : 'arg1Val','arg2' : 'arg2Val','arg3' : 'arg3Val'}

And then any of your arguments can be used as:

var myArg = args.arg1;

This makes it much easier to pass and then access the arguments in your Gulp files. As well as in the link above, you can also see the code below:

// fetch command line arguments
const arg = (argList => {
  let arg = {}, a, opt, thisOpt, curOpt;
  for (a = 0; a < argList.length; a++) {
    thisOpt = argList[a].trim();
    opt = thisOpt.replace(/^\-+/, '');
    if (opt === thisOpt) {
      // argument value
      if (curOpt) arg[curOpt] = opt;
      curOpt = null;
    }
    else {
      // argument name
      curOpt = opt;
      arg[curOpt] = true;
    }
  }
  return arg;
})(process.argv);
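To see the parsing logic in action, here is the same approach wrapped in a function and run against a simulated command line (the argument names here are hypothetical, not from the VSTS task):

```javascript
// Same parsing logic as the snippet above, wrapped so it can be run
// against any argument list rather than only process.argv.
const parseArgs = (argList) => {
  const args = {};
  let curOpt = null;
  for (const raw of argList) {
    const opt = raw.trim().replace(/^\-+/, '');
    if (opt === raw.trim()) {
      // No leading dashes: this is a value for the last option name seen.
      if (curOpt) args[curOpt] = opt;
      curOpt = null;
    } else {
      // Leading dashes: this is an option name; default it to true.
      curOpt = opt;
      args[curOpt] = true;
    }
  }
  return args;
};

const demo = parseArgs([
  'C:\\Program Files\\nodejs\\node.exe',
  'gulp.js', 'buildSolution',
  '--arg1', 'arg1Val', '--arg2', 'arg2Val',
]);
console.log(demo.arg1); // 'arg1Val'
console.log(demo.arg2); // 'arg2Val'
```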

What is in a project’s builds and releases

While working with other companies I have seen multiple builds and releases, and I have also read books like ‘Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation’. Through this I have learnt more and more about what should really be in the builds and releases of code applications. I would like to describe how I think they should both be used to create a scalable, reliable and repeatable process that brings confidence to your projects.

In showing these I will be using Visual Studio Team Services (VSTS) and C#.NET as examples. These are the tools I use day to day and know best, so they are how I will represent what I would like to demo.

Continuous Integration Build

A Continuous Integration Build, also known as a CI Build, is the first build your code should see. In the normal process I follow, you create a feature branch of your code, which is where you write the new feature. Once you are happy with the result, it can be checked in to the development branch, where the current developing code is held before releasing. The CI Build sits in the middle of this process to review what you are about to check in.

The goal of the CI Build is to protect the development branch, and in turn all the other developers who want to clone the code. Therefore we want to process the code in every way it will be processed in the release, but without deploying it anywhere. The general actions are to get all the assets, compile the code and then run the unit tests.

By getting all the assets, I mean not just getting the code from the repository, but also the other required assets, like NuGet packages for ASP.NET projects. This could also include things like Node Package Manager (NPM) packages and processing tasks like Grunt, which manipulate the code before compiling. This is basically the setting-up process for the code to be built.

We then compile the code into the state it will be used in, and run the unit tests. These unit tests check whether there are any errors in the code, testing your new change against the current state of the code; but this build has to balance reliability with speed. You and others will be checking into this branch multiple times a day, so you don’t want to be waiting all day to find out if your code is OK. If your tests are taking a long time, it might be an idea to run just enough unit tests for you to be confident with the merge, and then run all the longer and more in-depth tests overnight.

Nightly Build

Whether you need this build depends on how well your daily CI Build behaves. If you feel the CI is taking too long and you want to run some extensive tests on the project, then this is the build you need. However, not all projects are as large and detailed, so yours might not need it.

The Nightly Build follows the same process as the CI Build; as with Continuous Integration it should be a repeatable process, so it gets the resources and compiles the code (if required) in the exact same way as the CI Build. At this point you can rerun all the CI unit tests, just as a confidence check that they still pass. You wouldn’t want to run the whole build and then find out something failed in the small set of tests you skipped.

You can now run any lengthy unit tests, but it is also a good time to run the integration tests. These tests stop using the stubbed versions of services and databases and use the real thing. Their purpose is to make sure that everything still works when talking to the real endpoints. When you use stubs for unit tests, you are effectively configuring the endpoints to work as you would like. Even though you should be configuring them to behave like the real deal, you can never be 100% sure they behave the same unless you use the real thing. To be clear, though: by the real endpoints we do not mean the production ones, but the development versions.
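To make the stub-versus-real distinction concrete, here is a tiny sketch (all the names and values are hypothetical, not from any real project):

```javascript
// The service takes its dependency as a parameter, so tests can swap in
// a stub for the real endpoint.
const makePriceService = (fetchRate) => ({
  priceInGbp: (usd) => usd * fetchRate('USD', 'GBP'),
});

// Unit test: a stub configured to behave how we *believe* the real
// endpoint behaves.
const stubRate = () => 0.5;
const service = makePriceService(stubRate);
console.log(service.priceInGbp(10)); // 5

// The nightly integration test would instead pass a fetchRate that calls
// the real (development, not production) endpoint over the network.
```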

After the build is complete, you should be confident that the code compiles fine, works correctly by itself and works fine with the real systems as well. With this confidence there should be no hesitation in merging it into the next stage of the branching.

Release Build

At this point you have compiled the code, tested the code, tested the integration and had human testers check the system. There is now full confidence that the project will work when it gets to its destination, so we move to packaging up the project and shipping it there.

However, we don’t want to just trust that what was checked a few days ago will still be OK. What we want is to trust that what we are packaging up at this point is the working, tested and complete code. Therefore we repeat the process: get the resources, compile the code and run as much testing as gives you confidence, with the unit tests as a minimum. This gives you the product you should be happy to put on a server. It is also the same product you were happy with at the CI stage and the Nightly Build stage, so it is what you have practised with throughout the process.

With the resulting product you can package it as required for the language and/or framework, and place it on the build server with a version number, ready for the release. It is important that the package is accessible to the release, for the obvious reason of picking the package up, but the version number is also very important. When the release picks up the package, we want to be sure it is the exact one we happily built, configured and tested. Most build tools, like Visual Studio Team Services, will automatically add the build ID to the package and manage collecting it.


We now have a deployable package we are confident in, so there is no more building required, but there is still some configuration. When building an application that will be going to multiple locations, you don’t want to use the same credentials for things like databases. That would be insecure, as if one of the servers were compromised then all of them would be. There are also things like the database location, which will be different for each environment. There shouldn’t be one central system for all the environments, as that can cause issues when that system goes down. If it is the development environment, then all its systems should be used for development only. Nothing is worse than testers bugging you because your development work took down their testing.

What we need to do is update the code to use environment-specific variables. These should be stored in the code base, so if the same application is deployed to multiple development environments there is minimal to no set-up. Another example is a load-balanced system, where you want to deploy the same configuration to all servers. How you do this will depend on the language, framework and system you are deploying to, but for a .NET Core project the best way is to have an ‘appsettings.json’ file for each environment. These are then applied per environment on deployment, so the settings in ‘appsettings.Development.json’ would be merged in on the development environment, and the settings in ‘appsettings.Production.json’ would not be touched until required.
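As a sketch of the .NET Core convention (the file names follow the framework; the values are hypothetical), a base appsettings.json holds the shared defaults:

```json
{
  "ConnectionStrings": {
    "Default": "Server=localhost;Database=MyApp;Trusted_Connection=True"
  }
}
```

and an environment-specific appsettings.Development.json alongside it overrides just the values that differ per environment:

```json
{
  "ConnectionStrings": {
    "Default": "Server=dev-sql01;Database=MyApp;Trusted_Connection=True"
  }
}
```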

Now the code is ready for the environment, but is the environment ready for the code? Part of the DevOps movement is Infrastructure as Code, where you not only configure the code for each environment, but also configure the environment itself. In a perfect cloud environment you would have the server image, with all the setting-up instructions, saved in the code base, keeping all required assets in the same location. With this image you can target a server, install the image, configure anything required for the environment (for example, an environment variable) and finally deploy the code. This method means you could create and deploy any of the environments at will; for instance, if the development server went down or was corrupted, you would just point, fire and end up with a perfect set-up. An example of this would be using Azure with its JSON configuration templates.

However, we don’t all live in a perfect world and our infrastructure is not always perfect, but we can still make it as good as we can. For instance, I have worked with a managed on-premise server that was created to a basic specification, including the Windows operating system, user accounts and other basic details. This gives me a base to start with and a certain level of confidence that if I asked for another server to be created, it would be in the same format. I then need to make sure it is fit for what I require it for, so we can use PowerShell scripts that run on the target machine to install things like IIS. These can be stored in the code base, with the environment variables pulled in from another file or from the release configuration. This gives a level of Infrastructure as Code, with the requirements of the project installed in each environment. The process can also check everything is in working order, so before you put your project on the server you are happy it is ready for it.

We should now be all set to put the code and the server together with a deployment, but once we have done that we have lost some confidence. As with the integration tests, we know the package is OK on its own and we know the server is OK on its own, but how do we know they will work together? At this point there should be some small (so as not to increase the release time) but required tests to make sure everything has been installed correctly. These depend on the project type and the environment, but should give you a certain level of confidence that everything will be OK. For example, you could have a URL endpoint that, when called, responds with the new code’s version number. If the correct version is installed and set up on IIS, it should be able to do this. There is then confidence that it is in the correct place on the server, with the correct build version, and working correctly with the environment’s set-up. Of course this doesn’t test that every endpoint of the project works without errors, but you take some of that confidence from all the previous builds and testing.
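A sketch of that version-endpoint smoke check (the endpoint, names and values here are assumptions for illustration, not from a real release):

```javascript
// Fail the release step if the wrong package ended up on the server.
const assertDeployed = (reportedVersion, expectedBuildNumber) => {
  if (reportedVersion !== expectedBuildNumber) {
    throw new Error(
      'Expected build ' + expectedBuildNumber +
      ' but the server reports ' + reportedVersion);
  }
  return true;
};

// In the release, reportedVersion would come from an HTTP GET of the
// deployed app's version endpoint, and the expected value from the
// BUILD_BUILDNUMBER variable that VSTS sets on the agent.
console.log(assertDeployed('20180412.3', '20180412.3')); // true
```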


With the CI Build on every commit, the Nightly Build every night and the Release Build before every release, plus the configuration of both the server and the code at each environment, we end up with a secure, resilient and well-established product. This should result in you and your team being happy to fire off a build or release without worrying about whether it will work. As an example of this confidence: a developer’s code base was once showing errors after a merge and they didn’t know where the issue was. Because we had confidence in our CI Build, we knew it would not be the base version but something on their machine, which narrowed down where the problem could be. In this instance it removed the question of whether the base version was stable, and so sped up finding the error.

I strongly suggest following this process, or one relevant to your project, as although it might take some time to set up and to get developers comfortable with it, the time and assurance gained are well worth it.

Feel free to share any extra processes you do in your projects to create the safest process.

How to use Chutzpah with Visual Studio and Build?

If you want to do some JavaScript unit testing you’re most probably going to use something like Jasmine, QUnit or Mocha, which are all great, but how do you run them? This was a challenge I came upon with Visual Studio: how to get them running like a normal NUnit test.

First of all, if you don’t know what Chutzpah is, it’s an open-source test runner for JavaScript unit testing frameworks. It currently works with Jasmine, QUnit and Mocha, using the PhantomJS headless browser for testing. You can find out more information on these subjects with the links below:

Visual Studio Setup

Getting Chutzpah set up in Visual Studio could not be easier. You simply need to install the NuGet package from the package manager. There are multiple methods to do this, but the one I use is this:

  1. Open Visual Studio (version 2013 is used in the graphics)
  2. Click on ‘Tools’ from the top menu
  3. Click on ‘NuGet Package Manager’ then ‘Manage NuGet Packages for Solution…’
  4. Once the manager window pops up, search for ‘chutzpah’
  5. The results should show ‘Chutzpah – A JavaScript Test Runner’
  6. Install this NuGet package.


You now have it installed, and you should see the unit testing icons next to your tests. For example, I use QUnit and it looks like this.



You will also notice, if you are using QUnit, that I have the ‘setup’ property in my module. This is because Chutzpah runs QUnit 1.23.1, not the latest 2.x. Therefore I would check which version of the testing framework you are using and whether it is supported. You might want to use a different test runner or downgrade your framework.

Building with Team Foundation Server

Now I’m not going to go through how to use the TFS build definitions, as that is a whole huge subject in itself. However, I would like to show how I got Chutzpah running on build, as when researching I found only scattered snippets.

The method is to run it via the command line. I know most people’s build process templates will be different, but this is mine: in the build process template I have a ‘Post-test script path’ and an arguments input, and in these I put the commands.


As you will see, I have a path to the console exe application for Chutzpah. This is downloaded with the NuGet package and so should be in the same location for everyone. For the copy-and-pasters, here is the path:


As well as other parameters, the exe takes the directory location of where the tests are held. As the build runs from the root of the project, I have done the same. This is why the exe path starts from ‘packages’ and the arguments also start from there. If you have one script you might put the file name at the end as well, but if you want to run all the tests in a directory then only go as deep as you need.

You can test this by opening the project directory in the command line and running the same command. A quick tip to get to the directory faster: open it in File Explorer, then type ‘cmd’ in the address bar and press Enter, as below:


Once open you can run the command like this:

\packages\Chutzpah.4.3.4\tools\chutzpah.console.exe project\JsUnitTest\

You should then get the below:




Learn a Framework not a Language

I listened to a podcast that said you should learn a coding framework first, not a language. Now this might seem weird and confusing to existing developers, as we were brought up with the basic languages and frameworks then became a trend, but that’s because we are old. New developers are coming into an industry where frameworks are the top thing to know and use. You can’t see a job posting without one these days, so is it better to learn the framework first, or should you go back to basics first?


When you learn the language you are learning like a baby. You start from small words like ‘var’ and ‘string’, and don’t forget the bloody semicolon. From these you build up, like bricks on a house, slowly building the finished product of a complete application. Now this seems smart, as you start from the beginning and you understand everything about the language. It helps a lot when you come across a new problem: you search for it, get the answer and can understand what they are on about. I think learning like this will build you up to be a great developer in that language, as you are learning everything about it.


However, because you are learning everything about it, you are also learning the rubbish you don’t need. A lot of language tutorials teach you fundamental code, but some things are just not good practice and not how the real world works. The tutorials tell you how to code in a perfect world, in an application built from scratch, which isn’t bad, as that is how you should be coding. The problem is most companies have an existing infrastructure and procedures to work with, so the perfect world doesn’t help. For example, when you are doing CSS and you build this perfect-world style sheet, then you open it up in IE… enough said really! Maybe learning the language would be better if they also showed you common issues and how to fix them, instead of just showing you where it will and will not work.


Before, I said that learning a language first is like building a house from the ground up. Well, learning a framework first is like building the roof before the house is built. The idea is that if you learn to build the roof perfectly, then as you do so you will also slowly be laying the house bricks at the same time. As you learn a feature of a language, say converting a string to an integer, you will also be learning what a string, an integer, a variable and a function are at the same time.

The issue with this is that you are only learning what you need for the framework, so when you come across an issue with the language, not only do you need to understand the issue, you may also have to read up on the code that fixes it, causing more work. You are also not learning what is going on behind the framework you are using. Whether you need to know is up for debate, but for example, if you only use jQuery, then to display an element you would only know something like:

$('.class').show();




However, you can either think this is magic, or you can find out what it is doing in plain JavaScript…


document.getElementsByClassName('class')[0].style.display = 'block';


This also plays into real-world working, because at the moment frameworks are the hot thing, so only knowing the framework might not be a bad thing. If the customer wants a pie, you give them a pie; you don’t need to know or care about the ingredients. This is why cutting out the middle man can be a good idea, especially for the new developer on the block trying to get a job. You give what the recruiter is looking for.


So after all of that, what do I think? I think for new developers coming out of university, or just starting out, it can be a great idea to learn the framework first. You’re trying to get into the business, so you need to be the developer they want, not what you think they need. If you learn the hottest techniques and code required in the industry you want to get into, then you have more chance of getting a job quickly. Once you’re in, you’re in, so from there you can continue learning that path on the job while also back-filling the gaps you didn’t get to learn.


However, for the older generation nearer the end of their career, I wouldn’t suggest it. At this point the younger person has the best opportunity to get the job, and that is who you are competing with. It’s not to say you are extinct, but that you are valuable as you are. The young generation can chase the rabbit of keeping up with the trends and the new troubles, all the while you are the tech support for the legacy technologies you know like the back of your hand. Developers who can keep the company running get paid more than the young guns, because no one knows what they know and no one wants to learn it, so you’re safe until the end.


There you have it: my opinion that the young should learn the new framework and the old should stay comfortable with what they know. But what is your opinion? Tell us below in the comments.