What do you consider when building a new application?

When you’re starting a new project and thinking about what you’re going to use in your application, what factors do you consider? Sometimes this depends on your role: a developer might jump straight in with their usual coding language and continue on their way, whereas others might want to play with whatever the latest technology is. Then there are people like myself, who like to think about the whole picture, so here are some of the key factors I consider when building a new application.


Code Repository

This one should come hand in hand with your company, as there should already be a standard for where and how you store your code. However, there is a lot of ‘should’ in that sentence: some younger companies haven’t thought this through yet, you could be working alone, or the company might have something in place while you are thinking of exploring new technologies and new ground.

The big factor to consider with a repository is the company that is holding that information. It starts with where the code will be held, for legal, security and access reasons. You might think access is a silly thing to consider, as it is all done over HTTPS from your computer, isn’t it? But you might be going through a proxy, so security might lock you down unless it is a secure route. You might also put the repository on premises due to the value of the code you are storing, which might equally drive your choice of company to store your code. If you think the company storing your code could be gone in two years, then you might want to consider either a different company or a good exit plan just in case. These days there are a few big players that make clear sense, so after this it comes down to the cost of that company’s services at the level you require.

The other factor is how the code is stored in and retrieved from the repository, with tools like Git, as this is another technology you will depend on. You need to consider what learning curve others will face if they are to use this version control system and, as with the storage factor, whether it will still be around in a few years’ time.

Linked to this is what tools you are thinking of using later in the process for build, test and deployment, as some combinations make it harder to move code between locations and tools; for example, if your repository is on premises behind a firewall, but your build tool is in the cloud with one company and your test scripts are stored in another company’s repository.



Coding Language

You might have an easy job choosing a language: if you are a pure Java house or PHP only, then that is what you will be using, as you can only do what you know. However, if you want to branch out, or you do have more possibilities, then the world can open up for you.

A bit higher level than choosing the language itself, design patterns also come into this. I have seen someone choose a .NET MVC framework for their back-end system, but then put an AngularJS front-end framework on top. What you are doing there is putting one MVC design on top of another, which causes all types of issues. Therefore, if you are using more than one language, you need to consider how they complement each other. In this circumstance you could either go for the AngularJS MVC front end with a .NET microservice back end, or keep the .NET MVC application and use a ReactJS front end to enrich the user’s experience.

As I said before, you might already know what languages you are going to use, as they are your bread and butter, but if not then you need to think about the learning curve for yourself and other developers. If you are throwing new technologies into the mix, you need to be sure everyone can keep up with what you intend to use, or you will become the Single Point Of Failure and cause support issues when someone is off.

As well as thinking about who will be developing with the technology, you need to think about who will be using it. This can be the end user, or the people controlling the data, like content editors, if it is that type of system. If you want a fast and interactive application, you will push more features to client-side technologies to improve the user’s experience; but you might not need to make it all singing and dancing if it is a console application running internally that just needs to do the job. The use case therefore has a real bearing on the choice of language.



Testing

Testing is another choice in itself. Once you know your language you know what testing tools are available, but they then carry all the same considerations as the coding language, as you will still need to develop these tests and trust their results.

I add this section, though, because you need to consider how the tooling feeds test results back to you, the developer. Tests might run as part of your check-in, or they might be part of a nightly build that reports back to you in the morning; how quickly results are reported to the developer determines how fast they can react to them.

As part of the tooling for the tests you will need to decide what levels of testing to cover, for example unit tests, integration tests, UI tests or even security testing. You then need to consider which tools you can integrate into your local build of the application to give you instant feedback, for example a linter for JavaScript, which will tell you instantly if there is a conflict or error. This will save you the time of checking in and waiting for a build result, which might clog up the pipeline for others checking in.


Continuous Integration (CI) and Continuous Delivery (CD)

This is a little removed from the application you are building, as someone in a DevOps role might be handling it, and it should have no major impact on your code, as it is abstracted from what you are developing. However, the link can be made through how you run the application on your local machine. You could be using a task runner like Gulp to build and deploy your code locally, in which case it makes sense to use the same task runner in the CI/CD.

Therefore you need to think about what tooling can and will be shared between your local machine and the CI/CD system, to have a single method of build and deployment. You want to be able to mirror what the pipeline will be doing, so you can replicate any issue; it also works the other way round, as it will help that DevOps person build the pipeline for your application.
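As an illustration of sharing one build method, here is a sketch of a single Node build script that both a developer and the pipeline could call. The step names and the `runPipeline` function are hypothetical; the point is that both environments run the same code, so a local run mirrors the pipeline.

```javascript
// build.js — one entry point used both locally and by the CI/CD system (a
// sketch; the steps are stubs standing in for real clean/build/test work).
const steps = [
  { name: 'clean', run: function () { return 'ok'; } }, // remove old output
  { name: 'build', run: function () { return 'ok'; } }, // compile the code
  { name: 'test',  run: function () { return 'ok'; } }  // run the test suite
];

function runPipeline() {
  const results = [];
  for (const step of steps) {
    // Run every step in order and record its result for the build report
    results.push(step.name + ': ' + step.run());
  }
  return results;
}
```

Because the pipeline just calls the same script, an issue seen in CI can be replicated on a developer's machine with one command.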


Monitoring and Logging

Part of the journey of your code is not just what you are building and deploying, but also what your code is doing afterwards in the real world. The best things to help with this are logging, for reviewing past issues, and monitoring, for detecting current or upcoming issues.

For your logging I would always encourage three levels: Information, Debug and Error, configurable to turn on or off in production. Information will help when trying to trace where an issue happens and what kind of data is being passed through; it is a medium level of output, so as not to fill up your drive fast, while still giving you plenty of information to help with your investigation. Debug is then the full level down, giving you everything that is happening in the application with all the details, but be careful not to print GDPR-sensitive data that will sit in the logs, and not to crash your drives by over-filling them. Errors are what they say on the tin: they only get reported when there is an error in the application, and you should check them constantly to make sure you remove all potential issues in the code. The factor to consider for your application is the technology and how it is implemented in your code. We recently changed logging technology, but the way the old one was implemented made it a longer task than it should have been, which could have been made easier with abstraction.
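As a language-agnostic illustration of the three-level idea, here is a minimal sketch in JavaScript (not any particular logging library; `createLogger` and the level names are just for this example):

```javascript
// A minimal leveled logger: Error is always kept, while Information and Debug
// can be switched off in production by lowering the configured level.
const LEVELS = { Error: 0, Information: 1, Debug: 2 };

function createLogger(maxLevel) {
  const lines = [];
  return {
    lines: lines,
    log: function (level, message) {
      // Only keep messages at or above the configured threshold
      if (LEVELS[level] <= LEVELS[maxLevel]) {
        lines.push('[' + level + '] ' + message);
      }
    }
  };
}

// In production you might run at 'Information', so Debug output is suppressed
const prod = createLogger('Information');
prod.log('Debug', 'raw payload: ...');     // dropped at this level
prod.log('Information', 'order received'); // kept
prod.log('Error', 'payment failed');       // kept
```

Keeping the logger behind a small interface like this is the abstraction point mentioned above: swapping the underlying technology later only touches one place.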

Monitoring depends on what your application is doing, but can also expand past your code. If you have something like message queues, you can monitor their levels, or you could be monitoring the errors in the logs folder remotely. These will pre-warn you that something is going wrong before it hits a peak. However, the issue might not be coming from your code, so you should also monitor things like the machine it is sitting on and the network traffic, in case there is an issue there. This has an impact on the code because some monitoring tools do not support some languages, like .NET Core, which we have found hard in some places.



Documentation

Document everything is the simple way to put it. Of course you need to do it in a sensible manner and format, but you should have documentation before even the first character of code is written, to record for yourself and others the decisions described above. Then you will need to document any processes or changes made during the build for others to see. If only you know exactly how it all works and someone else takes over while you are away, you put that person in a rubbish position unless they have something to reference.

These documents need a common location that everyone can access to read, write and edit. One approach you could try is automatically generated documentation drawn from the code’s comments and formatting, so you would need to bear this in mind when deciding your folder structure and naming conventions.
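To make the generated-documentation idea concrete, here is a sketch of a comment in the JSDoc format, which generators such as JSDoc can turn into reference pages. The `joinPath` function itself is a made-up example.

```javascript
/**
 * Joins a folder path and a file name with a forward slash.
 * Comment blocks in this format sit next to the code, and a documentation
 * generator such as JSDoc can build reference pages straight from them.
 *
 * @param {string} folder - The folder part of the path.
 * @param {string} file - The file name to append.
 * @returns {string} The combined path.
 */
function joinPath(folder, file) {
  // Avoid a double slash when the folder already ends with one
  return folder.replace(/\/$/, '') + '/' + file;
}
```

Because the documentation lives in the source file, it is far more likely to be updated when the code changes than a separate page would be.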

You can go overboard by documenting too much, as some things, like the code or the CI/CD process, should be clear from the comments and naming. However, even if documentation for tools like Git has already been written, it is helpful to create a document saying, at a high level, what tooling you are using and why, and then reference the official documentation. It gives the others on the project a single point of truth for all the information they require; plus, if the tooling changes, you can update that one document to reference the new tooling, and everyone will already know where to find the new information.



Summary

In the end, what we have just gone through is the DevOps process of Design, Build, Test, Deploy, Report and Learn.

  • You are at the design point while deciding what languages and tools you would like to use.
  • We choose a language with which to build the new feature or application.
  • There will be a few levels of testing through the process of building the new project.
  • The consideration of CI and CD gets our product deployed to new locations in a repeatable and easy way.
  • Between the logging and monitoring we report information back to developers and business owners, who can learn from the metrics to repeat the cycle again.


Reference: https://medium.com/@neonrocket/devops-is-a-culture-not-a-role-be1bed149b0

How to merge multiple images into one with C#

Due to a requirement, we needed to layer multiple images into one image. This needed to be fast and efficient, and we didn’t want to use any third-party software, as that would increase maintenance. Through some fun research and testing I found a neat and effective method to get the outcome required using only C# and .NET 4.6.

The simple result was to use the C# ‘Graphics’ class to collect the images as Bitmaps, layer them, and produce a single resulting Bitmap.

As you can see below, we first create the resulting Bitmap by constructing a new instance with the width and height of the final image passed in. Using that Bitmap we create a Graphics instance, which we use in a loop over each image. Each image is drawn onto the graphics at the starting X/Y co-ordinates of 0,0.

This solution met my requirement, as the images all needed to be layered from the top-left corner, but you could also get imaginative with the settings to place the layers in different positions, or even use the Bitmaps’ widths to create a full-length banner.

// Merge images
var bitmap = new Bitmap(width, height);
using (var g = Graphics.FromImage(bitmap))
{
    foreach (var image in enumerable)
    {
        g.DrawImage(image, 0, 0);
    }
}

This is of course handy and simple, so to share and help I thought I would create a full class to handle the processing. With the class below you do not need to create an instance, as it is static, so it can be used as a tool as it stands.

You can find the full code on my Github at https://github.com/PureRandom/CSharpImageMerger

The aim of this class, which can be expanded, is to layer an array of images into one. You can do this by passing an array of links, an array of bitmaps, or a single folder directory.

When you pass the array of links, you also have the option of providing proxy settings, depending on what your security is like. An internal method then loops over each link, attempts to download it, and returns the results as a list of bitmaps.

private static List<Bitmap> ConvertUrlsToBitmaps(List<string> imageUrls, WebProxy proxy = null)
{
    List<Bitmap> bitmapList = new List<Bitmap>();
    // Loop URLs
    foreach (string imgUrl in imageUrls)
    {
        try
        {
            WebClient wc = new WebClient();
            // If a proxy setting is supplied then use it
            if (proxy != null)
                wc.Proxy = proxy;
            // Download the image and read it into a bitmap
            byte[] bytes = wc.DownloadData(imgUrl);
            MemoryStream ms = new MemoryStream(bytes);
            Image img = Image.FromStream(ms);
            bitmapList.Add((Bitmap)img);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
    return bitmapList;
}

When you pass the array of bitmaps it is the same as the above, but it doesn’t have to download anything.

Finally, the file-system method can be used by passing the folder directory you wish it to search and the image extension type. So if you were looking to merge all the PNGs in the directory ‘src/images/png’, then that is what you pass.

private static List<Bitmap> ConvertUrlsToBitmaps(string folderPath, ImageFormat imageFormat)
{
    List<Bitmap> bitmapList = new List<Bitmap>();
    List<string> imagesFromFolder = Directory.GetFiles(folderPath, "*." + imageFormat, SearchOption.AllDirectories).ToList();
    // Loop files
    foreach (string imgPath in imagesFromFolder)
    {
        try
        {
            var bmp = (Bitmap)Image.FromFile(imgPath);
            bitmapList.Add(bmp);
        }
        catch (Exception ex) { Console.WriteLine(ex.Message); }
    }
    return bitmapList;
}

All of these then use a common method that loops over each bitmap in the array to find the biggest width and height, so the images don’t over- or under-run the result’s size. As explained above, each bitmap is then drawn onto the top left of the result Bitmap to create the final image.

private static Bitmap Merge(IEnumerable<Image> images)
{
    var enumerable = images as IList<Image> ?? images.ToList();
    var width = 0;
    var height = 0;
    // Get the max width and height of the images
    foreach (var image in enumerable)
    {
        width = image.Width > width ? image.Width : width;
        height = image.Height > height ? image.Height : height;
    }
    // Merge images
    var bitmap = new Bitmap(width, height);
    using (var g = Graphics.FromImage(bitmap))
    {
        foreach (var image in enumerable)
        {
            g.DrawImage(image, 0, 0);
        }
    }
    return bitmap;
}

Feel free to comment, expand and share this code to help others.


Should you unit test CSS?

When I told a colleague I was going to write some unit tests for CSS, they went crazy, and I do see why; however, I think it can be valuable if done in the right way. I would like to describe why I think unit testing CSS can be worth your time as a developer and also beneficial to the project.

Why, oh why, you may ask, would you unit test CSS? Styles change so often, the style might be abstracted from the code, and it can be hard to test. You can’t test just the code; you have to test it for what it is, which is User Interface (UI) code. Therefore you need to test it through the UI with something like Selenium, which boots up a browser and checks the rendered page. Yet even with that technology, if you are literally testing the size of the font and the colour of the background, values which have no variable changes, you are not really testing anything.

Normally, unit testing is done on something that can change depending on multiple variables, and testing the font size isn’t that. Those values only change if you want them to, so you are not testing the code, you’re testing that you remembered to update the test. For example, say you have an ‘h1’ with a font size of ‘14px’ and write a unit test to check the browser has rendered an ‘h1’ at that size, and then a change comes in. You change the font size and now your unit test fails, so you update the test case. But what have you just shown the project? Only that the font has been updated in both places.
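To illustrate the point, the brittle test described above might look something like this sketch. The `getStyle` helper is hypothetical, standing in for a Selenium-style lookup of a computed style on the rendered page.

```javascript
// A sketch of the brittle CSS test described above. getStyle() is a stand-in
// for a browser lookup of a computed style; hard-coded here for illustration.
function getStyle(selector, property) {
  const styles = { h1: { 'font-size': '14px' } };
  return styles[selector][property];
}

// This assertion only re-states the stylesheet: change the CSS and the test
// fails until you copy the new value in here, proving nothing about behaviour.
const fontSizeMatches = getStyle('h1', 'font-size') === '14px';
```

The test and the stylesheet are two copies of the same constant, which is exactly why it adds so little value.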

It also gets hard when you are testing across browsers, as each browser interprets the CSS in its own way. They render differently, so when you test that ‘1em’ is ‘14px’ you might get a different answer in another browser.

Therefore why do I think you should unit test CSS?

Well, that’s because I am not saying to test the CSS values purely, but to test modular items. In the project I work on there are modules in the site that share classes: things like a promotion box with a background colour and a banner with the same background colour. We use the CSS pre-processor LESS, so the background colour is stored in a variable shared across the code base. If a developer decides to change that variable for the banner, we want the unit test to flag that changing this colour affects the promotion box as well.

Example CSS:

@bg-color: #66bb6a;

.banner { background-color:@bg-color;}
.promo { background-color:@bg-color;}

This is why we should unit test: because we want to know, when a class’s style changes, what else it affects. Imagine the CSS lines above were in separate files. You change ‘@bg-color’ because you want the banner to be a different colour, and the unit test then flags that the promotion box is incorrect. The value is that the developer finds out what breaking changes they have introduced, which helps them decide whether everything should change or whether they need a new class.
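As a sketch of how such a test could work without a browser, you could compile the LESS and then assert that the two classes still resolve to the same colour. The `extractBackground` helper and the inlined "compiled" CSS below are illustrative only, not part of any real test framework.

```javascript
// A sketch: after compiling the LESS, check that classes meant to share
// @bg-color really resolve to the same colour in the compiled output.
function extractBackground(css, className) {
  // Find the class's rule block and capture its background-color value
  const re = new RegExp('\\.' + className + '\\s*\\{[^}]*background-color:\\s*([^;}]+)');
  const match = css.match(re);
  return match ? match[1].trim() : null;
}

// Stand-in for the compiled output of the LESS in the article
const compiledCss = '.banner { background-color: #66bb6a; } .promo { background-color: #66bb6a; }';
const bannerColor = extractBackground(compiledCss, 'banner');
const promoColor = extractBackground(compiledCss, 'promo');
// A test would fail (flag the change) as soon as one class drifts from the other
```

Unlike the font-size test earlier, this one asserts a relationship between two pieces of CSS rather than restating a single value, so it fails exactly when a shared variable change has a side effect.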

There is also visual testing, which takes graphical snapshots and compares the two, but this is browser and code-structure dependent. You want to make sure you can test the code in all situations, which is why a banner, for example, is better than a whole page.

In our organisational structure the CSS is in a separate code base from the HTML it runs on, because the CSS project is used in multiple projects. Therefore we can’t test against the project’s code base; instead we need to create a running example each time. This has its benefits, as we then have a working example to demo to the HTML developers.

This is where and why I think there is value in doing unit testing on CSS, but what do you think? Do you do it already or do you think it is a waste of time?

Sticky Menu Bar code and example

The Sticky Menu Bar is a feature a lot of websites use and love. It could also be used for things like a cookie-policy bar, but whatever the use, it is a good feature. However, I see some people add these with some terrible code and methods, because they don’t understand how it should work. I will explain my method and why I think it is better.

Although the behaviour is all done via JavaScript and jQuery, we will start with the HTML. This is simple, and you can have whatever you like, but you need the navigation wrapped in a container. For my example I have the HTML5 ‘nav’ element as my container and then the actual navigation as an unordered list.

<nav class="nav_bar_parent">
<ul class="nav_bar"> ... </ul>
</nav>


This will be explained further on, but the reason for the container is so your content does not jump.

The CSS is pretty much just standard styling, more for the benefit of the navigation’s appearance than for the functionality. The ‘nav’ has a colour, the ‘ul’ removes the bullet points, and the final styling brings the link containers inline.

nav {
    background-color: #0073AA;
}

ul {
    list-style: none;
}

li {
    display: inline-block;
    color: white;
    padding: 10px 0;
}

Then comes the JavaScript/jQuery, which I have written as a prototype, so it can be added once and then used however many times you need. The basics to set this up are the initialisation method and the defaults object added to the prototype.

The ‘defaults’ contain three parameters: ‘itemClass’, the class given to the navigation element, in my example the ‘ul’; ‘itemParent’, the class given to the parent of the navigation, the ‘nav’; and finally ‘stickyClass’, the class given to the item once it has stuck to the top of the screen, in case you want to style it differently once stuck. The defaults mean that if you don’t want to supply your own classes, there are backup classes instead.

StickyBar.prototype = {
    defaults: {
        itemClass: "sticky_item",
        itemParent: "sticky_parent",
        stickyClass: "sticky_class"
    },
    init: function() {
        this.startScroll();
    }
};

You also have an ‘init’ method attached. This is where we start the functionality of the navigation. The method below starts off by putting the object into a variable called ‘base’, which keeps a reference to the object’s ‘this’ inside the inner functions.

We then use jQuery’s scroll event, which fires each time the user scrolls, so you can tell where they are. Next I have used the ‘each’ method, so that if you have multiple sticky bars with the same class, they are all picked up and checked. You can have multiple sticky bars because their positioning is done in the CSS, so as long as you have a different class on each for its position, you can have more than one.

Within here is the real thinking. With the if statement we check whether the scroll position of the window is further down the page than the navigation. This is the reason we have the navigation container: once you have stuck the navigation to the page it moves with the page, so the window will never scroll back over it. With the container you can stick the content inside it and then use the container as the reference point.

If it is to be stuck, we add some inline CSS to fix its position to the window and add the sticky class to the navigation. If it is not to be stuck, we do the opposite. There is also a bit of functionality to give the parent element some height, because once you stick the navigation it is no longer relative on the page, so all the content would shift up; instead we add the height to the parent as a placeholder. I have added it here so that if your items change, for instance a drop-down menu appears, or you have multiple sticky bars, it works for all of them.

startScroll: function() {
    var base = this;
    $(window).scroll(function() {
        $(base.itemClass).each(function() {
            if ($(window).scrollTop() >= $(this).parent(base.itemParent).offset().top) {
                $(this).css({ 'position': 'fixed', 'top': 0 }).addClass(base.stickyClass);
                $(this).parent(base.itemParent).css({ 'height': $(this).height() });
            } else {
                $(this).css({ 'position': 'relative' }).removeClass(base.stickyClass);
                $(this).parent(base.itemParent).css({ 'height': 'auto' });
            }
        });
    });
}
One last thing we need to do is call this from the ‘init’ method, and it is complete. Please also notice the StickyBar constructor that sets up each sticky bar, and the IIFE around everything to run it on load.

All you then need to do to initialise each instance is call the object and pass in the optional parameters, which again I have put in an IIFE so it loads straight away.

// New sticky bar
(function() {
    var stickyBar = new StickyBar({
        itemClass: '.nav_bar',
        itemParent: '.nav_bar_parent',
        stickyClass: 'nav_bar_stuck'
    });
    stickyBar.init();
})();
Please see the full code and an example on CodePen