Set Up Hyper-V Guest for SSH without an IP Address

When setting up Hyper-V guest hosts, I found it a little tricky and hard to find documentation on how to easily set them up, so I thought I would share the simplest process I found to get them into a working configuration. With this setup you can also SSH into the guest host even if it does not have an IP address exposed on the guest network adapter.

To make things even simpler, I am using the pre-selected OS versions from the Hyper-V Quick Create options, but the steps should also work on other versions.

Linux Virtual Machine

In the steps below you will create a Linux Virtual Machine (VM) with the version ‘Ubuntu 18.04.3 LTS’.

  1. Install and Open Hyper-V.
  2. Click Quick Create from the menu on the right.
  3. Select ‘Ubuntu 18.04.3 LTS’ from the menu and create it.
  4. Follow all the details from the wizard as requested with your chosen details.
  5. Once completed, start and log in to your machine.
  6. Open the Terminal within the VM.
  7. Run the following commands
    1. Update installs
      sudo apt-get update
    2. Install the OpenSSH server
      sudo apt-get install openssh-server
    3. Install linux-azure
      sudo apt-get install linux-azure
    4. Start the services by running the below, replacing SERVICE-NAME with each of: sshd, ssh, hv-kvp-daemon.service
      sudo systemctl start SERVICE-NAME
      sudo systemctl status SERVICE-NAME
    5. Allow SSH through the firewall
      sudo ufw allow ssh
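The Linux steps above can be collected into one script. This is only a sketch, assuming an Ubuntu guest: by default it runs in a dry-run mode that prints each command instead of executing it, so the sequence can be reviewed first.

```shell
#!/bin/sh
# Sketch of the guest setup steps above for an Ubuntu VM.
# DRY_RUN=1 (the default) prints each command instead of running it.
DRY_RUN="${DRY_RUN:-1}"
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi
}

run sudo apt-get update
run sudo apt-get install -y openssh-server linux-azure

# Start and check each of the services listed in step 4.
for svc in sshd ssh hv-kvp-daemon.service; do
    run sudo systemctl start "$svc"
    run sudo systemctl status "$svc"
done

run sudo ufw allow ssh
```

Run it with DRY_RUN=0 in the VM's terminal to execute the commands for real.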

Windows Virtual Machine

In the steps below you will create a Windows Virtual Machine (VM) with the version ‘Windows 10 dev environment’.

  1. Install and Open Hyper-V.
  2. Click Quick Create from the menu on the right.
  3. Select ‘Windows 10 dev environment’ from the menu and create it.
  4. Follow all the details from the wizard as requested.
  5. Once completed, start and log in to your machine.
  6. Open PowerShell as Administrator and run these commands
    1. Install the OpenSSH client and server
      Add-WindowsCapability -Online -Name OpenSSH.Client~~~~
      Add-WindowsCapability -Online -Name OpenSSH.Server~~~~
    2. Start the SSH service and set it to start automatically
      Start-Service sshd
      Set-Service -Name sshd -StartupType 'Automatic'

SSH Keys

If you would like to log in to your Virtual Machine, you will need to install the SSH keys.

You can find out how to generate keys, and which keys you need, from the SSH website.

Here is some more information on where to store the Public Keys once generated.

Public Key Store

On Linux, you can store them in the user's home directory under .ssh/authorized_keys, for example /home/USERNAME/.ssh/authorized_keys
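Installing a key on the Linux guest is just a case of appending it to that file and tightening the permissions. A minimal sketch, where the helper name and the key filename are placeholders:

```shell
# Append a public key to the current user's authorized_keys file.
# The key filename passed in is whatever your generated public key is called.
install_pubkey() {
    keyfile="$1"
    mkdir -p "$HOME/.ssh"
    chmod 700 "$HOME/.ssh"
    cat "$keyfile" >> "$HOME/.ssh/authorized_keys"
    chmod 600 "$HOME/.ssh/authorized_keys"
}

# usage: install_pubkey id_rsa.pub
```

The chmod calls matter: OpenSSH can refuse to use the file if its permissions are too open.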

On Windows, unlike Linux, there are two possible places you will need to add the keys. If you are an admin, add them to C:\ProgramData\ssh\administrators_authorized_keys. If you are not an admin, add them to C:\Users\USERNAME\.ssh\authorized_keys

Check If Admin
  1. Run lusrmgr.msc
  2. Select Groups
  3. Select the Administrators group
  4. Check if you are in the group.

Once these tasks are completed, you should be able to SSH into your Virtual Machines via the Hyper-V Console (HVC).

I have written about how to use this in a previous post, ‘SSH to Hyper-V Virtual Machine using SSH.NET without IP Address’. Although that post targets SSH.NET, you can use the commands from it to SSH from the terminal.

SSH to Hyper-V Virtual Machine using SSH.NET without IP Address

I have used the .NET Core NuGet package SSH.NET to SSH into machines a few times; it is a very simple, slick and handy tool to have. However, you cannot SSH into a Virtual Machine (VM) in Hyper-V that easily without some extra fiddling to get an exposed IP address.

With the standard SSH command you can simply run:

ssh User@Host

This command can take many other arguments, but let's keep it simple.

If your VM has an IP address assigned to its network adapter then this can still be very simple: use the machine's user as the user and the IP address as the host.

However, not every VM will have an IP address in every situation, and in that case you cannot connect to it like this.

You can, though, if you use the Hyper-V Console CLI (HVC). It is located at ‘C:\Windows\System32\hvc.exe‘ and is normally installed when enabling the Hyper-V feature in Windows. This tool enables you to communicate with your VM via the Hyper-V bus between your local machine and the VM.

To use this tool you can run the same SSH command but with the HVC prefix:

hvc ssh User@Host

However, instead of the host you can pass the Hyper-V VM name, which you can get from the Hyper-V Manager or with PowerShell in Administrator mode:

Get-VM
This is great to use in the terminal, but it doesn’t let you use standard SSH commands, which is what the SSH.NET tool uses. I have not come across a tool to do this via .NET Core yet, so I have come up with this solution.

What we can do to accomplish this is port forwarding, where we route traffic from a port on the local machine through to a port on the VM.

Below we are telling HVC to forward port 2222 on the local machine to port 22, the standard SSH port, on the VM, using the correct username and VM name.

hvc.exe ssh -L 2222:localhost:22 User@VmName

Once this has been done you can then run the standard SSH command, but with the port parameter and ‘localhost’ as the host, the same as if you were SSHing to your own local machine.

ssh User@localhost -p 2222

To get this working in C# I would recommend using SSH keys, to avoid the password prompt that would otherwise require interactive entry, and then using the PowerShell NuGet package to run the HVC command like below:

$SystemDirectory = [Environment]::SystemDirectory
cd $SystemDirectory
hvc.exe ssh -L 2222:localhost:22 User@VmName -i "KeyPath" -fN

What is in a project's builds and releases

While working with other companies I have seen multiple builds and releases, and I have also been reading books like ‘Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation’. Through this I have learnt more and more about what should really be in the builds and releases of code applications. I would like to describe how I think they should both be used to create a scalable, reliable and repeatable process that brings confidence to your projects.

In showing these I will be using Visual Studio Team Services (VSTS) and C#.NET as examples, as these are the tools I use day to day and know best for demonstrating what I would like to show.

Continuous Integration Build

A Continuous Integration Build, also known as a CI Build, is the first build that your code should see. In the normal process I follow, you create a feature branch of your code, which is where you write the new feature. Once you are happy with it, it can be checked in to the development branch, where the current developing code is held before releasing. The CI Build sits in between this process to review what you are about to check in.

The goal of the CI Build is to protect the development branch, and in turn all the other developers that want to clone the code. Therefore we want to process the code in every way that it will be processed in the release, without actually releasing it. The general actions would be to get all assets, compile the code and then run the unit tests.

By getting all the assets, I mean not just getting the code from the repository, but also other required assets: for ASP.NET projects, for example, NuGet packages. This could also be things like Node Package Manager (NPM) packages and processing tasks like Grunt, which manipulates the code before compiling. This is basically the setting-up process for the code to be built.

We then compile the code into the state it will be used in, then run the unit tests. These unit tests check whether there are any errors in the code, and should be testing your new change against the current state of the code; however, this build needs to balance speed against reliability. You and others will be checking into this branch multiple times during the day, so you don’t want to be waiting all day to find out if your code is ok. If your tests are taking a long time, then it might be an idea to run just enough unit tests for you to be confident with the merge, and then run all the longer and more in-depth tests overnight.

Nightly Build

This build is optional, depending on how well your daily CI Build behaves. If you feel the CI is taking too long and you want to run some extensive tests on the project, then this is the build you will need. However, not all projects are as large and detailed, so some might not need this.

The Nightly Build follows the same process as the CI Build; as with Continuous Integration it should be a repeatable process, so it will get the resources and compile the code, if required, in exactly the same way as the CI Build. At this point you can run all the same CI unit tests, just as a confidence check that they still pass. You wouldn’t want to run the whole build and find out something failed in the small number of tests you missed.

You can now run any lengthy unit tests, but what would also be good is to run any integration tests. These tests stop using the stubbed versions of services and databases and use the real thing. The purpose of these tests is to make sure that everything still works when working with the real endpoints. When you use stubs for unit tests, you are practically configuring the endpoints to work as you would like. Even though you should be configuring them to behave the same as the real deal, you can never be 100% sure they work the same unless you use the real thing. However, just to be clear: by the real endpoints we do not mean the production ones, but the development versions instead.

After the build is complete, you should be confident that the code compiles fine, works correctly by itself and works fine with the real systems as well. With this confidence there should be no hesitation in merging this into the next stage of the branching.

Release Build

At this point you have compiled the code, tested the code, tested the integration and had human testers check the system. There is now high confidence that the project will work when it gets to its destination, so we move on to packaging up the project and moving it there.

However, we don’t want to just trust that what was checked a few days ago will still be ok. What we do want is to trust that what we are packaging up at this point is working, tested and complete code. Therefore we repeat the process: get the resources, compile the code, and test as much as gives you confidence, with the unit tests as a minimum. This gives you the product that you should be happy to put on a server. It is also the same product you were happy with at the CI stage and the Nightly Build stage, so it is what you have practised with throughout the process.

With the resulting product you can package it as required for the language and/or framework, and it will be placed on the build server with a version number, ready for the release. It is important that the package is accessible by the release, for the obvious reason of picking the package up, but the version number is also very important. When the release picks up the package, we want to make sure it is the exact one we happily built, configured and tested. Most build tools, like Visual Studio Team Services, will automatically add the build id to the package and manage the collection of it.


We now have a confident, deployable package to release, so there is no more building required, but there is still some configuration. When building an application that will be going to multiple locations, you don’t want to use the same credentials for things like databases. This would be insecure, as if one of the servers were compromised then all of them would be. There are also things like the database location, as this would be different for each environment. There shouldn’t be one central system for all the environments, as this can cause issues when that system goes down. If it is the development environment, then all systems should be applicable just to development. Nothing is worse than testers bugging you because your development took down their testing.

What we will need to do is update the code to use the specific environment's variables. These should be stored in the code base, so if the same application was deployed to multiple development environments there is minimal to no set-up. Another example is a load-balanced system where you want to deploy the same configuration to all servers. The way to do this will depend on the language, framework and system you are deploying to, but for a .NET Core project the best way is to have an ‘appsettings.json’ file for each environment. These are then merged on deployment for the specific environment, so the settings in ‘appsettings.Development.json’ would be merged in on the development environment, and the settings in ‘appsettings.Production.json’ would not be touched until required.
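As a sketch of that layout (the key names here are hypothetical), the shared file holds the defaults and each environment file overrides only what differs; ASP.NET Core layers appsettings.{Environment}.json over appsettings.json based on the environment it is running in:

```json
// appsettings.json - defaults shared by every environment
{
  "ConnectionStrings": { "Default": "" },
  "Logging": { "LogLevel": { "Default": "Information" } }
}

// appsettings.Development.json - merged over the defaults in development only
{
  "ConnectionStrings": { "Default": "Server=dev-sql;Database=MyApp;Trusted_Connection=True" }
}
```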

Now the code is ready for the environment, but is the environment ready for the code? Part of the DevOps movement is Infrastructure as Code, where you not only configure the code for each environment, but also configure the environment itself. In a perfect cloud environment you would have the server's image, with all the setting-up instructions, saved in the code base, keeping all required assets in the same location. With this image you can target a server, install the image, configure anything required for the environment (for example an environment variable), and finally deploy the code. This method means we could create and deploy any of the environments at will; for instance, if the development server went down or was corrupted, you could point, fire, and end up with a perfect set-up. An example of this would be using Azure with its JSON configuration templates.

However, we don’t all live in a perfect world and our infrastructure is not always perfect, but we can still make it as good as we can. For instance, I have worked on a managed on-premise server which was created to a basic specification, including the Windows operating system, user accounts and other basic details. This gives me a base to start with, and a certain level of confidence that if I asked for another server to be created, it would be in the same format. Now I need to make sure it is fit for what I require it for, so we can use PowerShell scripts that run on the target machine to install things like IIS. These can be stored in the code base, with the environment variables pulled in from another file or from the release configuration. This gives a level of Infrastructure as Code, with the requirements of the project being installed at each environment. This process could also check everything is in working order, so before you put your project on the server you are happy it is ready for it.

We should now be all set to put the code and the server together with a deployment, but once we have done that we have lost some confidence. Like the integration tests, we know the package is ok on its own and we know the server is ok on its own, but how do we know they are going to work together? At this point there should be some small tests, so as not to increase the release time, but enough to make sure that everything has been installed correctly. These can depend on the project type, the environment and so on, but should give you a certain level of confidence that everything will be ok. For example, you could have a URL endpoint that, once called, responds with the new code's version number. If the correct version is installed and set up on IIS, then it should be able to do this. There is then confidence that it is in the correct place on the server, with the correct build version, and working correctly with the environment's set-up. Of course, this doesn’t test that every endpoint of the project is working with no errors, but you take some of that confidence from all the previous builds and testing.
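As a sketch of that version check, where the endpoint URL, the response format and the helper itself are all hypothetical:

```shell
# Compare the version reported by the deployed endpoint against the build version.
check_version() {
    expected="$1"
    actual="$2"    # in real use: actual=$(curl -fsS "https://myapp.example/api/version")
    if [ "$expected" = "$actual" ]; then
        echo "OK: deployed version $actual matches build"
    else
        echo "FAIL: expected $expected but endpoint reports $actual"
        return 1
    fi
}
```

A release step would call check_version with the build number and the curl result, and fail the release if it returns non-zero.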


With the CI Build on every commit, the Nightly Build every night and the Release Build before all releases, plus the configuration at each environment for both the server and the code, we end up with a secure, resilient and well-established product. This should result in you and your team being happy to fire off a build or release and not worry about whether it will work. An example of this confidence: a developer's code base was showing errors after a merge and they didn’t know where the issue was. However, because we had confidence in our CI Build, we knew it would not be the base version but something on their machine, which narrowed down where the problem could be. In this instance it removed the question of whether the base version was stable, and so sped up the process of finding the error.

I strongly suggest following this process, or one relevant to your project, as although it might take some time to set up and to get developers comfortable with it, the time and assurance gained are well worth it.

Feel free to share any extra processes you do in your projects to create the safest process.

Visual Studio Is Everywhere Now

Microsoft has finally listened to the public and released Visual Studio for all OS platforms, yay! Oh, but it isn’t really the full Visual Studio you know and love. I have played with the new Visual Studio Code and here is what I think.

If you were watching or following Build 2015, you would have seen the news, and also seen how happy people were with the announcement of the Visual Studio editor coming to all other OS platforms, called Visual Studio Code, or as I have seen it, VScode. This is what I believe a lot of people wanted, as most people I know do not like the Windows operating system. Not only am I an Apple fan, so my home set-up is all Apple products, but most digital agencies I have seen use Macs as well. The Mac is meant to be the best PC for designers to use, so I have found most creative agencies then use the same system in the whole office for design, development and management. This means if you are a .NET developer like myself, you have to use a virtual machine or dual boot to use Visual Studio. This becomes the bane of your life, as you are then flipping between OSes and learning the limitations of the two systems. It's not that it is really hard, as you can set the whole system up to work for you, but I find it is work to set up. This is why people wanted the great Visual Studio on all platforms. I also think it expands the .NET reach, so new companies aren’t put off.


As soon as I heard this great news, I was downloading the new editor. Unfortunately, I was keeping up with Build 2015 through Twitter, so I wasn’t pre-warned about what the functionality of the new program was. I have got to say I was just too excited to read about it, so I downloaded it and learnt by clicking as many buttons as I could.

The download was smooth and great; there were no other things to download or install, so it was all systems go once complete. This is a small detail, but the user experience starts from the first install of the program; it's like the introduction to what you are about to experience. Since then I have downloaded it on my Mac and also my Windows PC, and I have played with multiple languages, different types of projects, and also just clicked around like a mad bull. To make sure I was on top of it all, I even stayed up to watch the Build 2015 Visual Studio talk on Channel 9.

First impressions: the look and style of VScode is nice, and reflects how I have VS2013 set up to look as well. This helps ease you into the new program and doesn't baffle you straight away. The UI is also very simple, which makes it perfect to just crack on with some coding. You can make the most of the screen by hiding all the menus and tabs, which gives you more room for code.

For the first test I started with a client's website I have been working on recently, so I could see how it would perform if I were to stop using VS2013. This is when I realised it was never going to replace the VS2013 I have at work, as it doesn't have Team Foundation Server support built in. I know that the package would be bigger and this is meant to be a lightweight program, but TFS is their own product and thousands use it. They have Git built in, which works for some, but all my work's code is in TFS. This means I can't get any code down unless I FTP the files down, and that would still cause problems, as I can't check the files back in once changed. I think even if they had it as an additional plugin that could be installed, I would be happy. I would guess this will be a future advancement if they keep up the support for it.

When using the editor I saw that it is basically another option alongside Sublime and Notepad++. It is an adaptable editor, which is good for developers using PHP, Ruby or languages like that. However, as a .NET developer I find I need, and just really like, all the features of the full Visual Studio. That doesn't mean I don't like VScode or that it is worthless to me as a developer. I do little projects sometimes for fun on the side to keep inspiration, and as Visual Studio is very resource hungry I don't like to run two instances at the same time. I reuse some code and need to do quick edits, so I like a small editor on the side that I can quickly boot up. At the moment I use Sublime as my side editor, and I also use it at home on my Mac for all my coding. This means even someone like me, with my set-up, can find the new VScode useful in their development.


As you will have noticed, I don't think you can compare the full Visual Studio to the new VScode, as they are not the same; VScode is an editor like Sublime with a Visual Studio skin on it, so you will notice I compare VScode to Sublime in the rest of the review. It does make me question why they would release this if it is not a full Visual Studio. Unless they are looking to improve it slowly to take over? It gets more people using their products as it is their program, but no one needs a new editor like this. Personally, and I think more would agree, I would rather have a full Visual Studio that works on all platforms. Whatever they do with this editor, they are starting from zero and need to catch up with others like Sublime.

The first thing that struck me when I got a project open was that it automatically picked up the syntax for the language of the file. I find this a drawback of Sublime, as I have to choose the language; there is probably a plugin, but that is more to install and more time wasted. The syntax was coloured in the standard Visual Studio way, so it was very familiar to me and makes it easier to start coding in it. I think consistency is vital between your own programs.

Unfortunately, the one thing I was really looking forward to was the IntelliSense for .NET. As I would be using this for coding, and I am a lazy developer, I use IntelliSense a lot. It makes things more convenient, plus it gives you what you need next, making it a faster way to snap code into place. This is built into the editor, but I could not get it working for the life of me. I searched, watched and hunted for anyone with the same issue, but as it is new I couldn't find many that have had, or fixed, the issue. If I use the shortcut ‘Ctrl + Space’ then the IntelliSense comes up, but it comes with all the options the editor can give. This is not what I need, plus it doesn't automatically work when typing or, as in .NET, when pressing the dot. For example, if I type ‘Response.’ I would expect it to show me things like Write, End and Cookies. This issue is on both my Mac and my Windows PC, so it can't just be me. Though, as I said, it is built in and I have seen it in action, so if and when I get it working I will think a lot more of it.

Many of the other features exist in other editors, so I do not think there is anything special about this editor compared to others. You can see all the features it offers on their website under ‘Docs’.

Comparing it to Sublime, I would say I really like it. I find the design nice and smooth, the functionality of the program on both operating systems great, and when they get the IntelliSense working it will be even better. I would like to see some more plugins, and an online library of them, so I could do things like compress CSS files. In summary: small bugs, but I am really liking the new editor for my Mac. However, I will still be using the full VS2013 for Windows development while I am still coding .NET for work, as I need TFS. Unless they bring out a program just for TFS that I can run in parallel.

Use the comments below to share what you thought of VScode and how you think it compares.

