Think about Cloud – the correct way

Recently I got a question from a friend – what good will moving to the cloud do me, and will it save me money?

Maybe, maybe not – it depends, mostly on how you weigh your money against your time.

Moving to the cloud with saving money as your first thought is simply wrong. Cloud gives you reliability, capacity, high availability, performance, flexibility, mobility, a seamless experience, the newest versions of hardware and software… Cloud is your chance to make your company work better and more efficiently. For example, what about cross-continent high availability? What about five-nines (99.999%) availability? What about new hardware being released every year? Can you afford the downtime caused by a faulty software update? Think how much effort you would need to accomplish these complex things on your own – and they are all there, in the cloud, ready to be deployed just a few mouse clicks away.

Apart from that, moving to the cloud means you do not have to think about hardware or software maintenance, patch management (countless hours of pre-testing included), new versions, updates, upgrades and so on. You always get the newest hardware and the newest software, and almost all of that work is already done for you, particularly if you choose SaaS or PaaS. That means your IT team does not waste valuable time on problem solving. Instead, they can focus on innovation – on the things that will move your company forward in a much shorter timeframe.

Now that you are thinking about cloud the correct way, we can say a few words about saving money too. There are many cases where cloud costs less than on-premises. For example, when you need a SharePoint or SAP development environment for your developers – why pay for licenses? Why pay for the hardware, or for someone to construct that complex environment for you? Instead, you can do it all in minutes – in the cloud (Azure is a good example here).
Another good case is a new company – building your own datacenter, even a small one, costs a lot of money. Cloud is often far cheaper in such cases.

If you already have existing infrastructure, cloud may be a great place for your backup datacenter – particularly since modern cloud providers charge only for the resources you actually use (e.g. you do not pay a cent for a turned-off VM in Azure; you pay only for its storage).

The topic is huge and we could talk about it for a long time, but I hope this short post touched on enough of the key elements of cloud.

So think about cloud, but please, the correct way :)

Running Azure Stack inside a Virtual Machine in a home lab

Microsoft Azure Stack is a great product which will go GA at the end of the year, but until then we have the first Technical Preview available, which can be downloaded here.

Considering it is actually a whole cloud platform, Azure Stack's minimum hardware requirements are quite hefty – 12 physical cores and 96 GB of RAM. You won't find that amount of compute power in most PCs, but there are options, particularly for people with access to a relatively strong workstation-style PC – I did it myself with 20 GB of RAM.

To start, you need a Windows Server 2016 TP4 installation with the virtualization features enabled inside it. You can install it directly on your computer, use the .vhdx file included with the Microsoft Azure Stack POC to boot directly from it, or use virtualization software with nested virtualization support (as I did).

For Hyper-V, nested virtualization is available from Windows 10 Insider Preview build 10565 – you can read the instructions to enable it here.
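If you go the Hyper-V route, enabling nested virtualization boils down to a few cmdlets on the host. A minimal sketch – the VM name "AzureStackVM" and the memory size are my assumptions, adjust them to your setup:

```powershell
# Run on the Hyper-V host while the VM is turned off.
# Expose the virtualization extensions to the guest so it can run Hyper-V itself.
Set-VMProcessor -VMName "AzureStackVM" -ExposeVirtualizationExtensions $true

# Nested virtualization does not work with Dynamic Memory, so use static memory.
Set-VMMemory -VMName "AzureStackVM" -DynamicMemoryEnabled $false -StartupBytes 20GB

# Enable MAC address spoofing so the nested VMs get network connectivity.
Get-VMNetworkAdapter -VMName "AzureStackVM" | Set-VMNetworkAdapter -MacAddressSpoofing On
```

After these changes, start the VM and install the Hyper-V role inside it as usual.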

Otherwise, you can also use other software, like VMware Workstation. For nested virtualization to work there, you need to select "Hyper-V (unsupported)" as the guest OS in the VM settings (you can also edit the VM's .vmx file directly, but let's keep it simple).

Please keep in mind that the method described below is not supported by Microsoft, but when you need to learn with access to only limited resources, improvising is more than acceptable.

With the first part finished, we now need to customize the Azure Stack installation files, shrinking the official hardware requirements. After downloading and extracting the installation files, mount the "MicrosoftAzureStackPOC.vhdx" file to get write access to its contents.
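Mounting can be done by double-clicking the file in Explorer or from PowerShell; a quick sketch, where the path to the extracted files is my assumption:

```powershell
# Attach the POC disk image read-write; it gets a drive letter
# under which you can edit the installer scripts.
Mount-VHD -Path "C:\AzureStack\MicrosoftAzureStackPOC.vhdx"
```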
Navigate to the \AzureStackInstaller\PoCDeployment directory and open the "Invoke-AzureStackDeploymentPrecheck.ps1" file inside it.
Now find this part of code:

function CheckRam {
    Write-Verbose "Check RAM."
    $mem = Get-WmiObject -Class Win32_ComputerSystem
    $totalMemoryInGB = [Math]::Round($mem.TotalPhysicalMemory / (1024 * 1024 * 1024))
    if ($totalMemoryInGB -lt 64) {
        throw "Check system memory requirement failed. At least 64GB physical memory is required."
    }
}

Change "64" in the line "if ($totalMemoryInGB -lt 64)" to whatever size you can give Azure Stack. In my case I went for 20 GB. Save and close the file.
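To see the exact value the precheck will calculate on your machine (and therefore what threshold to set), you can run the same computation the script uses in a PowerShell window:

```powershell
# The same WMI query the precheck script performs; the rounded result
# is the number your new threshold must not exceed.
$mem = Get-WmiObject -Class Win32_ComputerSystem
[Math]::Round($mem.TotalPhysicalMemory / (1024 * 1024 * 1024))
```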

Now we need to edit the memory requirements for the infrastructure VMs. Go to the \AzureStackInstaller\PoCFabricInstaller directory and open the "PoCFabricSettings.xml" file for editing.

Find this line of code:


Below it are the configuration settings for the "ADVM" virtual machine. Modify them to match this:


If you like, you can also modify the "ProcessorCount" value. For me the CPU count was not critical, so I left it unmodified.
The settings we set here mean that the "ADVM" virtual machine will have a startup memory of 1 GB and the Dynamic Memory feature turned on, configured with a minimum of 1 GB and a maximum of 2 GB of RAM.

Now go and find every other VM configuration inside the file and modify it the same way. Here is the full list of VM names:


After the modifications, save and close the "PoCFabricSettings.xml" file and unmount the "MicrosoftAzureStackPOC.vhdx" file.
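Unmounting can also be done from PowerShell; assuming the same path as before:

```powershell
# Detach the image so the installer works against a cleanly closed VHDX.
Dismount-VHD -Path "C:\AzureStack\MicrosoftAzureStackPOC.vhdx"
```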

That's all. Now you can run "DeployAzureStack.ps1" and install Azure Stack as described in the official guide.