Category: Testing

Azure Lab Services – creating an effective and reliable testing environment

Azure Lab Services (formerly DevTest Labs) is designed to allow the rapid creation of Virtual Machines for testing environments. It can serve a variety of purposes and use cases, for example development teams, classrooms, and various testing environments.

The basic idea is that the owner of the Lab creates VMs, or provides a means to create VMs, driven by settings and policies, all of which are configurable via the Azure Portal.

The key capabilities of Azure Lab Services are:

  • Fast & Flexible Lab Setup – Lab Services can be quickly setup, but also provides a high level of customization if required. The service also provides built in scaling and resiliency, which is automatically managed by the Labs Service.
  • Simplified Lab Experience for Users – Users can access the labs in methods that are suitable, for example with a registration code in a classroom lab. Within DevTest Labs an owner can assign permissions to create Virtual Machines, manage and reuse data disks and setup reusable secrets.
  • Cost Optimization and Analysis – A lab owner can define schedules to start up and shut down Virtual Machines, and also set time schedules for machine availability. The ability to set usage policies on a per-user or per-lab basis to optimize costs. Analysis allows usage and trends to be investigated.
  • Embedded Security – Labs can be setup with private VNETs and Subnets, and also shared Public IPs can be used. Lab users can access resources in existing VNETs using ExpressRoute or S2S VPNs so that private resources can be accessed if required. (Note – this is currently in DevTest Labs only).
  • Integration into your workflows and tools – Azure Lab Services provides integration into other tools and management systems. Environments can automatically be provisioned using continuous integration/deployment tools in Azure DevTest Labs.

You can read more about Lab Services here: https://docs.microsoft.com/en-us/azure/lab-services/lab-services-overview

I’m going to run through the setup of the DevTest Labs environment, and then cover a few key elements and the use cases for these:

Creating the Environment:

This can be done from the Azure Portal – just search for “DevTest Labs” and then we can create the Lab. Note – I have left Auto-shutdown enabled (this is on by default at 19:00 with no notification):

Once this has been deployed, we are able to view the DevTest Labs Resource overview:

From here we can start to build out the environment and create the various policies and settings that we require.
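
As an aside, the lab itself can also be created without the Portal. A minimal sketch, assuming the AzureRM module (the resource group, lab name, location, and API version are example values):

# Create a resource group and a DevTest Lab inside it
New-AzureRmResourceGroup -Name "rg-devtestlab" -Location "UK South"
New-AzureRmResource -ResourceType "Microsoft.DevTestLab/labs" `
    -ResourceGroupName "rg-devtestlab" -ResourceName "MyTestLab" `
    -Location "UK South" -ApiVersion "2018-09-15" -Force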

Configuring the Environment

The first port of call is the “Configuration and Policies” pane at the bottom of the above screenshot:

I’m going to start with some basic configuration – specifically to limit the number of Virtual Machines that are allowed in the lab (total VMs per Lab) and also per user (VMs per user). At this point I will also be setting the allowed VM sizes. These are important configuration parameters, as with these settings in place we effectively limit our maximum compute cost:

[total number of VMs allowed in the lab] x [hourly cost of the largest/most expensive VM size permitted] = the maximum hourly compute cost of the lab environment
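
For example, with the limits configured below – a maximum of 5 VMs in the lab, restricted to the Standard_B2s size – the compute cost can never exceed 5 x the hourly rate of a Standard_B2s VM (storage and any software licensing costs are additional).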

This is done using the panes below:

First up, setting the allowed VM sizes. For this you need to enable the setting and then select any sizes you wish to be available in the Lab. I have limited mine to just Standard_B2s VMs:

Once we have set this up as required we just need to click “Save” and then we can move onto the “Virtual Machines per user” setting. I am going to limit my users to 2 Virtual Machines each at any time:

You’ll notice you can also limit the number of Virtual Machines using Premium OS disks if required. Once again – just click “Save” and then we can move onto the maximum number of Virtual Machines per Lab:

As you can see I have limited the number of VMs in the Lab to a maximum of 5. Once we have clicked “Save” we have now configured a few of the basic elements for our Lab.
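
These limits can also be applied programmatically – each policy is a child resource of the lab. A rough sketch, assuming the AzureRM module and the public Microsoft.DevTestLab policy schema (the resource group, lab name, and API version are example values):

# Set the maximum number of VMs allowed in the lab (the "LabVmCount" fact)
New-AzureRmResource -ResourceType "Microsoft.DevTestLab/labs/policysets/policies" `
    -ResourceGroupName "rg-devtestlab" -ResourceName "MyTestLab/default/MaxVmsAllowedPerLab" `
    -ApiVersion "2018-09-15" -Force -Properties @{
        factName      = "LabVmCount"     # total VMs in the lab
        threshold     = "5"              # maximum of 5 VMs, matching the Portal setting above
        evaluatorType = "MaxValuePolicy"
        status        = "Enabled"
    }

The per-user limit works the same way, using the UserOwnedLabVmCount fact.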

Defining VM Images for our Lab

Next up – it’s time to configure some images that we want to use in our Lab. We have three options here – which provide a number of different configurations that suit different Lab requirements:

Marketplace images – these are images from the Azure Marketplace, much like we are used to selecting when creating Virtual Machines in the Portal

Custom images – these are custom images uploaded into the Lab, for example containing bespoke software or settings not available via the Marketplace or a Formula.

Formulas – these allow for the creation of a customised deployment based on a number of values. These values can be used as-is or adjusted to change the machine deployed. Formulas provide scope for customisation within defined variables and can be based on both Marketplace and Custom images.

For more information on choosing between custom images and formulas this is well worth a read: https://docs.microsoft.com/en-us/azure/lab-services/devtest-lab-comparing-vm-base-image-types

I’ve defined a single Marketplace Image from the Portal – and provided only Windows 10 Pro 1803:

Next, I am going to create a Formula based on this image, but also with a number of customisations. This is done by clicking on “Formulas” and then “Add”:

Next, we can configure the various settings of our Formula – but first we need to set up the Administrator username and password in the “My secrets” section of the lab. This data is stored in a Key Vault created as part of the Lab setup, so that it can be used securely in Formulas:

Next, I am going to create a Windows 10 Formula with a number of applications installed as part of the Formula, to simulate a client PC build. This would be useful, for example, for testing applications against PCs deployed in a corporate environment. When we click “Formulas” and then “Add” we are presented with the Marketplace Images we defined as available in the earlier step:

Marketplace Image selection as part of the Formula creation:

Once the base has been selected we are presented with the Formula options:

There are a couple of things to note here:

  • The user name can be entered, but the password is what we previously defined in the “My secrets” section
  • The Virtual Machine size must adhere to the sizes defined as available for the lab

Further down the options pane we can define the artifacts and advanced settings:

Artifacts are configuration and software items that are applied to the machines when built – for example, applications, runtimes, Domain Join options etc. I’m going to select a few software installation options to simulate a client machine build:

There are a few very useful options within other artifacts, which I feel deserve a mention here:

  • Domain Join – this requires credentials and a VNET connected to a Domain Controller
  • Download a File from a URI – for example if we need to download some custom items from a specific location
  • Installation of Azure PowerShell Modules
  • Adding a specified Domain User to the Local Admins group – very useful if we need all testing to be done using Domain Accounts and don’t want to give out Local Administrator credentials
  • Create an AD Domain – if we need a Lab domain spun up on a Windows Server Machine. Useful if an AD Domain is required temporarily for testing
  • Create a shortcut to a URL on the Public Desktop – useful for testing a Web Application on different client bases. For example we could test a specified Website against a number of different client builds.
  • Setup basic Windows Firewall configuration – for example to enable RDP or to enable/disable the Firewall

It is also worth noting that we can define “Mandatory Artifacts” within the Configuration and Policies section – these are artifacts that are applied to all Windows or Linux VMs created within the Lab:

After artifact selection we can specify the advanced settings for the Lab:

It is worth noting here that we can specify an existing VNET if required – this is particularly useful if we need to integrate the Lab VMs into existing environments – for example an existing Active Directory Domain. Here we can also configure the IP address allocation, automatic delete settings, machine claim settings, and the number of instances to be created when the formula is run.

Once the Formula is created we can see the status:

Granting access to the Lab

We can now provide access to end users – this is done from the Lab Configuration and Policies pane of the Portal:

We can then add users from our Azure Active Directory to the Lab Environment:

Visit this URL for an overview of the DevTest Lab Permissions: https://docs.microsoft.com/en-us/azure/lab-services/devtest-lab-add-devtest-user
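
For scripted onboarding, the built-in “DevTest Labs User” role can also be assigned with PowerShell – a minimal sketch, assuming the AzureRM module (the user, lab, and resource group names are examples):

# Grant a user the built-in "DevTest Labs User" role, scoped to the lab
New-AzureRmRoleAssignment -SignInName "labuser1@contoso.com" `
    -RoleDefinitionName "DevTest Labs User" `
    -ResourceGroupName "rg-devtestlab" -ResourceName "MyTestLab" `
    -ResourceType "Microsoft.DevTestLab/labs"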

Now we can start testing the Lab environment logged in as a Lab User.

Testing the Lab Environment

We can now start testing out the Lab Environment – to do this, head over to the Azure Portal and log in as a Lab User – in this case I am going to log in as “Labuser1”. Once logged into the Portal we can see the Lab is assigned to this user:

The first thing I am going to do is define a local username and password using the “My secrets” section – I won’t show this here, but you need to follow the same process as I did earlier in this post.

Once we have accessed the Lab, we can then create a Virtual Machine using the “Add” button:

This presents the Lab user with a selection of Base Images – both Marketplace images (as we previously defined) and Formulas (that we have previously set up):

I’m going to make my life easy – I’m a lab user who just wants to get testing and doesn’t have time to install any software… so a Formula is the way to go! After clicking on the “Windows10-1803_ClientMachine” Formula I just need to fill out a few basic details and the VM is then ready to provision. Note that the 5 artifacts we set up as part of the Formula and the VM size are already configured:

Once we have clicked “Create”, the VM is built and we can see the status is set to “Creating”:

After some time the VM will show as Running:

Once the VM has been created we can connect via RDP and start testing. When creating this VM I left all of the advanced settings as defaults – which means that, as part of the first VM deployment, a Public IP and a Load Balancer (so that the IP can be shared across multiple Lab VMs) have been created. When we now look at the VM overview window, we can just click Connect as we normally would with an Azure VM:

Once we have authenticated, we can then use the VM as we would any other VM – note in the screenshot below, both Chrome and 7Zip (previously specified artifacts) are visible and have been installed (along with other items) for us before we access the VM:

When we have finished our testing or work on this VM – we have a number of options we can use:

  • Delete the VM – fairly self-explanatory… the VM gets deleted
  • Unclaim the VM – the VM is placed back into the pool of claimable VMs so that other Lab users can claim and use it. This is useful if you simply want a pool of VMs that people use and then return – for example, a development team testing different OS versions or browsers
  • Stop the VM – this is the same as deallocating any Azure VM – we only pay for storage while it is stopped (these actions can also be scripted – see the sketch below)
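
As hinted above, these lifecycle actions are exposed as resource actions on lab VMs, so they can be scripted too. A rough sketch, assuming the AzureRM module (the resource ID and API version are example values):

# Stop (deallocate) a lab VM - "unclaim" and "start" work the same way
$vmId = "/subscriptions/<subscription-id>/resourceGroups/rg-devtestlab/providers" +
        "/Microsoft.DevTestLab/labs/MyTestLab/virtualmachines/LabVM1"
Invoke-AzureRmResourceAction -ResourceId $vmId -Action "stop" -ApiVersion "2018-09-15" -Force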

Hopefully this has been a useful overview of the DevTest Labs offering within Azure… congratulations if you made it all the way to the end of the post! Any questions/comments feel free to reach out to me via my contact form or @jakewalsh90 🙂

Testing out the Azure Firewall Preview

Azure Firewall was released for preview this week, so I thought I would give it a quick try and look at some of the features available. The firewall provides the following features at the current time:

  • Built-in high availability – built into the Azure Platform, so no requirement to create load balanced configurations
  • Unrestricted cloud scalability – the firewall can scale to meet your requirements and accommodate changing traffic demands
  • FQDN filtering – outbound HTTP/S traffic can be filtered on a specific set of domain names without requiring SSL termination
  • Network traffic filtering rules – centrally create allow or deny network filtering rules, based on IP, port, and protocol. Azure Firewall is fully stateful, and rules can be enforced and logged across multiple subscriptions and VNETs.
  • Outbound SNAT support – outbound virtual network traffic IP addresses are translated to the Azure Firewall Public IP so you can identify and allow VNET traffic to remote Internet Destinations
  • Azure Monitor logging – all Firewall events are integrated with Azure Monitor. This allows archiving of logs to a storage account, streaming to Event Hub, or sending them to Log Analytics.

You can read more about the features here: https://docs.microsoft.com/en-us/azure/firewall/overview

Getting access to the Azure Firewall is easy – it’s built directly into the VNET Configuration window:

However, before we can use this, we need to enable the Public Preview for our Subscription with a few PowerShell commands:

# Log in to the Azure account
Connect-AzureRmAccount
# Register the two preview features required for Azure Firewall
Register-AzureRmProviderFeature -FeatureName AllowRegionalGatewayManagerForSecureGateway -ProviderNamespace Microsoft.Network
Register-AzureRmProviderFeature -FeatureName AllowAzureFirewall -ProviderNamespace Microsoft.Network

You’ll need to wait up to 30 minutes at this point for the request to be enabled – see https://docs.microsoft.com/en-us/azure/firewall/public-preview for further information. You can run the following commands to check the status:

Get-AzureRmProviderFeature -FeatureName AllowRegionalGatewayManagerForSecureGateway -ProviderNamespace Microsoft.Network
Get-AzureRmProviderFeature -FeatureName AllowAzureFirewall -ProviderNamespace Microsoft.Network

If all is well – it should look like this:

Finally, run the following command to complete the setup:

# Re-register the provider so the newly enabled features take effect
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Network

Before we can add a Firewall to a VNET, we need to create a subnet called “AzureFirewallSubnet” – this is the dedicated subnet that the firewall is deployed into. Once this is completed, we can set up the Firewall. This is just a case of filling in some basic details:

Once we have completed the basic details, we can review and complete the deployment:

Now that the Firewall is created, we are ready to start testing. In order to test the Firewall out, we need a subnet that is routed out via this Firewall. To do this, I used a route table that directs traffic to the Firewall IP:

We now have a Subnet within our VNET that is routed via the Azure Firewall – so now we can test out some rules. My lab environment is now setup as below (Note the jump VM in a separate Subnet that is NOT routed to the Firewall. This is to allow me to RDP to the test box as I have no VPN in place to test from etc.):
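
For reference, the route table can be built with a few PowerShell commands – a rough sketch, assuming the AzureRM module (the names, resource group, subnet prefix, and the firewall’s private IP are example values):

# Route all outbound traffic from the test subnet via the firewall's private IP
$route = New-AzureRmRouteConfig -Name "DefaultViaFirewall" -AddressPrefix "0.0.0.0/0" `
    -NextHopType VirtualAppliance -NextHopIpAddress "10.0.1.4"
$rt = New-AzureRmRouteTable -Name "rt-firewall" -ResourceGroupName "rg-fw" `
    -Location "UK South" -Route $route
# Associate the route table with the test subnet
$vnet = Get-AzureRmVirtualNetwork -Name "fw-vnet" -ResourceGroupName "rg-fw"
Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "test-subnet" `
    -AddressPrefix "10.0.2.0/24" -RouteTable $rt
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet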

From the Test VM, internet access is now blocked – because there is no firewall rule in place to allow it. I am going to add an “Application Rule collection” which I will use to allow HTTPS access to jakewalsh.co.uk, but not HTTP access. This is configured from the Firewall management interface via the Azure Portal:

Then you will be presented with the following window:

Once I have clicked on “Add” the rule will be added to the Azure Firewall. From my test VM, access to https://jakewalsh.co.uk works, but note that HTTP does not:

HTTPS:

HTTP:

The same also works in reverse, so we can selectively block HTTP or HTTPS sites as we require.
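
For reference, the preview PowerShell cmdlets can build the same rule – a rough sketch (the firewall, collection, and rule names are examples, and the cmdlet surface may well change while the service is in preview):

# Allow HTTPS (but not HTTP) to jakewalsh.co.uk from the test subnet
$rule = New-AzureRmFirewallApplicationRule -Name "Allow-jakewalsh-https" `
    -SourceAddress "10.0.2.0/24" -TargetFqdn "jakewalsh.co.uk" -Protocol "https:443"
$collection = New-AzureRmFirewallApplicationRuleCollection -Name "App-Allow" `
    -Priority 100 -Rule $rule -ActionType "Allow"
# Fetch the firewall, append the collection, and push the change
$fw = Get-AzureRmFirewall -Name "TestFirewall" -ResourceGroupName "rg-fw"
$fw.AddApplicationRuleCollection($collection)
Set-AzureRmFirewall -AzureFirewall $fw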

As well as the Application Rules we can deploy, we can also create more traditional network filtering rules based on source and destination IP addresses, ports, and protocols:

Overall, the Azure Firewall complements and extends the functionality of Network Security Groups and gives additional control over networks residing within Azure. The rules are simple to adjust and easy to work with. It will be interesting to see how this feature develops over the coming months…

Azure VM Scale Sets and Remote Desktop Services?

When using any environment that provides virtual desktops at scale, it makes sense to have only the required number of resources running at the right time – rather than all of the resources all of the time. The usual approach to this is to use power management – so unused virtual machines are shut down when not in use.

With Azure we have another potential option designed for large workloads – to use Virtual Machine Scale Sets. This allows us to automatically scale up and down the number of Virtual Machines based on various factors and choices. This effectively allows us to ensure the most economical use of resources – as we never pay for more than we need to use, because the machines are de-allocated when not required. Scale Sets also provide a number of features around image management and VM sizing that could be useful for VDI environments.

In this post I am going to explore the feasibility of VM Scale Sets for a Remote Desktop Services environment. To start, I have the following environment configured, minus the scale set:

Note: if you need an RDS environment – this Azure template is awesome: https://azure.microsoft.com/en-gb/resources/templates/rds-deployment/ – I would advise using multiple infrastructure VMs for each role if this is a production service though.

Next – I configured a single server with the RDS Session Host role and all of the applications I require, as this will become our VM image. I then ran sysprep /generalize as per the Microsoft instructions for image capture in Azure (see here). Once this is done we need to stop and de-allocate the VM, and then turn it into an image we can use with a scale set:

# Details of the VM to capture, and the image to create
$vmName = "rdsimage01"
$rgName = "eus-rg01"
$location = "EastUS"
$imageName = "rdsworker"
# Stop and de-allocate the VM, then mark it as generalized (it has been sysprepped)
Stop-AzureRmVM -ResourceGroupName $rgName -Name $vmName -Force
Set-AzureRmVm -ResourceGroupName $rgName -Name $vmName -Generalized
# Create a managed image from the generalized VM
$vm = Get-AzureRmVM -Name $vmName -ResourceGroupName $rgName
$image = New-AzureRmImageConfig -Location $location -SourceVirtualMachineId $vm.ID
New-AzureRmImage -Image $image -ImageName $imageName -ResourceGroupName $rgName

Once this is done – we have a VM image saved:

So once we have an image, we can create Virtual Machines from it and build a Scale Set that will function as the means to scale the environment up and down. However – we need to do some more work first: if we just scale up and down with a sysprepped VM, we end up with off-domain VMs that won’t be of any use to us!

Usually – I just spin up Lab VMs using a JSON Template that creates the VM and joins it to an existing lab domain, using the JoinDomain extension. This saves me lots of time and gives me VMs deployed with minimal input (just a VM name is all I have to enter):

    {
      "apiVersion": "2015-06-15",
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "name": "[concat(parameters('dnsLabelPrefix'),'/joindomain')]",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', parameters('dnsLabelPrefix'))]"
      ],
      "properties": {
        "publisher": "Microsoft.Compute",
        "type": "JsonADDomainExtension",
        "typeHandlerVersion": "1.3",
        "autoUpgradeMinorVersion": true,
        "settings": {
          "Name": "[parameters('domainToJoin')]",
          "OUPath": "[parameters('ouPath')]",
          "User": "[concat(parameters('domainToJoin'), '\\', parameters('domainUsername'))]",
          "Restart": "true",
          "Options": "[parameters('domainJoinOptions')]"
        },
        "protectedSettings": {
          "Password": "[parameters('domainPassword')]"
        }

See https://github.com/Azure/azure-quickstart-templates/tree/master/201-vm-domain-join for more details and to use this template.

Now that we have a template – we are ready to go. I’m using Visual Studio to create the JSON for my deployment – and fortunately there is a built-in scale set template we can use and modify for this purpose:

With the template up and running, we just need to add some parameters – and we can run a basic test deployment to confirm everything is working. My parameters for the basic template are shown below:

A quick test deployment confirms we are up and running:

However, there are a few issues with the template we need to correct – namely:

  • The machines are not joined to the Domain – and we need to place them into the correct OU for GPO settings too
  • A new VNET is created – we need to either use peering (set up prior to creation, or domain join operations will fail), or better still use an existing VNET that is already set up
  • The load balancer created is not required – we’ll be using the RDS Broker anyway

For this test – all I am concerned about is the domain join and VNET. The load balancer won’t be used so I can just discard this – however, the VNET and Domain Join issues will need to be resolved!

Issue 1 – using an existing VNET

To fix this, I am not going to reinvent the wheel – we just need some minor adjustments to the JSON file, based on this Azure docs article – https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-existing-vnet. In short, this will achieve the following (a deployment sketch follows the list):

  1. Add a subnet ID parameter, and include this in the variables section as well as the parameters.json
  2. Remove the Virtual Network resource (because our existing VNET is already in place)
  3. Remove the dependsOn from the Scale Set (because the VNET is already created)
  4. Change the Network Interfaces of the VMs in the scale set to use the defined subnet in the existing VNET
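
Deploying the adjusted template then just needs the subnet ID passed in – a rough sketch, assuming the AzureRM module (the VNET, subnet, and file names are examples, and subnetId is the parameter added in step 1):

# Look up the existing subnet and pass its ID into the adjusted template
$vnet = Get-AzureRmVirtualNetwork -Name "lab-vnet" -ResourceGroupName "eus-rg01"
$subnetId = ($vnet.Subnets | Where-Object { $_.Name -eq "rds-subnet" }).Id
New-AzureRmResourceGroupDeployment -ResourceGroupName "eus-rg01" `
    -TemplateFile ".\scaleset.json" -TemplateParameterFile ".\scaleset.parameters.json" `
    -subnetId $subnetId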

Issue 2 – joining the Scale Set VMs to an AD Domain

To get the VMs in the scale set joined to an AD Domain we need to make use of JsonADDomainExtension.

"extensionProfile": {
    "extensions": [
        {
            "name": "joindomain",
            "properties": {
                "publisher": "Microsoft.Compute",
                "type": "JsonADDomainExtension",
                "typeHandlerVersion": "1.3",
                "settings": {
                    "Name": "[parameters('domainName')]",
                    "OUPath": "[variables('ouPath')]",
                    "User": "[variables('domainAndUsername')]",
                    "Restart": "true",
                    "Options": "[variables('domainJoinOptions')]"
                },
                "protectedsettings": {
                    "Password": "[parameters('domainJoinPassword')]"
                }
            }
        }
    ]
}

With this added to the JSON template for our deployment, we just need to add the variables and parameters (shown below) and then we are good to go:

Note: the first time I used this I had an issue with the Domain Join – it was caused by specifying only the domain admin username. When specified in the form above (domain\\adminusername) it then worked fine.

Now when we run the template, we get the usual Visual Studio output confirming success – but also a scale set, and machines joined to the domain:

Because I have previously configured the image used in the Scale Set with the RDS role and the software required, we just need the servers to use an RDS Broker that will manage inbound connections into the RDS Farm. This is where I encountered the first sticking point – session hosts need to be added manually when the Session Collection is created 🙁

This wasn’t a massive issue for this test – so I went ahead and created a Session Collection and added in my VMs:

Next I tested the solution by launching a Desktop via Remote Desktop Web Access:

Bingo – I was then logged into an RDS Session. Note the RDS Connection Name (showing the Broker) and the Computer Name (showing the Session host). This confirms we are running as expected:

I’ve now demonstrated the RDS Farm up and running, utilizing machines created by a Scale Set and accessed via a connection broker. But we aren’t quite done yet, as we have not looked at how a scale set could enhance this solution. Below are a few ways we can improve the environment using Scale Sets, and a few limitations when used with RDS:

  • We have the option to manually increase VM instances if we need more Session Hosts:

Note: this will require the new instances to be added to the RDS Session Collection manually (or via PowerShell)
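
For example, a new instance could be added to the collection with the RemoteDesktop module – a hedged sketch (the server, collection, and broker names are examples):

# Add a newly created scale set instance to the existing session collection
Add-RDSessionHost -CollectionName "ScaleSetDesktops" `
    -SessionHost "rdsworker000003.lab.local" `
    -ConnectionBroker "rdsbroker.lab.local"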

  • We can scale the environment automatically using Auto Scale:

Below you can see a default scale rule (5 VMs in the Scale set) and then a rule that runs between 0600 and 1800 daily, and increases the VM Count up to 10 VMs if average CPU usage goes above 80%.

The rule for this Scale operation is shown below:

Note: this will still require machines to be added to the Session Collection manually.
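
A similar CPU-based rule can also be created with the AzureRM.Insights cmdlets – a rough sketch (the subscription ID, names, and location are placeholders, and I’ve shown only the CPU rule, not the 0600–1800 recurrence):

# Scale out by 1 VM (up to 10) when average CPU exceeds 80%
$vmssId = "/subscriptions/<subscription-id>/resourceGroups/eus-rg01/providers" +
          "/Microsoft.Compute/virtualMachineScaleSets/rdsfarm"
$cpuRule = New-AzureRmAutoscaleRule -MetricName "Percentage CPU" -MetricResourceId $vmssId `
    -Operator GreaterThan -MetricStatistic Average -Threshold 80 `
    -TimeGrain ([TimeSpan]::FromMinutes(1)) -TimeWindow ([TimeSpan]::FromMinutes(5)) `
    -ScaleActionCooldown ([TimeSpan]::FromMinutes(10)) -ScaleActionDirection Increase `
    -ScaleActionScaleType ChangeCount -ScaleActionValue 1
$asProfile = New-AzureRmAutoscaleProfile -Name "DaytimeCpuScaling" -DefaultCapacity 5 `
    -MinimumCapacity 5 -MaximumCapacity 10 -Rule $cpuRule
Add-AzureRmAutoscaleSetting -Name "rds-autoscale" -ResourceGroup "eus-rg01" `
    -Location "East US" -TargetResourceId $vmssId -AutoscaleProfile $asProfile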

  • We can increase the size of the VMs

Once a new size has been selected – the existing VMs show as not up to date:

We would then need to upgrade the VMs in the scale set (requiring a reboot), but this does not require the VMs to be re-added to the Session Collection. With this option, a rolling drain-and-upgrade process is possible, allowing a sizing upscale without much reconfiguration or management effort.
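
A rough sketch of that process with the AzureRM compute cmdlets (the resource group, scale set name, and target size are examples):

# Push a new VM size to the scale set model...
$vmss = Get-AzureRmVmss -ResourceGroupName "eus-rg01" -VMScaleSetName "rdsfarm"
$vmss.Sku.Name = "Standard_D4s_v3"
Update-AzureRmVmss -ResourceGroupName "eus-rg01" -VMScaleSetName "rdsfarm" `
    -VirtualMachineScaleSet $vmss
# ...then upgrade (and reboot) instances one at a time, after draining each one
Update-AzureRmVmssInstance -ResourceGroupName "eus-rg01" -VMScaleSetName "rdsfarm" -InstanceId "0"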

Overall, it would seem that although scale sets aren’t able to fully integrate with Remote Desktop Services collections, they are still very capable and powerful when it comes to managing RDS Workloads. Scale Sets can be used to size and provision machines, as well as to provide simple options to increase environment capacity and power. Purely using a scale set for the ability to spin up new VMs, or to manage sizing across multiple VMs is a logical step. We also have the option to reimage a VM – taking it back to a clean configuration.

Key Observations from my investigation:

  • We can scale an RDS environment very quickly, but RDS Servers can’t be automatically added to a session collection – the GPO settings for this don’t appear to support RDS versions after 2008 R2 (where Session Collections and the new configuration method were introduced). This means servers have to be added manually when the Scale Set is scaled up
  • Scale sets can be used to increase VM size quickly – without reimaging servers (a reboot is all that is required)
  • Scaling can only look at performance metrics – we can’t scale on user count for example
  • Reimaging means we can take servers back to a clean build quickly – if a server has an issue we would just prevent logons and then reimage.
  • Scaling down can’t take logged on users into consideration – so we’d need a way of draining servers down first
  • Scale Sets also allow us to scale up to very large environments – just increase the VM count or size, and add the servers into the RDS Collection. A growing business, for example, or one that provides a hosted desktop, could scale from 10 servers to a few hundred with minimal effort

Hope this helps, and congratulations if you have made it to the end of this article! Until next time!
