
Azure Traffic Manager for NetScaler Gateway Failover

Azure Traffic Manager is a DNS-based traffic routing service that directs clients to endpoints in different locations based on a routing method that you specify. It can be used for priority (failover), weighted, performance, and geographic traffic distribution.

The Priority (failover) option is similar to active/passive GSLB, so that is what I am going to demonstrate in this post. I’ve started with the following environment already configured:

  • Two Azure Locations (East US and South Central US), with a VPN between the sites to join the VNETs
  • 1 Domain Controller in each location
  • 1 NetScaler (standalone) in each location
  • 1 Citrix Environment spread across the two locations
  • NetScaler Gateways set up in both sites, and NAT’d out using an Azure Load Balancer, so that we have a public IP offering NetScaler Gateway services in both Azure locations. (Have a look at this blog post if you require guidance on setting this up.)

Azure Lab Diagram

Before we set up the Azure Traffic Manager profile, we need to give our Public IP addresses a DNS name label, as this is what Traffic Manager will use to address the endpoints. To do this, browse to the Public IPs for your Load Balancers and click on “Configuration”.

Azure Public IP configuration

I have two public IPs, so I have created two DNS name labels and given them appropriate names (a scripted sketch of this step follows the list):

  • desktop-eus-jwnetworks = East US NetScaler Public IP
  • desktop-scus-jwnetworks = South Central US NetScaler Public IP
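
If you’d rather script this step than use the portal, a minimal sketch using the Python azure-mgmt-network SDK is shown below. The subscription ID, resource group, and Public IP names are placeholders for your own environment, and SDK versions may differ – treat this as an illustration rather than a tested recipe:

```python
# Illustrative sketch only: add a DNS name label to an existing Public IP.
# The resource group and Public IP names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import PublicIPAddressDnsSettings

credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

# Fetch the existing Public IP that fronts the East US load balancer
pip = network_client.public_ip_addresses.get("rg-eus", "pip-netscaler-eus")

# Assign the DNS name label that Traffic Manager will use for this endpoint
pip.dns_settings = PublicIPAddressDnsSettings(domain_name_label="desktop-eus-jwnetworks")

poller = network_client.public_ip_addresses.begin_create_or_update(
    "rg-eus", "pip-netscaler-eus", pip
)
print(poller.result().dns_settings.fqdn)  # e.g. desktop-eus-jwnetworks.eastus.cloudapp.azure.com
```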

Next – it’s time to create the Azure Traffic Manager profile!

Traffic Manager profile creation

After we click create, we just need to populate a few basic details:

Traffic Manager profile creation

As you can see, I have given my Traffic Manager profile a name and selected Priority as the routing method (this gives us failover in a similar manner to active/passive GSLB). Note that there are other routing options available:

Traffic Manager routing options

See here for an overview of the Traffic Routing Methods. Next, we need to configure some more settings on our Traffic Manager to ensure that the monitoring and traffic routing work correctly. In the screenshot below I have adjusted the following (a scripted equivalent is sketched after the list and screenshot):

  • DNS TTL – I’ve adjusted this to 60 seconds; it defaults to 300 seconds (5 minutes)
  • Protocol – HTTPS, because we are monitoring the NetScaler Gateway over HTTPS
  • Port – 443, as this is the port the NetScaler Gateway is using
  • Path – the path the monitor will check; for NetScaler Gateway this is /vpn/index.html. If this page is not available, the endpoint will be marked as unavailable
  • Probing Interval – how often the endpoint health is checked; the value is either every 10 seconds or every 30 seconds
  • Tolerated number of failures – how many consecutive health check failures are tolerated before the endpoint is marked as unhealthy
  • Monitoring timeout – how long the monitor waits for each probe to respond before counting it as a failure

For more information on these configuration options – click here.

Traffic Manager configuration
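
For reference, here is a rough, untested sketch of creating the same profile with the Python azure-mgmt-trafficmanager SDK, using the values discussed above; the subscription ID and resource group name are placeholders:

```python
# Illustrative sketch only: create the Traffic Manager profile with Priority
# routing and the monitor settings described above. Names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient
from azure.mgmt.trafficmanager.models import Profile, DnsConfig, MonitorConfig

credential = DefaultAzureCredential()
tm_client = TrafficManagerManagementClient(credential, "<subscription-id>")

profile = tm_client.profiles.create_or_update(
    "rg-trafficmanager",          # resource group (placeholder)
    "jwdesktop",                  # profile name
    Profile(
        location="global",        # Traffic Manager profiles are always global
        traffic_routing_method="Priority",
        dns_config=DnsConfig(relative_name="jwdesktop", ttl=60),
        monitor_config=MonitorConfig(
            protocol="HTTPS",
            port=443,
            path="/vpn/index.html",
            interval_in_seconds=10,
            timeout_in_seconds=5,
            tolerated_number_of_failures=1,
        ),
    ),
)
print(profile.dns_config.fqdn)    # jwdesktop.trafficmanager.net
```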

Next – it is time to add our endpoints! To do this, click on Endpoints and then on Add:

Traffic Manager endpoints

We then need to add our Public IP addresses assigned to the Azure Load Balancers (where the NAT rules were created). Note – you will need to do this for BOTH endpoints:

Adding Endpoints to Traffic Manager

Once both are added, you will see the below in the Endpoints screen. Note that both endpoints are shown as “Online” – this confirms our monitor is detecting the endpoints as up. Also note that each endpoint has a priority – under normal operation all traffic will be sent to the “eus-desktop” endpoint (Priority 1), and if that endpoint fails, all traffic will be directed to the “scus-desktop” endpoint.

Endpoints
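
Again, purely as an illustrative sketch, the equivalent endpoint creation with the Python SDK might look like the following – the subscription, resource group, and resource IDs are placeholders for the Public IPs behind each load balancer:

```python
# Illustrative sketch only: add both Public IPs as endpoints with priorities.
# Subscription, resource group, and resource IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient
from azure.mgmt.trafficmanager.models import Endpoint

credential = DefaultAzureCredential()
tm_client = TrafficManagerManagementClient(credential, "<subscription-id>")

endpoints = [
    # (endpoint name, Public IP resource ID, priority)
    ("eus-desktop",
     "/subscriptions/<sub>/resourceGroups/rg-eus/providers/Microsoft.Network/publicIPAddresses/pip-netscaler-eus",
     1),
    ("scus-desktop",
     "/subscriptions/<sub>/resourceGroups/rg-scus/providers/Microsoft.Network/publicIPAddresses/pip-netscaler-scus",
     2),
]

for name, resource_id, priority in endpoints:
    tm_client.endpoints.create_or_update(
        "rg-trafficmanager",   # resource group of the Traffic Manager profile (placeholder)
        "jwdesktop",           # Traffic Manager profile name
        "AzureEndpoints",      # endpoint type – the targets are Azure Public IP resources
        name,
        Endpoint(
            target_resource_id=resource_id,
            endpoint_status="Enabled",
            priority=priority,  # 1 = primary (EUS), 2 = failover (SCUS)
        ),
    )
```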

All that is left to do is test – however, first let’s make things neat for our users with a CNAME DNS record. We are going to create a memorable name that points (via a CNAME) at our jwdesktop.trafficmanager.net record. You can find your record on the overview screen:

Traffic Manager overview

Next up I added a CNAME record in my Azure DNS Zone:

Add DNS Record

Add CNAME Record
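
If your zone lives in Azure DNS, a hedged sketch of creating the same CNAME with the Python azure-mgmt-dns SDK might look like this (subscription ID and resource group name are placeholders):

```python
# Illustrative sketch only: create a CNAME in an Azure DNS zone pointing the
# friendly name at the Traffic Manager record.
from azure.identity import DefaultAzureCredential
from azure.mgmt.dns import DnsManagementClient
from azure.mgmt.dns.models import RecordSet, CnameRecord

credential = DefaultAzureCredential()
dns_client = DnsManagementClient(credential, "<subscription-id>")

dns_client.record_sets.create_or_update(
    "rg-dns",                     # resource group hosting the zone (placeholder)
    "jwnetworks.co.uk",           # the DNS zone
    "desktop",                    # relative record name -> desktop.jwnetworks.co.uk
    "CNAME",
    RecordSet(ttl=300, cname_record=CnameRecord(cname="jwdesktop.trafficmanager.net")),
)
```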

Once this is created, we can start testing! But first, a diagram. The diagram below shows what we now have set up and working:

Solution Diagram

Note: in order to easily distinguish between my two Gateways, I set the EUS Gateway to the X1 theme and left the SCUS Gateway on the default NetScaler theme. When accessing https://desktop.jwnetworks.co.uk I am correctly shown the EUS Gateway:

EUS Gateway Test

Bingo – this all looks good to me! Next up, I disabled the Virtual Server for the EUS NetScaler:

Virtual Server Disabled

After around 30 seconds… the Monitor Status shows as Degraded:

[For those interested in the maths: with a 10 second probing interval, a 5 second timeout, and 1 tolerated failure, it takes 2 failed probes of up to 15 seconds each – roughly 30 seconds – before the endpoint is marked as Degraded.]

Endpoint Health
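
To make that back-of-the-envelope calculation explicit, here it is as a tiny sketch using the monitor values configured earlier in this post:

```python
# Rough worst-case time for Traffic Manager to mark the endpoint Degraded,
# using the monitor settings configured earlier.
probe_interval_s = 10      # Probing interval
probe_timeout_s = 5        # Monitoring timeout per probe
tolerated_failures = 1     # Tolerated number of failures

failed_probes_needed = tolerated_failures + 1
worst_case_s = failed_probes_needed * (probe_interval_s + probe_timeout_s)
print(f"~{worst_case_s} seconds")  # ~30 seconds, matching what was observed above
```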

Next I refreshed the page and was presented with the SCUS Gateway page:

SCUS Gateway Page

As you can see, during a failure condition (the EUS Gateway vServer being taken down) Traffic Manager directs traffic to our Priority 2 site without any intervention from us. Any users would simply refresh the page and log back in. This can be used not only for NetScaler Gateway but for many internet-facing services – for example OWA, SharePoint, etc. There are a great many services that can benefit from this type of failover and the resiliency it offers.

Testing out Project Honolulu

Recently I have been testing out something new – Project Honolulu from Microsoft. I first heard about this on Twitter (thanks to Eric @XenAppBlog) and was interested in what it could offer straight away. Project Honolulu is a new, web-based way to manage Windows Server that does not rely on the traditional Server Manager GUI. Functionality is similar, and it offers the usual range of configuration options, as well as the ability to manage roles and features as you would normally expect.

You can download Project Honolulu here. Windows Server 2016 is supported natively, but for Windows Server 2012 R2 support you will need to install WMF 5.0 (KB3134758).

Project Honolulu has a number of ways to deploy – but I went with a simple install on a single server within my lab. Once this is completed (it’s an easy next next next done install), you are presented with the following screen, which opens up in your default browser:

From here – we can add server connections. Note: Standalone, Failover Cluster, and Hyper-Converged systems are supported:

After I’d added a few servers from my lab, the main screen appeared as below. You can also import servers from a text file – so an export from AD is possible too:

From here, we can see the status of the servers I have added and then drill down further into the options by clicking on a server name. The overview screen gives the usual range of information we’d expect to see:

Particularly nice is the metric display, which gives an overview of CPU, memory, Ethernet, and disk activity. This is real-time data only, but it is useful for monitoring key servers/clusters, perhaps on an ops display board or large screen:

As well as the metrics, we have a range of management tools we can take advantage of. Particularly interesting is the ability to manage elements like network adapters, services, and roles/features, as well as to view Event Log entries and the Registry:

Management of services is also a very useful feature, allowing services to be stopped and started (I wish it had a restart button though!) from the web console. This is particularly useful for Managed Service Providers – when the 2am call comes in that a failure has occurred, instead of a VPN into an RDP session into another RDP session, you can fire up a web interface and restart the service from there (NAT rules and an SSL cert required, of course):

You’ll notice here that I’ve highlighted a couple of Citrix services too – Project Honolulu allows you to manage all services running on a supported machine, so it is great for managing third-party applications and services as well. The lightweight nature of the system also means that it can be added to existing systems with ease (a single installer and a list of servers).

I’m really interested to see where this Project will go – in particular, it makes the use of Server Core much more accessible, because a familiar and common interface can be used to manage multiple servers. It also allows simple management of basic server configurations, as well as service management for Microsoft and third-party applications. Any environment could probably benefit from a single interface that allows basic configuration and service restarts… the key question is… where will this Project go next?

I’d really like to see support for more configuration changes, for example, customisable PowerShell options (e.g. this button in the interface runs this remote command) or support for a PowerShell session via the Web Interface. Also it would be great to see support for Third Party software – for example, additional modules that could be included to provide web based management of other software items on the server.

Citrix Workspace Environment Management – IO Management

I’ve been blogging a lot this year on the merits of Citrix Workspace Environment Management (WEM) and the various features it provides. Another of those features is I/O Priority, which enables us to manage the priority of I/O operations for a specified process.
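
WEM applies this through its agent, based on rules you define in the console. Purely to illustrate the underlying idea of per-process I/O priority on Windows – this is not how WEM itself is configured – a rough sketch using Python’s psutil module might look like this (the target process name is just an example):

```python
# Conceptual illustration only: lower the I/O priority of a named process on
# Windows using psutil. WEM does this for you via its agent and console rules;
# the target process name here is just an example.
import psutil

TARGET = "iometer.exe"

for proc in psutil.process_iter(["name"]):
    if (proc.info["name"] or "").lower() == TARGET:
        proc.ionice(psutil.IOPRIO_VERYLOW)  # Windows "Very Low" I/O priority class
        print(f"Set very low I/O priority on PID {proc.pid}")
```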

To demonstrate this, I am going to run IOMeter (a storage testing tool that generates I/O load and measures CPU utilisation during testing) and SuperPi (a tool that calculates Pi to a specified number of digits and consumes a large amount of CPU during the calculation).

Before making any WEM configuration changes, on my virtual desktop the results are as follows:

IOMeter (Using the Atlantis Template – available here) –  shows 6.56% CPU Utilisation, and 3581 I/Os per second:

SuperPI calculation to 512K – 7.1 seconds:

Next I added the IOMeter and SuperPi executables into WEM, and set the priority to very low:

As a result of doing this, the IOMeter results are significantly reduced, and the calculation time for SuperPi has increased considerably:

IOMeter Result – around 60% reduction in I/O per second, and 2% CPU usage reduction:

SuperPI – time to calculate has increased by nearly 200%:

From this test it is clear that I/O Management within Workspace Environment Management is an effective way to control the I/O operations of specified processes. Whilst slowing down an application might seem an unlikely requirement for many of us, the ability to rein in particularly resource-intensive applications is a definite win for complex environments. If a particular application is causing performance problems (for example, degrading the performance for others), this provides a suitable way to manage that process.