Category: Citrix

Introducing PowerScale – a community driven Smart Scale alternative!

As you may know, Smart Scale was discontinued as of 31/05/2019. But – fear not, a community project now provides the same functionality. This project is the brainchild of Leee Jeffries (twitter/blog – well worth a follow/read for anyone working in the EUC space, by the way!) and offers a simple-to-use replacement that can reduce VM costs in cloud environments, with an on-premises control plane.

PowerScale can carry out the following actions on VDAs:

  • Scheduled Machine Management
    • Working Hour Schedule
    • Outside Working Hours Schedule
  • Power On Machines
  • Power Off Machines
  • Scale Machines based on performance metrics
    • CPU
    • Memory
    • Load Index
    • Session Limits
  • User Logoff
    • Forced User Logoff
      • Two messages sent to users at specified intervals before shutdown
    • Graceful User Logoff
      • Wait for sessions to drain before shutdowns complete
  • Email on critical error
  • Testing only mode
    • Logfile generated on every run
    • No farm actions performed during test mode

You can download PowerScale here, and an installation guide is also available here.

Azure Traffic Manager for NetScaler Gateway Failover

Azure Traffic Manager is designed to provide traffic routing to various locations based on a ruleset that you specify. It can be used for priority (failover), weighted distribution, performance, and geographic traffic distribution.

The failover (Priority) option works in much the same way as active/passive GSLB, so that is what I am going to demonstrate in this post. I’ve started with the following environment already configured:

  • Two Azure Locations (East US and South Central US), with a VPN between the sites to join the VNETs
  • 1 Domain Controller in each location
  • 1 NetScaler (standalone) in each location
  • 1 Citrix Environment spread across the two locations
  • NetScaler Gateways set up in both sites and NAT’d out using Azure Load Balancer (so that we have a public IP offering NetScaler Gateway services in both Azure Locations – have a look at this Blog post if you require guidance on setting this up)

Azure Lab Diagram

Before we set up the Azure Traffic Manager profile, we need to give our Public IP addresses a DNS name label, as this is what Traffic Manager will use to reference the endpoints. To do this, browse to the Public IPs for your Load Balancers and click on “Configuration”.

Azure Public IP configuration

I have two public IPs so I have created two DNS Name Labels and given them appropriate names:

  • desktop-eus-jwnetworks = East US NetScaler Public IP
  • desktop-scus-jwnetworks = South Central US NetScaler Public IP
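If you prefer to script this step, the Azure Python SDK can apply the same DNS name label. This is only a minimal sketch: the subscription ID, resource group and Public IP resource names below are assumptions from my lab, so substitute your own.

```python
# Minimal sketch: add a DNS name label to an existing Public IP via azure-mgmt-network.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"          # assumption: your subscription ID
resource_group = "EUS-RG"                      # assumption: resource group holding the Public IP
pip_name = "EUS-NSGW-PIP"                      # assumption: the EUS Load Balancer Public IP

network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Fetch the existing Public IP, add the DNS name label, then push the updated object back.
public_ip = network_client.public_ip_addresses.get(resource_group, pip_name)
public_ip.dns_settings = {"domain_name_label": "desktop-eus-jwnetworks"}
poller = network_client.public_ip_addresses.begin_create_or_update(
    resource_group, pip_name, public_ip
)
# Prints the resulting FQDN, e.g. desktop-eus-jwnetworks.eastus.cloudapp.azure.com
print(poller.result().dns_settings.fqdn)
```

Repeat the same change for the South Central US Public IP with its own label (desktop-scus-jwnetworks).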

Next – it’s time to create the Azure Traffic Manager profile!

Traffic Manager profile creation

After we click create, we just need to populate a few basic details:

Traffic Manager profile creation

As you can see – I have given my Traffic Manager a name and selected Priority as the routing method (this gives us failover in a similar manner to Active/Passive GSLB). Note that there are other options available:

Traffic Manager routing options

See here for an overview of the Traffic Routing Methods. Next – we need to configure some more settings on our Traffic Manager, to ensure that the Monitoring and Traffic Routing are going to work correctly. In the screenshot below I have adjusted the following:

  • DNS TTL – I’ve adjusted this to 60 seconds; it defaults to 300 seconds (5 minutes)
  • Protocol – HTTPS, because we are monitoring the NetScaler Gateway over HTTPS
  • Port – 443, as this is the port the NetScaler Gateway is listening on
  • Path – the path that the monitor will check; for NetScaler Gateway this is /vpn/index.html. If this page is not available, the endpoint will be marked as unavailable
  • Probing Interval – how often endpoint health is checked; the available values are every 10 seconds or every 30 seconds
  • Tolerated number of failures – how many health check failures are tolerated before the endpoint is marked as unhealthy
  • Monitoring timeout – the time the monitor will wait for a response before considering the health check a failure

For more information on these configuration options – click here.

Traffic Manager configuration
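For those who would rather script the profile than click through the portal, here is a rough equivalent using the Azure Python SDK (azure-mgmt-trafficmanager). The profile name and resource group are assumptions, and the probe values simply mirror the settings described above.

```python
# Rough sketch: create the Traffic Manager profile with Priority routing and an HTTPS probe.
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient
from azure.mgmt.trafficmanager.models import Profile, DnsConfig, MonitorConfig

subscription_id = "<subscription-id>"
tm_client = TrafficManagerManagementClient(DefaultAzureCredential(), subscription_id)

profile = tm_client.profiles.create_or_update(
    "EUS-RG",                                  # assumption: resource group for the profile
    "jwdesktop",                               # assumption: profile name
    Profile(
        location="global",                     # Traffic Manager profiles are always global
        traffic_routing_method="Priority",     # failover, similar to active/passive GSLB
        dns_config=DnsConfig(relative_name="jwdesktop", ttl=60),
        monitor_config=MonitorConfig(
            protocol="HTTPS",
            port=443,
            path="/vpn/index.html",            # NetScaler Gateway logon page
            interval_in_seconds=10,            # probing interval
            timeout_in_seconds=5,              # monitoring timeout
            tolerated_number_of_failures=1,
        ),
    ),
)
print(profile.dns_config.fqdn)                 # e.g. jwdesktop.trafficmanager.net
```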

Next – it is time to add our endpoints! To do this, click on Endpoints and then on Add:

Traffic Manager endpoints

We then need to add our Public IP addresses assigned to the Azure Load Balancers (where the NAT rules were created). Note – you will need to do this for BOTH endpoints:

Adding Endpoints to Traffic Manager

Once both are added, you will see the below in the Endpoints screen. Note that both Endpoints are shown as “Online” – this confirms our monitor is detecting the Endpoints as up. Also note that each Endpoint has a priority – this means that under normal operation all traffic will be sent to the “eus-desktop” endpoint (Priority 1), and in the event of a failure of that endpoint, all traffic will be directed to the “scus-desktop” Endpoint.

Endpoints
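Again, purely as a sketch, the same two endpoints could be added with the Python SDK. The Public IP resource IDs below are hypothetical placeholders – use the IDs of the Load Balancer Public IPs that carry your DNS name labels.

```python
# Rough sketch: add both Public IPs as Azure endpoints with priorities 1 and 2.
from azure.identity import DefaultAzureCredential
from azure.mgmt.trafficmanager import TrafficManagerManagementClient
from azure.mgmt.trafficmanager.models import Endpoint

tm_client = TrafficManagerManagementClient(DefaultAzureCredential(), "<subscription-id>")

endpoints = [
    # (endpoint name, Public IP resource ID (hypothetical), priority)
    ("eus-desktop", "/subscriptions/<sub-id>/resourceGroups/EUS-RG/providers/"
                    "Microsoft.Network/publicIPAddresses/EUS-NSGW-PIP", 1),
    ("scus-desktop", "/subscriptions/<sub-id>/resourceGroups/SCUS-RG/providers/"
                     "Microsoft.Network/publicIPAddresses/SCUS-NSGW-PIP", 2),
]

for name, pip_id, priority in endpoints:
    tm_client.endpoints.create_or_update(
        "EUS-RG", "jwdesktop",                  # resource group and profile from the sketch above
        "AzureEndpoints",                       # endpoint type
        name,
        Endpoint(
            type="Microsoft.Network/TrafficManagerProfiles/AzureEndpoints",
            target_resource_id=pip_id,          # Public IP with the DNS name label
            endpoint_status="Enabled",
            priority=priority,                  # 1 = primary (EUS), 2 = failover (SCUS)
        ),
    )
```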

All that is left to do is test – however, first let’s make things neat for our users with a CNAME DNS Record. We are effectively going to create a friendly CNAME that points to our jwdesktop.trafficmanager.net record, so users have a name they can actually remember. You can find your Traffic Manager record on the overview screen:

Traffic Manager overview

Next up I added a CNAME record in my Azure DNS Zone:

Add DNS Record

Add CNAME Record
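If your zone is hosted in Azure DNS, the same CNAME can also be created with the Python SDK (azure-mgmt-dns). The zone’s resource group and the record TTL below are assumptions.

```python
# Minimal sketch: create desktop.jwnetworks.co.uk as a CNAME to the Traffic Manager record.
from azure.identity import DefaultAzureCredential
from azure.mgmt.dns import DnsManagementClient
from azure.mgmt.dns.models import RecordSet, CnameRecord

dns_client = DnsManagementClient(DefaultAzureCredential(), "<subscription-id>")

dns_client.record_sets.create_or_update(
    "DNS-RG",                                   # assumption: resource group holding the zone
    "jwnetworks.co.uk",                         # Azure DNS zone
    "desktop",                                  # relative name -> desktop.jwnetworks.co.uk
    "CNAME",
    RecordSet(ttl=300, cname_record=CnameRecord(cname="jwdesktop.trafficmanager.net")),
)
```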

Once this is created – we can start testing! But first, a diagram! Below is what we now have set up and working:

Solution Diagram

Note: in order to easily distinguish between my two Gateways, I set the EUS Gateway to the X1 theme and left the SCUS Gateway on the default NetScaler theme. When accessing https://desktop.jwnetworks.co.uk I am correctly shown the EUS Gateway:

EUS Gateway Test
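A quick way to double-check which endpoint the friendly name currently resolves to is a couple of lines of standard-library Python – handy during the failover test below, too:

```python
# Resolve the friendly name and show which Public IP Traffic Manager is currently handing out.
import socket

# Under normal operation this should return the East US Public IP; after a failover it
# should return the South Central US IP (allow for the 60 second DNS TTL on the profile).
hostname, aliases, addresses = socket.gethostbyname_ex("desktop.jwnetworks.co.uk")
print("Canonical name:", hostname)   # follows the CNAME chain via trafficmanager.net
print("Resolved IPs:  ", addresses)
```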

Bingo – this all looks good to me! Next up, I disabled the Virtual Server for the EUS NetScaler:

Virtual Server Disabled

After around 30 seconds… the Monitor Status shows as Degraded:

[ For those interested in the maths: with a 10-second probing interval, a 5-second monitoring timeout, and 1 tolerated failure, the endpoint is marked Degraded after two failed checks of roughly 15 seconds each – around 30 seconds in total. ]

Endpoint Health

Next I refreshed the page and was presented with the SCUS Gateway page:

SCUS Gateway Page

As you can see, during a failure condition (the EUS Gateway vServer being taken down) Traffic Manager directs traffic to our Priority 2 site, without any intervention from us. Any users would simply refresh the page and log back in. This can be used not only for NetScaler Gateway but for many internet-facing services – for example OWA, SharePoint etc. There are a great many services that can benefit from this type of failover and the resiliency it offers.

Load Balancing Citrix StoreFront with Azure Load Balancer

Sometimes there is a requirement to Load Balance StoreFront using a method other than NetScaler. Although rare (in my experience!), this does occasionally happen when NetScaler is not being used for Remote Access – in an internal-only environment, for example.

In this post I will explain how to Load Balance StoreFront using the native Azure Load Balancers. We start with a simple setup:

  • 1x Domain Controller
  • 2x Citrix StoreFront Servers – in an availability set called “EUS-StoreFront”
  • 1x Virtual Network (VNET)

All of the above is in the East US Azure Location.

We start by creating a new Azure Load Balancer. Note a few key settings here:

  • Type: Internal – this is because we are balancing traffic within our VNET (Internal Network only)
  • IP address – static… we don’t want the LB IP to change!

Once this is done – we can add the backend servers. We do this by targeting the Availability Set that the StoreFront Servers are in. For those familiar with NetScaler, this is similar to a Service Group:

Next – we need to configure some Health Probes. These allow us to determine the state of the StoreFront servers and to confirm that the services we are load balancing are healthy and available. Note: at the current time Azure Load Balancer HTTP checks support relative paths only, so I have used /Citrix/CitrixWeb/monitor.txt – a simple text file (Static Content) I created to check that the Web Server is serving out content and thus working correctly. (https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/load-balancer/load-balancer-custom-probe-overview.md) I have configured my Health Probe as below:

Next – it’s time to create the Load Balancing Rule that will form the entry point for Load Balanced traffic. Note the Protocol (TCP), Ports (80 Frontend, and 80 Backend), Backend Pool (StoreFront Availability Set), Health Probe (our HTTP 80 monitor.txt check), Session Persistence (Client IP), and Idle Timeout (30 minutes is currently the maximum value):

We can then click OK and our Load Balancing Rule is created! Next I created a DNS A Record for StoreFront and pointed it at the Load Balancer IP. After this, I opened up a browser and typed in my newly created StoreFront DNS record. Bingo – we have a page!
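For reference, the probe and Load Balancing Rule described above could also be applied with the Azure Python SDK. This is a rough sketch only – the resource group, Load Balancer name and probe/rule names are assumptions, and it assumes the internal Load Balancer with its frontend IP and backend pool already exists.

```python
# Rough sketch: add the HTTP health probe and the TCP 80 load balancing rule to an
# existing internal Azure Load Balancer via azure-mgmt-network.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"
resource_group = "EUS-RG"                       # assumption: resource group
lb_name = "EUS-StoreFront-LB"                   # assumption: the internal Load Balancer created earlier

network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)
lb = network_client.load_balancers.get(resource_group, lb_name)

# HTTP probe against the static monitor.txt file served by each StoreFront server.
lb.probes = [{
    "name": "StoreFront-HTTP-Probe",
    "protocol": "Http",
    "port": 80,
    "request_path": "/Citrix/CitrixWeb/monitor.txt",
    "interval_in_seconds": 15,
    "number_of_probes": 2,
}]

# TCP 80 -> 80 rule with Client IP (SourceIP) session persistence and a 30 minute idle timeout.
lb.load_balancing_rules = [{
    "name": "StoreFront-HTTP-Rule",
    "protocol": "Tcp",
    "frontend_port": 80,
    "backend_port": 80,
    "load_distribution": "SourceIP",            # "Client IP" persistence in the portal
    "idle_timeout_in_minutes": 30,
    "frontend_ip_configuration": {"id": lb.frontend_ip_configurations[0].id},
    "backend_address_pool": {"id": lb.backend_address_pools[0].id},
    "probe": {"id": f"{lb.id}/probes/StoreFront-HTTP-Probe"},
}]

network_client.load_balancers.begin_create_or_update(resource_group, lb_name, lb).result()
```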

To test that the Load Balancing was working, I shut down IIS on each server in turn and then tested. Sure enough – even when only 1 out of 2 servers was running, the page stayed up and StoreFront remained accessible.

This Load Balancer can be used for a variety of Web Applications, and is a simple way to Load Balance Azure-based services as you require. Until next time… cheers!