Citrix Connection Quality Indicator


Connection Quality Indicator is a new tool from Citrix designed to alert the user to network conditions that may affect the quality of their session. Information is provided to the end user via a notification window, which can be controlled using Group Policy.

Installation is supported on a range of platforms – see CTX220774 for more details.

Test Environment

My environment consists of a basic Citrix XenDesktop 7.12 installation:

  • 1x Desktop Delivery Controller (Local Database)
  • 1x Citrix StoreFront
  • 2x XenDesktop Session Host (Static VMs)

All VMs are 1 vCPU, 4GB RAM, and Windows Server 2016.


Connection Quality Indicator needs to be installed on each Session Host, or onto a master image – it follows a simple next, next, finish installation with no configuration required during install:

Post installation we can see the program installed via Control Panel:

Group Policy Configuration

As outlined in CTX220774 there are also Group Policy Templates that can be used. I have opted to copy these to the Central Store within my Domain. The templates can be extracted from any machine with Connection Quality Indicator on:
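As a rough sketch, copying the templates to the Central Store amounts to a couple of file copies – the source path below is an assumption based on my lab install, and “lab.local” stands in for your own domain:

```bat
rem Sketch only - confirm the source path on a machine with CQI installed
xcopy "C:\Program Files (x86)\Citrix\Connection Quality Indicator\Configuration\*.admx" ^
      "\\lab.local\SYSVOL\lab.local\Policies\PolicyDefinitions\" /Y
xcopy "C:\Program Files (x86)\Citrix\Connection Quality Indicator\Configuration\en-US\*.adml" ^
      "\\lab.local\SYSVOL\lab.local\Policies\PolicyDefinitions\en-US\" /Y
```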

En-US templates are within the configuration folder ready for use:

Note – once placed into the Central Store, Group Policy Administrative Templates will be available as below:

Within Citrix Components we now have access to the Policy Settings for Connection Quality Indicator:

We are then able to modify the following settings:

Enable CQI – this setting allows us to enable the Utility, and also configure the refresh rate for data collection counters:

Notification Display Settings – from this setting we can configure the initial delay before the tool alerts the user to the connection quality rating, and define a minimum interval between notifications:

Connection Threshold Settings – this setting is perhaps the most interesting, because it is here we can tailor the tool to any specific environmental requirements. From this setting, we can control the definitions of High and Low Latency (in milliseconds), High and Low ICA RTT (in milliseconds), and the High and Low bandwidth value (in Mbps):

For the purposes of this demonstration – I’ve used default settings all round.

After configuring the group policies, I logged into a desktop session with the tool installed. 60 seconds after login, the notification window appeared with the session quality result:

If the cog symbol is clicked the user has the option to modify the location of the display window, snooze the tool, and also to see the test results:

Unfortunately, I have no way of artificially degrading network performance or increasing latency in my lab. To prove that the metrics were functional, I instead adjusted the Group Policy settings to some fairly unobtainable figures for all settings, so that the tool would grade the connection quality differently:

This highlights how the tool can be used to identify connection quality through tailoring the GPO for a specific environment. After changing these settings, rebooting the Session Host (the lazy way of updating Group Policy!), and logging back in, the tool reported the following:

This is a very useful option within the tool, as we can modify the settings to suit a range of environments. In some environments, for example, low bandwidth might not be an issue, but high latency might be.


Overall this tool is very useful for giving the end user an insight into the quality of the network environment, and provides real time feedback on this quality. This is great for keeping end users informed, and managing expectations of performance too. What I also like is that end users will be able to see differences based on where they work – for example, a user with a “Strong Connection” inside the office, but a “Weak Connection” over 3G or at home, would know what sort of experience to expect, and would have real time data to support any troubleshooting moving forward.


Testing out the Atlantis USX Community Edition

Recently I’ve been using the Atlantis USX Community Edition, a free edition of the Atlantis USX software, specifically for the purposes of testing and learning how USX can improve the performance of a virtual desktop. Atlantis provide a number of videos on the USX Community Edition landing page, as well as a testing guide which outlines how to benchmark the software.

For this post I wanted to demonstrate the results I’m getting in my lab, as they give an idea of the benefits a solution like this can bring. As part of this process I’ve been reading up on various testing methods and options, and eventually settled on the configuration detailed in Jim Moyle‘s excellent article – available here. I should note that the USX Community Edition also provides a pre-made IOMeter configuration file (in the Citrix Testing Guide), but I opted to follow the baseline in Jim’s article.

My test configuration is as follows:

  • 1x HPE ML110 Gen9
  • 40GB RAM
  • 2x 700GB SSD in RAID 0
  • 1x 480GB SSD
  • 1x 1TB HDD

All storage is via an HPE Dynamic Smart Array B140i RAID Controller.

The base VM for IOMeter testing is a Windows Server 2012R2 Standard VM with no tuning or modification applied:

My configuration for the USX Appliance is as follows:

Due to the RAM available on my host I went for the small appliance:

All other configuration was standard, and all infrastructure VMs were stored on storage not participating in this testing (so as not to affect the result). The USX CE also includes an excellent management interface, which allows you to monitor the health of the environment, and displays useful statistics:

After setting up the configuration, I decided to test 3 storage configurations, against the IOMeter baseline, and then post the results to give an idea of performance:

Test 1 – VM on 1x HPE 1TB HDD

Test 2 – VM on 2x 700GB SSD RAID 0

Test 3 – VM on Atlantis USX CE Storage

As you can see, the USX CE wins in every storage metric displayed – there is no contest here:

  • In terms of storage throughput, the SSD array provides around 13x the speed of the HDD, but the USX provides around 60x the performance of the HDD, and 5x the performance of the SSDs in RAID 0.
  • The average read and write response times are also significantly different across the board – with the USX read being around 30x faster than the HDD, and the write being around 60x faster than the HDD. The USX also demonstrates performance around 4-5x faster than the SSDs in RAID 0 for average read and write response time.
  • Total IOPS is also a useful metric – again one that the USX appliance claims the prize for; IOPS are around 65x higher than the HDD, and around 5x higher than the SSDs in RAID 0.

Overall, the USX demonstrates around 60x the performance of the HDD, and around 5x the performance of the RAID 0 SSD array in my lab. If you haven’t already tried the USX Community Edition, I would definitely recommend it – not only as a demonstration of how this technology can improve VDI (and other) workloads, but also because, if (like me) your lab time is precious, anything that speeds up deployment and testing is a real bonus.


Citrix PVS – NTFS vs ReFS 2012R2 vs ReFS 2016

I’ve been doing some work recently around Citrix Provisioning Services, and this has prompted me to investigate what new features are available in version 7.11. One that stood out to me was the support for Microsoft’s Resilient File System (ReFS) on Windows Server 2016. This file system is interesting for those with virtualized environments due to the extra speed enhancements around the use of VHD and VHDX files.

What’s also interesting is that Citrix state the type of performance enhancement that can be expected when using this version:


So… I decided to test this out!

I started with a basic PVS setup of one server and one client – times three, to give me three farms. The first farm is a 2012R2 server with the PVS vDisk storage on NTFS, the second is 2012R2 using ReFS, and the third is 2016 using ReFS. All of the client machines captured for the session hosts were 2012R2, and all base specifications were identical, with 4GB RAM and 1 vCPU.

Capture Times:

  • Capture time for NTFS on 2012R2: 8:44
  • Capture time for ReFS on 2012R2: 5:50
  • Capture time for ReFS on 2016: 5:15
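Converting the capture times above to seconds makes the relative saving easy to quantify – a quick sketch:

```python
# Capture times from the tests above, converted to seconds
def to_seconds(mmss: str) -> int:
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

ntfs_2012 = to_seconds("8:44")   # NTFS on 2012R2
refs_2016 = to_seconds("5:15")   # ReFS on 2016

saving = (ntfs_2012 - refs_2016) / ntfs_2012
print(f"ReFS on 2016 captured {saving:.0%} faster")  # -> 40% faster
```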

Boot Times:

PVS Server Streaming Directory – NTFS on 2012R2:

PVS Server Streaming Directory – ReFS on 2012R2:

PVS Server Streaming Directory – ReFS on 2016:

Testing vDisk Merging:

To test out vDisk merging I created a new maintenance revision and booted the client machines from this:

I then installed a number of applications using Ninite:

Installing these applications and the associated changes created a differencing disk of around 4.5GB.

I then merged the changes to create a new base:

  • NTFS on 2012R2: 12:44
  • ReFS on 2012R2: 4:04
  • ReFS on 2016: 0:14 (yes… less than 15 seconds!)
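The merge times above work out as follows – a quick sketch expressing the difference in the same mm:ss format:

```python
# Merge times from the tests above, converted to seconds
def to_seconds(mmss: str) -> int:
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

ntfs_merge = to_seconds("12:44")   # NTFS on 2012R2
refs16_merge = to_seconds("0:14")  # ReFS on 2016

diff = ntfs_merge - refs16_merge
print(f"ReFS on 2016 merged {diff // 60}:{diff % 60:02d} faster")  # -> 12:30 faster
```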


As you can see, there is a noticeable speed increase when using ReFS on 2016 – in all tests the performance was significantly faster. Capture was around 40% faster. Boot times within my lab environment were so fast as to be almost negligible, but ReFS on 2016 had a 3-second lead over ReFS on 2012R2, and a 4-second lead over NTFS on 2012R2. Perhaps the most impressive speed increase was the merge operation though – 12:30 faster on ReFS on 2016 than on NTFS on 2012R2!

All in all it’s pretty clear what I will be using when implementing PVS from now on….

VMware OS Optimization Tool and Windows 10


I’m a big fan of the VMware OS Optimization Tool and its capabilities. Not only does it help to optimize VDI environments through a range of settings and templates, it also controls settings that would otherwise be complicated to manage without a scripted or policy-based method. I must confess I have been using this tool for some time, but mostly without quantifying the effect (particularly in lab environments, where every spare bit of resource is cherished).

In this blog post I want to give an idea of the power of the tool, by doing a side-by-side comparison of Windows 10 operating systems with and without a template from the tool applied.

I’ll aim to cover the following in an Optimized and Non-Optimized capacity:

  • Booting
  • Resource usage, idle 5 minutes after login
  • Roaming User profile size after first login
  • Logon time with a roaming user profile – first login, profile removed from the local machine, and then a second login

2 identical VMs were configured for this test, with the following specification:


Both VMs are identical and the resource limits fall well within the capacity of the host, so I can be sure of no bottlenecks etc.

On one of the VMs I ran the VMware OS Optimization Tool:


I used the LoginVSI Template for my Optimizations:


This template contains lots of areas and settings – created by the good folks over at LoginVSI:


Optimization is simple – just pick a template and click “Analyze” and then “Optimize”:


After this you are presented with a results Window:



Test 1 – Boot time (time to logon screen):

  • Optimized: 44 seconds
  • Non-optimized: 45 seconds

Little difference here – to be honest I wasn’t expecting a huge change. Both machines are on SSD storage, with two fast processor cores available, and plenty of RAM – so no real bottleneck.

Test 2 – Resource usage after 5 minutes idle (whilst logged in):

Optimized vs. non-optimized resource usage:

Again – not a huge difference here. But the RAM saving of 0.2GB is worth noting. Multiply 0.2GB up to factor in a 1000 Desktop deployment and that’s 200GB of additional RAM – and when each GB of RAM comes at a price, this is a worthwhile saving.
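The scaling arithmetic above, as a quick sketch:

```python
# 0.2GB saved per desktop, scaled to a 1000-desktop deployment
saving_per_vm_gb = 0.2
desktops = 1000
total_gb = saving_per_vm_gb * desktops
print(f"Total RAM saved: {total_gb:.0f}GB")  # -> 200GB
```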

Test 3 – Roaming profile size after first login

  • Optimized: 110MB local profile, 692KB roaming
  • Non-optimized: 124MB local profile, 984KB roaming

Not really any huge difference here either – the smaller profile size is likely due to some of the features disabled by the optimization tool that would normally write data back into the profile during first login. I’d usually recommend avoiding Windows roaming profiles with any VDI solution anyway, in favour of a solution like Citrix Profile Management or AppSense Personalisation Manager.

Test 4 – Login with a roaming profile, profile clear out, and subsequent login time (time to start screen):

  • Optimized: first login (profile creation) 23 seconds; second login (loading roaming profile) 9 seconds
  • Non-optimized: first login (profile creation) 43 seconds; second login (loading roaming profile) 17 seconds
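Expressed as relative savings, the login times above come out as follows:

```python
# Login times (seconds) from the results above
first_opt, first_non = 23, 43     # first login (profile creation)
second_opt, second_non = 9, 17    # second login (loading roaming profile)

first_saving = (first_non - first_opt) / first_non
second_saving = (second_non - second_opt) / second_non
print(f"First login: {first_saving:.0%} faster")    # -> 47% faster
print(f"Second login: {second_saving:.0%} faster")  # -> 47% faster
```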

Quite a noticeable difference here. Initial logon time was 20 seconds less (47% faster). Subsequent logins were also noticeably faster – 9 seconds against 17 seconds (again around 47% faster). Most of this appears to be due to tasks running during logon – for the initial profile creation, things like configuring default applications and Windows Store apps. If we look further into the LoginVSI template for the tool, we can see a specific section just for login time reduction:


Overall, this tool has a clear impact on Windows 10 (and other operating systems) for VDI use. Not only does it lead to a reduction in Login Times, we also see a reduction in RAM usage from the VMs too. I’d recommend anyone currently running non-optimized environments to give this tool a go on a test machine and do some comparison themselves. Many of the features within a Desktop OS are unnecessary for VDI machines and a tool that provides a baseline like this is a great starting point.



Citrix Workspace Environment Management – Memory Management

After testing out the excellent CPU management features in Citrix Workspace Environment Management (WEM), I wanted to test out how well it handled applications that were particularly greedy with RAM consumption.

To start – I have a single Windows Server 2012R2 Session Host, with 4GB of RAM, and a single vCPU, running on vSphere:


Limitations for this VM have been individually configured as follows:





I wanted to use limitations to give a performance baseline. Although this is a much lower specification than most Session Hosts would have, it will prove the concept for this test.

Next I configured the Session Host with the WEM Agent and imported the default baselines as per Citrix documentation. Within the Console, we can then see the Memory Management options:


According to the Administration Guide, this enables the following:


For the purposes of this test, I am going to set the idle limit time to 5 minutes. I will be using TestLimit, a command line tool to simulate high memory usage, available here:

I’ve configured a batch file that will start TestLimit64.exe, and consume 3.5GB of RAM (from a total of 4GB assigned to the session host).
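For reference, a sketch of what that batch file can look like – the path is from my lab, and the flags are as documented by Sysinternals (verify with testlimit64.exe /? before relying on them):

```bat
@echo off
rem -d leaks and touches private memory in chunks (1 MB each here);
rem -c caps the number of chunks, so 3584 x 1 MB = 3.5 GB
"C:\Tools\TestLimit64.exe" -d 1 -c 3584
```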

Prior to any WEM configuration being applied, running this batch file causes Memory Usage to rise as expected:


This remains until the process is closed manually.

Next, I ran the same process but for a user logged on with active WEM Settings – including Memory Management. Initially we saw the same rise in memory:


I then waited 5 minutes (the time limit we set earlier), with the application running in the background, and then checked the stats again:


As you can see the excess memory consumed by this application has been released – and is now available to other processes running on the Session Host. I tested this multiple times on different machines and session hosts, and saw the same result each time.

This is potentially very useful for situations where a single user runs a program that executes periodically, but sits with high RAM consumption in the background. Releasing under-utilized RAM will improve the session experience in the event that RAM capacity is being reached.



Book Review – “Inside Citrix – The FlexCast Management Architecture” by Bas van Kaam

Recently I have been reading the excellent “Inside Citrix – The FlexCast Management Architecture” by Bas van Kaam. I wanted to write a quick post about this book, as it’s well worth a read for anyone working with Citrix desktop virtualization products.

You can purchase the book here.

What I really like about this book is how thorough the sections are – no area is left untouched. Each element of the FlexCast infrastructure is covered, including the history behind FMA, and an overview of how FMA is different to IMA. As well as thorough details, there is also an excellent troubleshooting section, which goes through various tools and troubleshooting methods, and various cloud services available to assist.

Also, each section has a “Key Takeaways” area at the end, which provides an overview – highlighting the key elements and considerations covered. This is really useful if you are wanting to improve your knowledge in a particular area. Just by reading this book I’ve already uncovered, and filled, gaps in my own knowledge – this for me is the main reason for reading any technical publication.

Overall, for anyone working with Citrix products this book is an excellent read in my opinion – not only useful for improving your knowledge, but also serving as a reference guide when there are decisions to be made.


Citrix Workspace Environment Management – CPU Management

One of the great features in Citrix Workspace Environment Management (WEM) is the ability to intelligently manage CPU usage. This is especially important in a shared desktop scenario – where the actions of one user could ruin the experience for another.

To test this I am going to demonstrate a user running SuperPI (a CPU stress testing tool) and how this can be managed with WEM.

We start with a Session Host virtual machine of the following specification:


In vSphere this VM is limited to 2000MHz of processing power, to provide a limit that won’t be affected by other VMs running on my host, and to provide a CPU benchmark:


Next I logged on a user before any WEM config was applied, and ran SuperPI. Note that the user is able to use all of the processing power available:


This is also confirmed by the CPU utilisation within vSphere – you can see that at the time SuperPI was started the CPU utilisation rocketed to nearly 100%:


I then tested this with a WEM Configuration applied. I started by importing the recommended default settings provided by Citrix with the Software:


With the default settings in place, CPU usage protection applies when the CPU usage goes over 25%:


It’s worth noting that when I ran SuperPI without any CPU protection, the rest of the sessions felt sluggish – opening windows and launching Notepad, for example, took significantly longer than would be considered normal, because the CPU was saturated with requests from SuperPI.

Next I ran SuperPI as a user logged on with WEM Configuration applied:


What’s very noticeable now is that the CPU usage of SuperPI varies greatly (before it was a constant 99%) – when launching other applications or items, the utilisation of the CPU by SuperPi drops significantly.

I noticed ranges from 65% down to 30%. This was more noticeable when launching other applications in other sessions – these were not sluggish or slow to respond. Each application launch was accompanied with a noticeable drop in usage by SuperPI to accommodate the new process.

This is the CPU management feature of WEM controlling this application – and making the experience better for all users on the Session Host.





Quick Tip – Change SQL Server Collation

I recently needed to change the SQL Collation to allow for a System Center Configuration Manager install, but I forgot to do this when I built the SQL Server. To change this after installing SQL, run the following command:
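As a sketch, the command takes the following form – run from the SQL Server installation media, with the instance name, sysadmin account, and collation below as placeholders for your own values:

```bat
Setup.exe /q /ACTION=RebuildDatabase /INSTANCENAME=MSSQLSERVER ^
  /SQLSYSADMINACCOUNTS="DOMAIN\SQLAdmins" /SQLCOLLATION=SQL_Latin1_General_CP1_CI_AS
```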

Note: if you need to see the command output, remove the /q. Also – use with caution; I ran this on an empty SQL Server with no databases on it.

For further information see:

XenDesktop 7.11 – Zones


Many XenApp administrators will remember the Zone Preference Failover (ZPF) features available in previous versions. For those unaware, this allows resources to be served from Hosts within a logical group (a Zone), unless a failure/maintenance condition exists, in which case resources are served from a Secondary Zone.

Using Zones we can provide applications from a single Zone, with failover to another Zone in the event of an issue. This is useful for a great many reasons:

  • Automatic failover to a DR site if there are issues with the Primary Zone’s Session Hosts
  • Automatic placement of users into a Zone that is nearest their location – useful for those with geographically disparate environments.
  • Capacity management – we can seamlessly “spill” users across to a secondary zone in the event that the primary zone reaches capacity.


Zones are configured using the Zones option, under Configuration, within Citrix Studio:


Lab Environment

For the purposes of this demonstration, I have considered a fictional environment shown below. We have two Geographical sites, linked via a wide area network. We have Active Directory Services across both sites, and a replicated SQL Environment. XenDesktop has been configured as a single site, with two Delivery Controllers, and two Session Hosts.



For this demonstration I will show how user preferences can be assigned to resources in a zone, and then seamlessly failed over in the event of a fault or issue.


Creation of Zones

We begin with two Delivery Controllers configured for this farm – obviously in a production environment we would be working with N+1, so at least two Controllers per Zone.


We can then create the Zone Configuration – this is done using the Zones pane in Citrix Studio:


As you can see, the default configuration places all controllers in the Primary Zone (the only zone within the Farm):


To begin, I will create a secondary zone called “Zone 2” and also rename “Primary” to “Zone 1” – in a production environment these could be physical locations, or different Data Centers. To create a new Zone, click on the “Create Zone” within the Actions pane:


You will then be presented with a new window, where the Zone can be named as required. Note – I have selected my Zone 2 Delivery Controller (Z2XD01) to be added to this Zone:


When the details have been populated and Controllers selected, click on “Save”. The new Zone is then created:


After this I updated the Primary zone by right clicking on the Zone and selecting “Edit Zone”:


My Zone configuration is now as follows:


Machine Catalogue Setup

Next I will create two new machine catalogues, each with a single session host, and assign these machine catalogues to Zone 1 and 2. The creation of these is done as you would any other machine catalogue – except we can now select a zone to add the machines to:


I’ve created two catalogues now – one for Zone 1 and one for Zone 2:


Now we will create a single Delivery Group – in this scenario, the Delivery Group spans across the Zones, and contains machines from both Zones:


Note: Create the Delivery Group with machines from one Zone first, and then add in the machines from the 2nd Machine Catalogue after.

Zone Configuration

Before we can assign users to a Zone, we need two new AD Groups to define the users’ home Zone:


Next – we can assign these Groups to Zones within the Console. We do this with the “Add Users to Zone” option:


Adding a user Group to a Zone:


Now – we can see the user configuration for each Zone:




To test this setup I created 5 Test User accounts, and added them all to the Users_Zone_1 AD Group. This will mean that when these users launch a session from our Delivery Group, it will be served by Session Hosts in Zone 1 (as we have associated these users to that Zone). I logged all of the users into StoreFront and launched a session – the result was all users launched desktops from Zone 1 Resources:


Next – I will remove Test User 5 from Zone 1, and place them into a Zone 2 Group. Then I will log them off and back on. The result is shown below – note how a change in the Zone Assignment changes the Session Host used:


Practical Use – Failover!

Having users log into different Session Hosts based on an AD Group is all well and good – but this is not unique to Zone Configuration. What’s great about the Zoning in XenDesktop is that we can automatically fail over between the zones. In the example below, all 5 Users are assigned, by AD Group, to Zone 1.

But – if there’s an issue with the Session Hosts in Zone 1, perhaps a failure or outage, we don’t want to manually have to fail this over. When setting up the Delivery Group, this option was not selected:


This means that because both Machine Catalogues (one in each Zone) are members of the same Delivery Group (which spans the Zones), user sessions can automatically launch in the Secondary Zone. To test this, I placed all machines in Zone 1 into Maintenance Mode, and then logged on all 5 Test Users. The result – users are logged onto machines in Zone 2:


Essentially, we now have a solution that automatically connects users into their primary resource, based on Zone Assignment, but in the event of an issue with that Zone, connects them into resources in a Secondary Zone.

Obviously there are many other elements to consider with a design like this – particularly around the SQL, StoreFront, and NetScaler infrastructure. But for a simple and automatic failover solution, this works really well.

Citrix Self Service Password Reset – Setup


Self Service Password Reset (SSPR) is a technology that allows users to enroll and answer a series of questions, which then allows them to reset their password later on should they forget it.

Before setting up SSPR you need to have Citrix StoreFront setup and secured with an SSL Certificate, and an SSL certificate available to use for the SSPR server. You also need to have Platinum XenDesktop licensing to use this feature. I also have a small XenDesktop 7.11 environment setup so that I can test successful launching of applications after a password reset.

Environment Overview

Below is a diagram of the virtual environment I have created for this lab:


Not too much to set up for this – the lab was created in a virtual environment, utilising pfSense as the gateway. I also have a client machine and a XenDesktop infrastructure, which aren’t in the picture. All VMs are running Windows Server 2012R2, with 1 vCPU and 4GB RAM. All storage is across SSDs local to the VMware host.

SSPR Setup:

The installation media for Self Service Password Reset is included on the XenDesktop 7.11 Media:


Installation is fairly straightforward – the next few screenshots cover the install process:








Installation of SSPR is now completed – and we can start configuring everything.

IIS Changes

We need to configure a few basic IIS settings on the SSPR Server – these are detailed below:

Install an SSL Certificate:

Firstly, we need to open the IIS Management console, and open Server Certificates:


You will need to specify a certificate for use with the service, and then bind that to the default website. In my case I have used a self signed certificate, which I have installed on both the SSPR and StoreFront servers in this lab:


Bindings adjusted as per the below screenshot:


Adjust the Authentication Settings:

As per the Citrix article you will need to adjust the authentication settings for the MPMService website. Open the MPMService Website in the IIS Management Console:


Click on Authentication, then Windows Authentication, and then Advanced Settings:


Un-tick “Enable Kernel-mode authentication”:


Click “OK” and then click on “Providers”:


Add “Negotiate:Kerberos” and remove all other Providers:


Click OK. Then browse back to the MPMService Website in the IIS Management Console, and ensure that under SSL Settings, “Require SSL” is selected:


Click “Apply” and then close the IIS Management Console.
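For reference, the same IIS changes can be scripted with appcmd – a sketch, assuming the default site name and the “MPMService” application path match your install:

```bat
rem Disable kernel-mode authentication and leave only the Negotiate:Kerberos provider
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site/MPMService" ^
  -section:system.webServer/security/authentication/windowsAuthentication ^
  /useKernelMode:"False" /-"providers.[value='Negotiate']" /-"providers.[value='NTLM']" ^
  /+"providers.[value='Negotiate:Kerberos']" /commit:apphost

rem Require SSL on the MPMService application
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site/MPMService" ^
  -section:system.webServer/security/access /sslFlags:"Ssl" /commit:apphost
```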

Setting up the Self Service Password Reset Server

Before running the setup of SSPR, you’ll need two service accounts ready for use. I’ll cover off the permissions needed for each account later on in this post:

  • svc-ssprdataprox – Data Proxy Account – reads and writes data to the central store.
  • svc-ssprselfservice – Self-Service Account – unlocks accounts and resets passwords on user AD objects.



To start the SSPR setup process, log onto the server running SSPR and click on “Citrix Self-Service Password Reset Configuration”:


The console then loads and you are presented with the following:


Before any configuration, we need to create a central store, as per the Citrix article.

To do this, use the Server Manager console, and then click on “File and Storage Services”, and then “Shares”:


Click on “Tasks” and then “New Share…”


Continue with the “SMB Share – Quick” option:


Select “Type a custom path”:


Then create a new folder – mine is called “SSPRShare” below – and click “Select Folder”:


Click Next, and then type the share name as “CITRIXSYNC$” and click Next:


Select “Access Based Enumeration”, uncheck “Allow caching of share”,  and select “Encrypt data access”:


Click Next and then “Customise Permissions”, and then “Disable Inheritance”:


Select “Convert inherited…”, and then remove all users except for CREATOR OWNER, SYSTEM, and the Local Administrators Group:


We then need to modify the permissions assigned to creator owner, so that the permissions are as follows:


We then need to add the Data Proxy Account we created earlier with Full Control of the share, and also the NETWORK SERVICE account with Read permission:


Once this is done, we create two subfolders, “CentralStoreRoot” and “People”. The Data Proxy Account requires Full Control of these folders:
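The share creation above can also be sketched in PowerShell – paths and the account name are from my lab, and the NTFS permission changes still need applying as described:

```powershell
# Create the folder structure and the CITRIXSYNC$ share
New-Item -ItemType Directory -Force -Path 'D:\SSPRShare\CentralStoreRoot', 'D:\SSPRShare\People'
New-SmbShare -Name 'CITRIXSYNC$' -Path 'D:\SSPRShare' `
    -FolderEnumerationMode AccessBased -CachingMode None -EncryptData $true `
    -FullAccess 'LAB\svc-ssprdataprox'
```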


Once this is done, we can go back to the SSPR Setup Console:


Click on “Service Configuration”, and then on “New Service Configuration” on the right hand side:


We have already setup the Central Store, and installed an SSL Certificate – so we can press next, and enter the UNC path to our Central Store:


Press “Next” and then tick the correct Domain, and select “Properties” – for this guide I will be configuring a single domain only for SSPR. Then we need to enter the details of the Service Accounts we created:


Enter the account details and press OK, and then press Next. The Self-Service Password Reset service is then created:


Click on Finish, and then we are taken back to the Console. Next we need to create the user configuration, by selecting the User Configuration pane, and then clicking “New User Configuration” on the right hand side. We can now choose an LDAP Path or an AD Group for the users eligible for Self Service Password Reset – I’m choosing an AD group 🙂


Click on Next, and enter the License Server Name:


We can now configure the options users will have when using the service – either a password reset, and/or the ability to unlock their account. I’m going to allow them to use both. Also, we need to enter the URL to the SSPR Service, which in my case, is the server name:


Then we can click “Create” and the User Configuration is created. Next we move into Identity Verification – this is where the questions come in!

Back in the main console, click on “Identity Verification”, and then “Manage Questions” on the right hand side. We then need to select the Default Language, and choose whether to mask answers. I’m going to use English and choose not to mask answers for this Lab Setup:


After clicking “Next”, we can customise the questions and add more or create a new Group of questions. I’ve customised a couple of the default questions for demonstration purposes:


Once you are happy with the questions created, click “Next”, and the ordering of questions can be adjusted. Once happy – click Finish, and the configuration is completed.

Delegation of Active Directory Rights for the Self Service Account

Before we can set up StoreFront to use the SSPR Service, we need to delegate permissions to the AD account used for password resets and account unlocking – the Self-Service Account. We will do this in Active Directory with the Delegation of Control Wizard. Note: you will need to delegate control to all OUs where users of the SSPR system reside.


First, we select the Self Service account we have created:


And then click “Next”:


Select “Create a custom task to delegate”. Then select “Only the following objects in the folder” and select “User objects”:


In the next window select “General” and “Property Specific”:


Ensure that the following permissions are checked, and then press next:

  • Read lockoutTime
  • Write lockoutTime
  • Reset Password
  • Change Password
  • Read userAccountControl
  • Write userAccountControl
  • Read pwdLastSet
  • Write pwdLastSet


Click “Finish” and then the delegation has been setup for the Self Service Account. We can now move on and configure Citrix StoreFront.
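Alternatively, the same delegation can be scripted with dsacls – a sketch, with the OU distinguished name and domain swapped for your own (double-check the grants against dsacls /?):

```bat
rem /I:S inherits the grants to user objects beneath the OU
dsacls "OU=SSPR Users,DC=lab,DC=local" /I:S ^
  /G "LAB\svc-ssprselfservice:CA;Reset Password;user" ^
     "LAB\svc-ssprselfservice:CA;Change Password;user" ^
     "LAB\svc-ssprselfservice:RPWP;lockoutTime;user" ^
     "LAB\svc-ssprselfservice:RPWP;pwdLastSet;user" ^
     "LAB\svc-ssprselfservice:RPWP;userAccountControl;user"
```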

Configuring Citrix StoreFront for Password Self Service

To begin, open the StoreFront Console, and visit the Store you wish to add the SSPR Site to:


Then click on “Manage Authentication” on the right hand side:


Click on the settings option next to “User name and password”, and then select “Configure Account Self-Service”:


Next, choose “Citrix SSPR” from the drop down list:


And then press “Configure”. Then tick the boxes for “Enable password reset” and “Allow account unlock”, and then enter the URL to the SSPR Server:


Click OK 3 times, until you are back to the main StoreFront Console.

SSPR is now setup – and we can test with a user!

SSPR Signup Process and Testing

Now we have SSPR setup – we can begin the signup process and start testing. To do this, log into StoreFront with a user account that is a member of the AD Group we assigned in the SSPR Console (SSPR_Users in my case). You will then see the following extra “Tasks” option when you are logged in:


When we click on tasks, we can enroll in the Self Service Password Reset system:


Click on “Manage Security Questions” – before we can proceed we are required to authenticate again:


We will now see the security questions we defined earlier – and can provide answers:


Once these have been completed – we are presented with the following screen:


This means that the user is now enrolled and can use the Self Service Password Reset system.

Testing Self Service Password Reset:

Now that we have enrolled – we can make use of the SSPR System. If we visit the StoreFront website we also see an additional section of the login screen, labelled as “Account Self-Service”:


When a user who has forgotten their password visits the site, they can click on “Account Self-Service” to start the password reset process. For this test I will assume the role of a user who has forgotten their password, so I clicked on “Account Self-Service” and then selected the “Reset password” option:


After Clicking “Next” I am presented with the following screen, where I enter my username in the format domain\username, and then click “Next”:


After this, I am presented with the Questions that we previously setup, and answered for this test user:


After answering all 4 questions, I am presented with a change password dialogue:


A new password can be entered and “Reset” pressed:


The user’s password has now been changed, and we can login to our published resources – all without the need for any helpdesk calls.