Testing out Project Honolulu

Recently I have been testing out something new – Project Honolulu from Microsoft. I first heard about this on Twitter (thanks to Eric @XenAppBlog), and was interested straight away in what it could offer. Project Honolulu is a new way to manage Windows Server – a web-based method that does not rely on the traditional Server Manager GUI. Functionality is similar, and offers the usual range of configuration options, as well as the ability to manage roles and features as you would normally expect.

You can download Project Honolulu here. Windows Server 2016 is supported natively, but for Windows Server 2012 R2 support you will need to install WMF 5.0 (KB3134758).
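If you’re not sure whether a 2012 R2 server already has WMF 5.0, a quick check from a PowerShell prompt is all that’s needed:

    # 5.0 or later is required before Honolulu can manage Windows Server 2012 R2
    $PSVersionTable.PSVersion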

Project Honolulu has a number of ways to deploy – but I went with a simple install on a single server within my lab. Once this is completed (it’s an easy next next next done install), you are presented with the following screen, which opens up in your default browser:

From here – we can add server connections. Note: Standalone, Failover Cluster, and Hyper-Converged systems are supported:

After I’d added a few servers from my lab, the main screen appeared as below. You can also import servers from a text file – so an export from AD is possible too:
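As a rough sketch of that AD export (assuming the ActiveDirectory PowerShell module is available – the filter and output path below are just examples):

    # Export Windows Server computer names from AD to a text file for import into Honolulu
    Import-Module ActiveDirectory
    Get-ADComputer -Filter 'OperatingSystem -like "*Windows Server*"' -Properties OperatingSystem |
        Select-Object -ExpandProperty Name |
        Out-File -FilePath C:\Temp\servers.txt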

From here, we can see the status of the servers I have added and then drill down further into the options by clicking on a server name. The overview screen gives the usual range of information we’d expect to see:

Particularly nice is the metric display, which gives an overview of CPU, Memory, Ethernet, and Disk activity. This is real-time data only – but useful for monitoring key servers/clusters, perhaps on an Ops display board or large screen etc.:

As well as the range of metrics available, we have a set of management tools we can take advantage of. Particularly interesting is the ability to manage elements like Network Adapters, Services, and Roles/Features, as well as to view Event Log entries and the Registry:

Management of Services is also a very useful feature – allowing services to be stopped and started (I wish it had a restart button though!) from the Web Console. This is particularly useful for Managed Service Providers – when the 2am call comes in that a failure has occurred, instead of a VPN into an RDP Session into another RDP Session, you can fire up a Web Interface and restart the service from there (NAT rules and an SSL cert required of course…):

You’ll notice here that I’ve highlighted a couple of Citrix Services too – Project Honolulu allows you to manage all services running on a supported machine. So this is great for managing 3rd Party applications and services too. The lightweight nature of the system also means that this can be added to existing systems with ease (a single installer and a list of servers).
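As an aside – until a restart button appears, the same end result can be achieved with plain PowerShell from a management machine, assuming PowerShell Remoting is enabled (the server and service names here are purely illustrative):

    # Restart a service on a remote server – names are examples only
    Invoke-Command -ComputerName LABSERVER01 -ScriptBlock {
        Restart-Service -Name Spooler -Verbose
    }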

I’m really interested to see where this Project will go – in particular, it makes the use of Server Core much more accessible, because a familiar and common interface can be used for management of multiple servers. It also allows simple management of basic server configurations, as well as Service management for Microsoft and Third Party applications. Any environment could probably benefit from a single interface that allows basic configuration and Service restarts… the key question is… where will this Project go next?

I’d really like to see support for more configuration changes, for example, customisable PowerShell options (e.g. this button in the interface runs this remote command) or support for a PowerShell session via the Web Interface. Also it would be great to see support for Third Party software – for example, additional modules that could be included to provide web based management of other software items on the server.


XenDesktop Site Failover – asking the community…

Recently I’ve been doing a lot of work on large deployments that require active/active or active/passive setups, whereby options to fail over to a DR site are either required as part of the design, or presented as a future enhancement to the customer. Most of these have been fairly open questions – “How can we achieve this?” for example. It’s a question that is almost completely subjective; it depends entirely on business needs, and what the available budget is.

Subjective elements aside, it is a much debated technical area, so I opened up a question on the MyCUGC forums to ask the community how they were going about this. I also tweeted the question out @jakewalsh90:

I based my question around the concept that is most common (certainly to me at least) – an active/active or active/passive design, with a primary site and a secondary (DR/Backup) site. This is without a doubt the most common environment type that I encounter, predominantly in small and medium enterprises up to around 5000 users.

The main purpose of this post is to summarize the elements (both technical and strategic) that could be considered, and the different options we can lean on to help achieve the desired results – and also to highlight just how good the response from the Citrix Community was on this question!

Key Considerations

By far the most common point that came out of the discussion around this was – “it depends”. There are a great number of factors to consider for any solution like this, including:

  • Budget – what is affordable and achievable with our budget?
  • Connectivity – are we limited by latency/bandwidth/other traffic etc? Are we using Dark Fiber, MPLS, VPN etc?
  • DC Locations – if we are planning for a Secondary/DR site, is it likely this would ever be affected by an issue that took down our primary site? (Hurricanes, Floods, Earthquakes etc.)
  • Capacity – is this a full DR/Secondary solution or just a subset of applications and users?
  • Hardware – do we have the hardware to achieve this? Is it within our budget?
  • Software – can we do this within our current licensing or do we need an uplift?
  • Applications – are we replicating everything or just key applications? How will these applications perform in another DC? (Applications may have web/database dependencies based only in a single site).
  • User Data – are we replicating user data too? How are profiles going to be handled?
  • Failover method – are we utilizing a Citrix solution for this, or perhaps a product like VMware Site Recovery Manager? How is failover undertaken – automatic? manual?

Citrix Considerations

Aside from the many other factors affecting a question like this, our discussion focused on the Citrix technical elements aimed at DR/Failover options available. I’ve highlighted the key points we discussed, and gathered a number of resources that I think are helpful in discovering these further:

 

GSLB via NetScaler for StoreFront (Access Layer) – this was a common theme throughout the discussions, and there seems to be a general consensus that utilising GSLB on NetScaler is a logical way forward. Creating an access layer that utilizes NetScaler GSLB and StoreFront, whilst spanning the DCs, will give a solution that is resilient and reliable, and won’t require complex replication/management. Dave Brett has written an excellent article on setting this up.

 

XenDesktop Site with Zones – Zones in XenDesktop are an awesome way to split geographically (or logically) separate resources, whilst maintaining the ease of management and reduced overhead of only having a single farm. Utilizing Zoning to form an active/active or active/passive solution is simple in configuration terms too. With Zones, users can be automatically redirected to a secondary zone VDA during a failure of their primary zone VDA.

 

Local Host Cache – as I am sure you are aware, Local Host Cache is now back in XenDesktop, and provides additional tolerance for database outages. LHC allows connection brokering operations to take place when the following issues occur:

  • The connection between a Delivery Controller and the Site database fails in an on-premises Citrix environment.
  • The WAN link between the Site and the Citrix control plane fails in a Citrix Cloud environment.

See https://docs.citrix.com/en-us/xenapp-and-xendesktop/7-12/manage-deployment/local-host-cache.html for further details on LHC.

You can check to see if LHC is on by running the following PowerShell: Get-BrokerSite. I’m running 7.15 in my lab so it is enabled by default:
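For reference, a minimal check looks something like this when run on a Delivery Controller (note that on some earlier 7.x releases Connection Leasing may also need to be disabled when switching LHC on):

    # Load the Citrix snap-ins and check whether Local Host Cache is enabled
    Add-PSSnapin Citrix*
    Get-BrokerSite | Select-Object LocalHostCacheEnabled, ConnectionLeasingEnabled

    # If required, LHC can be enabled with:
    Set-BrokerSite -LocalHostCacheEnabled $true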

 

SQL Options – SQL is a key component of the FMA architecture, so any solution (with or without DR/Failover) needs a reliable platform for hosting the Site databases. Usually my go-to approach is to mirror any databases using SQL Database Mirroring. AlwaysOn Failover Cluster Instances and AlwaysOn Availability Groups are both possible alternatives – particularly given that Database Mirroring is being deprecated.

When DR is considered, this brings additional requirements around providing suitable hardware and SQL Server licensing.

See pages 101-102 of the updated Citrix VDI Handbook for further information on SQL redundancy and replication options: http://docs.citrix.com/content/dam/docs/en-us/xenapp-xendesktop/7-15-ltsr/downloads/Citrix%20VDI%20Handbook%207.15%20LTSR.pdf
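As a quick side note, you can see how the Site database is currently configured (and whether a mirror is in play) from a Delivery Controller – a minimal sketch:

    # View the current Site database connection string on a Delivery Controller
    Add-PSSnapin Citrix*
    Get-BrokerDBConnection
    # A mirrored configuration will include a "Failover Partner=" entry in the connection string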

 

Using StoreFront to handle the Failover (Site/Delivery Controller Level) – From StoreFront 3.6 it has been possible to Load Balance Resources across controllers, allowing StoreFront to effectively handle failover between XenDesktop Farms. (See https://www.citrix.com/blogs/2016/09/07/storefront-multi-site-settings-part-2/ for more details on this)

This method allows us to have two XenDesktop Farms – and to publish identical resources which are then load balanced by the StoreFront server. Failover would only occur in the event that a Delivery Controller was unavailable in the primary site. This solution would still allow for a GSLB approach with StoreFront and NetScaler too.

The main disadvantage of this approach is the increased management overhead of the additional XenDesktop Farm, but this can be managed by having good practices in place.

This is configured in the Delivery Controller section of a StoreFront site – and requires both farms to publish the resources required for failover. See below – two Farms configured in the Delivery Controller section within a StoreFront site:

We also need to configure the “User Mapping and Multi-Site Aggregation Configuration”. Note that below I have configured all Delivery Controllers for “Everyone” – but this may need to be adjusted in a production environment:

You will also need to configure resource aggregation as below. For failover, do not tick “Load Balance resources across controllers”. However, “Controllers publish identical resources” will need to be ticked so that identically named published applications or desktops are de-duplicated:

With this set, any resources published in both farms will be launched from the Secondary Site in the event that the Delivery Controllers in the first site fail to respond.

 

Application Level Failover using Application Group Priorities – it is also possible to use application groups with priorities to control the failover of applications. When you create an application group in XenDesktop 7.9+ you are able to configure this:

Gareth Carson has a great blog post on this which explains the functionality in more detail.

In Conclusion…

Hopefully this post has been helpful in highlighting some of the considerations for a DR/Second Site scenario, and some of the Citrix technologies and great community resources out there that help make the process a little easier. It’s been useful for me to ask the question and compile a post like this, because I’ve had to look into the various technologies and find out more about them in my own lab before writing it up… until next time, cheers!


Citrix Workspace Environment Management – IO Management

I’ve been blogging a lot this year on the merits of Citrix Workspace Environment Management (WEM) and the various features it provides. Another feature is I/O Priority – which enables us to manage the priority of I/O operations for a specified process:

To demonstrate this, I am going to run IOMeter (a storage testing tool that generates I/O load and also records CPU utilisation during testing), and SuperPi (a tool that calculates Pi to a specified number of digits, consuming large amounts of CPU during the calculation).

Before making any WEM configuration changes, on my virtual desktop the results are as follows:

IOMeter (Using the Atlantis Template – available here) –  shows 6.56% CPU Utilisation, and 3581 I/Os per second:

SuperPI calculation to 512K – 7.1 seconds:

Next I added the IOMeter and SuperPi executables into WEM, and set the priority to very low:

As a result of doing this the IOMeter results are significantly reduced, and the calculation time for SuperPi has increased significantly:

IOMeter Result – around 60% reduction in I/O per second, and 2% CPU usage reduction:

SuperPI – time to calculate has increased by nearly 200%:

From this test – it is clear to see that I/O Management within Workspace Environment Management is an effective way to control the I/O operations of specified processes. Whilst you might think slowing down the performance of an application is unlikely to be a major requirement for many of us – the ability to control particularly resource intensive applications is a definite win for complex environments. If a particular application is causing performance problems (for example degrading the performance for others) then this provides a suitable solution to manage that process.

Citrix Workspace Environment Management – Process Management

After testing out the excellent CPU and Memory management features in Citrix Workspace Environment Management (WEM), I wanted to blog about how processes can be controlled using the software.

Prior to starting this test, I have a basic Citrix XenDesktop environment configured, a WEM environment configured, and the relevant group policies in place to support this.

To prevent processes from running, we browse to System Optimization, and then Process Management:

From here we can enable process management:

Next we have two options: whitelist or blacklist. If we whitelist, only those executables listed will be allowed to run, whereas a blacklist will block only those listed.

I’m going to test out a blacklist:

We can exclude local administrators, and also choose to exclude specified groups – for example perhaps a trusted subset of users or specific groups of users who need to run some of the applications we wish to block.

For this test I am going to add notepad.exe to the list:

Next I saved the WEM configuration, refreshed the cache, and then logged into a Desktop Session to test the blacklist. Upon firing up notepad I am greeted with the message:

Bingo – a simple and effective way to block processes from running. This would be very effective when combined with a list of known malicious executables for example, or known problematic software items.

In a future release I’d love to see more granularity in this feature – for example blacklists, with the ability to whitelist processes for certain groups, rather than as a whole. This would enable control of applications on a much more granular level – for example, blocking “process.exe” for Domain Users, but allowing it for a trusted group of users.

 

Citrix Connection Quality Indicator

Overview

Connection Quality Indicator is a new tool from Citrix designed to inform and alert the user to network conditions that may affect the quality of the session they are using. Information is provided to the end user via a notification window, which can be controlled using Group Policy.

Installation is supported on the following platforms:

See https://support.citrix.com/article/CTX220774 for more details.

Test Environment

My environment consists of a basic Citrix XenDesktop 7.12 installation:

  • 1x Desktop Delivery Controller (Local Database)
  • 1x Citrix StoreFront
  • 2x XenDesktop Session Host (Static VMs)

All VMs are 1 vCPU, 4GB RAM, and Windows Server 2016.

Installation

Connection Quality Indicator needs to be installed on each Session Host, or onto a master template – and follows a simple next, next, finish installation with no configuration during install:

Post installation we can see the program installed via Control Panel:

Group Policy Configuration

As outlined in CTX220774 there are also Group Policy templates that can be used. I have opted to copy these to the Central Store within my domain. The templates can be extracted from any machine with Connection Quality Indicator installed:

En-US templates are within the configuration folder ready for use:
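Copying them up to the Central Store is a couple of one-liners – a rough sketch, assuming the templates have been extracted to C:\Temp\CQI and the domain is lab.local (adjust both to suit your environment):

    # Copy the CQI ADMX/ADML templates into the Group Policy Central Store – paths are illustrative
    Copy-Item 'C:\Temp\CQI\*.admx' '\\lab.local\SYSVOL\lab.local\Policies\PolicyDefinitions\'
    Copy-Item 'C:\Temp\CQI\en-US\*.adml' '\\lab.local\SYSVOL\lab.local\Policies\PolicyDefinitions\en-US\'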

Note – once placed into the Central Store, Group Policy Administrative Templates will be available as below:

Within Citrix Components we now have access to the Policy Settings for Connection Quality Indicator:

We are then able to modify the following settings:

Enable CQI – this setting allows us to enable the Utility, and also configure the refresh rate for data collection counters:

Notification Display Settings – from this setting we can configure the initial delay before the tool alerts the user to the connection quality rating, and define a minimum interval between notifications:

Connection Threshold Settings – this setting is perhaps the most interesting, because it is here we can tailor the tool to any specific environmental requirements. From this setting, we can control the definitions of High and Low Latency (in milliseconds), High and Low ICA RTT (in milliseconds), and the High and Low bandwidth value (in Mbps):

For the purposes of this demonstration – I’ve used default settings all round.

After configuring the group policies – I logged into a Desktop Session with the tool installed. 60 seconds after login the Window appeared with the session quality result:

If the cog symbol is clicked the user has the option to modify the location of the display window, snooze the tool, and also to see the test results:

Unfortunately, I have no method in my lab for degrading network performance artificially, or increasing latency etc. – but to prove that the metrics were functional, I adjusted the Group Policy settings so that some fairly unobtainable figures were used for all settings – and thus the tool would grade the connection quality differently:

This highlights how the tool can be used to identify connection quality through tailoring the GPO for a specific environment. After changing these settings, rebooting the Session Host (the lazy way of updating Group Policy!), and logging back in, the tool reported the following:

This is a very useful option within the tool – as we can specifically modify the settings to suit a range of environments. In some environments having low bandwidth might not be an issue, but high latency might be for example.

Conclusion

Overall this tool is very useful for giving the end user an insight into the quality of the network environment, and provides real time feedback on this quality. This is great for keeping end users informed, and managing expectations of performance too. What I also like is that end users will be able to see differences based on where they work – for example, a user with a “Strong Connection” inside the office, but a “Weak Connection” over 3G or at home, would know what sort of experience to expect, and would have real time data to support any troubleshooting moving forward.

 

Testing out the Atlantis USX Community Edition

Recently I’ve been using the Atlantis USX Community Edition, which is a free edition of the Atlantis USX software – specifically for the purposes of testing and learning how USX can improve the performance of a virtual desktop. Atlantis provide a number of guides and videos on the USX Community Edition landing page – including a testing guide, which outlines how to benchmark the software.

For this post I wanted to demonstrate the results I’m getting in my lab – as they give an idea of the benefits a solution like this can bring. As part of this process I’ve been reading up on various testing methods and options, and I eventually settled on using the same configuration detailed in Jim Moyle‘s excellent article – available here. I should also note that the USX Community Edition provides a pre-made IOMeter configuration file (in the Citrix Testing Guide), but I have opted to follow the baseline in Jim’s article.

My test configuration is as follows:

  • 1x HPE ML110 Gen9
  • 40GB RAM
  • 2x 700GB SSD in RAID 0
  • 1x 480GB SSD
  • 1x 1TB HDD

All storage is via an HPE Dynamic Smart Array B140i RAID Controller.

Base VM for testing using IOMeter is a Windows 2012R2 Standard VM with no tuning or modification applied:

My configuration for the USX Appliance is as follows:

Due to the RAM available on my host I went for the small appliance:

All other configuration was standard, and all infrastructure VMs were stored on storage not participating in this testing (so as not to affect the result). The USX CE also includes an excellent management interface, which allows you to monitor the health of the environment, and displays useful statistics:

After setting up the configuration, I decided to test 3 storage configurations, against the IOMeter baseline, and then post the results to give an idea of performance:

Test 1 – VM on 1x HPE 1TB HDD

Test 2 – VM on 2x 700GB SSD RAID 0

Test 3 – VM on Atlantis USX CE Storage

As you can see, the USX CE wins in every storage metric displayed – there is no contest here:

  • In terms of storage throughput, the SSD array provides around 13x the speed of the HDD, but the USX provides around 60x the performance of the HDD, and 5x the performance of the SSDs in RAID 0.
  • The average read and write response times are also significantly different across the board – with the USX read being around 30x faster than the HDD, and the write being around 60x faster than the HDD. The USX also demonstrates performance around 4-5x faster than the SSDs in RAID 0 for average read and write response time.
  • Total IOPS is also a useful metric – again one that the USX appliance claims the prize for; IOPS are around 65x higher than the HDD, and around 5x higher than the SSDs in RAID 0.

Overall – the USX demonstrates around 60x the performance of the HDD, and around 5x the performance of the RAID 0 SSD array in my lab. If you haven’t already tried out the USX Community Edition I would definitely recommend it – not only as a demonstrator of how this technology can improve VDI (and other) workloads, but also because, if (like me) your lab time is precious, anything that speeds up deployment and testing is a real bonus.

 

Citrix PVS – NTFS vs ReFS 2012R2 vs ReFS 2016

I’ve been doing some work recently around Citrix Provisioning Services, and this has prompted me to investigate what new features are available in version 7.11. One that stood out to me was the support for Microsoft’s Resilient File System (ReFS) on Windows Server 2016. This file system is interesting for those with virtualized environments due to the extra speed enhancements around the use of VHD and VHDX files.

What’s also interesting is that Citrix state the type of performance enhancement that can be expected when using this version:

See: https://docs.citrix.com/en-us/xenapp-and-xendesktop/7-11/whats-new.html

So… I decided to test this out!

I started with a basic PVS setup of 1 server and 1 client – but times three to give me three farms: the first a 2012R2 server with the PVS vDisk storage on NTFS, the second 2012R2 using ReFS, and the third 2016 using ReFS. All of the client machines captured for the session hosts were 2012R2, and all base specifications were identical with 4GB RAM and 1 vCPU.

Capture Times:

  • Capture time for NTFS on 2012R2: 8:44
  • Capture time for ReFS on 2012R2: 5:50
  • Capture time for ReFS on 2016: 5:15

Boot Times:

PVS Server Streaming Directory – NTFS on 2012R2:

PVS Server Streaming Directory – ReFS on 2012R2:

PVS Server Streaming Directory – ReFS on 2016:

Testing vDisk Merging:

To test out vDisk merging I created a new maintenance revision and booted the client machines from this:

I then installed a number of applications using Ninite:

Installing these, along with the associated changes, created a differencing disk of around 4.5GB.

I then merged the changes to create a new base:

  • NTFS on 2012R2: 12:44
  • ReFS on 2012R2: 4:04
  • ReFS on 2016: 0:14 (yes… less than 15 seconds!)

Conclusion

As you can see – there is a noticeable speed increase when using ReFS on 2016; in all tests the performance was significantly faster. Capture was around 40% faster. Boot time differences within my lab environment were almost negligible (they were that fast) – but ReFS on 2016 had a 3 second lead over ReFS on 2012R2, and 4 seconds over NTFS on 2012R2. Perhaps the most impressive speed increase was the merge operation though – 12:30 faster on ReFS on 2016 than NTFS on 2012R2!

All in all it’s pretty clear what I will be using when implementing PVS from now on….

VMware OS Optimization Tool and Windows 10

Overview

I’m a big fan of the VMware OS Optimization Tool and its capabilities: not only does it help to optimize VDI environments through a range of settings and templates, it also handles settings that would otherwise be complicated to control without a scripted or policy-based method. I must confess I have been using this tool for some time, but mostly without quantifying the effect (particularly in lab environments, where every spare bit of resource is cherished).

In this blog post I wanted to give an idea of the power of the tool, by doing a side-by-side comparison of Windows 10 operating systems – one optimized using a template available in the tool, and one left untouched.

I’ll aim to cover the following in an Optimized and Non-Optimized capacity:

  • Booting
  • Resource usage, idle 5 minutes after login
  • Roaming User profile size after first login
  • Logon time with a roaming user profile – first login, profile removed from the local machine, and then a second login

2 identical VMs were configured for this test, with the following specification:


Both VMs are identical and the resource limits fall well within the capacity of the host, so I can be sure of no bottlenecks etc.

On one of the VMs I ran the VMware OS Optimization Tool:


I used the LoginVSI Template for my Optimizations:


This template contains lots of areas and settings – created by the good folks over at LoginVSI:


Optimization is simple – just pick a template and click “Analyze” and then “Optimize”:


After this you are presented with a results Window:


Testing:

Test 1 – Boot time (time to logon screen):

  • Optimized: 44 seconds
  • Non-optimized: 45 seconds

Little difference here – to be honest I wasn’t expecting a huge change. Both machines are on SSD storage, with two fast processor cores available, and plenty of RAM – so no real bottleneck.

Test 2 – Resource usage after 5 minutes idle (whilst logged in):


Again – not a huge difference here. But the RAM saving of 0.2GB is worth noting. Multiply 0.2GB up to factor in a 1000 Desktop deployment and that’s 200GB of additional RAM – and when each GB of RAM comes at a price, this is a worthwhile saving.

Test 3 – Roaming profile size after first login

  • Optimized – Local: 110MB, Roaming: 692KB
  • Non-optimized – Local: 124MB, Roaming: 984KB

Not really any huge difference here either – the smaller profile size is likely down to some of the features disabled by the Optimization Tool that would normally write data back into the profile during first login. I’d usually recommend avoiding Windows profiles with any VDI solution anyway – and look to a solution like Citrix Profile Management or AppSense Personalisation Manager instead.
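As an aside, if you want to compare profile sizes in your own environment, a quick PowerShell sketch along these lines does the job (the profile path is purely an example):

    # Sum the size of a profile folder – path is illustrative
    $profilePath = '\\labserver\Profiles\testuser.V6'
    $bytes = (Get-ChildItem -Path $profilePath -Recurse -Force -File |
        Measure-Object -Property Length -Sum).Sum
    '{0:N1} MB' -f ($bytes / 1MB)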

Test 4 – Login with a roaming profile, profile clear out, and subsequent login time (time to start screen):

  • Optimized – First Login (Profile Creation): 23 seconds; Second Login (Loading Roaming Profile): 9 seconds
  • Non-optimized – First Login (Profile Creation): 43 seconds; Second Login (Loading Roaming Profile): 17 seconds

Quite a noticeable difference here. Initial logon time was 20 seconds less (47% faster). Subsequent logins were also noticeably faster – 9 seconds against 17 seconds (also 47% faster). Most of this appears to be due to tasks running during logon. For the initial profile creation this was things like default applications and Windows Store Apps etc – if we look further into the LoginVSI template for the tool, we can see a specific section just for login time reduction:


Overall, this tool has a clear impact on Windows 10 (and other operating systems) for VDI use. Not only does it lead to a reduction in Login Times, we also see a reduction in RAM usage from the VMs too. I’d recommend anyone currently running non-optimized environments to give this tool a go on a test machine and do some comparison themselves. Many of the features within a Desktop OS are unnecessary for VDI machines and a tool that provides a baseline like this is a great starting point.


Citrix Workspace Environment Management – Memory Management

After testing out the excellent CPU management features in Citrix Workspace Environment Management (WEM), I wanted to test out how well it handled applications that were particularly greedy with RAM consumption.

To start – I have a single Windows Server 2012R2 Session Host, with 4GB of RAM, and a single vCPU, running on vSphere:


Limitations for this VM have been individually configured as follows:

RAM:


CPU:


I wanted to use limitations to give a performance baseline. Although this is much lower than most Session Hosts would likely be – it will prove the concept for this test.

Next I configured the Session Host with the WEM Agent and imported the default baselines as per Citrix documentation. Within the Console, we can then see the Memory Management options:


According to the Administration Guide, this enables the following:


For the purposes of this test, I am going to set the idle limit time to 5 minutes. I will be using TestLimit, a command line tool to simulate high memory usage, available here:

https://blogs.msdn.microsoft.com/vijaysk/2012/10/26/tools-to-simulate-cpu-memory-disk-load/

I’ve configured a batch file that will start TestLimit64.exe, and consume 3.5GB of RAM (from a total of 4GB assigned to the session host).

Prior to any WEM configuration being applied, running this batch file causes Memory Usage to rise as expected:


This remains until the process is closed manually.

Next, I ran the same process but for a user logged on with active WEM Settings – including Memory Management. Initially we saw the same rise in memory:


I then waited 5 minutes (the time limit we set earlier), with the application running in the background, and then checked the stats again:


As you can see the excess memory consumed by this application has been released – and is now available to other processes running on the Session Host. I tested this multiple times on different machines and session hosts, and saw the same result each time.

This is potentially very useful for situations where a single user may be using a program that runs periodically, but sits with high RAM consumption in the background. Releasing under-utilized RAM will improve the session experience in the event that RAM capacity is being reached.


Book Review – “Inside Citrix – The FlexCast Management Architecture” by Bas van Kaam

Recently I have been reading the excellent “Inside Citrix – The FlexCast Management Architecture” by Bas van Kaam. I wanted to write a quick post up about this book – as it’s well worth a read for anyone working with Citrix Desktop Virtualization products.


You can purchase the book here.

What I really like about this book is how thorough the sections are – no area is left untouched. Each element of the FlexCast infrastructure is covered, including the history behind FMA, and an overview of how FMA is different to IMA. As well as thorough details, there is also an excellent troubleshooting section, which goes through various tools and troubleshooting methods, and various cloud services available to assist.

Also, each section has a “Key Takeaways” area at the end, which provides an overview – highlighting the key elements and considerations covered. This is really useful if you are wanting to improve your knowledge in a particular area. Just by reading this book I’ve already uncovered, and filled, gaps in my own knowledge – this for me is the main reason for reading any technical publication.

Overall, for anyone working with Citrix products this book is an excellent read in my opinion – not only useful for improving your knowledge, but also serving as a reference guide when there are decisions to be made.