Azure Storage Sync – the easiest branch office file sync solution?

Azure Storage Sync provides the means to synchronise files from various locations into an Azure Storage account and out to endpoints running the Azure Storage Sync agent. In this post I will give a quick overview of how it can be set up to serve branch offices where no VPN connectivity exists. For further information see here: https://docs.microsoft.com/en-us/azure/storage/files/storage-sync-files-planning. Here’s my environment which I will be testing with:

Both of my “Branch offices” are actually VMs running in my home lab – but on isolated networks, so they can only communicate outbound to the internet, with no LAN access.

Essentially – the goal for this article is to show how to configure Azure Storage Sync to replicate files between Branch Office 1 and Branch Office 2, using Azure Storage as the intermediary.

There are a number of key components to this deployment:

  • An Azure Storage Account – this is where our file share will be hosted
  • An Azure File Share – this is where our replicated data will reside
  • An Azure Storage Sync Group – this is where the synchronisation will be configured and managed
  • Two installations of the Azure Storage Sync Agent that will sit on our Branch Office servers

To start, I have created a folder on Branch Office 1 called “HeadOfficeDocs” – this is the folder that I want to replicate, and it contains a couple of folders and files of “business data” that we need replicated across to Branch Office 2:

Next – we need to set up an Azure Storage account and an Azure File Share to host the data. First up, I will create a storage account:

Once the storage account has been deployed we can create the File Share that will host our replicated data:

Next we create the file share and give it a quota:
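
If you would rather script these two steps, here’s a minimal sketch using the Azure SDK for Python (the azure-identity, azure-mgmt-storage, and azure-storage-file-share packages) – the resource group, account name, share name, and quota are all illustrative assumptions:

    # Sketch: create the storage account and file share programmatically.
    # Resource group, account name, share name, and quota are assumptions.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient
    from azure.storage.fileshare import ShareServiceClient

    subscription_id = "<subscription-id>"
    credential = DefaultAzureCredential()
    storage_client = StorageManagementClient(credential, subscription_id)

    # Create the storage account that will host the Azure File Share
    storage_client.storage_accounts.begin_create(
        "rg-filesync",            # resource group (assumed to exist)
        "branchsyncstorage",      # must be globally unique
        {
            "location": "westeurope",
            "kind": "StorageV2",
            "sku": {"name": "Standard_LRS"},
        },
    ).result()

    # Fetch a key and build a connection string for the data-plane client
    keys = storage_client.storage_accounts.list_keys("rg-filesync", "branchsyncstorage")
    conn_str = (
        "DefaultEndpointsProtocol=https;"
        f"AccountName=branchsyncstorage;AccountKey={keys.keys[0].value};"
        "EndpointSuffix=core.windows.net"
    )

    # Create the file share with a 100 GiB quota
    service = ShareServiceClient.from_connection_string(conn_str)
    service.create_share("headofficedocs", quota=100)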

Once this has been created, we can set up the synchronisation! To do this, we need to create a new Azure File Sync resource:

And fill in a few details – note that the location should be the same as where your Azure File Share is hosted:

Once this has been created, we can access the Storage Sync Resource and create a Sync Group:

A Sync Group defines the sync topology that will replicate our files – so if you require different sets of data to be replicated, you will need to use different Sync Groups. For example, we could sync completely different sets of data, on the same or different file servers, in completely different locations, using the same Storage Sync Resource but separate Sync Groups. This provides plenty of flexibility, and the option to use the local file servers (in our Branch sites) as a cache for the Azure File Share, thus reducing the amount of data stored on the local servers: https://docs.microsoft.com/en-us/azure/storage/files/storage-sync-cloud-tiering

Creating the Sync Group is easy – there are just a few basic details to fill in: the Storage Account we have created, and the File Share that will host the data:

Once this is completed we have the basics of our topology in place, and we can register our first server into it. The Registered servers pane will be blank at this point, as we have not yet installed the Azure File Sync Agent on any servers:

To start the process – log onto the first server and download the Azure File Sync Agent from this URL: https://www.microsoft.com/en-us/download/details.aspx?id=57159. Installation of the File Sync Agent is straightforward – just keep clicking Next (you can input proxy settings if you require)… the only configuration is the account we wish to associate the server with, which is completed after setup:

Just click “Sign in” and then follow the instructions – you’ll need to sign into your Azure account, and will then be prompted to select the Azure Subscription, Resource Group, and Storage Sync Service we are registering this server to:

Once this is done – just click “Register”. (You may get prompted to sign in again here – I did.) Once this finishes, the registration process is complete:

Our newly registered server now shows up in the “Registered Servers” pane:

Next we need to configure the Sync Group – so that this server is added and starts to replicate data into our topology. To do this, browse to the Sync Group settings:

From here, we can add the server:

This is just a case of selecting the server from a drop-down, and then entering the path where our data exists. Note – I am not using Cloud Tiering here, as I want all data on all replication points within the topology:

Once this is done our server is added to the Sync Group; it will initially show as provisioning, then pending, and finally as healthy:

If we now look in the Azure File Share – we can see that the data from the server has been replicated into the Cloud:

So – we now have a single server in a replication topology between itself and an Azure File Share. If I add a new folder to the Branch Server – we see this replicated onto the Azure File Share after a short time:
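
If you want to verify this without opening the Portal, a quick sketch using the same azure-storage-file-share package can list the share contents (the connection string and share name are assumptions):

    # Sketch: list the file share contents to confirm replication has happened.
    from azure.storage.fileshare import ShareClient

    share = ShareClient.from_connection_string(
        "<connection-string>", share_name="headofficedocs"
    )
    for item in share.list_directories_and_files():
        kind = "DIR " if item["is_directory"] else "FILE"
        print(kind, item["name"])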

Next – I will deploy the Agent to a new server (in another Branch) to test out the replication and initial sync. To do this it’s just a case of installing the Agent exactly as we did before – and ensuring that we register this server to the same Storage Sync Service. Once this has been completed, we add the server to the Sync Group (exactly as before – but with a different path if required) and then we will start seeing data being synchronised:

Note that I have used a different path – you can change this on a per-server basis if required, so there is no need for all servers to have an identical disk arrangement. If we look in the Sync Group pane after allowing a short while for the sync to take place, we can see both servers are set up:

We can also see that the data has been replicated to the 2nd Server I have added:

Bingo – we now have a working topology that will keep data in sync between our offices using Azure File Sync. No VPNs required, no complicated configuration – just two Agent installations and some basic Azure configuration. This provides a simple and effective way to keep branch site data in sync, and covers a number of potential use cases where complicated setups would otherwise be required.

Until next time, Cheers!

Azure Lab Services – creating an effective and reliable testing environment

Azure Lab Services (formerly DevTest Labs) is designed to allow for the rapid creation of Virtual Machines for testing environments. A variety of use cases can be served using DevTest Labs – for example, Development Teams, Classrooms, and various Testing environments.

The basic idea is that the owner of the Lab creates VMs or provides a means to create VMs, which are driven by settings and policy, all of which is configurable via the Azure Portal.

The key capabilities of Azure Lab Services are:

  • Fast & Flexible Lab Setup – Lab Services can be set up quickly, but also provides a high level of customization if required. The service also provides built-in scaling and resiliency, which is managed automatically by the Labs Service.
  • Simplified Lab Experience for Users – Users can access the labs in ways that suit them, for example with a registration code in a classroom lab. Within DevTest Labs an owner can assign permissions to create Virtual Machines, manage and reuse data disks, and set up reusable secrets.
  • Cost Optimization and Analysis – A lab owner can define schedules to start up and shut down Virtual Machines, and also set time schedules for machine availability. Usage policies can be set on a per-user or per-lab basis to optimize costs, and analysis allows usage and trends to be investigated.
  • Embedded Security – Labs can be set up with private VNETs and Subnets, and shared Public IPs can also be used. Lab users can access resources in existing VNETs using ExpressRoute or S2S VPNs so that private resources can be reached if required. (Note – this is currently in DevTest Labs only.)
  • Integration into your workflows and tools – Azure Lab Services provides integration into other tools and management systems. Environments can be provisioned automatically using continuous integration/deployment tools in Azure DevTest Labs.

You can read more about Lab Services here: https://docs.microsoft.com/en-us/azure/lab-services/lab-services-overview

I’m going to run through the setup of the DevTest Labs environment, cover a few key elements, and look at the use cases for these:

Creating the Environment:

This can be done from the Azure Portal – just search for “DevTest Labs” and then create the Lab. Note – I have left Auto-shutdown enabled (this is on by default at 19:00 with no notification):

Once this has been deployed, we are able to view the DevTest Labs Resource overview:

From here we can start to build out the environment and create the various policies and settings that we require.

Configuring the Environment

The first port of call is the “Configuration and Policies” pane at the bottom of the above screenshot:

I’m going to start with some basic configuration – specifically to limit the number of Virtual Machines that are allowed in the lab (total VMs per Lab) and also per user (VMs per user). At this point I will also be setting the allowed VM sizes. These are important configuration parameters, as with these settings in place we effectively limit our maximum compute cost:

[total number of VMs allowed in the lab] x [hourly cost of the most expensive VM size permitted] = the maximum hourly compute cost of the lab environment

This is done using the panes below:

First up, setting the allowed VM sizes. For this you need to enable the setting and then select any sizes you wish to be available in the Lab. I have limited mine to just Standard_B2s VMs:

Once we have set this up as required we just need to click “Save”, and then we can move on to the “Virtual Machines per user” setting. I am going to limit my users to 2 Virtual Machines each at any time:

You’ll notice you can also limit the number of Virtual Machines using Premium OS disks if required. Once again – just click “Save”, and then we can move on to the maximum number of Virtual Machines per Lab:

As you can see, I have limited the number of VMs in the Lab to a maximum of 5. Once we have clicked “Save”, the basic elements of our Lab are configured.
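
To make the cost ceiling from the earlier formula concrete, here’s a quick worked example in Python – the Standard_B2s hourly rate is an assumption, so check current pricing for your region:

    # Sketch: worst-case hourly compute cost given the lab policies above.
    max_vms_in_lab = 5        # "Total virtual machines per lab" policy
    b2s_hourly_rate = 0.05    # assumed pay-as-you-go rate for Standard_B2s

    max_hourly_cost = max_vms_in_lab * b2s_hourly_rate
    print(f"Maximum compute cost: {max_hourly_cost:.2f} per hour")  # 0.25 here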

Defining VM Images for our Lab

Next up – it’s time to configure some images that we want to use in our Lab. We have three options here – which provide a number of different configurations that suit different Lab requirements:

Marketplace images – these are images from the Azure Marketplace, much like we are used to selecting when creating Virtual Machines in the Portal

Custom images – these are custom images uploaded into the Lab, for example containing bespoke software or settings not available via the Marketplace or a Formula.

Formulas – these allow for the creation of a customised deployment based on a number of values. These values can be used as-is or adjusted to change the machine deployed. Formulas provide scope for customisation within defined variables and can be based on both Marketplace and Custom images.

For more information on choosing between custom images and formulas this is well worth a read: https://docs.microsoft.com/en-us/azure/lab-services/devtest-lab-comparing-vm-base-image-types

I’ve defined a single Marketplace Image from the Portal – and provided only Windows 10 Pro 1803:

Next, I am going to create a Formula based on this image, but also with a number of customisations. This is done by clicking on “Formulas” and then “Add”:

Next, we can configure the various settings of our Formula, but first we need to setup the Administrator username and password in the “My secrets” section of the lab. This data is stored in a Key Vault created as part of the Lab setup, so that they can be securely used in Formulas:
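
Under the hood this is just a secret in the Lab’s Key Vault, so the equivalent operation can be scripted – a sketch using the azure-keyvault-secrets package, with the vault URL and secret name assumed:

    # Sketch: store the admin password in the lab's Key Vault for use in Formulas.
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    vault_url = "https://<lab-key-vault-name>.vault.azure.net"  # assumed vault URL
    client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())

    # The secret name is illustrative; "My secrets" entries are stored similarly
    client.set_secret("vm-admin-password", "<strong-password-here>")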

Next, I am going to create a Windows 10 Formula with a number of applications installed as part of the Formula, to simulate a client PC build. This would be useful for testing applications against PCs deployed in a corporate environment, for example. When we click “Formulas” and then “Add” we are presented with the Marketplace Images we made available in the earlier step:

Marketplace Image selection as part of the Formula creation:

Once the base has been selected we are presented with the Formula options:

There are a couple of things to note here:

  • The user name can be entered, but the password is what we previously defined in the “My secrets” section
  • The Virtual Machine size must adhere to the sizes defined as available for the lab

Further down the options pane we can define the artifacts and advanced settings:

Artifacts are configuration and software items that are applied to the machines when built – for example, applications, runtimes, Domain Join options etc. I’m going to select a few software installation options to simulate a client machine build:

There are a few very useful options within other artifacts, which I feel deserve a mention here:

  • Domain Join – this requires credentials and a VNET connected to a Domain Controller
  • Download a File from a URI – for example if we need to download some custom items from a specific location
  • Installation of Azure PowerShell Modules
  • Adding a specified Domain User to the Local Admins group – very useful if we need all testing to be done using Domain Accounts and don’t want to give out Local Administrator credentials
  • Create an AD Domain – if we need a Lab domain spun up on a Windows Server Machine. Useful if an AD Domain is required temporarily for testing
  • Create a shortcut to a URL on the Public Desktop – useful for testing a Web Application on different client builds. For example, we could test a specified Website against a number of different client configurations.
  • Setup basic Windows Firewall configuration – for example to enable RDP or to enable/disable the Firewall

It is also worth noting that we can define “Mandatory Artifacts” within the Configuration and Policies section – these are artifacts that are applied to all Windows or Linux VMs created within the Lab:

After artifact selection we can specify the advanced settings for the Formula:

It is worth noting here that we can specify an existing VNET if required – this is particularly useful if we need to integrate the Lab VMs into existing environments – for example an existing Active Directory Domain. Here we can also configure the IP address allocation, automatic delete settings, machine claim settings, and the number of instances to be created when the formula is run.

Once the Formula is created we can see the status:

Granting access to the Lab

We can now provide access to end users – this is done from the Lab Configuration and Policies pane of the Portal:

We can then add users from our Azure Active Directory to the Lab Environment:

Visit this URL for an overview of the DevTest Lab Permissions: https://docs.microsoft.com/en-us/azure/lab-services/devtest-lab-add-devtest-user

Now we can start testing the Lab environment logged in as a Lab User.

Testing the Lab Environment

We can now start testing out the Lab Environment – to do this, head over to the Azure Portal and log in as a Lab User – in this case I am going to log in as “Labuser1”. Once logged into the Portal we can see the Lab is assigned to this user:

The first thing I am going to do is define a local username and password using the “My secrets” section – I won’t show this here, but you need to follow the same process as I did earlier in this post.

Once we have accessed the Lab, we can then create a Virtual Machine using the “Add” button:

This presents the Lab user with a selection of Base Images – both Marketplace (as we previously defined) and Formulas (that we have previously setup):

I’m going to make my life easy – I’m a lab user who just wants to get testing and doesn’t have time to install any software… so a Formula is the way to go! After clicking on the “Windows10-1803_ClientMachine” Formula I just need to fill out a few basic details and the VM is then ready to provision. Note that the 5 artifacts we set up as part of the Formula and the VM size are already configured:

Once we have clicked create, the VM is built and we can see the status is set to “Creating”:

After some time the VM will show as Running:

Once the VM has been created we can connect via RDP and start testing. When creating this VM I left all of the advanced settings as defaults – which means that, as part of the first VM deployment, a Public IP and a Load Balancer (so that the IP can be shared across multiple Lab VMs) have been created. When we now look at the VM overview window, we can just click connect as we normally would with an Azure VM:

Once we have authenticated, we can then use the VM as we would any other VM – note in the screenshot below, both Chrome and 7Zip (previously specified artifacts) are visible and have been installed (along with other items) for us before we access the VM:

When we have finished our testing or work on this VM – we have a number of options we can use:

  • Delete the VM – fairly self-explanatory, this one… the VM gets deleted
  • Unclaim the VM – the VM is placed into the pool of claimable VMs so that other Lab users can claim and use it. This is useful if you wish to simply have a pool of VMs that people use and then return to the pool – for example, a development team testing different OS versions or Browsers.
  • Stop the VM – this is the same as deallocating any Azure VM – we only pay for the storage in use while it is stopped

Hopefully this has been a useful overview of the DevTest Labs offering within Azure… congratulations if you made it all the way to the end of the post! Any questions/comments feel free to reach out to me via my contact form or @jakewalsh90 🙂

Azure CDN – Speeding up WordPress on Azure App Service and proving the results

It’s probably no secret that half of the IT blogs out there are running on WordPress or a similar Platform. WordPress is easy to use, simple, and requires little maintenance to run – what’s not to love?!

As with any website – it doesn’t matter how good the back-end code or the server hosting the site is: if your user is halfway around the globe from where your site is hosted, the experience may not be that great… and poor experiences do not make for happy visitors. For business and shopping websites this can mean international customers getting a bad experience, which is less than ideal if you are looking to grow and provide the same great service to users around the globe.

To combat this issue, a Content Delivery Network (CDN) is a great solution. A CDN essentially spreads your data across geographically separate servers around the world, and ensures that user requests are dealt with by the server closest to the end user. It is worth noting this means closest in networking distance, which is not always the same as physical distance. Azure has a great CDN offering with Points of Presence spread all over the world: https://docs.microsoft.com/en-us/azure/cdn/cdn-pop-locations – you would need to be in a VERY remote location not to have an almost-local POP.

I’m going to test out WordPress running on Azure App Service natively, and then setup Azure CDN and compare the two – making use of Performance tests within Azure App Service to highlight the difference in metrics for both arrangements (Without CDN, and with CDN).

To start, I have deployed WordPress on Azure App Service with MySQL In App using the below template:

Head over to https://github.com/Azure/azure-quickstart-templates/tree/master/wordpress-app-service-mysql-inapp for the template.
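
If you prefer to deploy the template programmatically rather than via the Portal, a sketch with the azure-mgmt-resource package might look like this – the resource group, deployment name, and the raw template URI are assumptions:

    # Sketch: deploy the WordPress + MySQL in-app quickstart template via the SDK.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient

    client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Raw URI of the quickstart template (assumed path within the repo)
    template_uri = (
        "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/"
        "master/wordpress-app-service-mysql-inapp/azuredeploy.json"
    )

    deployment = client.deployments.begin_create_or_update(
        "rg-wordpress",        # resource group (assumed to exist)
        "wordpress-deploy",
        {
            "properties": {
                "mode": "Incremental",
                "templateLink": {"uri": template_uri},
            }
        },
    ).result()
    print(deployment.properties.provisioning_state)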

Once this was deployed I completed the usual WordPress setup, and I now have a functioning site – but without any content. To create some content for testing, I used a plugin designed to generate posts and pages (with images) to give the site some content (including images) we can use when testing response times:

Once installed and run, this plugin gave me lots of posts and pages, to simulate the content of a real site, and all of these posts included an image:

Now I can start to see how the site performs without a CDN in place. Initially, I’m using a Performance Test – which can be accessed from the App Service pane in the Portal:

Creating a test is simple – I am just going to simulate 1000 users accessing the site in a 1-minute window:

As you can see, the metrics are coming back from the West EU test as follows with no CDN:

Average response time is probably the key metric here – 2.89 seconds on average from the West EU test region. To give an idea of the variation, I ran the test again, but this time from the East Asia region:

As you can see, there is a noticeable speed difference, albeit one that is to be expected. Based on the metrics (2.89s average in West EU vs 5.81s in East Asia), the average response time for clients in the East Asia region is around 200% of that in the West EU region. So… about twice the waiting time for the page to load.
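
As a rough cross-check you can also sample response times yourself from a single client – here’s a sketch using only the Python standard library (the site URL is a placeholder), though a single-client loop is no substitute for the Portal’s distributed test:

    # Sketch: sample page response times from this machine (single client only,
    # so results are indicative rather than comparable to the distributed test).
    import statistics
    import time
    import urllib.request

    url = "https://<your-app>.azurewebsites.net/"  # placeholder site URL
    samples = []
    for _ in range(20):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        samples.append(time.perf_counter() - start)

    print(f"avg {statistics.mean(samples):.2f}s, "
          f"median {statistics.median(samples):.2f}s")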

Configuring the Azure CDN

Configuring a CDN Endpoint for Azure Web Apps is extremely simple – it can be done from the Web App section of the Azure Portal:

For this test I have configured the CDN Endpoint as below. I’m using the Standard Akamai Offering for my test:

Once we have filled in the details the Endpoint is created:

Once the endpoint is created, we are presented with a new URL to access the CDN version of the site:
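
For completeness, the profile and endpoint can also be created in code – a sketch with the azure-mgmt-cdn package, where the resource names and origin hostname are assumptions (and SKU availability may vary over time):

    # Sketch: create a CDN profile and an endpoint with the Web App as origin.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.cdn import CdnManagementClient
    from azure.mgmt.cdn.models import DeepCreatedOrigin, Endpoint, Profile, Sku

    cdn = CdnManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Standard Akamai profile, matching the offering used in this test
    cdn.profiles.begin_create(
        "rg-wordpress", "wordpress-cdn",
        Profile(location="westeurope", sku=Sku(name="Standard_Akamai")),
    ).result()

    # Endpoint pointing at the Web App's hostname
    origin_host = "<your-app>.azurewebsites.net"
    cdn.endpoints.begin_create(
        "rg-wordpress", "wordpress-cdn", "wordpress-endpoint",
        Endpoint(
            location="westeurope",
            origins=[DeepCreatedOrigin(name="webapp-origin", host_name=origin_host)],
            origin_host_header=origin_host,
        ),
    ).result()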

I then configured WordPress to integrate with the CDN using the CDN Enabler plugin:

To check the function – if we now have a look at the properties of an image on the page, we can see it is being sourced from the Azure CDN, and thus from a location geographically close to our users:
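
The same check can be made from the command line by requesting an asset via the endpoint URL and inspecting the response headers – a sketch, with the endpoint hostname and image path assumed:

    # Sketch: confirm an asset is served via the CDN endpoint by checking headers.
    import urllib.request

    url = "https://<your-endpoint>.azureedge.net/wp-content/uploads/sample.jpg"
    with urllib.request.urlopen(url) as resp:
        print(resp.status)
        for header in ("Server", "Cache-Control", "X-Cache"):
            print(header, ":", resp.headers.get(header))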

Because the plugin includes any content in wp-content, any image we upload to the website will be served to users via the CDN. Next up, I re-ran the performance tests to measure the difference now that some content is served from the CDN:

West EU:

East Asia:

Based on the test results above, implementing the CDN endpoint produced the following improvements in average response time:

             Without CDN   With CDN   Speed increase
West EU      2.89 s        1.52 s     47%
East Asia    5.81 s        2.11 s     64%

There are a few key results from this test:

  • Utilizing the CDN improved performance in both the local and remote regions – in both cases significantly
  • Remote regions saw the greatest performance boost, at a 64% average load time improvement
  • After adding the CDN endpoint, performance in the remote (East Asia) region was better than the original, non-CDN result for the West EU region
  • Once configured both in Azure and in the Application (WordPress) there is no further configuration required
  • We can further improve the speed by taking more elements of the Web Application and bringing them into the CDN – for example theme files, static code, CSS etc. In my test I have only included the wp-content directory but there is more that could be added.

Hope this has been useful… until next time!