
Smart Home tinkering – Using Stringify, Nest Cameras, and TP-Link Plugs to simulate house activity based on camera detections

First of all – Happy New Year, I hope 2019 is already going great for you!

This write-up focuses on the use and integration of three things: Nest Cameras, TP-Link Smart Plugs and Bulbs, and Stringify (a tool that allows the creation of IoT Flows). The combination of all three provides a powerful way to create security routines and outcomes based on various triggers. I should point out that I’m fairly wary of the security implications of IoT devices in general, so I prefer to see them as a way to augment, rather than replace, traditional security products.

In this article I am going to demonstrate an integration routine I have set up recently:

  • When my Nest Camera(s) detect a person between the hours of 2300 and 0600, a Stringify Flow runs, which turns on lights in the correct order to simulate a person coming downstairs. The routine then waits for a period, and turns the lights off in the reverse order, to simulate a person going back upstairs. Finally, a push notification is sent to my mobile phone to alert me that the Flow has been run.

To create this type of setup you need three things:

  • Cameras – I am using the Nest Outdoor Cameras
  • Smart Lighting/Plugs – I am using TP-Link Products, both plugs and bulbs
  • An IoT tool to link triggers to actions – I am using Stringify

It’s worth noting that you could use a number of different tools to achieve the same result – for example, IFTTT works in a similar way to Stringify, and there are lots of IoT camera and lighting products out there.

So – how do we set this up?

To start, we need some cameras – here’s one of my Nest Cameras:

I have a few of these set up around the house – so pretty much anyone near the house is picked up by the cameras. Next, we need some smart lighting for the Flow to control. For this I have two products in use: TP-Link Smart Plugs and TP-Link Smart Bulbs.

Next, we need to create a Stringify Account – to do this you need to download the app for your device and sign up. Once completed you can create Flows and add Things, which are, in brief:

  • Flows – sequences of events/actions that are run by triggers we define
  • Things – these are the IOT devices we have added to our account

Before we can create a sequence, we need to add Things to our account – which is done by tapping on the + sign:

Next we can add accounts for our various smart devices – this will vary depending on what devices you are using, but for me it was just a case of adding my Nest and TP-Link accounts:

Once this is done, the devices/accounts show up in the home screen within Stringify:

We’re now good to go and can set up our first Flow. To do this, we need to open the Stringify app, click on “Flows”, and then on the + symbol to create a new Flow:

From here, we can start to build out a Flow. Here’s an overview of a completed Flow to give you an idea – we can then drill down into the building blocks that form this Flow:

As you can see – the Flow mainly comprises timers and light actions (turning a light on or off). It can be broken down into five key elements (a rough code sketch of the whole flow follows after the list):

  1. A trigger – or in this case, a trigger and a time condition. Both must be met for the sequence to run. In my case, a Nest Camera must detect a person (not just activity – the ability to distinguish a person from general motion is a feature of the Nest cameras), and the time must be between 2300 and 0600. Unless both conditions are met, the sequence won’t progress any further.
  2. The “Person coming down the stairs” sequence – this is just lights and timers that wait for time periods before kicking off the next light. So the first light comes on, then the sequence waits, and then the next light comes on, and waits, and so on…
  3. A wait – a pause before the next element runs, to simulate a person being downstairs doing something.
  4. The “Person going upstairs” sequence – again this is just lights and timers, so it simulates the lights going off as if someone was going upstairs. Exactly the same as element 2 but in reverse.
  5. The notification – the final element. A push notification is sent to my phone so I know the sequence has run, and so I can see what caused it to run (and make sure it was a legitimate activation with no cause for concern).
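
To make the logic concrete, here is a rough Python sketch of what the Flow does. This is purely illustrative – Stringify builds the Flow graphically, and the light names, delays, and helper functions below are placeholder assumptions rather than anything Stringify exposes:

```python
from datetime import datetime, time
import time as clock

# Conceptual sketch only - the helpers (person_detected, set_light, send_push)
# stand in for the Nest trigger, TP-Link actions, and the push notification.

NIGHT_START, NIGHT_END = time(23, 0), time(6, 0)
DOWNSTAIRS_ORDER = ["landing", "stairs", "hallway", "lounge"]  # example light names


def within_night_window(now: datetime) -> bool:
    # The window crosses midnight, so a time matches if it is after 23:00 OR before 06:00
    t = now.time()
    return t >= NIGHT_START or t <= NIGHT_END


def run_flow(person_detected: bool, set_light, send_push) -> None:
    # 1. Trigger: a person detected AND the time between 2300 and 0600
    if not (person_detected and within_night_window(datetime.now())):
        return

    # 2. "Person coming downstairs" - lights on in order, with pauses between each
    for light in DOWNSTAIRS_ORDER:
        set_light(light, on=True)
        clock.sleep(20)  # seconds between lights

    # 3. Wait - simulate someone being downstairs doing something
    clock.sleep(10 * 60)

    # 4. "Person going back upstairs" - the same lights, in reverse, turned off
    for light in reversed(DOWNSTAIRS_ORDER):
        set_light(light, on=False)
        clock.sleep(20)

    # 5. Let me know - push notification so I can check what triggered the run
    send_push("Night-time person detection Flow has run - check the camera footage.")
```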

Using an automation sequence like this is a great way to turn smart home products into a smart security feature. There are loads more possibilities you can create with Stringify too – for example, a few other things you could do with this sequence alone:

  • Integrate this sequence with other smart home products – for example, using SmartThings you can connect siren/strobe devices to trigger an alarm if a person is spotted in your garden within a certain time range. All the house lights coming on and a siren going off is a good deterrent, and certainly attracts attention!
  • Integrate this sequence with an Amazon Echo – for example, “Alexa, I am leaving for work” turns off the lights, but should a person be detected outside, a radio starts playing inside and a light comes on to simulate someone being at home. You could even turn lights on and off randomly during the darker hours.

You could also use a sequence like this to trigger smart home items in a danger scenario – for example, if smoke is detected (via something like Nest Protect), all the house lights come on regardless of the time of day, and anything like a TV or radio connected to a smart plug turns off, so the only noise heard is the smoke alarm.

Hopefully this has been useful and gives an idea of how powerful the integration of these types of devices can be when linked with the right system to automate them. Until next time – thanks for reading! 🙂

 

Azure Storage Sync – the easiest branch office file sync solution?

Azure Storage Sync provides the means to synchronise files from various locations into an Azure Storage account and out to endpoints running the Azure File Sync agent. In this post I will give a quick overview of how it can be set up to meet branch office requirements where no VPN connectivity exists. For further information see here: https://docs.microsoft.com/en-us/azure/storage/files/storage-sync-files-planning. Here’s the environment I will be testing with:

Both of my “Branch offices” are actually VMs running in my home lab – but on isolated networks, so they can only communicate outbound to the internet, with no LAN access.

Essentially – the goal for this article is to show how to configure Azure Storage Sync to replicate files between Branch Office 1 and Branch Office 2, using Azure Storage as the intermediary.

There are a number of key components to this deployment:

  • An Azure Storage Account – this is where our file share will be hosted
  • An Azure File Share – this is where our replicated data will reside
  • An Azure Storage Sync Service and Sync Group – this is where the synchronisation will be configured and managed
  • Two installations of the Azure File Sync agent, which will sit on our Branch Office servers

To start, I have created a folder on Branch Office 1 called “HeadOfficeDocs” – this is the folder that I want to replicate, and it contains a couple of folders and files of “business data” that we need replicated across to Branch Office 2:

Next – we need to set up an Azure Storage account, and an Azure File Share to host the data. First up, I will create a storage account:
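
If you prefer scripting to the Portal, a minimal sketch of this step using the azure-mgmt-storage Python SDK might look like the following – the resource group, account name, and region are placeholder assumptions:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

credential = DefaultAzureCredential()
client = StorageManagementClient(credential, "<subscription-id>")

# Create a general purpose v2 storage account to host the Azure File Share
poller = client.storage_accounts.begin_create(
    "rg-branchsync",        # resource group (assumed to already exist)
    "branchsyncstore01",    # storage account names must be globally unique
    {
        "location": "uksouth",
        "kind": "StorageV2",
        "sku": {"name": "Standard_LRS"},
    },
)
account = poller.result()
print(account.name, account.provisioning_state)
```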

Once this has been deployed we can create a File Share – this is where our replicated data will reside:

Next we create a file share and give it a quota:
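
Again, if you want to script this rather than click through the Portal, a hedged sketch using the azure-storage-file-share Python SDK could look like this – the connection string and share name are assumptions:

```python
from azure.storage.fileshare import ShareClient

# The connection string is available from the storage account's "Access keys" blade
conn_str = "<storage-account-connection-string>"

share = ShareClient.from_connection_string(conn_str, share_name="headofficedocs")
share.create_share(quota=100)  # quota in GiB - adjust to suit

props = share.get_share_properties()
print(props.name, props.quota)
```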

Once this has been created, we can set up the synchronisation! To do this, we need to create a new Azure File Sync resource:

And fill in a few details – note that the location should be the same as where your Azure File Share is hosted:

Once this has been created, we can access the Storage Sync Resource and create a Sync Group:

A Sync Group defines the sync topology that will replicate our files – so if you require different sets of data to be replicated, you will need different Sync Groups. For example, we could sync completely different sets of data, on the same or different file servers, in different locations, using the same Storage Sync resource but separate Sync Groups. This provides plenty of flexibility, and the option to use the local file servers (in our branch sites) as a cache for the Azure File Share via cloud tiering, reducing the amount of data held on the local servers: https://docs.microsoft.com/en-us/azure/storage/files/storage-sync-cloud-tiering

Creating the Sync Group is easy – just a few basic details to fill in: the Storage Account we have created, and the File Share that will host the data:

Once this is completed, we have the basics of our topology in place, and we need to register our first server into it. The Registered Servers pane will be blank at this point, as we have not yet installed the Azure File Sync agent on any servers:

To start the process, log on to the first server and download the Azure File Sync agent from this URL: https://www.microsoft.com/en-us/download/details.aspx?id=57159. Installation of the agent is straightforward – just keep clicking Next (you can input proxy settings if required)… the only configuration is the account we wish to associate the server with, which is completed after setup:

Just click “Sign in” and then follow the instructions – you’ll need to sign into your Azure account, and will then be prompted to select the Azure Subscription, Resource Group, and Storage Sync Service we are registering this server to:

Once this is done, just click “Register” (you may be prompted to sign in again here – I was). The registration process then completes:

Our newly registered server now shows up in the “Registered Servers” pane:

Next we need to configure the Sync Group – so that this server is added and starts to replicate data into our topology. To do this, browse to the Sync Group settings:

From here, we can add the server:

This is just a case of selecting the server from a drop down, and then entering the path where our data exists. Note – I am not using Cloud Tiering here as I want all data on all replication points within the topology:

Once this is done, our server is added to the Sync Group – initially it will show as Provisioning, then Pending, and then Healthy:

If we now look in the Azure File Share – we can see that the data from the server has been replicated into the Cloud:

So – we now have a single server in a replication topology between itself and an Azure File Share. If I add a new folder to the Branch Server – we see this replicated onto the Azure File Share after a short time:

Next, I will deploy the agent to a new server (in another branch) to test out the replication and initial sync. This is just a case of installing the agent exactly as we did before, and ensuring that we register this server to the same Storage Sync Service. Once this has been completed, we add the server to the Sync Group (exactly as before, but with a different path if required) and we will then start seeing data being synchronised:

Note that I have used a different path – you can change this on a per-server basis if required, so there is no need to have all servers set up with an identical disk arrangement. If we look in the Sync Group pane after allowing a short while for the sync to take place, we can see both servers are set up:

We can also see that the data has been replicated to the 2nd Server I have added:

Bingo – we now have a working topology that will keep data in sync between our offices using Azure File Sync. No VPNs required, no complicated configuration – just two agent installations and some basic Azure configuration. This is a simple and effective way to keep branch site data in sync, and it opens up a number of use cases that would otherwise require complicated setups.

Until next time, Cheers!

Azure Lab Services – creating an effective and reliable testing environment

Azure Lab Services (formerly DevTest Labs) is designed to allow for the rapid creation of Virtual Machines for testing environments. A variety of use cases can be served using DevTest Labs – for example, development teams, classrooms, and various testing environments.

The basic idea is that the owner of the Lab creates VMs, or provides a means for users to create VMs, driven by settings and policies that are all configurable via the Azure Portal.

The key capabilities of Azure Lab Services are:

  • Fast & Flexible Lab Setup – Lab Services can be quickly setup, but also provides a high level of customization if required. The service also provides built in scaling and resiliency, which is automatically managed by the Labs Service.
  • Simplified Lab Experience for Users – Users can access the labs in methods that are suitable, for example with a registration code in a classroom lab. Within DevTest Labs an owner can assign permissions to create Virtual Machines, manage and reuse data disks and setup reusable secrets.
  • Cost Optimization and Analysis – A lab owner can define schedules to start up and shut down Virtual Machines, and also set time schedules for machine availability. The ability to set usage policies on a per-user or per-lab basis to optimize costs. Analysis allows usage and trends to be investigated.
  • Embedded Security – Labs can be setup with private VNETs and Subnets, and also shared Public IPs can be used. Lab users can access resources in existing VNETs using ExpressRoute or S2S VPNs so that private resources can be accessed if required. (Note – this is currently in DevTest Labs only).
  • Integration into your workflows and tools – Azure Lab Services provides integration into other tools and management systems. Environments can automatically be provisioned using continuous integration/deployment tools in Azure DevTest Labs.

You can read more about Lab Services here: https://docs.microsoft.com/en-us/azure/lab-services/lab-services-overview

I’m going to run through the setup of the DevTest Labs environment, and then cover a few key elements and their use cases:

Creating the Environment:

This can be done from the Azure Portal – just search for “DevTest Labs” and then we can create the Lab Account. Note – I have left Auto-shutdown enabled (this is on by default at 1900 with no notification):

Once this has been deployed, we are able to view the DevTest Labs Resource overview:

From here we can start to build out the environment and create the various policies and settings that we require.

Configuring the Environment

The first port of call is the “Configuration and Policies” pane at the bottom of the above screenshot:

I’m going to start with some basic configuration – specifically to limit the number of Virtual Machines that are allowed in the lab (total VMs per Lab) and also per user (VMs per user). At this point I will also be setting the allowed VM sizes. These are important configuration parameters, as with these settings in place we effectively limit our maximum compute cost:

[total number of VMs allowed in the lab] x [hourly cost of the most expensive VM size permitted] x [hours of runtime] = the maximum compute cost of the lab environment

This is done using the panes below:

First up, setting the allowed VM sizes. For this you need to enable the setting and then select any sizes you wish to be available in the Lab. I have limited mine to just Standard_B2s VMs:

Once we have set this up as required we just need to click “Save”, and then we can move on to the “Virtual Machines per user” setting. I am going to limit my users to 2 Virtual Machines each at any time:

You’ll notice you can also limit the number of Virtual Machines using Premium OS disks if required. Once again, just click “Save” and then we can move on to the maximum number of Virtual Machines per Lab:

As you can see, I have limited the number of VMs in the Lab to a maximum of 5. Once we click “Save”, the basic elements of our Lab are configured.
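
With the sizes restricted to Standard_B2s and the Lab capped at 5 VMs, the worst-case compute cost is easy to estimate. A trivial sketch of that calculation is below – the hourly rate is an illustrative figure only, not a real Standard_B2s price, so check current Azure pricing:

```python
# Rough worst-case monthly compute cost for the lab
max_vms_per_lab = 5               # the per-lab limit we just configured
hourly_rate_largest_size = 0.05   # illustrative rate for the largest permitted size
hours_per_month = 730             # ~24 hours x 365 days / 12 months

max_monthly_compute_cost = max_vms_per_lab * hourly_rate_largest_size * hours_per_month
print(f"Maximum monthly compute cost: ~{max_monthly_compute_cost:.2f} (in your billing currency)")
```

In practice the auto-shutdown schedule and per-user limits push the real figure well below this cap, but it is a useful upper bound when planning a Lab.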

Defining VM Images for our Lab

Next up – it’s time to configure some images that we want to use in our Lab. We have three options here – which provide a number of different configurations that suit different Lab requirements:

Marketplace images – these are images from the Azure Marketplace, much like we are used to selecting when creating Virtual Machines in the Portal

Custom images – these are custom images uploaded into the Lab, for example containing bespoke software or settings not available via the Marketplace or a Formula.

Formulas – these allow for the creation of a customised deployment based on a number of values. These values can be used as-is or adjusted to change the machine deployed. Formulas provide scope for customisation within defined variables and can be based on both Marketplace and Custom images.

For more information on choosing between custom images and formulas this is well worth a read: https://docs.microsoft.com/en-us/azure/lab-services/devtest-lab-comparing-vm-base-image-types

I’ve defined a single Marketplace Image from the Portal – and provided only Windows 10 Pro 1803:

Next, I am going to create a Formula based on this image, but also with a number of customisations. This is done by clicking on “Formulas” and then “Add”:

Next, we can configure the various settings of our Formula – but first we need to set up the Administrator username and password in the “My secrets” section of the lab. This data is stored in a Key Vault created as part of the Lab setup, so that it can be used securely in Formulas:
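
Because the secrets sit in a standard Key Vault, they can also be read programmatically if you ever need to. A minimal sketch using the azure-keyvault-secrets Python SDK is below – the vault URL and secret name are assumptions, not values created by the Lab:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()
client = SecretClient(
    vault_url="https://<lab-key-vault-name>.vault.azure.net",
    credential=credential,
)

# Hypothetical secret name - use whatever you saved under "My secrets"
secret = client.get_secret("LabVmAdminPassword")
print(secret.name)  # avoid printing secret.value itself
```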

Next, I am going to create a Windows 10 Formula with a number of applications installed as part of the Formula, to simulate a client PC build. This would be useful, for example, for testing applications against PCs deployed in a corporate environment. When we click “Formulas” and then “Add”, we are presented with the Marketplace Images we made available in the earlier step:

Marketplace Image selection as part of the Formula creation:

Once the base has been selected we are presented with the Formula options:

There are a couple of things to note here:

  • The user name can be entered, but the password is what we previously defined in the “My secrets” section
  • The Virtual Machine size must adhere to the sizes defined as available for the lab

Further down the options pane we can define the artifacts and advanced settings:

Artifacts are configuration and software items that are applied to the machines when built – for example, applications, runtimes, Domain Join options etc. I’m going to select a few software installation options to simulate a client machine build:

There are a few other very useful artifacts which I feel deserve a mention here:

  • Domain Join – this requires credentials and a VNET connected to a Domain Controller
  • Download a File from a URI – for example if we need to download some custom items from a specific location
  • Installation of Azure PowerShell Modules
  • Adding a specified Domain User to the Local Admins group – very useful if we need all testing to be done using Domain Accounts and don’t want to give out Local Administrator credentials
  • Create an AD Domain – if we need a Lab domain spun up on a Windows Server Machine. Useful if an AD Domain is required temporarily for testing
  • Create a shortcut to a URL on the Public Desktop – useful for testing a web application on different client builds. For example, we could test a specified website against a number of different client configurations.
  • Setup basic Windows Firewall configuration – for example to enable RDP or to enable/disable the Firewall

It is also worth noting that we can define “Mandatory Artifacts” within the Configuration and Policies section – these are artifacts that are applied to all Windows or Linux VMs created within the Lab:

After artifact selection we can specify the advanced settings for the Formula:

It is worth noting here that we can specify an existing VNET if required – this is particularly useful if we need to integrate the Lab VMs into existing environments – for example an existing Active Directory Domain. Here we can also configure the IP address allocation, automatic delete settings, machine claim settings, and the number of instances to be created when the formula is run.

Once the Formula is created we can see the status:

Granting access to the Lab

We can now provide access to end users – this is done from the Lab Configuration and Policies pane of the Portal:

We can then add users from our Azure Active Directory to the Lab Environment:

Visit this URL for an overview of the DevTest Lab Permissions: https://docs.microsoft.com/en-us/azure/lab-services/devtest-lab-add-devtest-user

Now we can start testing the Lab environment logged in as a Lab User.

Testing the Lab Environment

We can now start testing out the Lab Environment – to do this, head over to the Azure Portal and log in as a Lab User – in this case I am going to log in as “Labuser1”. Once logged into the Portal we can see the Lab is assigned to this user:

The first thing I am going to do is define a local username and password using the “My secrets” section – I won’t show this here, but you need to follow the same process as earlier in this post.

Once we have accessed the Lab, we can then create a Virtual Machine using the “Add” button:

This presents the Lab user with a selection of base images – both Marketplace images (as we previously defined) and Formulas (that we have previously set up):

I’m going to make my life easy – I’m a lab user who just wants to get testing and doesn’t have time to install any software… so a Formula is the way to go! After clicking on the “Windows10-1803_ClientMachine” Formula I just need to fill out a few basic details and the VM is then ready to provision. Note that the five artifacts we set up as part of the Formula, and the VM size, are already configured:

Once we have clicked Create, the VM is built and we can see the status is set to “Creating”:

After some time the VM will show as Running:

Once the VM has been created we can connect via RDP and start testing. When creating this VM I left all of the advanced settings as defaults – which means that, as part of the first VM deployment, a Public IP and Load Balancer (so that the IP can be shared across multiple Lab VMs) have been created. When we now look at the VM overview window, we can just click Connect as we normally would for an Azure VM:

Once we have authenticated, we can then use the VM as we would any other VM – note in the screenshot below, both Chrome and 7Zip (previously specified artifacts) are visible and have been installed (along with other items) for us before we access the VM:

When we have finished our testing or work on this VM – we have a number of options we can use:

  • Delete the VM – fairly self-explanatory, this one… the VM gets deleted
  • Unclaim the VM – the VM is placed into the pool of claimable VMs so that other Lab users can claim and use it. This is useful if you simply want a pool of VMs that people use and then return – for example, a development team testing different OS versions or browsers.
  • Stop the VM – this is the same as deallocating any Azure VM – we’d only pay for the storage used while it is stopped

Hopefully this has been a useful overview of the DevTest Labs offering within Azure… congratulations if you made it all the way to the end of the post! Any questions/comments feel free to reach out to me via my contact form or @jakewalsh90 🙂