Azure DevOps – Azure NetApp Files (Part 2, Terraform)

For the next post in this series around using Azure DevOps and WVD, I wanted to continue focusing on Azure NetApp Files. Continuing on from Part 1 in this series, I will be covering the setup of Azure NetApp Files (ANF) within an Azure DevOps Pipeline, but this time using Terraform to deploy ANF. In Part 1, the Azure CLI was used – both are perfectly valid methods, but they have slightly different configurations and uses, so pick whichever best suits your needs.

Task Overview

Setting up Azure NetApp Files with Terraform is quite a simple task (just like it is with the Azure CLI). We just need to carry out the following four steps:

  1. Creation of a Resource Group to host the Azure NetApp Files Account
  2. Creation of an Azure NetApp Files Account, with Active Directory connection
  3. Creation of an Azure NetApp Files Capacity Pool
  4. Creation of the Azure NetApp Files Volume
A few things to consider…

There are a few things to note about the code below – mainly things that may need to be in place first, or that require awareness so that everything works as expected. These aren’t gotchas as such – just things to be aware of! 🙂

  • A subnet will need to be delegated to Azure NetApp Files – see here. We can also do this during Step 4 if required!

  • The VNET that your delegated Subnet resides in will require communication with a Domain Controller. Also ensure DNS is set correctly!

  • Your Domain Controller will also need to be operational, as we have to provide Azure NetApp Files with a username and password to set up the Active Directory connection. The account provided needs to be able to join computer objects to the domain, in the OU specified.

Once we have these in place – we can set up Terraform within our DevOps Pipeline. This is usually done using a few steps – I’m also using the Replace Tokens task, so that I can obscure certain secure values (Domain credentials, for example) and have these as Pipeline Variables instead. This is something I covered in my previous post around Domain Controller setup within Azure DevOps.

Just show me the code already!

Within your TF file(s) you will need the following elements in place to completely setup Azure NetApp Files:

    1. A Resource Group for Azure NetApp Files:
#Create Resource Group
resource "azurerm_resource_group" "rg1" {
  name     = var.azure-rg-1
  location = var.loc1
  tags = {
    Environment = var.environment_tag
  }
}

As you can see in the above, I am using variables to define Name, Location, and the Environment Tag.
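For completeness, the variables referenced above would be declared along these lines – the names match the snippet, but the default values are placeholders for illustration only:

```hcl
# variables.tf – declarations for the values referenced above
# (defaults are example values only – set these to suit your environment)
variable "azure-rg-1" {
  type        = string
  description = "Name of the Resource Group hosting Azure NetApp Files"
  default     = "rg-anf-lab"
}

variable "loc1" {
  type        = string
  description = "Azure region for all resources"
  default     = "uksouth"
}

variable "environment_tag" {
  type        = string
  description = "Value applied to the Environment tag"
  default     = "Lab"
}
```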

    2. An Azure NetApp Files Account, with Active Directory connection:

#Create Azure NetApp Files Account
resource "azurerm_netapp_account" "region1-anf" {
  name                = "region1-anf"
  resource_group_name = azurerm_resource_group.rg1.name
  location            = var.loc1

  active_directory {
    username            = "__adusername__"
    password            = "__aduserpwd__"
    smb_server_name     = "ANFOBJ"
    dns_servers         = ["10.10.10.10"]
    domain              = "ad.lab"
    organizational_unit = "OU=ANF,OU=LAB"
  }
}

Again, you can see that location is based on a variable (var.loc1), and I refer to the Resource Group RG1 created in Step 1. You’ll also need to populate the details for SMB Server Name, DNS Servers, Domain, and (optionally), Organizational Unit.

Note: you will also see in the above that I have referenced the username and password as Tokens, so that the Replace Tokens task can be used. This means the AD Username and Password do not need to be included within our repo and can be replaced automatically when the Pipeline is run.

Using Replace Tokens is a simple way to remove security information from your Repository and ensure it’s handled when the Pipeline is run. You just need to run this before Terraform within the Pipeline:

Remember to set the token prefix and suffix so it picks the correct values up:
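As a sketch, the Replace Tokens task (the qetza marketplace extension) looks something like this in YAML – the targetFiles pattern is an example and should match wherever your .tf files sit:

```yaml
# Replace __token__ placeholders in the Terraform files before Terraform runs
# (targetFiles path is an example – adjust to your repo layout)
- task: replacetokens@3
  inputs:
    targetFiles: '$(Build.SourcesDirectory)/**/*.tf'
    tokenPrefix: '__'
    tokenSuffix: '__'
```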

Within your Pipeline you’ll also need to set these variables. If desired, these could be pulled from Azure Key Vault – which I covered in Part 1 of this Azure NetApp Files in Azure DevOps Series.
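In YAML, those variables could be defined along these lines – the names match the tokens used earlier, but the values shown are placeholders, and the password should be a secret variable (or a Key Vault reference) rather than plain text:

```yaml
# Example Pipeline variables matching the __adusername__/__aduserpwd__ tokens
# (placeholder values – keep the password as a secret variable or in Key Vault)
variables:
  adusername: 'svc-anf-join@ad.lab'
  aduserpwd: '$(anf-join-password)'   # secret variable set in the Pipeline UI
```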

    3. An Azure NetApp Files Capacity Pool:

#Create Azure NetApp Files Capacity Pool
resource "azurerm_netapp_pool" "region1-anf-pool1" {
  name                = "pool1"
  account_name        = azurerm_netapp_account.region1-anf.name
  location            = var.loc1
  resource_group_name = azurerm_resource_group.rg1.name
  service_level       = "Standard"
  size_in_tb          = 4
}

Here you can see that location is again based on a variable, and also reference is made to the Azure NetApp Files Account, and Resource Group.

    4. An Azure NetApp Files Volume:

resource "azurerm_netapp_volume" "volume1" {
  lifecycle {
    prevent_destroy = true
  }

  name                = "volume1"
  location            = var.loc1
  resource_group_name = azurerm_resource_group.rg1.name
  account_name        = azurerm_netapp_account.region1-anf.name
  pool_name           = azurerm_netapp_pool.region1-anf-pool1.name
  volume_path         = "anfvolume"
  service_level       = "Standard"
  subnet_id           = azurerm_subnet.anfsubnet.id
  protocols           = ["CIFS"]
  storage_quota_in_gb = 4096
}

Should you also need to create/delegate a subnet at this point – the following Terraform can be used. Note that in the above I refer to subnet_id with “azurerm_subnet.anfsubnet.id” – this needs to match the subnet that has been delegated to Azure NetApp Files (as is shown below!).

resource "azurerm_subnet" "anfsubnet" {
  name                 = "anfsubnet"
  resource_group_name  = azurerm_resource_group.rg1.name
  virtual_network_name = azurerm_virtual_network.vnet1.name
  address_prefixes     = ["10.10.1.0/24"]

  delegation {
    name = "netapp"

    service_delegation {
      name    = "Microsoft.Netapp/volumes"
      actions = ["Microsoft.Network/networkinterfaces/*", "Microsoft.Network/virtualNetworks/subnets/join/action"]
    }
  }
}

Once we have our four elements set up within our Terraform code, we can move on to running Terraform within a Pipeline.

Setting up our Pipeline Tasks

Now that we have our Terraform ready to go – we just need to set up the required tasks within our Pipeline. Fortunately, these are all simple tasks:

Essentially – we use the Replace Tokens task first, so that any required usernames/passwords are updated within our build artifacts directory, and then move on to installing, initialising, and running Terraform. I’ve covered the Replace Tokens task above, so will move right on to the Terraform Tasks. Firstly – we need to install Terraform!

To do this – add a new task and search for Terraform:

The installation is a simple step to configure – we just provide the version required:
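Assuming the Microsoft DevLabs Terraform extension from the marketplace, the installer step is a one-liner in YAML – the version shown is just an example:

```yaml
# Install a specific Terraform version on the build agent
# (version number is an example – pin whichever release you need)
- task: TerraformInstaller@0
  inputs:
    terraformVersion: '1.0.11'
```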

Next up, we need to initialise Terraform. There are a couple of things to consider within this Task – as well as initialising Terraform, we need to define the backend configuration to ensure the state file is stored securely:

To ensure that our state file is stored securely – we need to define the backend configuration for this task. The Storage Account and container used here should already be in place, or be created using a CLI Task earlier in the Pipeline:
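Again assuming the Microsoft DevLabs Terraform extension, the init step with an azurerm backend looks roughly like this – the service connection, Resource Group, Storage Account, container, and key names are all examples to substitute with your own:

```yaml
# terraform init with an azurerm backend for the state file
# (service connection and storage names below are example values)
- task: TerraformTaskV2@2
  inputs:
    provider: 'azurerm'
    command: 'init'
    workingDirectory: '$(Build.SourcesDirectory)'
    backendServiceArm: 'My-Azure-Service-Connection'
    backendAzureRmResourceGroupName: 'rg-terraform-state'
    backendAzureRmStorageAccountName: 'sttfstate001'
    backendAzureRmContainerName: 'tfstate'
    backendAzureRmKey: 'anf.terraform.tfstate'
```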

We can now move onto the Plan Task for Terraform! Configuring this Task is straightforward – we just need to add a few details:
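A sketch of the Plan Task, using the same example service connection name as the init step above:

```yaml
# terraform plan against the same working directory
# (service connection name is an example value)
- task: TerraformTaskV2@2
  inputs:
    provider: 'azurerm'
    command: 'plan'
    workingDirectory: '$(Build.SourcesDirectory)'
    environmentServiceNameAzureRM: 'My-Azure-Service-Connection'
```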

Finally – we are ready to apply our configuration! This Task, again, is a simple one:
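And a sketch of the Apply Task – -auto-approve is passed because a Pipeline run is non-interactive, and the service connection name is again an example:

```yaml
# terraform apply – auto-approve as this runs non-interactively
# (service connection name is an example value)
- task: TerraformTaskV2@2
  inputs:
    provider: 'azurerm'
    command: 'apply'
    commandOptions: '-auto-approve'
    workingDirectory: '$(Build.SourcesDirectory)'
    environmentServiceNameAzureRM: 'My-Azure-Service-Connection'
```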

We now have a set of Pipeline Tasks ready to run:

When our Pipeline runs and completes – we then have a functional Azure NetApp Files Account, Capacity Pool, Active Directory connection and Volume that we can use in our environment, provisioned via Terraform:

Conclusion

I hope this has been helpful – as always, please feel free to reach out if you have any questions, and watch out for more posts in this series soon! 🙂