Alk3my

Azure Functions environment set-up with Terraform

October 31, 2019

Since starting down the path of Infrastructure as Code I can't spin anything up manually anymore. I need everything defined and committed with the codebase so I can easily spin up, tear down and replicate all my environments.

I've been using Terraform a bit at work and found it quite robust. It's cloud agnostic, so it works just as well with Azure as with AWS (and likely some of the other major cloud platforms, though I haven't had experience with those). It abstracts away the underlying templates (e.g. Azure Resource Manager and CloudFormation) with a succinct DSL. It keeps track of state and so can always compare what you've defined against what is actually in your live environment. Recently it's also started to provide a great UI at app.terraform.io for keeping the remote state as well as creating, reviewing and applying plans. You can also stick with the CLI if you prefer.

As for Azure Functions, these are Microsoft's serverless offering, akin to AWS Lambda. I've been using them lately for my own project outside of work. Both are great platforms, although I'm a bit more familiar with Azure, and as a .NET developer it's a little easier in terms of the developer experience.

Terraform install

This part is very easy. Grab the executable from the website and chuck it in a suitable place like Program Files\Terraform, then add it to your path (Environment Variables). Open a new command line and check that it runs (it will just print out the available commands).

terraform

Create Service Principal in Azure and assign role in subscription RBAC

I covered this in a previous post so follow those steps and then come back here.

Add Terraform scripts

As mentioned in the service principal post, I usually create an "environment" folder at the root of the repository and then perhaps folders for dev, test, prod etc. You might even have folders per cloud provider if you're using multi-cloud resources. Add a provider.tf file with the following content.

variable "client_secret" {}

provider "azurerm" {
  version = "~> 1.36"
  subscription_id             = "00000000-0000-0000-0000-000000000000"
  client_id                   = "00000000-0000-0000-0000-000000000000"
  client_secret               = "${var.client_secret}"
  tenant_id                   = "00000000-0000-0000-0000-000000000000"
}

At this point you should be able to initialise Terraform with terraform init. You don't strictly need the version constraint, but it's good practice to add it so runs are more deterministic across environments.
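If you want the Terraform CLI version pinned as well as the provider's, a terraform block alongside the provider can do that. A minimal sketch, assuming you're on the 0.12 line (the constraint here is illustrative, not from this project):

```hcl
# Hypothetical sketch: pin the Terraform CLI version alongside the
# provider version so every machine runs a compatible release.
terraform {
  required_version = ">= 0.12"
}
```

Terraform will refuse to run if the installed CLI doesn't satisfy the constraint, which surfaces version drift early rather than via a confusing plan diff.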

Remote backend

Next up I usually configure a remote backend. This is a way to save and track state changes in a remote repository rather than keeping them locally. Without a remote backend, Terraform will create and alter a local state file so it can compare changes between runs with what's actually deployed. Unfortunately it can end up with sensitive settings in there (e.g. deployment credentials for an Azure App Service), and when you're in a team you all need access to the state. This is easily solved with a remote backend, and Terraform provides its own, which is great. There's a UI showing your workspaces, all your previous runs, any runs waiting to be applied, etc. You can even have pull requests trigger runs and require approvals to go ahead.

I'll come back to this with another post but for now you can read up on it at Terraform's docs site.

It might look something like this if you add a backend.tf file (by the way, the filenames themselves aren't important; Terraform loads every .tf file in the directory).

terraform {
  backend "remote" {
    hostname = "app.terraform.io"
    organization = "YourOrganizationName"

    workspaces {
      name = "yourappworkspace-environment-etc"
    }
  }
}

Resource group

It's good practice in Azure to have a resource group as a logical container for your solution's services. Again create a file e.g. resource_group.tf.

resource "azurerm_resource_group" "yourapp-functions-rg" {
  name     = "${var.application_name}-${var.environment_shortname}-${var.location_shortname}"
  location = "${var.location}"
}

So the name is made up of variables which you define and, in most cases, set the values of; we'll come back to these later. I tend to name things with the app name plus an environment short name like "dev", plus a short name for the service's location, e.g. "ae" for AustraliaEast. The location itself is the full region name used by Azure, e.g. AustraliaEast.
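With the example variable values used later in this post, the interpolation resolves like so:

```hcl
# With application_name = "yourapp", environment_shortname = "dev"
# and location_shortname = "ae", this expression...
name = "${var.application_name}-${var.environment_shortname}-${var.location_shortname}"
# ...resolves to "yourapp-dev-ae".
```

The same pattern is reused for the App Service Plan and Function App names below, so all related resources sort together in the portal.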

App Service Plan

Although Functions are Azure's "serverless" offering, they actually run within App Service Plans. However, Azure gives you some flexibility in how you'd like them to run. You can either make use of a new or existing plan, where you pay a set amount for always-on resources to run your Functions in, or you can use the Consumption model, where they run on shared infrastructure on demand. This affects the cost (a set fee vs. pay-as-used) but also the resources available. In most cases you'll want the Consumption model, as it's more scalable and likely more cost effective, unless perhaps you've got a well-known, steady workload.

Create an app_service_plan.tf file with something like the following.

resource "azurerm_app_service_plan" "yourapp-functions-asp" {
  name                = "${var.application_name}-${var.environment_shortname}-${var.location_shortname}"
  location            = "${azurerm_resource_group.yourapp-functions-rg.location}"
  resource_group_name = "${azurerm_resource_group.yourapp-functions-rg.name}"
  kind                = "FunctionApp"
  sku {
    tier = "Dynamic"
    size = "Y1"
  }
}

So name should be familiar by now, and in location and resource_group_name you can see how we can make use of values from existing resource definitions (outputs too, but I'm not showing that here). The kind and sku define that we want a Functions application using the Dynamic (Consumption-based) App Service "plan".
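For comparison, if you did want a dedicated always-on plan instead of Consumption, only the sku changes. A sketch, with an assumed tier and size that are illustrative rather than a recommendation:

```hcl
# Hypothetical alternative: a dedicated App Service plan rather than
# the Dynamic/Y1 Consumption plan shown above. You pay a fixed rate
# for always-on compute instead of per-execution billing.
sku {
  tier = "Standard"
  size = "S1"
}
```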

Storage Account

Although there are a few ways to deploy a Function, I'm most interested in using a Storage Account, as ultimately I'd like to configure the Functions to run straight from a zip file within the storage account.

Create a storage_account.tf file and fill out as below.

resource "azurerm_storage_account" "yourapp-functions-sa" {
  name                     = "${var.application_name}${var.environment_shortname}${var.location_shortname}"
  resource_group_name      = "${azurerm_resource_group.yourapp-functions-rg.name}"
  location                 = "${azurerm_resource_group.yourapp-functions-rg.location}"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

You might change the replication type for a production app but this is fine for now.
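If you do go down the run-from-zip route later, you could also sketch out a container in this account to hold the deployment packages. The resource and container names here are assumptions, not part of the setup above:

```hcl
# Hypothetical: a private blob container in the storage account to
# hold zipped deployment packages for the Function App.
resource "azurerm_storage_container" "yourapp-functions-deployments" {
  name                  = "deployments"
  resource_group_name   = "${azurerm_resource_group.yourapp-functions-rg.name}"
  storage_account_name  = "${azurerm_storage_account.yourapp-functions-sa.name}"
  container_access_type = "private"
}
```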

Function App

Now we get to the most important part, the actual Function App. Below is an example definition in a function_app.tf file.

resource "azurerm_function_app" "yourapp-functions-fa" {
  name                      = "${var.application_name}-${var.environment_shortname}-${var.location_shortname}"
  location                  = "${azurerm_resource_group.yourapp-functions-rg.location}"
  resource_group_name       = "${azurerm_resource_group.yourapp-functions-rg.name}"
  app_service_plan_id       = "${azurerm_app_service_plan.yourapp-functions-asp.id}"
  storage_connection_string = "${azurerm_storage_account.yourapp-functions-sa.primary_connection_string}"
  version                   = "~2"
  https_only                = true

  app_settings = {
    FUNCTIONS_WORKER_RUNTIME    = "dotnet"
  }
}

In this example you can see where we are starting to use outputs, such as the App Service Plan ID and Storage Account connection string for the resources we're creating. Terraform is great at handling these dependencies for you, including the order things need to be created in.

I'm using the version 2 Functions runtime as well as dotnet. Azure Functions support a whole bunch of languages but I'm most comfortable in this space.
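Tying back to the run-from-zip goal mentioned earlier, that would eventually mean one extra app setting. A sketch, where the package URL is a hypothetical placeholder (in practice it usually includes a SAS token):

```hcl
app_settings = {
  FUNCTIONS_WORKER_RUNTIME = "dotnet"
  # Hypothetical: tell the runtime to mount and run a zip package
  # from the storage account. The URL below is a placeholder.
  WEBSITE_RUN_FROM_PACKAGE = "https://yourappdevae.blob.core.windows.net/deployments/functions.zip"
}
```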

We haven't defined a cors (Cross-Origin Resource Sharing) section, but you may choose to if you're using HTTP-triggered Functions called from a web app UI, perhaps.
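If you did need it, a cors section sits inside a site_config block on the Function App. A minimal sketch, with an assumed origin for the hypothetical web app:

```hcl
# Hypothetical sketch: allow a web app UI at an assumed origin to call
# HTTP-triggered Functions from the browser.
site_config {
  cors {
    allowed_origins = ["https://yourapp-ui.example.com"]
  }
}
```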

Variables

OK so scattered throughout these files are variables that we need to define. You can put these in another file but I often define them at the start of the provider.tf file as below.

variable "client_secret" {}
variable "application_name" {}
variable "location_shortname" {
  default = "ae"
}
variable "location" {
  default = "AustraliaEast"
}
variable "environment_shortname" {
  default = "dev"
}

provider "azurerm" {
  version = "~> 1.36"
  ...

As you can see, you can also set defaults. If you don't provide defaults then you'll be prompted for the values when you run terraform plan or terraform apply.

Finally I usually have a separate file with these values. I use the convention-based name terraform.auto.tfvars, as then Terraform knows where to find them (otherwise you can pass another filename via the command line).

This might look like this...

application_name        = "yourapp"
location_shortname      = "ae"
location                = "AustraliaEast"
environment_shortname   = "dev"

Notice how client_secret isn't here? You should ensure any sensitive values aren't in your files; Terraform will prompt you when it runs, or you can define them in your remote backend. You don't want secrets committed in your repo. You could instead put the secret in this file and exclude the file from source control, but that makes working in a team harder and ultimately the secret is still on your machine in plain text.
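Another common way to supply the secret without committing it (or typing it at every prompt) is Terraform's TF_VAR_ environment variable convention. The value below is obviously a placeholder:

```shell
# Terraform automatically maps environment variables prefixed with
# TF_VAR_ onto input variables of the same name, so client_secret
# never has to live in a file at all.
export TF_VAR_client_secret="<your-service-principal-secret>"
```

With that exported in your session, terraform plan and terraform apply pick the value up without prompting.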

Plan and Apply

I always run terraform plan first to see what the changes will be, whether it's the first run or not. Although apply will also show the changes and prompt you to continue, I still prefer this flow. If you're happy, run terraform apply and enter 'yes' to create your resources. Have a look in the Azure portal and within a few minutes they should be there, awaiting some code to be pushed.