Using Terraform to deploy, configure and maintain (managed) Azure Kubernetes clusters


Get the slides:


Senior Cloud and DevOps Engineer @ Capgemini Norway


15+ years in infrastructure and software development


Linux nerd


Restoring and modifying old gaming systems


Managed Kubernetes clusters?




Infrastructure-as-code tool

Predictably create, change, and improve infrastructure

Terraform providers



Responsible for exposing and creating resources

Logical abstraction of upstream APIs

2200+ available providers (June 2022)
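Providers are pulled in via a `required_providers` block; a minimal sketch (the version constraint is illustrative):

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0" # pin a major version in real projects
    }
  }
}

provider "azurerm" {
  features {} # required by the azurerm provider, even if empty
}
```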

What does it look like?

resource "provider_resource" "any_name" {
  key       = "value"
  other_key = data.from.different.resource
}


resource "azurerm_kubernetes_cluster" "my_cluster" {
  name                = "${var.environment}-cluster"
  resource_group_name = azurerm_resource_group.demo_rg.name
  location            = "westeurope"

  default_node_pool {
    # min_count/max_count require the autoscaler to be enabled
    enable_auto_scaling = true
    min_count           = 3
    max_count           = 6
  }
}


The Process






Cluster creation



Minimum resources needed:

  • Resource group
  • Virtual network + subnet
  • Kubernetes cluster


Terraform provider: azurerm

resource "azurerm_resource_group" "demo_rg" {}


resource "azurerm_virtual_network" "demo_vnet" {}


resource "azurerm_subnet" "demo_subnet" {}


resource "azurerm_kubernetes_cluster" "demo_cluster" {}
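The minimum resources above might be wired together like this (names and address ranges are illustrative assumptions):

```hcl
resource "azurerm_resource_group" "demo_rg" {
  name     = "demo-rg"
  location = "westeurope"
}

resource "azurerm_virtual_network" "demo_vnet" {
  name                = "demo-vnet"
  location            = azurerm_resource_group.demo_rg.location
  resource_group_name = azurerm_resource_group.demo_rg.name
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_subnet" "demo_subnet" {
  name                 = "demo-subnet"
  resource_group_name  = azurerm_resource_group.demo_rg.name
  virtual_network_name = azurerm_virtual_network.demo_vnet.name
  address_prefixes     = ["10.0.1.0/24"]
}
```

Referencing attributes between resources (rather than hard-coding names) lets Terraform build the dependency graph and create everything in the right order.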

Optional resources:

  • Azure Container Registry
  • Key Vault

Terraform provider: azurerm

resource "azurerm_container_registry" "demo_acr" {}


resource "azurerm_key_vault" "demo_kv" {}


resource "azurerm_role_assignment" "pull_from_acr" {}


resource "azurerm_role_assignment" "read_kv_secrets" {}
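A role assignment ties the optional resources to the cluster; for example, granting the cluster's kubelet identity pull access to the registry might look like this (a sketch, assuming the `demo_acr` and `demo_cluster` resources above):

```hcl
resource "azurerm_role_assignment" "pull_from_acr" {
  scope                = azurerm_container_registry.demo_acr.id
  role_definition_name = "AcrPull"
  principal_id         = azurerm_kubernetes_cluster.demo_cluster.kubelet_identity[0].object_id
}
```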

Cluster configuration



  • RBAC
  • Prometheus
  • Linkerd
  • CSI Secrets Provider
  • Ingress
  • ... any other helper tools

Terraform providers: kubernetes, helm, azurerm, others as needed

resource "kubernetes_namespace" "linkerd_ns" {}


resource "helm_release" "linkerd_chart" {}


resource "tls_self_signed_cert" "linkerd_tls_cert" {}

Example: Linkerd
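The Linkerd pieces might be wired together like this (a sketch; the chart name, repository, and Helm value are illustrative and should be checked against the current Linkerd chart):

```hcl
resource "kubernetes_namespace" "linkerd_ns" {
  metadata {
    name = "linkerd"
  }
}

resource "helm_release" "linkerd_chart" {
  name       = "linkerd"
  namespace  = kubernetes_namespace.linkerd_ns.metadata[0].name
  repository = "https://helm.linkerd.io/stable"
  chart      = "linkerd2"

  # Linkerd needs a trust anchor certificate; here it comes from
  # the tls provider instead of an external CA
  set {
    name  = "identityTrustAnchorsPEM"
    value = tls_self_signed_cert.linkerd_tls_cert.cert_pem
  }
}
```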

resource "kubernetes_cluster_role" "read_access" {}


resource "kubernetes_cluster_role_binding" "k8s_to_ad" {}


resource "azurerm_role_assignment" "ad_assignment" {}
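The RBAC resources might be connected like this (a sketch; the rule contents and the `var.reader_group_object_id` variable holding an Azure AD group object ID are hypothetical):

```hcl
resource "kubernetes_cluster_role" "read_access" {
  metadata {
    name = "read-access"
  }
  rule {
    api_groups = [""]
    resources  = ["pods", "services"]
    verbs      = ["get", "list", "watch"]
  }
}

resource "kubernetes_cluster_role_binding" "k8s_to_ad" {
  metadata {
    name = "read-access-ad-group"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role.read_access.metadata[0].name
  }
  subject {
    kind      = "Group"
    name      = var.reader_group_object_id # hypothetical: AD group object ID
    api_group = "rbac.authorization.k8s.io"
  }
}
```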


Populating the cluster



Deployment of microservices to cluster

Terraform providers: kubernetes, helm

resource "kubernetes_namespace" "service_ns" {}


resource "helm_release" "microservice" {}
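A microservice deployment might be sketched like this (the chart path and `var.image_tag` variable are illustrative assumptions):

```hcl
resource "kubernetes_namespace" "service_ns" {
  metadata {
    name = "my-service"
  }
}

resource "helm_release" "microservice" {
  name      = "my-service"
  namespace = kubernetes_namespace.service_ns.metadata[0].name
  chart     = "./charts/my-service" # hypothetical local chart path

  set {
    name  = "image.tag"
    value = var.image_tag # hypothetical: tag produced by CI
  }
}
```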

...the sky is the limit

Tips and Warnings



Separate infrastructure and configuration/population

Plan for growth from the beginning

Use Terraform's capabilities to reduce costs

Use terraform plan to visualize changes
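The typical workflow: write the plan to a file, review it, then apply exactly that plan so nothing changes between review and apply.

```shell
terraform init
terraform plan -out=terraform.tfplan
terraform apply terraform.tfplan
```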

Terraform will perform the following actions:

  # module.demoK8s.azurerm_kubernetes_cluster.cluster will be created
  + resource "azurerm_kubernetes_cluster" "cluster" {
      + dns_prefix         = "demo-cluster-dns"
      + fqdn               = (known after apply)
      + kubernetes_version = "1.22.6"
      + location           = "westeurope"
      + name               = "demo-cluster"
      + sku_tier           = "Free"
    }

Plan: 19 to add, 0 to change, 0 to destroy.

To perform exactly these actions, run the following command to apply:

terraform apply "terraform.tfplan"


Terraform plan - changes

Terraform has been successfully initialized!


Creating plan...


module.grantsK8s.azurerm_kubernetes_cluster.cluster: Refreshing state... [id=REDACTED]




No changes. Your infrastructure matches the configuration.


Terraform has compared your real infrastructure against your configuration

and found no differences, so no changes are needed.


Terraform plan - no changes




Changing Kubernetes APIs

Certain cluster changes require destruction/recreation

Unmaintained custom providers

Edge cases not yet covered by official providers




Full visibility into clusters and Cloud resources

Identical environments

Disaster recovery

Drift prevention / idempotency

More information

Azure Kubernetes Service




Terraform providers

Get in touch!


(And come visit us at the Capgemini stand)

Ask me

for a