# Using SSL/TLS certificates from Azure Key Vault in Kubernetes pods
## How to make Kubernetes pods trust internal HTTPS services
NOTE: This post builds upon my previous post *Accessing Azure Key Vault secrets from Kubernetes*, and assumes understanding of the subject discussed there.
A common task I face as a DevOps engineer is injecting TLS (formerly SSL) certificates into an application or service. Why would this be needed? There can be many reasons, although by far the most common one I've encountered involves internal TLS certificates.
Most medium-to-large enterprises use internal TLS certificates to authenticate internal connections. By internal, I mean certificates which have not been obtained from a well-known public Certificate Authority, but instead have been generated in-house, for use by internal applications. This model requires that any client or user attempting to connect have the organization's CA certificate installed, which makes the system trust certificates issued by that particular (internally created, and unique to the organization) Certificate Authority.
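To make the model concrete, an internal CA certificate of this kind can be generated locally with openssl. This is just an illustrative sketch; the file names and the Common Name are placeholders, not part of this post's setup:

```shell
# Generate a private key for the internal CA (placeholder file name)
openssl genrsa -out internal-ca.key 4096

# Create a self-signed CA certificate, valid for ten years.
# "Example Internal CA" is a placeholder Common Name.
openssl req -x509 -new -sha256 -days 3650 \
  -key internal-ca.key \
  -subj "/CN=Example Internal CA" \
  -out internal-ca.crt

# Server certificates signed with this key will only be trusted by
# clients that have internal-ca.crt installed in their trust store.
```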
For example, an organization might have an internal log sink/aggregator which accepts connections over HTTPS. If this sink is only available within the internal network, its TLS certificate will probably have been generated in-house. Now, imagine a microservice running in Kubernetes needs to send logs to this sink. How can we make the Kubernetes pod trust the internal Certificate Authority, so that connections to the log sink are properly secured?
Although there are probably a few different ways of achieving this result, this is one that I have used which has worked well for me, and does not require any additional helper tools/sidecar containers/etc.
To start off, the CA certificate to be installed in the microservice should be stored in an Azure Key Vault. For simplicity, I will assume that this certificate has been saved as a `secret`. This method should also work if it has been saved as a `certificate`, although the syntax might be different. Refer to the documentation for more information on how to reference the saved cert.
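For reference, storing a PEM file as a Key Vault secret can be done with the Azure CLI. The secret name below matches the one used in the manifests later in this post, while `$KEYVAULT_NAME` and the file name are placeholders:

```shell
# Upload the PEM-encoded CA certificate as a Key Vault secret.
# $KEYVAULT_NAME and internal-ca.crt are placeholders.
az keyvault secret set \
  --vault-name "$KEYVAULT_NAME" \
  --name internal-ca-certificate \
  --file internal-ca.crt
```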
Next, our Kubernetes cluster should already have the Kubernetes Secrets Store CSI Driver set up. For instructions on how to do that, check my previous post on the subject.
The certificate to be used should be in a format that our microservice understands. Since I am using Linux-based microservices, I need to make sure my cert is available as a PEM-encoded file with a `.crt` extension.
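If the cert you receive is DER-encoded rather than PEM, it can be converted with openssl. The file names here are placeholders, and the first command merely fabricates a throwaway DER certificate so the sketch is self-contained:

```shell
# Fabricate a throwaway DER-encoded certificate, purely for demonstration
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -days 365 \
  -subj "/CN=Demo Internal CA" -outform der -out internal-ca.der

# The actual conversion: DER -> PEM, saved with a .crt extension
# so that update-ca-certificates will pick the file up later
openssl x509 -inform der -in internal-ca.der -outform pem -out internal-ca.crt
```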
Finally, I am going to assume our microservice is based on some flavor of Debian. If it isn’t, the location to mount the certificate or the command to be run might be slightly different. Refer to your distribution’s docs for specific instructions on how to update the local certificate store.
## Querying the certificate
The certificate can be queried in the same way as any other key vault object. One thing to notice is that we do not create a Kubernetes secret from the Azure secret (notice the missing `secretObjects` section):
```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-secrets
spec:
  provider: azure
  parameters:
    keyvaultName: $KEYVAULT_NAME
    tenantId: $SERVICE_PRINCIPAL_TENANT_ID
    # Name of the secret containing the certificate
    objects: |
      array:
        - |
          objectName: internal-ca-certificate
          objectType: secret
```
The secret should now be available for use in our cluster.
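Assuming the manifest above was saved to a file (the file name is arbitrary), it can be applied and checked like any other resource:

```shell
# Apply the SecretProviderClass (the file name is arbitrary)
kubectl apply -f secret-provider-class.yaml

# Confirm the resource exists in the cluster
kubectl get secretproviderclass azure-secrets
```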
## Mounting the certificate in our microservice
With the cert now available, we can use the `volumes` functionality in Kubernetes to mount it in our pod. First, we need to declare our secrets provider as an eligible volume:
```yaml
volumes:
  # Can be anything
  - name: secrets-provider
    csi:
      driver: secrets-store.csi.k8s.io
      volumeAttributes:
        # Should match the name of the SecretProviderClass
        secretProviderClass: azure-secrets
      # Credentials for authenticating to the key vault,
      # see previous post
      nodePublishSecretRef:
        name: $SERVICE_PRINCIPAL_CREDENTIALS
```
With that out of the way, we should then mount the secret as a file in our pod. This uses a little trick found in the `volumeMounts` functionality of Kubernetes, where a single file can be mounted into a directory, instead of mounting on top of the directory and overriding its contents. To achieve this, we use the full path of the mounted file as the `mountPath`, and use the `subPath` field to indicate the specific file in the volume we wish to mount. In this case, the `subPath` should match the name of the secret we are querying with our `SecretProviderClass`:
```yaml
volumeMounts:
  # Should match the name of the volume
  - name: secrets-provider
    # Full path of the mounted file.
    # For Debian-based images, it should be
    # inside the /usr/local/share/ca-certificates/
    # folder, since that is where the system's
    # CA certificates are stored
    mountPath: "/usr/local/share/ca-certificates/internal-ca.crt"
    # Name of the secret containing the certificate
    subPath: "internal-ca-certificate"
    readOnly: true
```
With this, the certificate will be available as a file in our pod. However, most Linux-based systems do not just use whatever files are in that folder at any given moment. Instead, the system needs to be told to update the local certificate store, which is built from whatever files are in that directory. We will do that in the next step.
## Updating the certificate store
To update the microservice's certificate store, we use the `update-ca-certificates` command. To make sure that our new cert is available for our service from the moment it starts up, we can run this command as part of a `spec.containers[].lifecycle.postStart` instruction. PostStart events are sent immediately after a container is started, which means that our command will be run as soon as possible (though note that Kubernetes does not guarantee the hook executes before the container's entrypoint). Additionally, since volume mounts are performed before startup, we can be sure that our cert will be ready to be included in the local certificate store:
```yaml
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "-c", "update-ca-certificates"]
```
This is the last piece of the puzzle. Putting it all together, our pod deployment should look like this:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-pod
spec:
  volumes:
    - name: secrets-provider
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: azure-secrets
        nodePublishSecretRef:
          name: $SERVICE_PRINCIPAL_CREDENTIALS
  containers:
    - name: app-with-certificate
      image: ubuntu
      command: ["sleep", "300"]
      volumeMounts:
        - name: secrets-provider
          mountPath: "/usr/local/share/ca-certificates/internal-ca.crt"
          subPath: "internal-ca-certificate"
          readOnly: true
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "update-ca-certificates"]
```
At this point, our service should be able to perform HTTPS calls to any other internal service whose certificate was issued by the same internal CA.
To verify that our certificate is indeed working, we can exec into our pod:
```shell
kubectl exec --stdin --tty app-with-certificate -- /bin/bash
```
Once inside, we can try `curl`ing a known internal service:
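For example, assuming a hypothetical internal endpoint at `https://logs.internal.example` (substitute a real HTTPS service from your own network):

```shell
# logs.internal.example is a hypothetical hostname; replace it with a
# real internal service whose certificate was signed by your internal CA.
# -v shows the TLS handshake, including certificate verification.
curl -v https://logs.internal.example/
```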
If the CA certificate has been set up correctly, `curl` should be able to successfully connect to the HTTPS service without complaining about insecure certificates.