AKS and Azure Key Vault - your options and decision making

Running a containerized application often means variables and secrets need to be provided to the container at runtime. With Azure Kubernetes Service, Azure Key Vault is the obvious place to keep those secrets. But the decision-making process is not that easy. The question is: which AKS and Azure Key Vault configuration will we use? There's always a best practice, but let's forget about that for a moment. Your solution may not be ready for that best practice, and maybe a different approach suits your needs better. What we need to do is look for the best possible practice.
We are going to simplify the decision-making process by walking through several options based on a use case. Technically there are more possibilities, and we can go above and beyond and build the most epic and overly complex configurations. No worries, we can still do that, but for this post we will focus on the different technologies available to us and how the customer requirements impact our options.
Use cases
Quite often you will read or hear about the epic best practice you need to implement when using technology X. The same goes for using Azure Kubernetes Service and Azure Key Vault. And that best practice may even be different depending on who you are talking to. A common recommendation, for example, is to use the Microsoft Authentication Library (MSAL), and in combination with Workload Identity on AKS that is a very solid recommendation.
Unfortunately not every application can use such technologies out of the box. Unfortunate but fortunate: imagine if everyone could use every documented best practice as is, many of us would be out of a job 😄
The fact is that a lot of solutions are not built for AKS from scratch but are modernized, transformed, and refactored from another platform. Some applications have been around for decades.
Implementing a feature such as the Microsoft Authentication Library might even take significant development effort that you or the customer currently does not want to spend. This can be for a number of reasons, but in the end it comes down to money and prioritization. Does that then mean we shouldn't deploy to AKS and leverage technologies such as Azure Key Vault? Definitely not. In fact, we can just pick a different configuration. The best practice can be the end goal, and we can prioritize it when we are ready for it.
Scenario
Let's use the following scenario for our decision making process:
We have a customer with a Python application. The application can be containerized with minimum effort. The application requires a variable that is currently passed as an environment variable. On the original platform (a virtual machine), this variable contained a secret/password that was leveraged by the application. To mimic this behavior we have the following proof of concept code:
import os

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/')
def environment_variable():
    # Fetch the value of the specific environment variable
    variable_value = os.getenv("variable_from_keyvault", "Environment variable not set")

    # Return the value as JSON
    return jsonify({"variable_from_keyvault": variable_value})

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=80)
The proof of concept code retrieves a variable from the environment (os.getenv) and, for debugging purposes, returns it to the user.
The customer has stated they can make changes to the application in the future but currently the priority is to get the application containerized and up and running as soon as possible.
We want to use AKS and Azure Key Vault. We know that we will use the Secrets Store CSI Driver and that comes with a number of options. Let's take a look at those options.
Option 1: Workload Identity and MSAL
We can leverage Workload Identity as discussed earlier, but this will require the customer to implement the Microsoft Authentication Library. For me, connecting to Azure resources directly from the solution itself is always the best way to go, as it eliminates all the intermediate technologies that we normally introduce at the infrastructure level. However, it takes a bit more effort. The customer also stated that the priority currently is to get everything up and running as soon as possible. As we don't know how much development effort is required on the customer's end to implement the library, we will put this on the future roadmap as a best practice to pursue.
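To give an idea of what that roadmap item could look like, here is a minimal sketch of the proof of concept app fetching the secret straight from Azure Key Vault. It assumes the azure-identity and azure-keyvault-secrets Python packages (azure-identity uses MSAL under the hood) and a hypothetical KEYVAULT_URL environment variable holding the vault URL; with Workload Identity configured on the cluster and service account, DefaultAzureCredential picks up the federated credentials automatically.

import os

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical environment variable with the vault URL,
# e.g. https://kv-aks-cloudadventures.vault.azure.net
vault_url = os.getenv("KEYVAULT_URL")

# With Workload Identity enabled on AKS, DefaultAzureCredential authenticates
# using the federated service account token, no secrets in the container
credential = DefaultAzureCredential()
client = SecretClient(vault_url=vault_url, credential=credential)

@app.route('/')
def secret_from_keyvault():
    # Fetch the secret directly from Azure Key Vault at request time
    secret_value = client.get_secret("mybigsecret").value
    return jsonify({"variable_from_keyvault": secret_value})

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=80)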
Option 2: Mount Secrets into the container
Using the Secrets Store CSI Driver, configuring the volumes, and configuring the Secret Provider Class accordingly, we can mount secrets into the container. We can do this in one of two ways: we can leverage the Service Connector (currently in preview) or we can manually configure the add-ons and permissions.
In our example we have a secret named "mybigsecret" in Azure Key Vault.

Using the SecretProviderClass we tell the driver which secrets it should pull from the Key Vault when required.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-cloudadventures-kv-user-msi
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"
    userAssignedIdentityID: <REDACTED>
    keyvaultName: kv-aks-cloudadventures
    cloudName: ""
    objects: |
      array:
        - |
          objectName: mybigsecret
          objectType: secret
          objectVersion: ""
    tenantId: <REDACTED>
We can then reference the Secret Provider Class, define the volume and mount the secrets into the pod like so:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      serviceAccountName: sa-keyvault-aks-cloudadventures
      containers:
        - name: demo-container
          image: ghcr.io/whaakman/python-variables:v1.0
          ports:
            - containerPort: 80
          volumeMounts:
            - name: secrets-store01-inline
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: secrets-store01-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: "azure-cloudadventures-kv-user-msi"
Upon starting the container, the secret is pulled from Azure Key Vault and mounted into the container inside the path /mnt/secrets-store. That results in the presence of a file called "mybigsecret" inside /mnt/secrets-store.
A quick inspection confirms that:

Okay cool. Small problem though: the application is looking for an environment variable and not a file. So even though this works, it still requires the customer to modify the application or the container to read the file and provide its contents to the application.
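For completeness, a minimal sketch of what that modification could look like in the proof of concept app, assuming the secret is mounted at /mnt/secrets-store/mybigsecret as configured above:

from flask import Flask, jsonify

app = Flask(__name__)

# Path where the Secrets Store CSI Driver mounts the secret (see the volumeMounts above)
SECRET_PATH = "/mnt/secrets-store/mybigsecret"

@app.route('/')
def secret_from_file():
    try:
        # Read the secret from the mounted file instead of an environment variable
        with open(SECRET_PATH) as secret_file:
            variable_value = secret_file.read().strip()
    except FileNotFoundError:
        variable_value = "Secret file not found"
    return jsonify({"variable_from_keyvault": variable_value})

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=80)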
It's definitely possible but let's see if there's an alternative.
Option 3: Create Kubernetes Secrets from Azure Key Vault secrets
As a third option, we can actually create Kubernetes secrets using the Secrets Store CSI Driver. It's definitely not the best practice, as we will have a copy of our secrets inside Kubernetes, whereas the other two options either mount the secret into the container or pull the secret directly from Azure Key Vault. However, this does allow us to modify our deployment configuration to provide an environment variable with the contents of our secret.
First we need to tell our SecretProviderClass that it will create secretObjects containing our secret value from Azure Key Vault:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-cloudadventures-kv-user-msi
spec:
  provider: azure
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"
    userAssignedIdentityID: <REDACTED>
    keyvaultName: kv-aks-cloudadventures
    cloudName: ""
    objects: |
      array:
        - |
          objectName: mybigsecret
          objectType: secret
          objectVersion: ""
    tenantId: <REDACTED>
  secretObjects:
    - secretName: mybigsecret
      type: Opaque
      data:
        - objectName: mybigsecret
          key: mybigsecret
After that we can add the environment variable to our container, referring to the secret we defined in our secretObjects.
env:
  - name: variable_from_keyvault
    valueFrom:
      secretKeyRef:
        name: mybigsecret
        key: mybigsecret
When we deploy our pod, a Kubernetes secret named "mybigsecret" should be created:

Now that we have provided the configuration to set the value of variable_from_keyvault to match mybigsecret, we are fulfilling the requirement of the application:
variable_value = os.getenv("variable_from_keyvault", "Environment variable not set")
When visiting our debug page we can see the value is now set:

We have just successfully provided an environment variable to our customer's application without requiring any code changes. Not using a best practice, but using the best possible practice!
Wrapping up
What we have done is look at different options for providing Azure Key Vault secrets to our containerized application in Azure Kubernetes Service. Our "customer" requirement was minimum effort, and option three (Option 3: Create Kubernetes Secrets from Azure Key Vault secrets) looks to meet that requirement. However, this is not the best practice that we would normally aim for. In fact, we are introducing an additional copy of a secret as a Kubernetes secret. Definitely not ideal. But for this example it is the best possible practice. We simply cannot expect a customer to switch gears, change all current backlog priorities, and refactor parts of code they had not anticipated. In fact, we can be of value by introducing pieces of infrastructure to prevent code changes in the short term and by providing valuable recommendations for the solution roadmap in the long term.