Authenticate to Azure Resources with Azure Managed Identities

In this post, we will take a look at managed identities in general and system-assigned managed identity in particular. Managed identities can be used by your code to authenticate to Azure AD-protected resources from Azure compute services that support them, such as virtual machines and containers.

But first, let’s look at the other option and why you should avoid it if you can: service principals.

Service Principals

If you have code that needs to authenticate to Azure AD-protected resources such as Azure Key Vault, you can always create a service principal. It’s the option that always works, but it comes with some caveats that are explained further in this post.

The easiest way to create a service principal is with the single Azure CLI command below:

az ad sp create-for-rbac

The command results in the following output:

{
  "appId": "APP_ID",
  "displayName": "azure-cli-2023-01-06-11-18-45",
  "password": "PASSWORD",
  "tenant": "TENANT_ID"
}

If the service principal needs access to, let’s say, Azure Key Vault, you could use the following command to grant that access:

APP_ID="appId from output above"
SUBSCRIPTION_ID="your subscription id"
RESOURCE_GROUP="your resource group"
KEYVAULT_NAME="short name of your key vault"

az role assignment create --assignee $APP_ID \
  --role "Key Vault Secrets User" \
  --scope "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.KeyVault/vaults/$KEYVAULT_NAME"

The next step is to configure your application to use the service principal and its secret to obtain an Azure AD token (or credential) that can be passed to Azure Key Vault to retrieve secrets or keys. That means you need to find a secure way to store the service principal secret with your application, which is something you want to avoid.

In a Python app, you can use the ClientSecretCredential class and pass your Azure tenant id, the service principal appId (or client id) and the secret. You can then use the credential with a SecretClient like in the snippet below.

from azure.identity import ClientSecretCredential
from azure.keyvault.secrets import SecretClient

# Create a credential object
credential = ClientSecretCredential(tenant_id, client_id, client_secret)

# Create a SecretClient using the credential
client = SecretClient(vault_url=VAULT_URL, credential=credential)

Other languages and frameworks, such as JavaScript and C#, have similar libraries to reach the same result.

This is quite easy to do but again, where do you store the service principal’s secret securely?

The command az ad sp create-for-rbac also creates an App Registration (and Enterprise application) in Azure AD:

Azure AD App Registration

The secret (or password) for our service principal is partly displayed above. As you can see, it expires a year from now (blog post written on January 6th, 2023). You will need to update the secret and your application when that time comes, preferably before that. We all know what expiring secrets and certificates give us: an app that’s not working because we forgot to update the secret or certificate!

💡 Note that one year is the default. You can set the number of years with the --years parameter in az ad sp create-for-rbac.

💡 There will always be cases where managed identities are not supported such as connecting 3rd party systems to Azure. However, it should be clear that whenever managed identity is supported, use it to provide your app with the credentials it needs.

In what follows, we will explain managed identities in general, and system-assigned managed identity in particular. Another blog post will discuss user-assigned managed identity.

Managed Identities Explained

Azure Managed Identities allow you to authenticate to Azure resources without the need to store credentials or secrets in your code or configuration files.

There are two types of Managed Identities:

  • system-assigned
  • user-assigned

System-assigned Managed Identities are tied to a specific Azure resource, such as a virtual machine or Azure Container App. When you enable a system-assigned identity for a resource, Azure creates a corresponding identity in the Azure Active Directory (AD) for that resource, similar to what you have seen above. This identity can be used to authenticate to any service that supports Azure AD authentication. The lifecycle of a system-assigned identity is tied to the lifecycle of the Azure resource. When the resource is deleted, the corresponding identity is also deleted. Via a special token endpoint, your code can request an access token for the resource it wants to access.

User-assigned Managed Identities, on the other hand, are standalone identities that can be associated with one or more Azure resources. This allows you to use the same identity across multiple resources and manage the identity’s lifecycle independently from the resources it is associated with. In your code, you can request an access token via the same special token endpoint. You will have to specify the appId (client id) of the user-assigned managed identity when you request the token because multiple identities could be assigned to your Azure resource.

In summary, system-assigned Managed Identities are tied to a specific resource and are deleted when the resource is deleted, while user-assigned Managed Identities are standalone identities that can be associated with multiple resources and have a separate lifecycle.
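
To make the special token endpoint concrete, here is a minimal Python sketch of such a request from inside an Azure VM, using only the requests library. This is an illustration of the IMDS endpoint described above, not code from this post’s samples; get_token is a hypothetical helper, and the client_id parameter is only needed for user-assigned identities.

import requests

IMDS_TOKEN_URL = "http://169.254.169.254/metadata/identity/oauth2/token"

def get_token(resource, client_id=None):
    # The Metadata header is mandatory or IMDS rejects the request
    params = {"api-version": "2018-02-01", "resource": resource}
    if client_id:
        # Only needed for user-assigned identities, to select one of
        # possibly multiple identities assigned to the resource
        params["client_id"] = client_id
    resp = requests.get(IMDS_TOKEN_URL, params=params, headers={"Metadata": "true"})
    resp.raise_for_status()
    return resp.json()["access_token"]

# System-assigned identity: no client_id required
token = get_token("https://vault.azure.net")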

System-assigned managed identity

Virtual machines support system and user-assigned managed identity and make it easy to demonstrate some of the internals.

Let’s create a Linux virtual machine and enable a system-assigned managed identity. You will need an Azure subscription and must be logged on with the Azure CLI. I use a Linux virtual machine here to demonstrate how it works with bash. Remember that this also works on Windows VMs and many other Azure resources such as App Services, Container Apps, and more.

Run the code below. Adapt the variables for your environment.

RG="rg-mi"
LOCATION="westeurope"
PASSWORD="oE2@pl9hwmtM"

az group create --name $RG --location $LOCATION

az vm create \
  --name vm-demo \
  --resource-group $RG \
  --image UbuntuLTS \
  --size Standard_B1s \
  --admin-username azureuser \
  --admin-password $PASSWORD \
  --assign-identity


After the creation of the resource group and virtual machine, the portal shows the system assigned managed identity in the virtual machine’s Identity section:

System assigned managed identity

We can now run some code on the virtual machine to obtain an Azure AD token for this identity that allows access to a Key Vault. Key Vault is just an example here.

We will first need to create a Key Vault and a secret. After that we will grant the managed identity access to this Key Vault. Run these commands on your own machine, not the virtual machine you just created:

# generate a somewhat random name for the key vault
KVNAME=kvdemo$RANDOM

# create with vault access policy which grants creator full access
az keyvault create --name $KVNAME --resource-group $RG

# with full access, current user can create a secret
az keyvault secret set --vault-name $KVNAME --name mysecret --value "TOPSECRET"

# show the secret; should reveal TOPSECRET
az keyvault secret show --vault-name $KVNAME --name mysecret

# switch the Key Vault to AAD authentication
az keyvault update --name $KVNAME --enable-rbac-authorization

Now we can grant the system assigned managed identity access to Key Vault via Azure RBAC. Let’s look at the identity with the command below:

az vm identity show --resource-group $RG --name vm-demo

This returns the information below. Note that principalId was also visible in the portal as Object (principal) ID. Yes, not confusing at all… 🤷‍♂️

{
  "principalId": "YOUR_PRINCIPAL_ID",
  "tenantId": "YOUR_TENANT_ID",
  "type": "SystemAssigned",
  "userAssignedIdentities": null
}

Now assign the Key Vault Secrets User role to this identity:

PRI_ID="principal ID above"
SUB_ID="Azure subscription ID"

# below, scope is the Azure Id of the Key Vault 

az role assignment create --assignee $PRI_ID \
  --role "Key Vault Secrets User" \
  --scope "/subscriptions/$SUB_ID/resourceGroups/$RG/providers/Microsoft.KeyVault/vaults/$KVNAME"

If you check the Key Vault in the portal, in IAM, you should see:

System assigned identity of VM has Secrets User role

Now we can run some code on the VM to obtain an Azure AD token to read the secret from Key Vault. SSH into the virtual machine using its public IP address with ssh azureuser@IPADDRESS. Next, use the commands below:

# install jq on the vm for better formatting; you will be asked for your password
sudo snap install jq

curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net' -H Metadata:true | jq

It might look weird, but by sending that curl request to the special IP address from within the VM, you are actually requesting an access token for Key Vault (the resource parameter could also specify another type of resource). There’s more to know about this special IP address and the other services it provides. Check Microsoft Learn for more information.

The result of the curl command is the JSON below (nicely formatted with jq):

{
  "access_token": "ACCESS_TOKEN",
  "client_id": "CLIENT_ID",
  "expires_in": "86038",
  "expires_on": "1673095093",
  "ext_expires_in": "86399",
  "not_before": "1673008393",
  "resource": "https://vault.azure.net",
  "token_type": "Bearer"
}

Note that you did not need any secret to obtain the token. Great!

Now run the following code but first replace <YOUR VAULT NAME> with the short name of your Key Vault:

# build full URL to your Key Vault
VAULTURL="https://<YOUR VAULT NAME>.vault.azure.net"

ACCESS_TOKEN=$(curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net' -H Metadata:true | jq -r .access_token)

curl -s "$VAULTURL/secrets/mysecret?api-version=2016-10-01" -H "Authorization: Bearer $ACCESS_TOKEN" | jq -r .value

First, we set the vault URL to the full URL including https://. Next, we retrieve the full JSON token response but use jq to grab only the access token. The -r option strips the quotes from the response. Finally, we use the Azure Key Vault REST API to read the secret, passing the access token for authorization. The result should be TOPSECRET! 😀

Instead of this raw curl code, which is great for understanding how it works under the hood, you can use Microsoft’s identity libraries for many popular languages. For example in Python:

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Authenticate using a system-assigned managed identity
credential = DefaultAzureCredential()

# Create a SecretClient using the credential and the key vault URL
secret_client = SecretClient(vault_url="https://YOURKVNAME.vault.azure.net", credential=credential)

# Retrieve the secret
secret = secret_client.get_secret("mysecret")

# Print the value of the secret
print(secret.value)

If you are somewhat used to Python, you know you will need to install azure-identity and azure-keyvault-secrets with pip. The DefaultAzureCredential class used in the code automatically works with system managed identity in virtual machines but also other compute such as Azure Container Apps. The capabilities of this class are well explained in the docs: https://learn.microsoft.com/en-us/python/api/overview/azure/identity-readme?view=azure-python. The identity libraries for other languages work similarly.
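
If you prefer to target managed identity directly instead of relying on the DefaultAzureCredential chain, the same azure-identity library also offers ManagedIdentityCredential. A minimal sketch:

from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# Skip the credential chain and use managed identity directly;
# pass client_id=... here when using a user-assigned identity
credential = ManagedIdentityCredential()

secret_client = SecretClient(vault_url="https://YOURKVNAME.vault.azure.net", credential=credential)
print(secret_client.get_secret("mysecret").value)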

What about Azure Arc-enabled servers?

Azure Arc-enabled servers also have a managed identity. It is used to update the properties of the Azure Arc resource in the portal. You can grant this identity access to other Azure resources such as Key Vault and then grab the token in a similar way. Similar but not quite identical. The code with curl looks like this (from the docs):

ChallengeTokenPath=$(curl -s -D - -H Metadata:true "http://127.0.0.1:40342/metadata/identity/oauth2/token?api-version=2019-11-01&resource=https%3A%2F%2Fvault.azure.net" | grep Www-Authenticate | cut -d "=" -f 2 | tr -d "[:cntrl:]")

ChallengeToken=$(cat $ChallengeTokenPath)

if [ $? -ne 0 ]; then
    echo "Could not retrieve challenge token, double check that this command is run with root privileges."
else
    curl -s -H Metadata:true -H "Authorization: Basic $ChallengeToken" "http://127.0.0.1:40342/metadata/identity/oauth2/token?api-version=2019-11-01&resource=https%3A%2F%2Fvault.azure.net"
fi

On an Azure Arc-enabled machine that runs on-premises or in other clouds, the special IP address 169.254.169.254 is not available. Instead, the token request is sent to http://localhost:40342. The call is designed to fail and respond with a Www-Authenticate header that contains the path to a file on the machine (created dynamically). Only specific users and groups on the machine are allowed to read the contents of that file. This step was added for extra security so that not every process can read the contents of this file.

The second command retrieves the contents of the file and uses it for basic authentication purposes in the second curl request. It’s the second curl request that will return the access token.

Note that this works for both Linux and Windows Azure Arc-enabled systems. It is further explained here: https://learn.microsoft.com/en-us/azure/azure-arc/servers/managed-identity-authentication.

In contrast with managed identity on Azure compute, I am not aware of support for Azure Arc in the Microsoft identity libraries. To obtain a token with Python, check the following gist with some sample code: https://gist.github.com/gbaeke/343b14305e468aa433fe90441da0cabd.
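
If you want to do the same from Python without curl, below is a minimal sketch of the challenge-token flow with the requests library. It mirrors the bash commands above and assumes the process has the privileges required to read the challenge file:

import requests

ARC_TOKEN_URL = "http://127.0.0.1:40342/metadata/identity/oauth2/token"
PARAMS = {"api-version": "2019-11-01", "resource": "https://vault.azure.net"}

# First call is designed to fail; the Www-Authenticate header points to the
# dynamically created challenge token file on the local machine
resp = requests.get(ARC_TOKEN_URL, params=PARAMS, headers={"Metadata": "true"})
challenge_path = resp.headers["Www-Authenticate"].split("=", 1)[1]

# Read the challenge token (requires membership of the allowed users/groups)
with open(challenge_path) as f:
    challenge = f.read()

# Second call presents the challenge token and returns the actual access token
resp = requests.get(ARC_TOKEN_URL, params=PARAMS,
                    headers={"Metadata": "true", "Authorization": f"Basic {challenge}"})
print(resp.json()["access_token"])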

The great thing about this is that managed identity can work on servers that are not in Azure, as long as you enable Azure Arc on them! 🎉

Conclusion

In this post, we looked at what managed identities are and zoomed in on system-assigned managed identity. Azure Managed Identities are a secure and convenient way to authenticate to Azure resources without having to store credentials in code or configuration files. Whenever you can, use managed identity instead of service principals. And as you have seen, it even works with compute that’s not in Azure, such as Azure Arc-enabled servers.

Stay tuned for the next post about user-assigned managed identity.

AKS Workload Identity Revisited

A while ago, I blogged about Workload Identity. Since then, Microsoft simplified the configuration steps and enabled Managed Identity, in addition to app registrations.

But first, let’s take a step back. Why do you need something like workload identity in the first place? Take a look at the diagram below.

Workloads (deployed in a container or not) often need to access Azure AD protected resources. In the diagram, the workload in the container wants to read secrets from Azure Key Vault. The recommended option is to use managed identity and grant that identity the required role in Azure Key Vault. Now your code just needs to obtain credentials for that managed identity.

In Kubernetes, that last part presents a challenge. There needs to be a mechanism to map such a managed identity to a pod and allow code in the container to obtain an Azure AD authentication token. The Azure AD Pod Identity project was a way to solve this but as of 24/10/2022, AAD Pod Identity is deprecated. It is now replaced by Workload Identity. It integrates with native Kubernetes capabilities to federate with external identity providers such as Azure AD. It has the following advantages:

  • Not an AKS feature, it’s a Kubernetes feature (other cloud, on-premises, edge); similar functionality exists for GKE for instance
  • Scales better than AAD Pod Identity
  • No need for custom resource definitions
  • No need to run pods that intercept IMDS (instance metadata service) traffic; instead, there are webhook pods that run when pods are created/updated

If the above does not make much sense, check https://learn.microsoft.com/en-us/azure/aks/use-azure-ad-pod-identity. But don’t use it OK? 😉

At a basic level, Workload Identity works as follows:

  • Your AKS cluster is configured to issue tokens. Via an OIDC (OpenID Connect) discovery document, published by AKS, Azure AD can validate the tokens it receives from the cluster.
  • A Kubernetes service account is created and properly annotated and labeled. Pods are configured to use the service account via the serviceAccount field.
  • The Azure Managed Identity is configured with Federated credentials. The federated credential contains a link to the OIDC discovery document (Cluster Issuer URL) and configures the namespace and service account used by the Kubernetes pod. That generates a subject identifier like system:serviceaccount:namespace_name:service_account_name.
  • Tokens can now be generated for the configured service account and swapped for an Azure AD token that can be picked up by your workload.
  • A Kubernetes mutating webhook is the glue that makes all of this work. It ensures the token is mapped to a file in your container and sets needed environment variables.
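
In code, recent versions of the azure-identity library can perform that token exchange for you via WorkloadIdentityCredential. A minimal Python sketch, assuming a library version that includes this class and that the mutating webhook mentioned above has set the environment variables:

import os
from azure.identity import WorkloadIdentityCredential

# Exchange the projected Kubernetes token for an Azure AD token
credential = WorkloadIdentityCredential(
    tenant_id=os.environ["AZURE_TENANT_ID"],
    client_id=os.environ["AZURE_CLIENT_ID"],
    token_file_path=os.environ["AZURE_FEDERATED_TOKEN_FILE"],
)
token = credential.get_token("https://vault.azure.net/.default")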

Creating a cluster with OIDC and Workload Identity

Create a basic cluster with one worker node and both features enabled. You need an Azure subscription and the Azure CLI. Ensure the prerequisites are met and that you are logged in with az login. Run the following in a Linux shell:

RG=your_resource_group
CLUSTER=your_cluster_name

az aks create -g $RG -n $CLUSTER --node-count 1 --enable-oidc-issuer \
  --enable-workload-identity --generate-ssh-keys

After deployment, find the OIDC Issuer URL with:

export AKS_OIDC_ISSUER="$(az aks show -n $CLUSTER -g $RG --query "oidcIssuerProfile.issuerUrl" -otsv)"

When you add /.well-known/openid-configuration to that URL, you will see something like:

OIDC discovery document

The field jwks_uri contains a link to key information, used by AAD to verify the tokens issued by Kubernetes.

In earlier versions of Workload Identity, you had to install a mutating admission webhook to project the Kubernetes token to a volume in your workload. In addition, the webhook also injected several environment variables:

  • AZURE_CLIENT_ID: client ID of an AAD application or user-assigned managed identity
  • AZURE_TENANT_ID: tenant ID of Azure subscription
  • AZURE_FEDERATED_TOKEN_FILE: the path to the federated token file; you can do cat $AZURE_FEDERATED_TOKEN_FILE to see the token. Note that this is the token issued by Kubernetes, not the exchanged AAD token (exchanging the token happens in your code). The token is a jwt. You can use https://jwt.io to examine it:
Decoded jwt issued by Kubernetes

But I am digressing… In the current implementation, you do not have to install the mutating webhook yourself. When you enable workload identity with the CLI, the webhook is installed automatically. In kube-system, you will find pods starting with azure-wi-webhook-controller-manager. The webhook kicks in whenever you create or update a pod. The end result is the same. You get the projected token + the environment variables.

Creating a service account

Ok, now we have a cluster with OIDC and workload identity enabled. We know how to retrieve the issuer URL and we learned we do not have to install anything else to make this work.

You will have to configure the pods you want a token for. Not every pod has containers that need to authenticate to Azure AD. To configure your pods, you first create a Kubernetes service account. This is a standard service account. To learn about service accounts, check my YouTube video.

apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    azure.workload.identity/client-id: CLIENT ID OF MANAGED IDENTITY
  labels:
    azure.workload.identity/use: "true"
  name: sademo
  namespace: default

The label ensures that the mutating webhook will do its thing when a pod uses this service account. We also indicate the managed identity we want a token for by specifying its client ID in the annotation.

Note: you need to create the managed identity yourself and grab its client id. Use the following commands:

RG=your_resource_group
IDENTITY=your_chosen_identity_name
LOCATION=your_azure_location # e.g. westeurope

export SUBSCRIPTION_ID="$(az account show --query "id" -otsv)"

az identity create --name $IDENTITY --resource-group $RG \
  --location $LOCATION --subscription $SUBSCRIPTION_ID

export USER_ASSIGNED_CLIENT_ID="$(az identity show -n $IDENTITY -g $RG --query "clientId" -otsv)"

echo $USER_ASSIGNED_CLIENT_ID

The last command prints the id to use in the service account azure.workload.identity/client-id annotation.

Creating a pod that uses the service account

Let’s create a deployment that deploys pods with an Azure CLI image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: azcli-deployment
  namespace: default
  labels:
    app: azcli
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azcli
  template:
    metadata:
      labels:
        app: azcli
    spec:
      # needs to refer to service account used with federation
      serviceAccount: sademo
      containers:
        - name: azcli
          image: mcr.microsoft.com/azure-cli:latest
          command:
            - "/bin/bash"
            - "-c"
            - "sleep infinity"

Above, the important line is serviceAccount: sademo. When the pod is created or modified, the mutating webhook will check the service account and its annotations. If it is configured for workload identity, the webhook will do its thing: projecting the Kubernetes token file and setting the environment variables:

The webhook did its work 😉

How to verify it works?

We can use the Azure CLI support for federated tokens as follows:

az login --federated-token "$(cat $AZURE_FEDERATED_TOKEN_FILE)" \
--service-principal -u $AZURE_CLIENT_ID -t $AZURE_TENANT_ID

After running the command, the error below appears:

Oh no…

Clearly, something is wrong, and indeed there is: we forgot to configure the managed identity for federation. In other words, when we present our Kubernetes token, Azure AD needs information to validate it and return an AAD token.

Use the following command to create a federated credential on the user-assigned managed identity you created earlier:

RG=your_resource_group
IDENTITY=your_chosen_identity_name
AKS_OIDC_ISSUER=your_oidc_issuer
SANAME=sademo

az identity federated-credential create --name fic-sademo \
  --identity-name $IDENTITY \
  --resource-group $RG --issuer ${AKS_OIDC_ISSUER} \
  --subject system:serviceaccount:default:$SANAME

After running the above command, the Azure Managed Identity has the following configuration:

Federated credentials on the Managed Identity

More than one credential is possible. Click on the name of the federated credential. You will see:

Details of the federated credential

Above, the OIDC Issuer URL is set to point to our cluster. We expect a token with a subject identifier (sub) of system:serviceaccount:default:sademo. You can check the decoded jwt earlier in this post to see that the sub field in the token issued by Kubernetes matches the one above. It needs to match or the process will fail.
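
To verify the sub claim yourself, you can decode the projected token inside the pod without any extra libraries. A quick Python sketch (inspection only, no signature verification):

import base64, json, os

# Read the Kubernetes-issued token that the webhook projected into the pod
with open(os.environ["AZURE_FEDERATED_TOKEN_FILE"]) as f:
    token = f.read()

# Decode the payload part of the jwt; restore the stripped base64 padding first
payload = token.split(".")[1]
payload += "=" * (-len(payload) % 4)
claims = json.loads(base64.urlsafe_b64decode(payload))

print(claims["sub"])  # expect system:serviceaccount:default:sademo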

Now we can run the command again:

az login --federated-token "$(cat $AZURE_FEDERATED_TOKEN_FILE)" \
--service-principal -u $AZURE_CLIENT_ID -t $AZURE_TENANT_ID

You will be logged in to the Azure CLI with the managed identity credentials:

But what about your own apps?

Above, we used the Azure CLI. The most recent versions (>= 2.30.0) support federated credentials and use MSAL. But what about your custom code?

The code below is written in Python and uses the Python Azure identity client library with DefaultAzureCredential. It was originally written for managed identity in Azure Container Apps or Azure App Service and was not modified for workload identity. Here’s the code for reference:

import threading
import os
import logging
import time
import signal
from azure.keyvault.secrets import SecretClient
from azure.identity import DefaultAzureCredential

from azure.appconfiguration.provider import (
    AzureAppConfigurationProvider,
    SettingSelector,
    AzureAppConfigurationKeyVaultOptions
)

logging.basicConfig(encoding='utf-8', level=logging.WARNING)

def get_config(endpoint):
  selects = {SettingSelector(key_filter=f"myapp:*", label_filter="prd")}
  trimmed_key_prefixes = {f"myapp:"}
  key_vault_options = AzureAppConfigurationKeyVaultOptions(secret_resolver=retrieve_secret)
  app_config = {}
  try:
    app_config = AzureAppConfigurationProvider.load(
            endpoint=endpoint, credential=CREDENTIAL, selects=selects, key_vault_options=key_vault_options, 
            trimmed_key_prefixes=trimmed_key_prefixes)
  except Exception as ex:
    logging.error(f"error loading app config: {ex}")

  return app_config

def run():
    try:
      global CREDENTIAL 
      CREDENTIAL = DefaultAzureCredential(exclude_visual_studio_code_credential=True)
    except Exception as ex:
      logging.error(f"error setting credentials: {ex}")

    endpoint = os.getenv('AZURE_APPCONFIGURATION_ENDPOINT')

    if not endpoint:
        logging.error("Environment variable 'AZURE_APPCONFIGURATION_ENDPOINT' not set")

    app_config =  {}
    while True:
        if not app_config:
            logging.warning("trying to load app config")
            app_config = get_config(endpoint)
        else:
            config_value=app_config['appkey']
            logging.warning(f"doing useful work with {config_value}")
            # if key exists in app_config, do something with it
            if 'mysecret' in app_config:
                logging.warning(f"and hush hush, there's a secret: {app_config['mysecret']}")
        time.sleep(5)


class GracefulKiller:
  kill_now = False
  def __init__(self):
    signal.signal(signal.SIGINT, self.exit_gracefully)
    signal.signal(signal.SIGTERM, self.exit_gracefully)

  def exit_gracefully(self, *args):
    self.kill_now = True


def retrieve_secret(uri):
    try:
        # uri is in format: https://<keyvaultname>.vault.azure.net/secrets/<secretname>
        # retrieve key vault uri and secret name from uri
        vault_uri = "https://" + uri.split('/')[2]
        secret_name = uri.split('/')[-1]
        logging.warning(f"Retrieving secret {secret_name} from {vault_uri}...")

        # retrieve the secret from Key Vault; CREDENTIAL was set globally
        secret_client = SecretClient(vault_url=vault_uri, credential=CREDENTIAL)

        # get secret value from Key Vault
        secret_value = secret_client.get_secret(secret_name).value

    except Exception as ex:
        print(f"retrieving secret: {ex}")
        return None

    return secret_value

# main function
def main():
    # create a daemon thread
    t = threading.Thread(daemon=True, target=run, name="worker")
    t.start()
    

    killer = GracefulKiller()
    while not killer.kill_now:
        time.sleep(1)

    logging.info("Doing some important cleanup before exiting")
    logging.info("Gracefully exiting")


if __name__ == "__main__":
    main()

On Docker Hub, the gbaeke/worker:1.0.0 image runs this code. The following manifest runs the code on Kubernetes with the same managed identity as the Azure CLI example (same service account):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
  namespace: default
  labels:
    app: worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      # needs to refer to service account used with federation
      serviceAccount: sademo
      containers:
        - name: worker
          image: gbaeke/worker:1.0.0
          env:
            - name: AZURE_APPCONFIGURATION_ENDPOINT
              value: https://ac-appconfig-vr6774lz3bh4i.azconfig.io

Note that the code tries to connect to Azure App Configuration. The managed identity has been given the App Configuration Data Reader role on a specific instance. The code tries to read the value of key myapp:appkey with label prd from that instance:

App Config key and values

To make the code work, the environment variable AZURE_APPCONFIGURATION_ENDPOINT is set to the URL of the App Config instance.

In the container logs, we can see that the value was successfully retrieved:

Log stream of worker

And yes, the code just works! It successfully connected to App Config and retrieved the value. The environment variables, set by the webhook discussed earlier, make this work, together with the Python Azure identity library!

Conclusion

Workload Identity works like a charm and is relatively easy to configure. At the time of writing (end of November 2022), I guess we are pretty close to general availability, and we will finally have a fully supported managed identity solution for AKS and beyond!

Quick Guide to Kubernetes Workload Identity on AKS

IMPORTANT: the steps below are not relevant anymore; the steps in the quick guide have been updated; see https://github.com/gbaeke/quick-guides/blob/main/workload-identity/README.md for the correct steps.

Some things that have changed:

  • you can now use managed identities instead of app registrations; federated token configuration is at the managed identity level
  • you do not need to install the webhook
  • the azwi CLI is not required

I recently had to do a demo about Workload Identity on AKS and threw together some commands to enable and verify the setup. It contains bits and pieces from the documentation plus some extras. I wrote another post some time ago with more background.

All commands are for bash and should be run sequentially in the same shell to re-use the variables.

Step 1: Enable OIDC issuer on AKS

You need an existing AKS cluster for this. You can quickly deploy one from the portal. Note that workload identity is not exclusive to AKS.

CLUSTER=<AKS_CLUSTER_NAME>
RG=<AKS_CLUSTER_RESOURCE_GROUP>

az aks update -n $CLUSTER -g $RG --enable-oidc-issuer

After enabling OIDC, retrieve the issuer URL with ISSUER_URL=$(az aks show -n $CLUSTER -g $RG --query oidcIssuerProfile.issuerUrl -o tsv). To check, run echo $ISSUER_URL. It contains a URL like https://oidc.prod-aks.azure.com/GUID/. You can issue the command below to obtain the OpenID configuration. It will list other URLs that can be used to retrieve keys that allow Azure AD to verify tokens it receives from Kubernetes.

curl $ISSUER_URL/.well-known/openid-configuration

Step 2: Install the webhook on AKS

Use the Helm chart to install the webhook. First, save the Azure AD tenant Id to a variable. The tenantId will be retrieved with the Azure CLI so make sure you are properly logged in. You also need Helm installed and a working Kube config for your cluster.

AZURE_TENANT_ID=$(az account show --query tenantId -o tsv)
 
helm repo add azure-workload-identity https://azure.github.io/azure-workload-identity/charts
 
helm repo update
 
helm install workload-identity-webhook azure-workload-identity/workload-identity-webhook \
   --namespace azure-workload-identity-system \
   --create-namespace \
   --set azureTenantID="${AZURE_TENANT_ID}"

Step 3: Create an Azure AD application

Although you can create the application directly in the portal or with the Azure CLI, workload identity has a CLI to make the whole process less error-prone and easier to script. Install azwi with brew: brew install Azure/azure-workload-identity/azwi.

Run the following commands. First, we save the application name in a variable. Use any name you like.

APPLICATION_NAME=WorkloadDemo
azwi serviceaccount create phase app --aad-application-name $APPLICATION_NAME

You can now check the application registrations in Azure AD. In my case, WorkloadDemo was created.

App registration in Azure AD

If you want to grant this application access rights to resources in Azure, first grab the appId:

APPLICATION_CLIENT_ID="$(az ad sp list --display-name $APPLICATION_NAME --query '[0].appId' -otsv)"

Now you can use commands such as az role assignment create to grant access rights. For example, here is how to grant the Reader role to your current Azure CLI subscription:

SUBSCRIPTION_ID=$(az account show --query id -o tsv)

az role assignment create --assignee-object-id $APPLICATION_CLIENT_ID --role "Reader" --scope /subscriptions/$SUBSCRIPTION_ID

Step 4: Create a Kubernetes service account

Although you can create the service account with kubectl or via a YAML manifest, azwi can help here as well:

SERVICE_ACCOUNT_NAME=sademo
SERVICE_ACCOUNT_NAMESPACE=default

azwi serviceaccount create phase sa \
  --aad-application-name "$APPLICATION_NAME" \
  --service-account-namespace "$SERVICE_ACCOUNT_NAMESPACE" \
  --service-account-name "$SERVICE_ACCOUNT_NAME"

This creates a service account that looks like the below YAML manifest:

apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    azure.workload.identity/client-id: <value of APPLICATION_CLIENT_ID>
  labels:
    azure.workload.identity/use: "true"
  name: sademo
  namespace: default

This is a regular Kubernetes service account. Later, you will configure your pod to use the service account.

The label is important because the webhook we installed earlier acts on service accounts with this label to perform all the behind-the-scenes magic! 😉

Note that workload identity does not use the Kubernetes service account token. That token is used to authenticate to the Kubernetes API server. The webhook will ensure that there is another token, its path is in $AZURE_FEDERATED_TOKEN_FILE, which is the token sent to Azure AD.

Step 5: Configure the Azure AD app for token federation

The application created in step 3 needs to be configured to trust specific tokens issued by your Kubernetes cluster. When AAD receives such a token, it returns an Azure AD token that your application in Kubernetes can use to authenticate to Azure.

Although you can manually configure the Azure AD app, azwi can be used here as well:

SERVICE_ACCOUNT_NAMESPACE=default

azwi serviceaccount create phase federated-identity \
  --aad-application-name "$APPLICATION_NAME" \
  --service-account-namespace "$SERVICE_ACCOUNT_NAMESPACE" \
  --service-account-name "$SERVICE_ACCOUNT_NAME" \
  --service-account-issuer-url "$ISSUER_URL"

In the AAD app, you will see:

Azure AD app federated credentials config

You find the above by clicking Certificates & Secrets and then Federated credentials.

Step 6: Deploy a workload

Run the following command to create a deployment and apply it in one step:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azcli-deployment
  namespace: default
  labels:
    app: azcli
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azcli
  template:
    metadata:
      labels:
        app: azcli
    spec:
      serviceAccount: sademo
      containers:
        - name: azcli
          image: mcr.microsoft.com/azure-cli:latest
          command:
            - "/bin/bash"
            - "-c"
            - "sleep infinity"
EOF

This runs the latest version of the Azure CLI in Kubernetes.

Grab the first pod name (there is only one) and exec into the pod’s container:

POD_NAME=$(kubectl get pods -l "app=azcli" -o jsonpath="{.items[0].metadata.name}")

kubectl exec -it $POD_NAME -- bash

Step 7: Test the setup

In the container, issue the following commands:

echo $AZURE_CLIENT_ID
echo $AZURE_TENANT_ID
echo $AZURE_FEDERATED_TOKEN_FILE
cat $AZURE_FEDERATED_TOKEN_FILE
echo $AZURE_AUTHORITY_HOST

# list the standard Kubernetes service account secrets
cd /var/run/secrets/kubernetes.io/serviceaccount
ls 

# check the folder containing the AZURE_FEDERATED_TOKEN_FILE
cd /var/run/secrets/azure/tokens
ls

# you can use the AZURE_FEDERATED_TOKEN_FILE with the Azure CLI
# together with $AZURE_CLIENT_ID and $AZURE_TENANT_ID
# a password is not required since we are doing federated token exchange

az login --federated-token "$(cat $AZURE_FEDERATED_TOKEN_FILE)" \
--service-principal -u $AZURE_CLIENT_ID -t $AZURE_TENANT_ID

# list resource groups
az group list

If the last command works, that means you successfully logged on with workload identity on AKS. You can list resource groups because you granted the Azure AD app the Reader role on your subscription.

Note that the option to use token federation was added to Azure CLI quite recently. At the time of this writing, May 2022, the image mcr.microsoft.com/azure-cli:latest surely has that capability.

Conclusion

I hope the above commands are useful if you want to quickly test or demo Kubernetes workload identity on AKS. If you spot errors, be sure to let me know!

AKS Pod Identity with the Azure SDK for Go

In an earlier post, I wrote about the use of AKS Pod Identity (Preview) in combination with the Azure SDK for Python. Although that works fine, there are some issues with that solution:

Vulnerabilities as detected by SNYK

In order to reduce the size of the image and reduce/remove the vulnerabilities, I decided to rewrite the solution in Go. Just like the Python app (with FastAPI), we will expose an HTTP endpoint that displays all resource groups in a subscription. We will use a specific pod identity that has the Contributor role at the subscription level.

If you are more into videos, here’s the video version:

The code

The code is on GitHub @ https://github.com/gbaeke/go-msi in main.go. The code is kept as simple as possible. It uses the following packages:

github.com/Azure/azure-sdk-for-go/profiles/latest/resources/mgmt/resources
github.com/Azure/go-autorest/autorest/azure/auth

The resources package is used to create a GroupsClient to work with resource groups (check the samples):

groupsClient := resources.NewGroupsClient(subID)

subID contains the subscription ID, which is retrieved via the SUBSCRIPTION_ID environment variable. The container requires that environment variable to be set.

To authenticate to Azure and obtain proper authorization, the auth package is used with the NewAuthorizerFromEnvironment() method. That method supports several authentication mechanisms, one of which is managed identities. When we run this code on AKS, the pods can use a pod identity as explained in my previous post, if the pod identity addon is installed and configured. To obtain the authorization:

authorizer, err := auth.NewAuthorizerFromEnvironment()

authorizer is then passed to groupsClient via:

groupsClient.Authorizer = authorizer

Now we can use groupsClient to iterate through the resource groups:

ctx := context.Background()
log.Println("Getting groups list...")
groups, err := groupsClient.ListComplete(ctx, "", nil)
if err != nil {
	log.Println("Error getting groups", err)
}

log.Println("Enumerating groups...")
for groups.NotDone() {
	groupList = append(groupList, *groups.Value().Name)
	log.Println(*groups.Value().Name)
	err := groups.NextWithContext(ctx)
	if err != nil {
		log.Println("error getting next group")
	}
}

Note that the groups are printed and added to the groups slice. We can now serve the groupz endpoint that lists the groups (yes, the groups are only read at startup 😀):

log.Println("Serving on 8080...")
http.HandleFunc("/groupz", groupz)
http.ListenAndServe(":8080", nil)

The result of the call to /groupz is shown below:

My resource groups mess in my test subscription 😀

Running the code in a container

We can now build a single statically linked executable with go build and package it in a scratch container. If you want to know if your executable is statically linked, run file on it (e.g. file myapp). The result should be like:

myapp: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped

Here is the multi-stage Dockerfile:

# argument for Go version
ARG GO_VERSION=1.14.5

# STAGE 1: building the executable
FROM golang:${GO_VERSION}-alpine AS build

# git required for go mod
RUN apk add --no-cache git

# certs
RUN apk --no-cache add ca-certificates

# Working directory will be created if it does not exist
WORKDIR /src

# We use go modules; copy go.mod and go.sum
COPY ./go.mod ./go.sum ./
RUN go mod download

# Import code
COPY ./ ./


# Build the statically linked executable
RUN CGO_ENABLED=0 go build \
	-installsuffix 'static' \
	-o /app .

# STAGE 2: build the container to run
FROM scratch AS final

# copy compiled app
COPY --from=build /app /app

# copy ca certs
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/

# run binary
ENTRYPOINT ["/app"]

In the above Dockerfile, it is important to add the ca certificates to the build container and later copy them to the scratch container. The code will need to connect to https://management.azure.com and requires valid root CA certificates to do so.

When you build the container with the Dockerfile, it will result in a docker image of about 8.7MB. SNYK will not report any known vulnerabilities. Great success!

Note: container will run as root though; bad! 😀 Nico Meisenzahl has a great post on containerizing .NET Core apps which also shows how to configure the image to not run as root.

Let’s add some YAML

The GitHub repo contains a workflow that builds and pushes a container to GitHub container registry. The most recent version at the time of this writing is 0.1.1. The YAML file to deploy this container as part of a deployment is below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mymsi-deployment
  namespace: mymsi
  labels:
    app: mymsi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mymsi
  template:
    metadata:
      labels:
        app: mymsi
        aadpodidbinding: mymsi
    spec:
      containers:
        - name: mymsi
          image: ghcr.io/gbaeke/go-msi:0.1.1
          env:
            - name: SUBSCRIPTION_ID
              value: SUBSCRIPTION ID
            - name: AZURE_CLIENT_ID
              value: APP ID OF YOUR MANAGED IDENTITY
            - name: AZURE_AD_RESOURCE
              value: "https://management.azure.com"
          ports:
            - containerPort: 8080

It’s possible to retrieve the subscription ID at runtime (as in the Python code) but I chose to just supply it via an environment variable.

For the above manifest to work, you need to have done the following (see earlier post):

  • install AKS with the pod identity add-on
  • create a managed identity that has the necessary Azure roles (in this case, enumerate resource groups)
  • create a pod identity that references the managed identity

In this case, the created pod identity is mymsi. The aadpodidbinding label does the trick to match the identity with the pods in this deployment.

Note that, although you can specify the AZURE_CLIENT_ID as shown above, this is not really required. The managed identity linked to the mymsi pod identity will be automatically matched. In any case, the logs of the nmi pod will reflect this.

In the YAML, AZURE_AD_RESOURCE is also specified. In this case, this is not required either because the default is https://management.azure.com. We need that resource to enumerate resource groups.

Conclusion

In this post, we looked at using the Azure SDK for Go together with managed identity on AKS, via the AAD pod identity addon. Similar to the Azure SDK for Python, the Azure SDK for Go supports managed identities natively. The difference with the Python solution is the size of the image and better security. Of course, that is an advantage stemming from the use of a language like Go in combination with the scratch image.

Azure AD pod-managed identities in AKS revisited

A long time ago, I wrote a blog post about assigning managed identities to pods in Azure Kubernetes Services (AKS) to authenticate to Azure Storage. The implementation was based on the aad-pod-identity project on GitHub. You can look at the walkthrough to see how it worked.

Microsoft recently released a preview that enables you to turn on pod identity during cluster creation. It uses the same building blocks as before but makes it fully supported and part of AKS (although preview now). To create a basic cluster with pod identity enabled, you can use the following commands:

az group create -n RESOURCEGROUP -l LOCATION
az aks create -g RESOURCEGROUP -n CLUSTERNAME --enable-managed-identity --enable-pod-identity --network-plugin azure

Note: you need to use Azure CNI networking here; kubenet will not work

Before you deploy the cluster, make sure you follow the prerequisites in the documentation (Before you begin). At the time of writing (December 2020), the section in the documentation that tells you how to create the AKS cluster does not use the Azure CNI plugin. Make sure you add that!

What does --enable-pod-identity do?

When you use --enable-pod-identity, you should see nmi pods on your cluster in the kube-system namespace:

NMI pods

These pods are created from a DaemonSet, so you will have one pod per cluster node (Linux nodes only). When your application wants to use a managed identity, it makes a request to the Instance Metadata Service (IMDS) endpoint at 169.254.169.254. Requests to that IP address are intercepted by the NMI pods via iptables rules. The NMI pod that intercepts the request then makes an Azure AD Authentication Library (ADAL) request to Azure AD to obtain a token for the managed identity and returns it to your application.

Next to the NMI pods, other things are added as well, such as custom resource definitions. Some of those are discussed below.

How to request the token?

It’s great to know that the NMI pods intercept requests to the IMDS endpoint but how do you make such a request? I put together a small example in Python in the following git repository: https://github.com/gbaeke/python-msi. The code is in the rg-api folder in server.py:

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient, SubscriptionClient
from fastapi import FastAPI

app = FastAPI()

try:
    credentials = DefaultAzureCredential()
    subscription_client = SubscriptionClient(credentials)
    subscription = next(subscription_client.subscriptions.list())
    subscription_id = subscription.subscription_id
    resource_client = ResourceManagementClient(credentials, subscription_id)
except:
    print("error obtaining credentials")

@app.get("/")
def read_root():
    groups=[]
    try:
        for resource_group in resource_client.resource_groups.list():
            groups.append(resource_group.name)
    except:
        print("error obtaining groups")
    
    return groups

The code does the following:

  • use the azure-identity Python library to obtain credentials via DefaultAzureCredential() function. Note that that function tries multiple authentication options. If you run the code on your local computer and you are logged on to Azure with the Azure CLI, it will also work
  • use the azure-mgmt-resource Python library to enumerate resource groups in the current subscription
  • create a very simple API with FastAPI to ask for the list of resource groups; we can use a kubectl port forward later to obtain the JSON response; if authentication fails, the call will return an empty list instead of HTTP errors as you normally would

On my system, this is the result of the call when pod identity is working:

A bunch of resource groups in my test subscription… messy as usual

The repo also contains a Dockerfile to build a container with the app. I built and pushed that container to Docker Hub as gbaeke/rgapi.

Creating and using the identity

If we want the pod that runs the above code to use a specific identity, we have to create the identity and then tell the pod to use it. To create the managed identity, use the following command:

az identity create --resource-group rg-clu-msi --name rgapi

The output of this command contains an id field that we need in another command later. The result of the above command is a User Assigned Managed Identity called rgapi. I already granted the Contributor role at the subscription level.

User Assigned Managed Identity rgapi

Note that this has nothing to do with AKS. To create a pod identity to use in AKS, you will need to run another command:

az aks pod-identity add --resource-group rg-clu-msi --cluster-name clu-msi --namespace rgapi --name rgapi --identity-resource-id "id field from previous command"

The above command creates a pod identity called rgapi in the namespace rgapi. This namespace will be created if it does not exist. You can see the pod identity by running the below command:

kubectl get azureidentities.aadpodidentity.k8s.io

If you look inside such an object, you would find the reference to the managed identity by its resource id (the id field from earlier). There are other custom resource definitions used by pod identity that we will not bother with now.

Now we need to create a pod and associate it with the pod identity. You can do so with the following YAML:

apiVersion: v1
kind: Pod
metadata:
  name: rgapi
  namespace: rgapi
  labels:
    aadpodidbinding: rgapi
spec:
  containers:
  - name: rgapi
    image: gbaeke/rgapi
  nodeSelector:
    kubernetes.io/os: linux

The important bit above is the aadpodidbinding label which refers to the pod identity we created earlier. When the above pod gets scheduled, it will call out to the IMDS endpoint. You should see that in the logs of the NMI pod on the same node as your application pod. For example:

no clientID or resourceID in request. rgapi/rgapi has been matched with azure identity rgapi/rgapi
status (200) took 12677813 ns for req.method=GET reg.path=/metadata/identity/oauth2/token req.remote=10.240.0.36

The first line indicates that I did not specifically set a clientID in my request but that the request is matched to the rgapi identity. The second line shows the NMI pod requesting a token for the identity from the Azure AD token endpoint.

Great! We now have a pod running that can retrieve resource groups with our custom managed identity. We did not have to add credentials manually or grab them from Key Vault. Our pod automatically picks up the pod identity. 🎉

Conclusion

Although it is still not super simple (is identity ever simple really?), the new method to enable pod identities is a definite improvement. It is currently in preview so it should not be used in production. Once it goes GA however, you will have a fully supported method of using user assigned managed identity with your pods and use specific identities per pod following least privilege methods.

Azure SQL, Azure Active Directory and Seamless SSO: An Overview

Instead of pure lift-and-shift migrations to the cloud, we often encounter lift-shift-tinker migrations. In such a migration, you modify some of the application components to take advantage of cloud services. Often, that’s the database but it could also be your web servers (e.g. replaced by Azure Web App). When you replace SQL Server on-premises with SQL Server or Managed Instance on Azure, we often get the following questions:

  • How does Azure SQL Database or Managed Instance integrate with Active Directory?
  • How do you authenticate to these databases with an Azure Active Directory account?
  • Is MFA (multi-factor authentication) supported?
  • If the user is logged on with an Active Directory account on a domain-joined computer, is single sign-on possible?

In this post, we will look at two distinct configuration options that can be used together if required:

  • Azure AD authentication to SQL Database
  • Single sign-on to Azure SQL Database from a domain-joined computer via Azure AD Seamless SSO

In what follows, I will provide an overview of the steps. Use the links to the Microsoft documentation for the details. There are many!!! 😉

Visually, it looks a bit like below. In the image, there’s an actual domain controller in Azure (extra Active Directory site) for local authentication to Active Directory. Later in this post, there is an example Python app that was run on a WVD host joined to this AD.

Azure AD Authentication

Both Azure SQL Database and Managed Instances can be integrated with Azure Active Directory. They cannot be integrated with on-premises Active Directory (ADDS) or Azure Active Directory Domain Services.

For Azure SQL Database, the configuration is at the SQL Server level:

SQL Database Azure AD integration

You should read the full documentation because there are many details to understand. The account you set as admin can be a cloud-only account. It does not need a specific role. When the account is set, you can logon with that account from Management Studio:

Authentication from Management Studio

There are several authentication schemes supported by Management Studio but the Universal with MFA option typically works best. If your account has MFA enabled, you will be challenged for a second factor as usual.

Once connected with the Azure AD “admin”, you can create contained database users with the following syntax:

CREATE USER [user@domain.com] FROM EXTERNAL PROVIDER;

Note that instead of a single user, you can work with groups here. Just use the group name instead of the user principal name. In the database, the user or group appears in Management Studio like so:

Azure AD user (or group) in list of database users

From an administration perspective, the integration steps are straightforward but you create your users differently. When you migrate databases to the cloud, you will have to replace the references to on-premises ADDS users with references to Azure AD users!

Seamless SSO

Now that Azure AD is integrated with Azure SQL Database, we can configure single sign-on for users that are logged on with Active Directory credentials on a domain-joined computer. Note that I am not discussing Azure AD joined or hybrid Azure AD joined devices. The case I am discussing applies to Windows Virtual Desktop (WVD) as well. WVD devices are domain-joined and need line-of-sight to Active Directory domain controllers.

Note: seamless SSO is of course optional but it is a great way to make it easier for users to connect to your application after the migration to Azure

To enable single sign-on to Azure SQL Database, we will use the Seamless SSO feature of Active Directory. That feature works with both password-synchronization and pass-through authentication. All of this is configured via Azure AD Connect. Azure AD Connect takes care of the synchronization of on-premises identities in Active Directory to an Azure Active Directory tenant. If you are not familiar with Azure AD Connect, please check the documentation as that discussion is beyond the scope of this post.

When Seamless SSO is configured, you will see a new computer account in Active Directory, called AZUREADSSOACC$. You will need to turn on advanced settings in Active Directory Users and Computers to see it. That account is important as it is used to provide a Kerberos ticket to Azure AD. For full details, check the documentation. Understanding the flow depicted below is important:

Seamless Single Sign On - Web app flow
Seamless SSO flow (from Microsoft @ https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-sso-how-it-works)

You should also understand the security implications and rotate the Kerberos secret as discussed in the FAQ.

Before trying SSO to Azure SQL Database, log on to a domain-joined device with an identity that is synced to the cloud. Make sure Internet Explorer is configured as follows:

Add https://autologon.microsoftazuread-sso.com to the Local Intranet zone

Check the docs for more information about the Internet Explorer setting and considerations for other browsers.

Note: you do not need to configure the Local Intranet zone if you want SSO to Azure SQL Database via ODBC (discussed below)

With the Local Intranet zone configured, you should be able to go to https://myapps.microsoft.com and only provide your Azure AD principal (e.g. first.last@yourdomain.com). You should not be asked to provide your password. If you use https://myapps.microsoft.com/yourdomain.com, you will not even be asked your username.

With that out of the way, let’s see if we can connect to Azure SQL Database using an ODBC connection. Make sure you have installed the latest ODBC Driver for SQL Server on the machine (in my case, ODBC Driver 17). Create an ODBC connection with the Azure SQL Server name. In the next step, you see the following authentication options:

ODBC Driver 17 authentication options

Although all the options for Azure Active Directory should work, we are interested in integrated authentication, based on the credentials of the logged on user. In the next steps, I only set the database name and accepted all the other options as default. Now you can test the data source:

Testing the connection

Great, but what about your applications? Depending on the application, there still might be quite some work to do and some code to change. Instead of opening that can of worms 🥫, let’s see how this integrated connection works from a sample Python application.

Integrated Authentication test with Python

The following Python program uses pyodbc to connect with integrated authentication:

import pyodbc

server = 'tcp:AZURESQLSERVER.database.windows.net'
database = 'AZURESQLDATABASE'

# authentication=ActiveDirectoryIntegrated makes the driver use the
# credentials of the logged-on user; no user or password is needed
cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER='+server+';DATABASE='+database+';authentication=ActiveDirectoryIntegrated')
cursor = cnxn.cursor()

# Print the first column of every row in the test table
cursor.execute("SELECT * FROM test;")
row = cursor.fetchone()
while row:
    print(row[0])
    row = cursor.fetchone()

cnxn.close()

My SQL Database contains a simple table called test. The logged-on user has read and write access. As you can see, no user or password is specified: in the connection string, “authentication=ActiveDirectoryIntegrated” does the trick. The result is just my name (hey, it’s a test):

Result returned from table

Conclusion

In this post, I have highlighted how single sign-on works for domain-joined devices when you use Azure AD Connect password hash synchronization in combination with the Seamless SSO feature. This scenario is supported by version 17 of the ODBC Driver for SQL Server, as shown with the Python code. Although I used Azure SQL Database as an example, this scenario also applies to Azure SQL Managed Instance.

AKS Managed Pod Identity and access to Azure Storage

When you need to access Azure Storage (or other Azure resources) from a container in AKS (Kubernetes on Azure), you have many options. You can put credentials in your code (nooooo!), pass credentials via environment variables, use Kubernetes secrets, obtain secrets from Key Vault, and so on. Usually, the credentials are keys, but you can also connect to a Storage Account with an Azure AD identity. Instead of a regular account, you can use a managed identity that you set up specifically for accessing the storage account or a specific container.

A managed identity is created as an Azure resource and will appear in the resource group where it was created:

User assigned managed identity

This managed identity can be created from the Azure Portal but also with the Azure CLI:

az identity create -g storage-aad-rg -n demo-pod-id -o json 

The managed identity can subsequently be granted access rights, for instance, on a storage account. Storage accounts now also support Azure AD authentication (in preview at the time of writing). You can assign roles such as Blob Data Reader, Blob Data Contributor, and Blob Data Owner. The screenshot below shows the managed identity getting the Blob Data Reader role on the entire storage account:

Granting the managed identity access to a storage account

When you want to use this specific identity from a Kubernetes pod, you can use the aad-pod-identity project. Note that this is an open-source project that is not quite finished. The project’s README contains all the instructions you need, but here are the highlights:

  • Deploy the infrastructure required to support managed identities in pods; these are the MIC and NMI containers plus some custom resource definitions (CRDs)
  • Assign the AKS service principal the Managed Identity Operator role over the scope of the managed identity created above (use the resource id of the managed identity in the scope, such as /subscriptions/YOURSUBID/resourcegroups/YOURRESOURCEGROUP/providers/Microsoft.ManagedIdentity/userAssignedIdentities/YOURMANAGEDIDENTITY)
  • Define the pod identity via the AzureIdentity custom resource definition (CRD); in the YAML file you will refer to the managed identity by its resource id (/subscr…) and client id
  • Define the identity binding via the AzureIdentityBinding custom resource definition (CRD); in the YAML file you will set up a selector that you will use later in a pod definition to associate the managed identity with the pod; I defined a selector called myapp

Here is the identity definition (uses one of the CRDs defined earlier):

apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
name: aks-pod-id
spec:
type: 0
ResourceID: /subscriptions/SUBID/resourcegroups/RESOURCEGROUP/providers/Microsoft.ManagedIdentity/userAssignedIdentities/demo-pod-id
ClientID: c35040d0-f73c-4c4e-a376-9bb1c5532fda

And here is the binding that defines the selector (other CRD defined earlier):

apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
name: aad-identity-binding
spec:
AzureIdentity: aks-pod-id
Selector: myapp

Note that the installation of the infrastructure containers depends on whether RBAC is enabled on your cluster. To check if RBAC is enabled on your AKS cluster, you can use https://resources.azure.com and search for your cluster. Check the enableRBAC property. In my cluster, RBAC was enabled:

Yep, RBAC enabled so make sure you use the RBAC YAML files

With everything configured, we can spin up a container with a label that matches the selector defined earlier:

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
  labels:
    aadpodidbinding: myapp
spec:
  containers:
  - name: ubuntu
    image: ubuntu:latest
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]

Save the above to a file called ubuntu.yaml and use kubectl apply -f ubuntu.yaml to launch the pod. The pod will keep running because of the forever while loop. The pod can use the managed identity because of the aadpodidbinding label of myapp. Next, get a shell to the container:

kubectl exec -it ubuntu -- /bin/bash

To check if it works, we have to know how to obtain an access token (a JWT or JSON Web Token). We can obtain it via curl. First run apt-get update and then apt-get install curl to install it. Then issue the following command to obtain a token for https://storage.azure.com:

curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fstorage.azure.com%2F' -H Metadata:true -s

TIP: if you are not very familiar with curl, use https://curlbuilder.com. As a precaution, do not paste your access token in the command builder.

The request to 169.254.169.254 goes to the Azure Instance Metadata Service which provides, among other things, an API to obtain a token. The result will be in the following form:

{"access_token":"THE ACTUAL ACCESS TOKEN","refresh_token":"", "expires_in":"28800","expires_on":"1549083688","not_before":"1549054588","resource":"https://storage.azure.com/","token_type":"Bearer"

Note that many of the SDKs that Microsoft provides have support for managed identities baked in. That means the SDK takes care of calling the Instance Metadata Service for you and presents you with a token to use in subsequent calls to Azure APIs.
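For example, in Python you can let the azure-identity library handle the token acquisition. The snippet below is a minimal sketch, not the exact setup from this post: it assumes the azure-identity and azure-storage-blob packages are installed in the pod and reuses the storage account name from the curl example below.

from azure.identity import ManagedIdentityCredential
from azure.storage.blob import BlobServiceClient

# The credential calls the Instance Metadata Service under the hood;
# pass client_id=... if more than one user-assigned identity is available
credential = ManagedIdentityCredential()

# Storage account from this post's example; replace with your own
service = BlobServiceClient(
    account_url="https://storageaadgeba.blob.core.windows.net",
    credential=credential
)

# Requires a data-plane role such as Blob Data Reader on the account
for container in service.list_containers():
    print(container.name)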

Now that you have the access token, you can use it in a request to the storage account, for instance to list containers:

curl -XGET -H 'Authorization: Bearer THE ACTUAL ACCESS TOKEN' -H 'x-ms-version: 2017-11-09' -H 'Content-type: application/json' 'https://storageaadgeba.blob.core.windows.net/?comp=list'

The result of the call is some XML with the container names. I only had a container called test:

OMG… XML

Wrap up

You have seen how to bind an Azure managed identity to a Kubernetes pod running on AKS. The aad-pod-identity project provides the necessary infrastructure and resources to bind the identity to a pod using a label in its YAML file. From there, you can work with the managed identity as you would on a virtual machine, calling the Instance Metadata Service to obtain the token (a JWT). Once you have the token, you can include it in REST calls to the Azure APIs by adding an authorization header. In this post we have used the storage APIs as an example.

Note that Microsoft has AKS Pod Identity marked as in development on the updates site. I do not know whether it is based on the aad-pod-identity project, but it does mean that the feature will become an official part of AKS pretty soon!

Deploying Azure resources using webhookd

In the previous blog post, I discussed adding SSL to webhookd. In this post, I will briefly show how to use this solution to deploy Azure resources.

To run webhookd, I deployed a small Standard_B1s machine (1GB RAM, 1 vCPU) with a system-assigned managed identity. After deployment, information about the managed identity is available via the Identity link.

Code running on a machine with a managed identity needs to do something specific to obtain a token for that identity. With curl, you would issue the following command:

curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F' -H Metadata:true -s

The response is JSON that contains a field called access_token. You can parse out the access_token and use it in a call to the Azure Resource Manager APIs, passing it in the authorization header. Full details about acquiring these tokens can be found here. On that page, you will find details about acquiring the token with Go, JavaScript, and several other languages.
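To illustrate the flow, here is a minimal Python sketch using the requests package; the subscription id and the ARM api-version are placeholders you would adapt to your environment.

import requests

# Ask the Instance Metadata Service for a token for Azure Resource Manager
token_response = requests.get(
    "http://169.254.169.254/metadata/identity/oauth2/token",
    params={
        "api-version": "2018-02-01",
        "resource": "https://management.azure.com/"
    },
    headers={"Metadata": "true"}
)
access_token = token_response.json()["access_token"]

# Use the token in the authorization header of an ARM call,
# e.g. listing the resource groups in a subscription
groups = requests.get(
    "https://management.azure.com/subscriptions/SUBSCRIPTION_ID/resourcegroups",
    params={"api-version": "2021-04-01"},
    headers={"Authorization": "Bearer " + access_token}
)
for group in groups.json()["value"]:
    print(group["name"])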

Because we are using webhookd and shell scripts, the Azure CLI is the ideal way to create Azure resources. The Azure CLI can easily authenticate with the managed identity using a simple command: az login --identity. Here’s a shell script that uses it to create a virtual machine:

#!/bin/bash

echo "Authenticating...`az login --identity`"

echo "Creating the resource group...`az group create -n $rg -l westeurope`"

echo "Creating the vm...`az vm create --no-wait --size Standard_B1s --resource-group $rg --name $vmname --image win2016datacenter --admin-username azureuser --admin-password $pw`"

The script expects three parameters: rg, vmname and pw. We can pass these as HTTP query parameters. If the above script is saved as create.sh in the ./scripts/vm folder, I can make the following call to webhookd (note the single quotes, which keep the shell from expanding the $ and ! characters in the password):

curl --user api -XPOST 'https://<public_server_dns>/vm/create?vmname=myvm&rg=myrg&pw=Abcdefg$$$$!!!!'

The response to the above call would contain the output from the three az commands. The az login command would output the following:

data: {
data:   "environmentName": "AzureCloud",
data:   "id": "<id>",
data:   "isDefault": true,
data:   "name": "<subscription name>",
data:   "state": "Enabled",
data:   "tenantId": "<tenant_id>",
data:   "user": {
data:     "assignedIdentityInfo": "MSI",
data:     "name": "systemAssignedIdentity",
data:     "type": "servicePrincipal"
data:   }
data: }

Notice the user object, which clearly indicates we are using a system-assigned managed identity. In my case, the managed identity has the contributor role on an Azure subscription used for testing. With that role, the shell script has the required access rights to deploy the virtual machine.
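If you want to trigger the webhook from code rather than curl, a library such as Python’s requests takes care of URL-encoding the query parameters for you. The sketch below assumes the same endpoint and basic-auth user as above; the server name and password are placeholders.

import requests

# Placeholder endpoint; webhookd maps ./scripts/vm/create.sh to /vm/create
url = "https://<public_server_dns>/vm/create"

response = requests.post(
    url,
    params={"vmname": "myvm", "rg": "myrg", "pw": "Abcdefg$$$$!!!!"},
    auth=("api", "WEBHOOKD_PASSWORD")  # basic auth, as with curl --user
)

# webhookd streams the output of the script back as text
print(response.text)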

As you can see, it is very easy to use webhookd to deploy Azure resources if the Azure virtual machine that runs webhookd has a managed identity with the required access rights.
