Azure Policy for Kubernetes: Constraints and ConstraintTemplates

In one of my videos on my YouTube channel, I talked about Kubernetes authentication and used the image below:

Securing access to the Kubernetes API Server

To secure access to the Kubernetes API server, you need to be authenticated and properly authorized to do what you need to do. The third mechanism to secure access is admission control. Simply put, admission control allows you to inspect requests to the API server and accept or deny the request based on rules you set. You will need an admission controller, which is just code that intercepts the request after authentication and authorization.

There is a list of admission controllers that are compiled in, including two special ones (check the docs):

  • MutatingAdmissionWebhook
  • ValidatingAdmissionWebhook

With the two admission controllers above, you can develop admission plugins as extensions and configure them at runtime. In this post, we will look at a ValidatingAdmissionWebhook that is used together with Azure Policy to inspect requests to the AKS API Server and either deny or audit these requests.
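
To make this more concrete, this is roughly what registering such a webhook looks like. The resource below is a minimal, hypothetical example (names, namespace and rules are made up); the Azure Policy add-on and Gatekeeper create and manage their own webhook configuration for you:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-validating-webhook
webhooks:
  - name: validate.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
    clientConfig:
      service:
        namespace: example-system
        name: example-webhook-service
        path: /validate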

Note that I already have a post about Azure Policy and pod security policies here. There is some overlap between that post and this one. In this post, we will look more closely at what happens on the cluster.

Want a video instead?

Azure Policy

Azure has its own policy engine to control the Azure Resource Manager (ARM) requests you can make. A common rule in many organizations, for instance, is prohibiting the creation of expensive resources or the creation of resources in unapproved regions. For example, a European company might want to create resources only in West Europe or North Europe. Azure Policy is the engine that can enforce such rules. For more information, see Overview of Azure Policy. In short, you select from an ever-growing list of policies or you create your own. Policies can be grouped into policy initiatives. A single policy or an initiative gets assigned to a scope, which can be a management group, a subscription or a resource group. In the portal, you then check for compliance:

Compliance? What do I care? It’s just my personal subscription 😁

Besides checking for compliance, you can deny the requests in real time. There are also policies that can create resources when they are missing.

Azure Policy for Kubernetes

Although Azure Policy works great with Azure Resource Manager (ARM), which is basically the API that allows you to interact with Azure resources, it does not work with Kubernetes out of the box. We will need an admission controller (see above) that understands how to interpret Kubernetes API requests, plus another component that syncs policies from Azure Policy to Kubernetes for the admission controller to pick up. There is a built-in list of supported Kubernetes policies.

For the admission controller, Microsoft uses Gatekeeper v3. There is a lot, and I do mean a LOT, to say about Gatekeeper and its history. We will not go down that path here. Check out this post for more information if you are truly curious. For us it’s enough to know that Gatekeeper v3 needs to be installed on AKS. In order to do that, we can use an AKS add-on. In fact, you should use the add-on if you want to work with Azure Policy. Installing Gatekeeper v3 on its own will not work.

Note: there are ways to configure Azure Policy to work with Azure Arc for Kubernetes and AKS Engine. In this post, we only focus on the managed Azure Kubernetes Service (AKS).

So how do we install the add-on? It is very easy to do with the portal or the Azure CLI. For all details, check out the docs. With the Azure CLI, it is as simple as:

az aks enable-addons --addons azure-policy --name CLUSTERNAME --resource-group RESOURCEGROUP

If you want to do it from an ARM template, just add the add-on to the template as shown here.

What happens after installing the add-on?

I installed the add-on without active policies. In kube-system, you will find the two pods below:

azure-policy and azure-policy-webhook

The above pods are part of the add-on. I am not entirely sure what the azure-policy-webhook does, but the azure-policy pod is responsible for checking Azure Policy for new assignments and translating that to resources that Gatekeeper v3 understands (hint: constraints). It also checks policies on the cluster and reports results back to Azure Policy. In the logs, you will see things like:

  • No audit results found
  • Schedule running
  • Creating constraint

The last line creates a constraint, but what exactly is that? Constraints tell Gatekeeper v3 what to check for when a request comes to the API server. An example of a constraint is that a container should not run privileged. Constraints are backed by constraint templates that contain the schema and logic of the constraint. Good to know, but where are the Gatekeeper v3 pods?

Gatekeeper pods in the gatekeeper-system namespace

Gatekeeper was automatically installed by the Azure Policy add-on and will work with the constraints created by the add-on, synced from Azure Policy. When you remove these pods, the add-on will install them again.

Creating a policy

Although you normally create policy initiatives, we will create a single policy and see what happens on the cluster. In Azure Policy, choose Assign Policy and scope the policy to the resource group of your cluster. In Policy definition, select Kubernetes cluster should not allow privileged containers. As discussed, that is one of the built-in policies:

Creating a policy that does not allow privileged containers

In the next step, set the effect to deny. This will deny requests in real time. Note that the three namespaces in Namespace exclusions are automatically added. You can add extra namespaces there. You can also specifically target a policy to one or more namespaces or even use a label selector.

Policy parameters

You can now select Review and create and then select Create to create the policy assignment. This is the result:

Policy assigned

Now we have to wait a while for the change to be picked up by the add-on on the cluster. This can take several minutes. After a while, you will see the following log entry in the azure-policy pod:

Creating constraint: azurepolicy-container-no-privilege-blablabla

You can see the constraint when you run k get constraints. The constraint is based on a constraint template. You can list the templates with k get constrainttemplates. This is the result:

constraint templates

With k get constrainttemplates k8sazurecontainernoprivilege -o yaml, you will find that the template contains some logic:

the template’s logic

The block of rego contains the logic of this template. Without knowing rego, the policy language used by Open Policy Agent (OPA) and thus by Gatekeeper v3 on our cluster, you can still guess that the privileged field inside securityContext is checked. If that field is true, that is a violation of the policy. Although it is useful to understand more about OPA and rego, Azure Policy hides that complexity from you.
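
As an illustration only, a rego rule that flags privileged containers typically follows the pattern below. This is a hedged sketch of that pattern, not the exact rego shipped by Azure Policy:

package k8sazurecontainernoprivilege

violation[{"msg": msg}] {
    # walk over the containers of the pod that is being admitted
    container := input.review.object.spec.containers[_]
    # the rule matches when privileged is true
    container.securityContext.privileged
    msg := sprintf("Privileged container is not allowed: %v", [container.name])
}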

Does it work?

Let’s try to deploy the following deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          securityContext:
            privileged: true

After running kubectl apply -f deployment.yaml, everything seems fine. But when we run kubectl get deploy:

Pods are not coming up

Let’s run kubectl get events:

Oops…

Notice that validation.gatekeeper.sh denied the request because privileged was set to true.

Adding more policies

Azure Security Center comes with a large initiative, Azure Security Benchmark, that also includes many Kubernetes policies. All of these policies are set to audit for compliance. On my system, the initiative is assigned at the subscription level:

Azure Security Benchmark assigned at subscription level with name Security Center

The Azure Policy add-on on our cluster will pick up the Kubernetes policies and create the templates and constraints:

Several new templates created

Now we have two constraints for k8sazurecontainernoprivilege:

Two constraints: one deny and the other audit

The new constraint comes from the larger initiative. In the spec, the enforcementAction is set to dryrun (audit). Although I do not have pods that violate k8sazurecontainernoprivilege, I do have pods that violate another policy that checks for host path mapping. That is reported back by the add-on in the compliance report:

Yes, akv2k8s maps to /etc/kubernetes on the host

Conclusion

In this post, you have seen what happens when you install the AKS policy add-on and enable a Kubernetes policy in Azure Policy. The add-on creates constraints and constraint templates that Gatekeeper v3 understands. The rego in a constraint template contains the logic used to evaluate the policy. When the policy is set to deny, Gatekeeper v3, which acts as an admission controller, denies the request in real time. When the policy is set to audit (or dry run at the constraint level), audit results are reported by the add-on to Azure Policy.

Azure Kubernetes Service authentication with Azure AD

If you have ever installed Kubernetes on your own hardware or you have worked with Kubernetes on the desktop with a tool like kind, you probably know that you need a config file that tells the Kubernetes CLI (kubectl) how to talk to the Kubernetes API server. It contains the address of the API server, the cert of the CA that issued the API Server’s SSL certificate and more. Check the docs for more information. Tools like kind make it very easy because they create the file automatically or merge connection information into an existing config file.

For example, when you run kind create cluster, you will see the following message at the end:

kind output

The message Set kubectl context to kind-kind indicates that the config file in $HOME/.kube was modified. If you were to check the config file, you would find a client certificate and client key to authenticate to kind. Client certificate authentication is a very common way to authenticate to Kubernetes.
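
For reference, the relevant parts of such a config file look roughly like this (a trimmed, hypothetical sketch with shortened values; your file will differ):

apiVersion: v1
kind: Config
clusters:
- name: kind-kind
  cluster:
    certificate-authority-data: LS0tLS1CRUdJTi...
    server: https://127.0.0.1:6443
users:
- name: kind-kind
  user:
    client-certificate-data: LS0tLS1CRUdJTi...
    client-key-data: LS0tLS1CRUdJTi...
contexts:
- name: kind-kind
  context:
    cluster: kind-kind
    user: kind-kind
current-context: kind-kind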

Azure AD authentication

In an enterprise context, you should not rely on client certificate authentication. It would be too cumbersome to create and manage all these client certificates. The level of control over these certificates is limited as well. In a Microsoft context with users, groups and service principals (think service accounts) in Azure Active Directory, Kubernetes should be integrated with that. If you are using Azure-managed Kubernetes with AKS, that is very easy to do with AKS-managed AAD authentication. There is also a manual method of integrating with AAD but you should not use that anymore. There is still some time to move away from that method though. 😀
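
For reference, enabling AKS-managed AAD on a new cluster is basically one flag on az aks create. The sketch below uses placeholder names and an AAD group that will act as cluster admin:

az aks create --name CLUSTERNAME --resource-group RESOURCEGROUP \
  --enable-aad \
  --aad-admin-group-object-ids <AAD-GROUP-OBJECT-ID>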

To illustrate how you log on with Azure AD and how to bypass AAD, I created a video on my YouTube channel:

Azure AD Authentication in a pipeline

If you watched the video, you know you need to interactively provide your credentials when you perform an action that needs to be authenticated. After providing your credentials, kubectl has an access token (JWT) to pass to the Kubernetes API server.

In a pipeline or other automated process, you want to log on non-interactively. That is possible via the client-go credentials plugin kubelogin. When you search for kubelogin, you will find several of those plugins. You will want to use Azure/kubelogin to log on to Azure AD. In the video above, I demonstrate the use of kubelogin around the 14:40 mark.
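
As a quick, hedged sketch of a non-interactive login with a service principal (values are placeholders), kubelogin rewrites your kube config and then reads the service principal credentials from environment variables:

# rewrite the kube config to use the kubelogin exec plugin with service principal login
kubelogin convert-kubeconfig -l spn

# kubelogin picks up the service principal from these variables
export AAD_SERVICE_PRINCIPAL_CLIENT_ID=<appId>
export AAD_SERVICE_PRINCIPAL_CLIENT_SECRET=<secret>

kubectl get pods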

Azure Policy: Kubernetes pod security baseline explained

When you deploy Azure Kubernetes Service (AKS) in an enterprise context, you will probably be asked about policies that can be applied to AKS for compliance and security. In this post, we will discuss Azure Policy for Kubernetes briefly and then proceed to explaining a group of policies that implement baseline security settings.

Azure Policy for Kubernetes

To apply policies to Kubernetes, Microsoft decided to integrate their existing Azure Policy solution with Gatekeeper v3. Gatekeeper is an admission controller webhook for Open Policy Agent (OPA). An admission controller webhook is a piece of software, running in Kubernetes, that can inspect incoming requests to the Kubernetes API server and decide to either allow or deny it. Open Policy Agent is a general solution for policy based control that goes way beyond just Kubernetes. It uses a language, called rego, that allows you to write policies that allow or deny requests. You can check the gatekeeper library for examples.

Although you can install Gatekeeper v3 on Kubernetes yourself, Microsoft provides an add-on to AKS that installs Gatekeeper for you. Be aware that you either install it yourself or let the add-on do it, but not both. The AKS add-on can be installed via the Azure CLI or an ARM template. It can also be enabled via the Azure Portal:

The policy add-on can easily be enabled and disabled via the Azure Portal; above it is enabled

When you enable the add-on, there will be an extra namespace on your cluster called gatekeeper-system. It contains the following workloads:

Gatekeeper v3 workloads

If, for some reason, you were to remove the above deployments, the add-on would add them back.

Enabling policies

Once the add-on is installed, you can enable Kubernetes policies via Azure Policy. Before we get started, keep in mind the following:

  • Policies can be applied at scale to multiple clusters: Azure Policy can be attached to resource groups, a subscription or management groups. When there are multiple AKS clusters at those levels, policy can be applied to all of those clusters
  • Linux nodes only
  • You can only use built-in policies provided by Azure

That last point is an important one. Microsoft provides several policies out of the box that are written with rego as discussed earlier. However, writing your own policies with rego is not supported.

Let’s add a policy initiative, which is just a fancy name for a group of policies. We will apply the policy to the resource group that contains my AKS cluster. From Azure Policy, click assignments:

Click Assign Initiative. The following screen is shown:

Above, the initiative will be linked to the rg-gitops-demo resource group. You can change the scope to the subscription or a management group as well. Click the three dots (…) next to Basics – Initiative definition. In the Search box, type kubernetes. You should see two initiatives:

We will apply the baseline standards. The restricted standards include extra policies. Click the baseline standards and click Select. A bit lower on the screen, make sure Policy Enforcement is enabled:

Now click Next. Because we want to deny the policy in real-time, select the deny effect:

Note that several namespaces are excluded by default. You can add namespaces here that you trust but that run pods that would trigger policy violations. On my cluster, there is a piece of software that will definitely cause a violation. You can now follow the wizard till the end and create the assignment. The assignment should be listed on the main Azure Policy screen:

You should now give Azure Policy some time to evaluate the policies. After a while, in the Overview screen, you can check the compliance state:

Above, you can see that the Kubernetes policies report non-compliance. In the next section, we will describe some of the policies in more detail.

Note that although these policies are set to deny, they will not kill existing workloads that violate the policy. However, if you were to delete a running pod that violates the policies, it will not come back up!

Important: in this article, we apply the default initiative. As a best practice however, you should duplicate the initiative. You can then change the policy parameters specific to your organization. For instance, you might want to allow host paths, allow capabilities and more. Host paths and capabilities are explained a bit more below.

Policy details

Let’s look at the non-compliant policy first, by clicking on the policy. This is what I see:

The first policy, Kubernetes cluster pod hostPath volumes should only use allowed host paths, results in non-compliance. This policy requires you to set the paths on the host that can be mapped to the pod. Because we did not specify any host paths, any pod that mounts a host path in any of the namespaces the policy applies to will generate a violation. In my case, I deployed Azure Key Vault to Kubernetes, which mounts the /etc/kubernetes/azure.json file. That file contains the AKS cluster service principal credentials! Indeed, the policy prohibits this.

To learn more about a policy, you can click it and then select View Definition. The definition in JSON will be shown. Close to the end of the JSON, you will find a link to a constraintTemplate:

When you click the link, you will find the rego behind this policy. Here is a snippet:

targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sazurehostfilesystem

        violation[{"msg": msg, "details": {}}] {
            volume := input_hostpath_volumes[_]
            allowedPaths := get_allowed_paths(input)
            input_hostpath_violation(allowedPaths, volume)
            msg := sprintf("HostPath volume %v is not allowed, pod: %v. Allowed path: %v", [volume, input.review.object.metadata.name, allowedPaths])
        }

Even if you have never worked with rego, it’s pretty clear that it checks an array of allowed paths and then checks for host paths that are not in the list. There are other helper functions in the template.

Let’s look at another policy, Do not allow privileged containers in Kubernetes cluster. This one is pretty clear. It prevents you from creating a pod that has privileged: true in its securityContext. Suppose you have the following YAML:

apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      privileged: true

If you try to apply the above YAML, the following error will be thrown:

Oops, privileged: true is not allowed (don’t look at the capabilities yet 😀)

As you can see, because we set the initiative to deny, the requests are denied in real-time by the Gatekeeper admission controller!

Let’s look at one more policy: Kubernetes cluster containers should only use allowed capabilities. With this policy, you can limit the Linux capabilities that can be added to your pod. An example of a capability is NET_BIND_SERVICE, which allows you to bind to a port below 1024, something a non-root user cannot do. By default, there is an array of allowedCapabilities which is empty. In addition, there is an array of requiredDropCapabilities which is empty as well. Note that this policy does not impact the default capabilities your pods get. It does impact the additional ones you want to add. For example, if you use the securityContext below, you are adding the additional capabilities NET_ADMIN and SYS_TIME:

securityContext:
      capabilities:
        add: ["NET_ADMIN", "SYS_TIME"]

This is not allowed by the policy. You will get:

By checking the constraint templates of the other policies, it is quite straightforward to see what each policy checks for.

Note: when I export the policy initiative to GitHub (preview feature), I do see the default capabilities; see the snippet below (the capabilities match the list that Gatekeeper reports above):

"allowedCapabilities": {
      "value": [
       "CHOWN",
       "DAC_OVERRIDE",
       "FSETID",
       "FOWNER",
       "MKNOD",
       "NET_RAW",
       "SETGID",
       "SETUID",
       "SETFCAP",
       "SETPCAP",
       "NET_BIND_SERVICE",
       "SYS_CHROOT",
       "KILL",
       "AUDIT_WRITE"
      ]
}

Conclusion

In most cases, you will want to enable Azure Policy for Kubernetes to control what workloads can do. We have only scratched the surface here. Next to the two initiatives, there are several other policies to control things such as GitOps configurations, the creation of external load balancers, requiring pod requests and limits, and much, much more!

Deploying Helm Charts with Azure DevOps pipelines

I recently uploaded a video to my YouTube channel about this topic:

YouTube video; direct link to the demo: https://youtu.be/1bC-fZEFodU?t=756

In this post, I will provide some more information about the pipelines. Again, many thanks to this post on which the solution is based.

The YAML pipelines can be found in my go-template repository. The application is basically a starter template to create a Go web app or API with full configuration, zap logging, OpenAPI spec and more. The Azure DevOps pipelines are in the azdo folder.

The big picture

Yes, this is the big picture

The pipelines are designed to deploy to a qa environment and subsequently to production after an approval is given. The ci pipeline builds a container image and a Helm chart and stores both in Azure Container Registry (ACR). When that is finished, a pipeline artifact is stored that contains the image tag and chart version in a JSON file.

The cd pipeline triggers on the ci pipeline artifact and deploys to qa and production. It waits for approval before deployment to production. It uses environments to achieve that.

CI pipeline

In the “ci” pipeline, the following steps are taken:

  • Retrieve the git commit SHA with $(build.SourceVersion) and store it in a variable called imageTag. To version the images, we simply use git commit SHAs which is a valid approach. Imho you do not need to use semantic versioning tags with pipelines that deploy often.
  • Build the container image. Note that the Dockerfile is a two stage build and that go test is used in the first stage. Unit tests are not run outside the image building process but you could of course do that as well to fail faster in case there is an issue.
  • Scan the image for vulnerabilities with Snyk. This step is just for reference because Snyk will not find issues with the image as it is based on the scratch image.
  • Push the container image to Azure Container Registry (ACR). Pipeline variables $(registryLogin) and $(registryPassword) are used with docker login instead of the Azure DevOps task (see the sketch after this list).
  • Run helm lint to check the chart in /charts/go-template
  • Run helm package to package the chart (this is not required before pushing the chart to ACR; it is just an example)
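
As a hedged sketch of the build-and-push part (the variable names below follow the ci-vars template shown later; the actual ci-steps.yaml may differ), such a script step could look like this:

steps:
- script: |
    docker login $(registryServerName) -u $(registryLogin) -p $(registryPassword)
    docker build -t $(registryServerName)/$(imageName):$(imageTag) .
    docker push $(registryServerName)/$(imageName):$(imageTag)
  displayName: Build and push container image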

When the above steps have finished, we are ready to push the chart to ACR. It is important to realize that storing charts in OCI compliant registries is an experimental feature of Helm. You need to turn on these features with:

export HELM_EXPERIMENTAL_OCI=1

After turning on this support, we can login to ACR and push the chart. These are the steps:

  • Use helm registry login and use the same login and password as with docker login
  • Save the chart from the checked-out sources (/charts/go-template) locally with helm chart save. This is similar to building and storing a container image locally, as you also use the full name of the chart. For example: myacr.azurecr.io/helm/go-template:0.0.1. In our pipeline, the command below is used:
chartVersion=`helm chart save charts/go-template $(registryServerName)/helm/$(projectName) | grep version | awk -F ': ' '{print $2}'`
  • Above, we run the helm chart save command but we also want to retrieve the version of the chart. That version is inside /charts/go-template/Chart.yaml and is output as version. With grep and awk, we grab the version and store it in the chartVersion variable. This is a “shell variable”, not a pipeline variable.
  • With the chart saved locally, we can now push the chart to ACR with:
helm chart push $(registryServerName)/helm/$(projectName):$chartVersion
  • Now we just need to save the chart version and the container image tag as a pipeline artifact. We can save these two values to a json file with:
echo $(jq -n --arg chartVersion "$chartVersion" --arg imgVersion "$(imageTag)" '{chartVersion: $chartVersion, imgVersion: $imgVersion}') > $(build.artifactStagingDirectory)/variables.json
  • As a last step, we publish the pipeline artifact

Do you have to do it this way? Of course not and there are many alternatives. For instance, because OCI support is experimental in Helm and storing charts in ACR is in preview, you might want to install your chart directly from your source files. In that case, you can just build the container image and push it to ACR. The deployment pipeline can then check out the sources and use /charts/go-template as the source for the helm install or helm upgrade command. The deployment pipeline could be triggered on the image push event.

Note that the pipeline uses templates for both the variables and the steps. The entire pipeline is the three files below:

  • azdo/ci.yaml
  • azdo/common/ci-vars.yaml
  • azdo/common/ci-steps.yaml

The ci-vars template defines and accepts a parameter called projectName which is go-template in my case. To call the template and set the parameter:

variables:
- template: ./common/ci-vars.yaml
  parameters:
      projectName: go-template

To use the parameter in ci-vars.yaml:

parameters:
  projectName: ''

variables:
  helmVersion: 3.4.1
  registryServerName: '$(registryName).azurecr.io'
  projectName: ${{ parameters.projectName }}
  imageName: ${{ parameters.projectName }}
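
The steps template is pulled in the same way, but under steps instead of variables. A minimal sketch (the actual pipeline may pass parameters here as well):

steps:
- template: ./common/ci-steps.yaml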

CD pipeline

Now that we have both the chart and the container image in ACR, we can start our deployment. The screenshot below shows the repositories in ACR:

ACR repos for both the image and Helm chart

The deployment pipeline is defined in cd.yaml and uses cd-vars.yaml and cd-steps.yaml as templates. It pays off to use a template here because we execute the same steps in each environment.

The deployment pipeline triggers on the pipeline artifact from ci, by using resources as below:

resources: 
  pipelines:
  - pipeline: ci
    source: ci
    trigger:
      enabled: true
      branches:
        include:
          - main

When the pipeline is triggered, the stages can be started, beginning with the qa stage:

- stage: qa
  displayName: qa
  jobs:
  - deployment: qa
    displayName: 'deploy helm chart on AKS qa'
    pool:
      vmImage: ubuntu-latest
    variables:
      k8sNamespace: $(projectName)-qa
      replicas: 1
    environment: qa-$(projectName)
    strategy:
      runOnce:
        deploy:
          steps:
          - template: ./common/cd-steps.yaml

This pipeline deploys both qa and production to the same cluster but uses different namespaces. The namespace is defined in the stage’s variables, next to a replicas variable. Note that we are using an environment here. We’ll come back to that.

The actual magic (well, sort of…) happens in cd-steps.yaml:

  • Do not checkout the source files; we do not need them
  • Install helm with the HelmInstaller task
  • Download the pipeline artifact

After the download of the pipeline artifact, there is one final bash script that logs on to Kubernetes and deploys the chart:

  • Use az login to log in with the Azure CLI. You can also use an AzureCLI task with a service connection to authenticate. I often just use bash but that is personal preference.
  • az login uses a service principal; the Id and secret of the service principal are in pipeline secrets
  • In my case, the service principal is a member of a group that was used as an admin group for managed AAD integration with AKS; as such, the account has full access to the AKS cluster; that also means I can obtain a kube config using --admin in az aks get-credentials without any issue (see the sketch after this list)
  • If you want to use a custom RBAC role for the service principal and an account that cannot use --admin, you will need to use kubelogin to obtain the AAD tokens to modify your kube config; see the comments in the bash script for more information
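
Below is a hedged sketch of that login logic; the variable names ($(spAppId), $(spPassword), $(tenantId), $(clusterName), $(resourceGroup)) are placeholders for pipeline variables and secrets, not the exact names used in the repository:

az login --service-principal -u $(spAppId) -p $(spPassword) --tenant $(tenantId)
az aks get-credentials -n $(clusterName) -g $(resourceGroup) --admin --overwrite-existing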

Phew, with the login out of the way, we can grab the Helm chart and install it:

  • Use export HELM_EXPERIMENTAL_OCI=1 to turn on the experimental support
  • Login to ACR with helm registry login
  • Grab the chart version and image version from the pipeline artifact:
chartVersion=$(jq .chartVersion $(pipeline.workspace)/ci/build-artifact/variables.json -r)
imgVersion=$(jq .imgVersion $(pipeline.workspace)/ci/build-artifact/variables.json -r)
  • Pull the chart with:
helm chart pull $(registryServerName)/helm/$(projectName):$chartVersion
  • Export and install the chart:
# export the chart to ./$(projectName)
helm chart export $(registryServerName)/helm/$(projectName):$chartVersion

# helm upgrade with fallback to install
helm upgrade \
    --namespace $(k8sNamespace) \
    --create-namespace \
    --install \
    --wait \
    --set image.repository=$(registryServerName)/$(projectName) \
    --set image.tag=$imgVersion \
    --set replicaCount=$(replicas) \
    $(projectName) \
    ./$(projectName)

Of course, to install the chart, we use helm upgrade but fall back to installation if this is the first time we run the command (--install). Note that we have to set some parameters at install time such as:

  • image.repository: in the values.yaml file, the image refers to ghcr.io; we need to change this to myacr.azurecr.io/go-template
  • image.tag: set this to the git commit SHA we grabbed from variables.json
  • replicaCount: set this to the stage variable replicas
  • namespace: set this to the stage variable k8sNamespace and use --create-namespace to create it if it does not exist; in many environments, this will not work as the namespaces are created by other teams with network policies, budgets, RBAC, etc…

Environments

As discussed earlier, the stages use environments. This shows up in Azure DevOps as follows:

Environments in Azure DevOps

You can track the deployments per environment:

Deployments per environment

And of course, you can set approvals and checks on an environment:

Approvals and checks; above we only configured an approval check on production

When you deploy, you will need to approve manually to deploy to production. You can do that from the screen that shows the stages of the pipeline run:

This now shows the check is passed; but this is the place where you can approve the stage

Note that you do not have to create environments before you use them in a pipeline. They will be dynamically created by the pipeline. Usually though, they are created in advance with the appropriate settings such as approvals and checks.

You can also add resources to the environment such as your Kubernetes cluster. This gives you a view on Kubernetes, directly from Azure DevOps. However, if you deploy a private cluster, as many enterprises do, that will not work. Azure DevOps needs line of sight to the API server to show the resources properly.

Summary

What can I say? 😀 I hope that this post, the video and the sample project and pipelines can get you started with deployments to Kubernetes using Helm. If you have questions, feel free to drop them in the comments.

Kubernetes Canary Deployments with GitHub Actions

In the previous post, we looked at some of the GitHub Actions you can use with Microsoft Azure. One of those actions is the azure/k8s-deploy action which is currently at v1.4 (January 2021). To use that action, include the following snippet in your workflow:

- uses: azure/k8s-deploy@v1.4
  with:
    namespace: go-template
    manifests: ${{ steps.bake.outputs.manifestsBundle }}
    images: |
      ghcr.io/gbaeke/go-template:${{ env.IMAGE_TAG }}

The above snippet uses baked manifests from an earlier azure/k8s-bake action that uses kustomize as the render engine. This is optional and you can use individual manifests without kustomize. It also replaces the image it finds in the baked manifest with an image that includes a specific tag that is set as a variable at the top of the workflow. Multiple images can be replaced if necessary.

The azure/k8s-deploy action supports different styles of deployments, defined by the strategy action input:

  • none: if you do not specify a strategy, a standard Kubernetes rolling update is performed
  • canary: deploy a new version and direct a part of the traffic to the new version; you need to set a percentage action input to control the traffic split; in essence, a percentage of your users will use the new version
  • blue-green: deploy a new version next to the old version; after testing the new version, you can switch traffic to the new version

In this post, we will only look at the canary deployment. If you read the description above, it should be clear that we need a way to split the traffic. There are several ways to do this, via the traffic-split-method action input:

  • pod: the default value; by tweaking the number of “old version” and “new version” pods, the standard load balancing of a Kubernetes service will approximate the percentage you set; pod uses standard Kubernetes features so no additional software is needed
  • smi: you will need to implement a service mesh that supports TrafficSplit; the traffic split is controlled at the request level by the service mesh and will be precise

Although pod traffic split is the easiest to use and does not require additional software, it is not very precise. In general, I recommend using TrafficSplit in combination with a service mesh like linkerd, which is very easy to implement. Other options are Open Service Mesh and of course, Istio.
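
For context, a TrafficSplit is just a custom resource that the azure/k8s-deploy action creates and updates for you. A hedged sketch of what it roughly looks like (service names and weights are illustrative, not the exact names the action generates):

apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: go-template-service-trafficsplit
  namespace: go-template
spec:
  service: go-template-service
  backends:
  - service: go-template-service-stable
    weight: 800m
  - service: go-template-service-baseline
    weight: 100m
  - service: go-template-service-canary
    weight: 100m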

With this out of the way, let’s see how we can implement it on a standard Azure Kubernetes Service (AKS) cluster.

Installing linkerd

Installing linkerd is easy. First install the cli on your system:

curl -sL https://run.linkerd.io/install | sh

Alternatively, use brew to install it:

brew install linkerd

Next, with kubectl configured to connect to your AKS cluster, run the following commands:

linkerd check --pre
linkerd install | kubectl apply -f -
linkerd check

Preparing the manifests

We will use three manifests, in combination with kustomize. You can find them on GitHub. In namespace.yaml, the linkerd.io/inject annotation ensures that the entire namespace is meshed. Every pod you create will get the linkerd sidecar injected, which is required for traffic splitting.
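
The relevant part of namespace.yaml boils down to the following (a minimal sketch; check the repository for the actual manifest):

apiVersion: v1
kind: Namespace
metadata:
  name: go-template
  annotations:
    linkerd.io/inject: enabled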

In the GitHub workflow, the manifests will be “baked” with kustomize. The result will be one manifest file:

- uses: azure/k8s-bake@v1
  with:
    renderEngine: kustomize
    kustomizationPath: ./deploy/
  id: bake

The action above requires an id. We will use that id to refer to the resulting manifest later with:

${{ steps.bake.outputs.manifestsBundle }}

Important note: I had some trouble using the baked manifest and later switched to using the individual manifests; I also deployed namespace.yaml in one action and then deployed service.yaml and deployment.yaml in a separate action; to deploy multiple manifests, use the following syntax:

- uses: azure/k8s-deploy@v1.4
  with:
    namespace: go-template
    manifests: |
      ./deploy/service.yaml
      ./deploy/deployment.yaml

First run

We have to start somewhere, so we will deploy version 0.0.1 of the ghcr.io/gbaeke/go-template image. In the deployment workflow, we set the IMAGE_TAG variable to 0.0.1 and have the following action:

- uses: azure/k8s-deploy@v1.4
  with:
    namespace: go-template
    manifests: ${{ steps.bake.outputs.manifestsBundle }}
    # or use individual manifests in case of issues 🙂
    images: |
      ghcr.io/gbaeke/go-template:${{ env.IMAGE_TAG }}
    strategy: canary
    traffic-split-method: smi
    action: deploy #deploy is the default; we will later use this to promote/reject
    percentage: 20
    baseline-and-canary-replicas: 2

Above, the action inputs set the canary strategy, using the smi method with 20% of traffic to the new version. The deploy action is used which results in “canary” pods of version 0.0.1. It’s not actually a canary because there is no stable deployment yet and all traffic goes to the “canary”.

This is what gets deployed:

Initial, canary-only release

There is only a canary deployment with 2 canary pods (we asked for 2 explicitly in the action). There are four services: the main go-template-service and then a service for baseline, canary and stable. Instead of deploy, you can use promote directly (action: promote) to deploy a stable version right away.

If we run linkerd dashboard we can check the namespace and the Traffic Split:

TrafficSplit in linkerd; all traffic to canary

Looked at in another way:

TrafficSplit

All traffic goes to the canary. In the Kubernetes TrafficSplit object, the weight is actually set to 1000m which is shown as 1 above.

Promotion

We can now modify the pipeline, change the action input action of azure/k8s-deploy to promote and trigger the workflow to run. This is what the action should look like:

- uses: azure/k8s-deploy@v1.4
  with:
    namespace: go-template
    manifests: ${{ steps.bake.outputs.manifestsBundle }}
    images: |
      ghcr.io/gbaeke/go-template:${{ env.IMAGE_TAG }}
    strategy: canary
    traffic-split-method: smi
    action: promote  #deploy is the default; we will later use this to promote/reject
    percentage: 20
    baseline-and-canary-replicas: 2

This is the result of the promotion:

After promotion

As expected, we now have 5 pods of the 0.0.1 deployment. This is the stable deployment. The canary pods have been removed. We get five pods because that is the number of replicas in deployment.yaml. The baseline-and-canary-replicas action input is not relevant now as there are no canary and baseline deployments.

The TrafficSplit now directs 100% of traffic to the stable service:

All traffic to “promoted” stable service

Deploying v0.0.2 with 20% split

Now we can deploy a new version of our app, version 0.0.2. The action is the same as the initial deploy but IMAGE_TAG is set to 0.0.2:

- uses: azure/k8s-deploy@v1.4
  with:
    namespace: go-template
    manifests: ${{ steps.bake.outputs.manifestsBundle }}
    images: |
      ghcr.io/gbaeke/go-template:${{ env.IMAGE_TAG }}
    strategy: canary
    traffic-split-method: smi
    action: deploy #deploy is the default; we will later use this to promote/reject
    percentage: 20
    baseline-and-canary-replicas: 2

Running this action results in:

Canary deployment of 0.0.2

The stable version still has 5 pods but canary and baseline pods have been added. More info about baseline below.

TrafficSplit is now:

TrafficSplit: 80% to stable and 20% to baseline & canary

Note that the baseline pods use the same version as the stable pods (here 0.0.1). The baseline should be used to compare metrics with the canary version. You should not compare the canary to stable because factors such as caching might influence the comparison. This also means that, instead of 20%, only 10% of traffic goes to the new version.

Wait… I have to change the pipeline to promote/reject?

Above, we manually changed the pipeline and ran it manually from VS Code or the GitHub website. This is possible with triggers such as repository_dispatch and workflow_dispatch. There are (or should be) some ways to automate this better:

  • GitHub environments: with Azure DevOps, it is possible to run jobs based on environments and the approve/reject status; I am still trying to figure out if this is possible with GitHub Actions but it does not look like it (yet); if you know, drop it in the comments; I will update this post if there is a good solution
  • workflow_dispatch inputs: if you do want to run the workflow manually, you can use workflow_dispatch inputs to approve/reject or do nothing

Should you use this?

While I think the GitHub Action works well, I am not in favor of driving all this from GitHub, Azure DevOps and similar solutions. There’s just not enough control imho.

Solutions such as flagger or Argo Rollouts are progressive delivery operators that run inside the Kubernetes cluster. They provide more operational control, are fully automated and can be integrated with Prometheus and/or service mesh metrics. For an example, check out one of my videos. When you need canary and/or blue-green releases and you are looking to integrate the progression of your release based on metrics, surely check them out. They also work well for manual promotion via a CLI or UI if you do not need metrics-based promotion.

Conclusion

In this post we looked at the “mechanics” of canary deployments with GitHub Actions. An end-to-end solution, fully automated and based on metrics, in a more complex production application is quite challenging. If your particular application can use simpler deployment methods such as standard Kubernetes deployments or even blue-green, then use those!

A look at GitHub Actions for Azure and AKS deployments

In the past, I wrote about using Azure DevOps to deploy an AKS cluster and bootstrap it with Flux v2, a GitOps solution. In an older post, I also described bootstrapping the cluster with Helm deployments from the pipeline.

In this post, we will take a look at doing the above with GitHub Actions. Along the way, we will look at a VS Code extension for GitHub Actions, manually triggering a workflow from VS Code and GitHub and manifest deployment to AKS.

Let’s dive in, shall we?

Getting ready

What do you need to follow along:

Deploying AKS

Although you can deploy Azure Kubernetes Service (AKS) in many ways (manual, CLI, ARM, Terraform, …), we will use ARM and the azure/arm-deploy@v1 action in a workflow we can trigger manually. The workflow (without the Flux bootstrap section) is shown below:

name: deploy

on:
  repository_dispatch:
    types: [deploy]
  workflow_dispatch:
    

env:
  CLUSTER_NAME: CLUSTERNAME
  RESOURCE_GROUP: RESOURCEGROUP
  KEYVAULT: KVNAME
  GITHUB_OWNER: GITHUBUSER
  REPO: FLUXREPO


jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2

      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      - uses: azure/arm-deploy@v1
        with:
          subscriptionId: ${{ secrets.SUBSCRIPTION_ID }}
          resourceGroupName: rg-gitops-demo
          template: ./aks/deploy.json
          parameters: ./aks/deployparams.json

      - uses: azure/setup-kubectl@v1
        with:
          version: 'v1.18.8'

      - uses: azure/aks-set-context@v1
        with:
          creds: '${{ secrets.AZURE_CREDENTIALS }}'
          cluster-name: ${{ env.CLUSTER_NAME }}
          resource-group: ${{ env.RESOURCE_GROUP }}

To create this workflow, add a .yml file (e.g. deploy.yml) to the .github/workflows folder of the repository. You can add this directly from the GitHub website or use VS Code to create the file and push it to GitHub.

The above workflow uses several of the Azure GitHub Actions, starting with the login. The azure/login@v1 action requires a GitHub secret that I called AZURE_CREDENTIALS. You can set secrets in your repository settings. If you use an organization, you can make it an organization secret.

GitHub Repository Secrets

If you have the GitHub Actions VS Code extension, you can also set them from there:

Setting and reading the secrets from VS Code

If you use the gh command line, you can use the command below from the local repository folder:

gh secret set SECRETNAME --body SECRETVALUE

The VS Code integration and the gh command line make it easy to work with secrets from your local system rather than having to go to the GitHub website.

The secret should contain the full JSON response of the following Azure CLI command:

az ad sp create-for-rbac --name "sp-name" --sdk-auth --role ROLE \
     --scopes /subscriptions/SUBID

The above command creates a service principal and gives it a role at the subscription level. That role could be Contributor, Reader, or another role. In this case, Contributor will do the trick. Of course, you can decide to limit the scope to a lower level such as a resource group.

After a successful login, we can use an ARM template to deploy AKS with the azure/arm-deploy@v1 action:

      - uses: azure/arm-deploy@v1
        with:
          subscriptionId: ${{ secrets.SUBSCRIPTION_ID }}
          resourceGroupName: rg-gitops-demo
          template: ./aks/deploy.json
          parameters: ./aks/deployparams.json

The action’s parameters are self-explanatory. For an example of an ARM template and parameters to deploy AKS, check out this example. I put my template in the aks folder of the GitHub repository. Of course, you can deploy anything you want with this action. AKS is merely an example.

When the cluster is deployed, we can download a specific version of kubectl to the GitHub runner that executes the workflow. For instance:

     - uses: azure/setup-kubectl@v1
        with:
          version: 'v1.18.8'

Note that the Ubuntu GitHub runner (we use ubuntu-latest here) already contains kubectl version 1.19 at the time of writing. The azure/setup-kubectl@v1 action is useful if you want to use a specific version; in this particular case, it is not strictly required.

Now we can obtain credentials to our AKS cluster with the azure/aks-set-context@v1 action. We can use the same credentials secret, in combination with the cluster name and resource group set as workflow environment variables:

      - uses: azure/aks-set-context@v1
        with:
          creds: '${{ secrets.AZURE_CREDENTIALS }}'
          cluster-name: ${{ env.CLUSTER_NAME }}
          resource-group: ${{ env.RESOURCE_GROUP }}

In this case, the AKS API server has a public endpoint. When you use a private endpoint, run the GitHub workflow on a self-hosted runner with network access to the private API server.

Bootstrapping with Flux v2

To bootstrap the cluster with tools like nginx and cert-manager, Flux v2 is used. The commands used in the original Azure DevOps pipeline can be reused:

- name: Flux bootstrap
  run: |
    export GITHUB_TOKEN=${{ secrets.GH_TOKEN }}
    msi="$(az aks show -n ${{ env.CLUSTER_NAME }} -g ${{ env.RESOURCE_GROUP }} --query identityProfile.kubeletidentity.objectId -o tsv)"
    az keyvault set-policy --name ${{ env.KEYVAULT }} --object-id $msi --secret-permissions get
    curl -s https://toolkit.fluxcd.io/install.sh | bash
    flux bootstrap github --owner=${{ env.GITHUB_OWNER }} --repository=${{ env.REPO }} --branch=main --path=demo-cluster --personal

For an explanation of these commands, check this post.

Running the workflow manually

As noted earlier, we want to be able to run the workflow from the GitHub Actions extension in VS Code and the GitHub website instead of pushes or pull requests. The following triggers make this happen:

on:
  repository_dispatch:
    types: [deploy]
  workflow_dispatch:

The VS Code extension requires the repository_dispatch trigger. Because I am using multiple workflows in the same repo with this trigger, I use a unique event type per workflow. In this case, the type is deploy. To run the workflow, just right click on the workflow in VS Code:

Running the workflow from VS Code

You will be asked for the event to trigger and then the event type:

Selecting the deploy event type

The workflow will now be run. Progress can be tracked from VS Code:

Tracking workflow runs

Update Jan 7th 2021: after writing this post, the GitHub Action extension was updated to also support workflow_dispatch which means you can use workflow_dispatch to trigger the workflow from both VS Code and the GitHub website ⬇⬇⬇

To run the workflow from the GitHub website, workflow_dispatch is used. On GitHub, you can then run the workflow from the web UI:

Running the workflow from GitHub

Note that you can specify input parameters to workflow_dispatch. See this doc for more info.

Deploying manifests

As shown above, deploying AKS from a GitHub workflow is rather straightforward. The creation of the ARM template takes more effort. Deploying a workload from manifests is easy to do as well. In the repo, I created a second workflow called app.yml with the following content:

name: deployapp

on:
  repository_dispatch:
    types: [deployapp]
  workflow_dispatch:

env:
  CLUSTER_NAME: clu-gitops
  RESOURCE_GROUP: rg-gitops-demo
  IMAGE_TAG: 0.0.2

jobs:
  deployapp:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2

      - uses: azure/aks-set-context@v1
        with:
          creds: '${{ secrets.AZURE_CREDENTIALS }}'
          cluster-name: ${{ env.CLUSTER_NAME }}
          resource-group: ${{ env.RESOURCE_GROUP }}

      - uses: azure/container-scan@v0
        with:
          image-name: ghcr.io/gbaeke/go-template:${{ env.IMAGE_TAG }}
          run-quality-checks: true

      - uses: azure/k8s-bake@v1
        with:
          renderEngine: kustomize
          kustomizationPath: ./deploy/
        id: bake

      - uses: azure/k8s-deploy@v1
        with:
          namespace: go-template
          manifests: ${{ steps.bake.outputs.manifestsBundle }}
          images: |
            ghcr.io/gbaeke/go-template:${{ env.IMAGE_TAG }}   
          

In the above workflow, the following actions are used:

  • actions/checkout@v2: checkout the code on the GitHub runner
  • azure/aks-set-context@v1: obtain credentials to AKS
  • azure/container-scan@v0: scan the container image we want to deploy; see https://github.com/Azure/container-scan for the types of scan
  • azure/k8s-bake@v1: create one manifest file using kustomize; note that the action uses kubectl kustomize instead of the standalone kustomize executable; the action should refer to a folder that contains a kustomization.yaml file; see this link for an example
  • azure/k8s-deploy@v1: deploy the baked manifest (which is an output from the task with id=bake) to the go-template namespace on the cluster; replace the image to deploy with the image specified in the images list (the tag can be controlled with the workflow environment variable IMAGE_TAG)

Note that the azure/k8s-deploy@v1 action supports canary and blue/green deployments using several techniques for traffic splitting (Kubernetes, Ingress, SMI). In this case, a regular Kubernetes deployment is used, equivalent to kubectl apply -f templatefile.yaml.

Conclusion

I only touched upon a few of the Azure GitHub Actions such as azure/login@v1 and azure/k8s-deploy@v1. There are many more actions available that allow you to deploy to Azure Container Instances, Azure Web App and more. We have also looked at running the workflows from VS Code and the GitHub website, which is easy to do with the repository_dispatch and workflow_dispatch triggers.

AKS Pod Identity with the Azure SDK for Go

In an earlier post, I wrote about the use of AKS Pod Identity (Preview) in combination with the Azure SDK for Python. Although that works fine, there are some issues with that solution:

Vulnerabilities as detected by SNYK

In order to reduce the size of the image and reduce/remove the vulnerabilities, I decided to rewrite the solution in Go. Just like the Python app (with FastAPI), we will expose an HTTP endpoint that displays all resource groups in a subscription. We will use a specific pod identity that has the Contributor role at the subscription level.

If you are more into videos, here’s the video version:

The code

The code is on GitHub @ https://github.com/gbaeke/go-msi in main.go. The code is kept as simple as possible. It uses the following packages:

github.com/Azure/azure-sdk-for-go/profiles/latest/resources/mgmt/resources
github.com/Azure/go-autorest/autorest/azure/auth

The resources package is used to create a GroupsClient to work with resource groups (check the samples):

groupsClient := resources.NewGroupsClient(subID)

subID contains the subscription ID, which is retrieved via the SUBSCRIPTION_ID environment variable. The container requires that environment variable to be set.
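
Reading that variable is a one-liner; a sketch of the idea (the actual code in main.go may handle this differently):

// read the subscription ID from the environment; fail fast when it is missing
subID := os.Getenv("SUBSCRIPTION_ID")
if subID == "" {
	log.Fatal("SUBSCRIPTION_ID environment variable is not set")
}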

To authenticate to Azure and obtain proper authorization, the auth package is used with the NewAuthorizerFromEnvironment() method. That method supports several authentication mechanisms, one of which is managed identities. When we run this code on AKS, the pods can use a pod identity as explained in my previous post, if the pod identity addon is installed and configured. To obtain the authorization:

authorizer, err := auth.NewAuthorizerFromEnvironment()

authorizer is then passed to groupsClient via:

groupsClient.Authorizer = authorizer

Now we can use groupsClient to iterate through the resource groups:

ctx := context.Background()
log.Println("Getting groups list...")
groups, err := groupsClient.ListComplete(ctx, "", nil)
if err != nil {
	log.Println("Error getting groups", err)
}

log.Println("Enumerating groups...")
for groups.NotDone() {
	groupList = append(groupList, *groups.Value().Name)
	log.Println(*groups.Value().Name)
	err := groups.NextWithContext(ctx)
	if err != nil {
		log.Println("error getting next group")
	}
}

Note that the groups are printed and added to the groupList slice. We can now serve the groupz endpoint that lists the groups (yes, the groups are only read at startup 😀):

log.Println("Serving on 8080...")
http.HandleFunc("/groupz", groupz)
http.ListenAndServe(":8080", nil)
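
The groupz handler itself is not shown above. A minimal version could look like the hedged sketch below (the actual handler in the repository may differ):

// groupz returns the resource group names gathered at startup as JSON
func groupz(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	if err := json.NewEncoder(w).Encode(groupList); err != nil {
		http.Error(w, "could not encode groups", http.StatusInternalServerError)
	}
}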

The result of the call to /groupz is shown below:

My resource groups mess in my test subscription 😀

Running the code in a container

We can now build a single statically linked executable with go build and package it in a scratch container. If you want to know if your executable is statically linked, run file on it (e.g. file myapp). The result should be like:

myapp: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped

Here is the multi-stage Dockerfile:

# argument for Go version
ARG GO_VERSION=1.14.5

# STAGE 1: building the executable
FROM golang:${GO_VERSION}-alpine AS build

# git required for go mod
RUN apk add --no-cache git

# certs
RUN apk --no-cache add ca-certificates

# Working directory will be created if it does not exist
WORKDIR /src

# We use go modules; copy go.mod and go.sum
COPY ./go.mod ./go.sum ./
RUN go mod download

# Import code
COPY ./ ./


# Build the statically linked executable
RUN CGO_ENABLED=0 go build \
	-installsuffix 'static' \
	-o /app .

# STAGE 2: build the container to run
FROM scratch AS final

# copy compiled app
COPY --from=build /app /app

# copy ca certs
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/

# run binary
ENTRYPOINT ["/app"]

In the above Dockerfile, it is important to add the ca certificates to the build container and later copy them to the scratch container. The code will need to connect to https://management.azure.com and requires valid root CA certificates to do so.

When you build the container with the Dockerfile, it will result in a docker image of about 8.7MB. SNYK will not report any known vulnerabilities. Great success!

Note: container will run as root though; bad! 😀 Nico Meisenzahl has a great post on containerizing .NET Core apps which also shows how to configure the image to not run as root.

Let’s add some YAML

The GitHub repo contains a workflow that builds and pushes a container to GitHub container registry. The most recent version at the time of this writing is 0.1.1. The YAML file to deploy this container as part of a deployment is below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mymsi-deployment
  namespace: mymsi
  labels:
    app: mymsi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mymsi
  template:
    metadata:
      labels:
        app: mymsi
        aadpodidbinding: mymsi
    spec:
      containers:
        - name: mymsi
          image: ghcr.io/gbaeke/go-msi:0.1.1
          env:
            - name: SUBSCRIPTION_ID
              value: SUBSCRIPTION ID
            - name: AZURE_CLIENT_ID
              value: APP ID OF YOUR MANAGED IDENTITY
            - name: AZURE_AD_RESOURCE
              value: "https://management.azure.com"
          ports:
            - containerPort: 8080

It’s possible to retrieve the subscription ID at runtime (as in the Python code) but I chose to just supply it via an environment variable.

For the above manifest to work, you need to have done the following (see earlier post):

  • install AKS with the pod identity add-on
  • create a managed identity that has the necessary Azure roles (in this case, enumerate resource groups)
  • create a pod identity that references the managed identity

In this case, the created pod identity is mymsi. The aadpodidbinding label does the trick to match the identity with the pods in this deployment.

Note that, although you can specify AZURE_CLIENT_ID as shown above, this is not really required. The managed identity linked to the mymsi pod identity will be matched automatically. Either way, the logs of the NMI pod will reflect this.

In the YAML, AZURE_AD_RESOURCE is also specified. That is not strictly required either, because the default is https://management.azure.com. We need that resource to enumerate resource groups.

Conclusion

In this post, we looked at using the Azure SDK for Go together with a managed identity on AKS, via the AAD pod identity add-on. Similar to the Azure SDK for Python, the Azure SDK for Go supports managed identities natively. The differences compared with the Python solution are the smaller image and the improved security. Of course, those advantages stem from using a language like Go in combination with the scratch image.

Managed Identity on Azure Arc Servers

When you install the Azure Arc agent on any physical or virtual server, either Windows or Linux, the machine suddenly starts living in a cloud world:

  • it appears in the Azure Portal
  • you can apply resource tags
  • you can check for security and regulatory compliance with Azure Policy
  • you can enable Update management
  • and much, much more…

Check Microsoft’s documentation about Azure Arc for servers to find out more. Below is a screenshot of such an Azure Arc-enabled Windows Server 2019 machine running on-premises with Insights enabled (on my laptop 😀):

Azure Arc-enabled Windows Server 2019

A somewhat lesser-known feature of Azure Arc is that these servers also get a managed identity (MSI, short for Managed Service Identity). After you have installed the Azure Arc agent, which normally installs to Program Files\AzureConnectedMachineAgent, two environment variables are set:

  • IMDS_ENDPOINT=http://localhost:40342
  • IDENTITY_ENDPOINT=http://localhost:40342/metadata/identity/oauth2/token

IMDS stands for Instance Metadata Service. On a regular Azure virtual machine, this service listens on the non-routable IP address of 169.254.169.254. On the virtual machine, you can make HTTP requests to that IP address without any issue. The traffic never leaves the virtual machine.

On an Azure Arc-enabled server, which can run anywhere, using the non-routable IP address is not feasible. Instead, the IMDS listens on a port on localhost as indicated by the environment variables.

The service can be used for all sorts of things. For example, I can make the following request (PowerShell):

Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -Uri http://localhost:40342/metadata/instance?api-version=2020-06-01 | ConvertTo-Json

The result will be a JSON structure with most of the fields empty. That is not surprising since this is not an Azure VM and most fields are Azure-related (vmSize, fault domain, update domain, …). But it does show that the IMDS works, just like on a regular Azure VM.
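
If you prefer to make the same call from Python, a rough equivalent could look like the sketch below (assuming the requests package is installed; the endpoint and the Metadata header are the same as in the PowerShell example above):

import requests

# same local IMDS endpoint and Metadata header as the PowerShell call above
resp = requests.get(
    "http://localhost:40342/metadata/instance",
    params={"api-version": "2020-06-01"},
    headers={"Metadata": "true"},
)
print(resp.json())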

Although there are many other things you can do, one of its most useful features is providing you with an access token to access Azure Resource Manager, Key Vault, or other services.

There are many ways to obtain an access token. The documentation contains an example in PowerShell that uses the environment variables and Invoke-WebRequest to get a token for https://management.azure.com.
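
If you would rather let an SDK handle this, a minimal Python sketch with the azure-identity package (which understands the Azure Arc endpoints exposed via the environment variables above) could look like this:

from azure.identity import ManagedIdentityCredential

# on an Azure Arc-enabled server, azure-identity uses IDENTITY_ENDPOINT/IMDS_ENDPOINT
credential = ManagedIdentityCredential()
token = credential.get_token("https://management.azure.com/.default")
print(token.token[:40], "...")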

A common requirement is code that needs to retrieve secrets from Azure Key Vault. Now that we know we can acquire a token via the IMDS, let’s see how to do this with the Azure SDK for Python, which has full support for the IMDS on Azure Arc-enabled machines. The code below does the trick:

from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

credentials = ManagedIdentityCredential()

secret_client = SecretClient(vault_url="https://gebakv.vault.azure.net", credential=credentials)
secret = secret_client.get_secret("notsecret")
print(secret.value)

Of course, you need Python installed with the following packages (use pip install):

  • azure-identity
  • azure-keyvault

Yes, the above code is all you need to use the managed identity of the Azure Arc-enabled server to authenticate to Key Vault and obtain the secret called notsecret. The functionality that makes the Python SDK work with Azure Arc can be seen here.

Of course, you need to make sure that the managed identity has the necessary access rights to Key Vault:

Managed Identity has Get permissions on Secrets

I have not looked at Azure Arc MSI support in the other SDKs, but the Python SDK sure makes it easy!

Azure AD pod-managed identities in AKS revisited

A long time ago, I wrote a blog post about assigning managed identities to pods in Azure Kubernetes Service (AKS) to authenticate to Azure Storage. The implementation was based on the aad-pod-identity project on GitHub. You can look at the walkthrough to see how it worked.

Microsoft recently released a preview that enables you to turn on pod identity during cluster creation. It uses the same building blocks as before but integrates them into AKS as a supported add-on (currently in preview). To create a basic cluster with pod identity enabled, you can use the following commands:

az group create -n RESOURCEGROUP -l LOCATION
az aks create -g RESOURCEGROUP -n CLUSTERNAME --enable-managed-identity --enable-pod-identity --network-plugin azure

Note: you need to use Azure CNI networking here; kubenet will not work

Before you deploy the cluster, make sure you follow the prerequisites in the documentation (Before you begin). At the time of writing (December 2020), the section in the documentation that tells you how to create the AKS cluster does not use the Azure CNI plugin. Make sure you add that!

What does --enable-pod-identity do?

When you use --enable-pod-identity, you should see nmi pods on your cluster in the kube-system namespace:

NMI pods

These pods are created from a DaemonSet, so you will have one pod per cluster node (Linux nodes only). When your application wants to use a managed identity, it makes a request to the Instance Metadata Service (IMDS) endpoint, which is 169.254.169.254. Requests to that IP address are intercepted by the NMI pods via iptables rules. The NMI pod that intercepts the request then makes an Azure AD Authentication Library (ADAL) request to Azure AD to obtain a token for the managed identity and returns it to your application.
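
To make this concrete, below is a minimal sketch of the kind of token request an application (or the Azure SDK on its behalf) sends to the IMDS endpoint; on a cluster with pod identity enabled, this is exactly the call the NMI pods intercept (assuming the requests package is installed):

import requests

# standard IMDS token request; NMI intercepts it and returns a token
# for the identity matched to this pod
resp = requests.get(
    "http://169.254.169.254/metadata/identity/oauth2/token",
    params={"api-version": "2018-02-01", "resource": "https://management.azure.com/"},
    headers={"Metadata": "true"},
)
print(resp.json()["access_token"][:40], "...")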

Next to the NMI pods, other things are added as well, such as custom resource definitions. Some of those are discussed below.

How to request the token?

It’s great to know that the NMI pods intercept requests to the IMDS endpoint but how do you make such a request? I put together a small example in Python in the following git repository: https://github.com/gbaeke/python-msi. The code is in the rg-api folder in server.py:

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient, SubscriptionClient
from fastapi import FastAPI

app = FastAPI()

try:
    credentials = DefaultAzureCredential()
    subscription_client = SubscriptionClient(credentials)
    subscription = next(subscription_client.subscriptions.list())
    subscription_id = subscription.subscription_id
    resource_client = ResourceManagementClient(credentials, subscription_id)
except Exception as e:
    print(f"error obtaining credentials: {e}")

@app.get("/")
def read_root():
    groups=[]
    try:
        for resource_group in resource_client.resource_groups.list():
            groups.append(resource_group.name)
    except Exception as e:
        print(f"error obtaining groups: {e}")
    
    return groups

The code does the following:

  • use the azure-identity Python library to obtain credentials via the DefaultAzureCredential() function. Note that this function tries multiple authentication options. If you run the code on your local computer while you are logged on to Azure with the Azure CLI, it will work as well
  • use the azure-mgmt-resource Python library to enumerate resource groups in the current subscription
  • create a very simple API with FastAPI to return the list of resource groups; we can use a kubectl port-forward later to obtain the JSON response; if authentication fails, the call returns an empty list instead of an HTTP error as you normally would

On my system, this is the result of the call when pod identity is working:

A bunch of resource groups in my test subscription… messy as usual

The repo also contains a Dockerfile to build a container with the app. I built and pushed that container to Docker Hub as gbaeke/rgapi.

Creating and using the identity

If we want the pod that runs the above code to use a specific identity, we have to create the identity and then tell the pod to use it. To create the managed identity, use the following command:

az identity create --resource-group rg-clu-msi --name rgapi

The output of this command contains an id field that we need in another command later. The result of the above command is a User Assigned Managed Identity called rgapi. I already granted this identity the Contributor role at the subscription level.

User Assigned Managed Identity rgapi

Note that this has nothing to do with AKS. To create a pod identity to use in AKS, you will need to run another command:

az aks pod-identity add --resource-group rg-clu-msi --cluster-name clu-msi --namespace rgapi --name rgapi --identity-resource-id "id field from previous command"

The above command creates a pod identity called rgapi in the namespace rgapi. This namespace will be created if it does not exist. You can see the pod identity by running the below command:

kubectl get azureidentities.aadpodidentity.k8s.io

If you look inside such an object, you would find the reference to the managed identity by its resource id (the id field from earlier). There are other custom resource definitions used by pod identity that we will not bother with now.

Now we need to create a pod and associate it with the pod identity. You can do so with the following YAML:

apiVersion: v1
kind: Pod
metadata:
  name: rgapi
  namespace: rgapi
  labels:
    aadpodidbinding: rgapi
spec:
  containers:
  - name: rgapi
    image: gbaeke/rgapi
  nodeSelector:
    kubernetes.io/os: linux

The important bit above is the aadpodidbinding label which refers to the pod identity we created earlier. When the above pod gets scheduled, it will call out to the IMDS endpoint. You should see that in the logs of the NMI pod on the same node as your application pod. For example:

no clientID or resourceID in request. rgapi/rgapi has been matched with azure identity rgapi/rgapi
status (200) took 12677813 ns for req.method=GET reg.path=/metadata/identity/oauth2/token req.remote=10.240.0.36

The first line indicates that I did not specifically set a clientID in my request, but that the request was matched to the rgapi identity anyway. The second line shows the NMI pod serving the intercepted token request successfully (HTTP 200), after obtaining a token for the identity from Azure AD behind the scenes.
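
If you do want to target a specific identity explicitly, a minimal Python sketch could pass the client ID of the managed identity to the credential (the GUID below is a placeholder); the NMI logs would then show that clientID instead of the fallback matching:

from azure.identity import ManagedIdentityCredential

# placeholder client ID of the user-assigned managed identity
credential = ManagedIdentityCredential(client_id="00000000-0000-0000-0000-000000000000")
token = credential.get_token("https://management.azure.com/.default")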

Great! We now have a pod running that can retrieve resource groups with our custom managed identity. We did not have to add credentials manually or grab them from Key Vault. Our pod automatically picks up the pod identity. 🎉

Conclusion

Although it is still not super simple (is identity ever simple, really?), the new method to enable pod identities is a definite improvement. It is currently in preview, so it should not be used in production. Once it goes GA, however, you will have a fully supported way to use user-assigned managed identities with your pods and to give each pod its own identity, following least-privilege principles.

Azure Key Vault Provider for Secrets Store CSI Driver

In the previous post, I talked about akv2k8s. akv2k8s is a Kubernetes controller that synchronizes secrets and certificates from Key Vault. Besides synchronizing to a regular secret, it can also inject secrets into pods.

Instead of akv2k8s, you can also use the secrets store CSI driver with the Azure Key Vault provider. As a CSI driver, its main purpose is to mount secrets and certificates as storage volumes. Next to that, it can also create regular Kubernetes secrets that can be used with an ingress controller or exposed as environment variables. That might be required if the application was not designed to read the secret from the file system.

In the previous post, I used akv2k8s to grab a certificate from Key Vault, create a Kubernetes secret and use that secret with nginx ingress controller:

certificate in Key Vault ------ akv2k8s periodic sync -----> Kubernetes secret ------> nginx ingress controller

Let’s briefly look at how to do this with the secrets store CSI driver.

Installation

Follow the guide to install the Helm chart with Helm v3:

helm repo add csi-secrets-store-provider-azure https://raw.githubusercontent.com/Azure/secrets-store-csi-driver-provider-azure/master/charts
helm install csi-secrets-store-provider-azure/csi-secrets-store-provider-azure --generate-name

This will install the components in the current Kubernetes namespace.

Easy no?

Syncing the certificate

Following the same example as with akv2k8s, we need to point at the certificate in Key Vault, set the right permissions, and bring the certificate down to Kubernetes.

You will first need to decide how to access Key Vault. You can use the managed identity of your AKS cluster or be more granular and use pod identity. If you have set up AKS with a managed identity, that is the simplest solution. You just need to grab the clientId of that managed identity like so:

az aks show -g <resource group> -n <aks cluster name> --query identityProfile.kubeletidentity.clientId -o tsv

Next, create a file with the content below and apply it to your cluster in a namespace of your choosing.

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: azure-gebakv
  namespace: YOUR NAMESPACE
spec:
  provider: azure
  secretObjects:
  - secretName: nginx-cert
    type: kubernetes.io/tls
    data:
    - objectName: nginx
      key: tls.key
    - objectName: nginx
      key: tls.crt
  parameters:
    useVMManagedIdentity: "true"
    userAssignedIdentityID: "CLIENTID YOU OBTAINED ABOVE" 
    keyvaultName: "gebakv"         
    objects:  |
      array:
        - |
          objectName: nginx
          objectType: secret        
    tenantId: "ID OF YOUR AZURE AD TENANT"

Compared to the akv2k8s controller, the above configuration is a bit more complex. In the parameters section, in the objects array, you specify the name of the certificate in Key Vault and its object type. Yes, you saw that correctly, the objectType actually has to be secret for this to work.

The other settings are self-explanatory: we use the managed identity, set its clientId and in keyvaultName we set the short name of our Key Vault.

The settings in the parameters section are actually sufficient to mount the secret/certificate in a pod. With the secretObjects section though, we can also ask for the creation of regular Kubernetes secrets. Here, we ask for a secret of type kubernetes.io/tls with name nginx-cert to be created. You need to explicitly set both the tls.key and the tls.crt value and correctly reference the objectName in the array.

The akv2k8s controller is simpler to use as you only need to point it to your certificate in Key Vault (and specify it’s a certificate, not a secret) and set a secret name. There is no need to set the different values in the secret.

Using the secret

The advantage of the secrets store CSI driver is that the secret is only mounted/created when an application requires it. That also means we have to instruct our application to mount the secret explicitly. You do that via a volume as the example below illustrates (part of a deployment):

spec:
      containers:
      - name: realtimeapp
        image: gbaeke/fluxapp:1.0.2
        volumeMounts:
          - mountPath: "/mnt/secrets-store"
            name: secrets-store-inline
            readOnly: true
        env:
        - name: REDISHOST
          value: "redis:6379"
        resources:
          requests:
            cpu: 25m
            memory: 50Mi
          limits:
            cpu: 150m
            memory: 150Mi
        ports:
        - containerPort: 8080
      volumes:
      - name: secrets-store-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "azure-gebakv"

In the above YAML, the following happens:

  • in volumes: we create a volume called secrets-store-inline and use the csi driver to mount the secrets we specified in the SecretProviderClass we created earlier (azure-gebakv)
  • in volumeMounts: we mount the volume on /mnt/secrets-store (the application can then simply read the files from that path, as in the sketch below)
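
As an illustration, here is a minimal Python sketch of how an application could read the mounted object from that path (the file name matches the objectName from the SecretProviderClass, nginx in this example):

from pathlib import Path

# the CSI driver mounts each object as a file named after its objectName
pem = Path("/mnt/secrets-store/nginx").read_text()
print(pem[:60])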

Because we used secretObjects in our SecretProviderClass, this mount is accompanied by the creation of a regular Kubernetes secret as well.

When you remove the deployment, the Kubernetes secret will be removed instead of lingering behind for all to see.

Of course, the pods in my deployment do not need the mounted volume. It was not immediately clear to me how to avoid the mount but still create the Kubernetes secret (not exactly the point of a CSI driver 😀). On the other hand, there is a way to have the secret created as part of ingress controller creation. That approach is more useful in this case because we want our ingress controller to use the certificate. More information can be found here. In short, it roughly works as follows:

  • instead of creating and mounting a volume in your application pod, a volume should be created and mounted on the ingress controller
  • to do so, you modify the deployment of your ingress controller (e.g. ingress-nginx) with extraVolumes: and extraVolumeMounts: sections; depending on the ingress controller you use, other settings might be required

Be aware that you need to enable auto rotation of secrets manually and that it is an alpha feature at this point (December 2020). The akv2k8s controller does that for you out of the box.

Conclusion

Both the akv2k8s controller and the Secrets Store CSI driver (with the Azure Key Vault provider) can be used to achieve the same objective: syncing secrets, keys and certificates from Key Vault to AKS. In my experience, the akv2k8s controller is easier to use. The big advantage of the Secrets Store CSI driver is that it is a broader solution (not just for AKS) and supports multiple secret stores. Next to Azure Key Vault, it also supports HashiCorp Vault, for example. My recommendation: for Azure Key Vault and AKS, keep it simple and try akv2k8s first!
