Learn to use the Dapr authorization middleware

Based on a customer conversation, I decided to look into the Dapr middleware components. More specifically, I wanted to understand how the OAuth 2.0 middleware works that enables the Authorization Code flow.

In the Authorization Code flow, an authorization code is a temporary code that a client obtains after being redirected to an authorization URL (https://login.microsoftonline.com/{tenant}/oauth2/authorize) where you provide your credentials interactively (so this flow is not useful for non-interactive, service-to-service scenarios). That code is then handed to your app, which exchanges it for an access token. With the access token, the authenticated user can access your app.

Instead of coding this OAuth flow in your app, we will let the Dapr middleware handle all of that work. Our app can then pick up the token from an HTTP header. When there is a token, access to the app is granted. Otherwise, Dapr (well, the Dapr sidecar next to your app) redirects your client to the authorization server to get a code.

Let’s take a look at how this all works with Azure Active Directory. Other authorization servers are supported as well: Facebook, GitHub, Google, and more.

What we will build

Some experience with Kubernetes, deployments, ingresses, Ingress Controllers and Dapr is required.

If you think the explanation below can be improved, or I have made errors, do let me know. Let’s go…

Create an app registration

Using Azure AD means we need an app registration! Other platforms have similar requirements.

First, create an app registration following this quick start. In the first step, give the app a name and, for this demo, just select Accounts in this organizational directory only. The redirect URI will be configured later so just click Register.

After following the quick start, you should have:

  • the client ID and client secret: will be used in the Dapr component
  • the Azure AD tenant ID: used in the auth and token URLs in the Dapr component; Dapr needs to know where to redirect to and where to exchange the authorization code for an access token
App registration in my Azure AD Tenant
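If you prefer the CLI over the portal, the same registration can be created with a few commands. A rough sketch (the display name is just something I picked; replace the placeholder with the returned app ID):

az ad app create --display-name dapr-oauth-demo --query appId -o tsv
# add a client secret to the registration (the secret value is only shown once)
az ad app credential reset --id <APP_ID> --append
# the tenant ID you need for the auth and token URLs
az account show --query tenantId -o tsv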

There is no need for your app to know about these values. All work is done by Dapr and Dapr only!

We will come back to the app registration later to create a redirect URI.

Install an Ingress Controller

We will use an Ingress Controller to provide access to our app’s Dapr sidecar from the Internet, using HTTP.

In this example, we will install ingress-nginx. Use the following commands (requires Helm):

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace

Although you will find articles about daprizing your Ingress Controller, we will not do that here. We will use the Ingress Controller simply as a way to provide HTTP access to the Dapr sidecar of our app. We do not want Dapr-to-Dapr gRPC traffic between the Ingress Controller and our app.

When ingress-nginx is installed, grab the public IP address of the service that it uses. Use kubectl get svc -n ingress-nginx. I will use the IP address with nip.io to construct a host name like app.11.12.13.14.nip.io. The nip.io service resolves such a host name to the IP address in the name automatically.
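Grabbing the IP and building the host name can be scripted, for example like below (ingress-nginx-controller is the default service name used by the Helm chart):

IP=$(kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "app.$IP.nip.io"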

The host name will be used in the ingress and the Dapr component. In addition, use the host name to set the redirect URI of the app registration: https://app.11.12.13.14.nip.io. For example:

Added a platform configuration for a web app and set the redirect URI

Note that we are using https here. We will configure TLS on the ingress later.

Install Dapr

Install the Dapr CLI on your machine and run dapr init -k. This requires a working Kubernetes context to install Dapr to your cluster. I am using a single-node AKS cluster in Azure.
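A quick way to verify the installation afterwards:

dapr status -k
# or check the control-plane pods directly
kubectl get pods -n dapr-system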

Create the Dapr component and configuration

Below is the Dapr middleware component we need. The component is called myauth. Give it any name you want. The name will later be used in a Dapr configuration that is, in turn, used by the app.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: myauth
spec:
  type: middleware.http.oauth2
  version: v1
  metadata:
  - name: clientId
    value: "CLIENTID of your app reg"
  - name: clientSecret
    value: "CLIENTSECRET that you created on the app reg"
  - name: authURL
    value: "https://login.microsoftonline.com/TENANTID/oauth2/authorize"
  - name: tokenURL
    value: "https://login.microsoftonline.com/TENANTID/oauth2/token"
  - name: redirectURL
    value: "https://app.YOUR-IP.nip.io"
  - name: authHeaderName
    value: "authorization"
  - name: forceHTTPS
    value: "true"
scopes:
- super-api

Replace YOUR-IP with the public IP address of the Ingress Controller. Also replace the TENANTID.

With the information above, Dapr can exchange the authorization code for an access token. Note that the client secret is hard coded in the manifest. It is recommended to use a Kubernetes secret instead.
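A sketch of that approach: create a Kubernetes secret with kubectl create secret generic oauth-secrets --from-literal=clientSecret="CLIENTSECRET" (the secret and key names here are mine) and reference it from the component instead of the plain value:

  - name: clientSecret
    secretKeyRef:
      name: oauth-secrets
      key: clientSecret

When Dapr runs in Kubernetes mode, secretKeyRef is resolved against the built-in Kubernetes secret store, so no extra secret store component should be needed.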

The component on its own is not enough. We need to create a Dapr configuration that references it:

apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: auth
spec:
  tracing:
    samplingRate: "1"
  httpPipeline:
    handlers:
    - name: myauth # reference the oauth component here
      type: middleware.http.oauth2    

Note that the configuration is called auth. Our app will need to use this configuration later, via an annotation on the Kubernetes pods.

Both manifests can be submitted to the cluster using kubectl apply -f. It is OK to use the default namespace for this demo. Keep the configuration and component in the same namespace as your app.
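Assuming you saved the manifests as component.yaml and configuration.yaml (the file names are up to you), that boils down to:

kubectl apply -f component.yaml
kubectl apply -f configuration.yaml
# verify the Dapr resources exist
kubectl get components.dapr.io,configurations.dapr.io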

Deploy the app

The app we will deploy is super-api, which has a /source endpoint to dump all HTTP headers. When authentication is successful, the authorization header will be in the list.

Here is deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: super-api-deployment
  labels:
    app: super-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: super-api
  template:
    metadata:
      labels:
        app: super-api
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "super-api"
        dapr.io/app-port: "8080"
        dapr.io/config: "auth" # refer to Dapr config
        dapr.io/sidecar-listen-addresses: "0.0.0.0" # important
    spec:
      securityContext:
        runAsUser: 10000
        runAsNonRoot: true
      containers:
        - name: super-api
          image: ghcr.io/gbaeke/super:1.0.7
          securityContext:
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - all
          args: ["--port=8080"]
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          env:
            - name: IPADDRESS
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: WELCOME
              value: "Hello from the Super API on AKS!!! IP is: $(IPADDRESS)"
            - name: LOG
              value: "true"       
          resources:
              requests:
                memory: "64Mi"
                cpu: "50m"
              limits:
                memory: "64Mi"
                cpu: "50m"
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 15
          readinessProbe:
              httpGet:
                path: /readyz
                port: 8080
              initialDelaySeconds: 5
              periodSeconds: 15

Note the annotations in the manifest above:

  • dapr.io/enabled: injects the Dapr sidecar in the pods
  • dapr.io/app-id: a Dapr app needs an id; a service will automatically be created with that id and -dapr appended; in our case the name will be super-api-dapr; our ingress will forward traffic to this service
  • dapr.io/app-port: Dapr will need to call endpoints in our app (after authentication in this case) so it needs the port that our app container uses
  • dapr.io/config: refers to the configuration we created above, which enables the http middleware defined by our OAuth component
  • dapr.io/sidecar-listen-addresses: ⚠️ needs to be set to “0.0.0.0”; without this setting, we will not be able to send requests to the Dapr sidecar directly from the Ingress Controller

Submit the app manifest with kubectl apply -f.

Check that the pod has two containers: the Dapr sidecar and your app container. Also check that there is a service called super-api-dapr. There is no need to create your own service. Our ingress will forward traffic to this service.
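A quick check could look like this:

kubectl get pods -l app=super-api   # expect READY 2/2 (app container + daprd sidecar)
kubectl get svc super-api-dapr      # service created by Dapr, used by the ingress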

Create an ingress

In the same namespace as the app (default), create an ingress. This requires the ingress-nginx Ingress Controller we installed earlier:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: super-api-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
    - hosts:
      - app.YOUR-IP.nip.io
      secretName: tls-secret 
  rules:
  - host: app.YOUR-IP.nip.io
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: super-api-dapr
            port: 
              number: 80

Replace YOUR-IP with the public IP address of the Ingress Controller.

For this to work, you also need a secret with a certificate. Use the following commands:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=app.YOUR-IP.nip.io"
kubectl create secret tls tls-secret --key tls.key --cert tls.crt

Replace YOUR-IP as above.

Testing the configuration

Let’s use the browser to connect to the /source endpoint. You will need to use the Dapr invoke API because the request will be sent to the Dapr sidecar. You need to speak a language that Dapr understands! The sidecar will just call http://localhost:8080/source and send back the response. It will only call the endpoint when authentication has succeeded, otherwise you will be redirected.

Use the following URL in the browser. It’s best to use an incognito session or private window.

https://app.20.103.17.249.nip.io/v1.0/invoke/super-api/method/source

Your browser will warn you of security risks because the certificate is not trusted. Proceed anyway! 😉

Note: we could use some URL rewriting on the ingress to avoid having to use /v1.0/invoke etc… You can also use different URL formats. See the docs.
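If you want to check the middleware from the command line first, a plain request without a token should come back as a redirect to the Microsoft login page (-k accepts the self-signed certificate):

curl -k -s -o /dev/null -D - https://app.YOUR-IP.nip.io/v1.0/invoke/super-api/method/source
# expect a redirect status with a Location header pointing at login.microsoftonline.com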

You should get an authentication screen which indicates that the Dapr configuration is doing its thing:

Redirection to the authorize URL

After successful authentication, you should see the response from the /source endpoint of super-api:

Response from /source

The response contains an Authorization header. The header contains a JWT after the word Bearer. You can paste that JWT in https://jwt.io to see its content. We can only access the app with a valid token. That’s all we do in this case, ensuring only authenticated users can access our app.

Conclusion

In this article, we used Dapr to secure access to an app without having to modify the app itself. The source code of super-api was not changed in any way to enable this functionality. Via a component and a configuration, we instructed our app’s Dapr sidecar to do all this work for us. App endpoints such as /source are only called when there is a valid token. When there is such a token, it is saved in a header of your choice.

It is important to note that we have to send HTTP requests to our app’s sidecar for this to work. To enable this, we instructed the sidecar to listen on all IP addresses of the pod, not just 127.0.0.1. That allows us to send HTTP requests to the service that Dapr creates for the app. The ingress forwards requests to the Dapr service directly. That also means that you have to call your endpoint via the Dapr invoke API. I admit that can be confusing in the beginning. 😉

Note that, at the time of this writing (June 2022), the OAuth2 middleware in Dapr is in an alpha state.

Publish your AKS Ingress Controller over Azure Private Link

In a previous article, I wrote about the AKS Azure Cloud Provider and its support for Azure Private Link. In summary, the functionality allows for the following:

  • creation of a Kubernetes service of type LoadBalancer
  • via an annotation on the service, the Azure Cloud Provider creates an internal load balancer (ILB) instead of a public one
  • via extra annotations on the service, the Azure Cloud Provider creates an Azure Private Link Service for the Internal Load Balancer (🆕)

In the article, I used Azure Front Door as an example to securely publish the Kubernetes service to the Internet via private link.

Although you could publish all your services using the approach above, that would not be very efficient. In the real world, you would use an Ingress Controller like ingress-nginx to avoid the overhead of one service of type LoadBalancer per application.

Publish the Ingress Controller with Private Link Service

In combination with the Private Link Service functionality, you can just publish an Ingress Controller like ingress-nginx. That would look like the diagram below:

In the above diagram, our app does not use a LoadBalancer service. Instead, the service is of the ClusterIP type. To publish the app externally, an ingress resource is created to publish the app via ingress-nginx. The ingress resource refers to the ClusterIP service super-api. There is nothing new about this. This is Kubernetes ingress as usual:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: super-api-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: www.myingress.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: super-api
            port: 
              number: 80

Note that I am using the host www.myingress.com as an example here. In Front Door, I will need to configure a custom host header that matches the ingress host. Whenever Front Door connects to the Ingress Controller via Private Link Service, the host header will be sent to allow ingress-nginx to route traffic to the super-api service.

In the diagram, you can see that it is the ingress-nginx service that needs the annotations to create a private link service. When you install ingress-nginx with Helm, just supply a values file with the following content:

controller:
  service:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      service.beta.kubernetes.io/azure-pls-create: "true"
      service.beta.kubernetes.io/azure-pls-ip-configuration-ip-address: IP_IN_SUBNET
      service.beta.kubernetes.io/azure-pls-ip-configuration-ip-address-count: "1"
      service.beta.kubernetes.io/azure-pls-ip-configuration-subnet: SUBNET_NAME
      service.beta.kubernetes.io/azure-pls-name: PLS_NAME
      service.beta.kubernetes.io/azure-pls-proxy-protocol: "false"
      service.beta.kubernetes.io/azure-pls-visibility: '*'

Via the above annotations, the service created by the ingress-nginx Helm chart will use an internal load balancer. In addition, a private link service for the load balancer will be created.
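With those values saved to a file (for example internal-ingress.yaml), the Helm install simply picks them up via -f:

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  -f internal-ingress.yaml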

Front Door Config

The Front Door configuration is almost the same as before, except that we need to configure a host header on the origin:

Host header config in Front Door origin

When I issue the command below (FQDN is the Front Door endpoint):

 curl https://aks-agbyhedaggfpf5bs.z01.azurefd.net/source

the response is the following:

Hello from Super API
Source IP and port: 10.244.0.12:40244
X-Forwarded-For header: 10.224.10.20

All headers:

HTTP header: X-Real-Ip: [10.224.10.20]
HTTP header: X-Forwarded-Scheme: [http]
HTTP header: Via: [2.0 Azure]
HTTP header: X-Azure-Socketip: [MY HOME IP]
HTTP header: X-Forwarded-Host: [www.myingress.com]
HTTP header: Accept: [*/*]
HTTP header: X-Azure-Clientip: [MY HOME IP]
HTTP header: X-Azure-Fdid: [f76ca0ce-32ed-8754-98a9-e6c02a7765543]
HTTP header: X-Request-Id: [5fd6bb9c1a4adf4834be34ce606d980e]
HTTP header: X-Forwarded-For: [10.224.10.20]
HTTP header: X-Forwarded-Port: [80]
HTTP header: X-Original-Forwarded-For: [MY HOME IP, 147.243.113.173]
HTTP header: User-Agent: [curl/7.58.0]
HTTP header: X-Azure-Requestchain: [hops=2]
HTTP header: X-Forwarded-Proto: [http]
HTTP header: X-Scheme: [http]
HTTP header: X-Azure-Ref: [0nPGlYgAAAABefORrczaWQ52AJa/JqbBAQlJVMzBFREdFMDcxMgBmNzZjYTBjZS0yOWVkLTQ1NzUtOThhOS1lNmMwMmE5NDM0Mzk=, 20220612T140100Z-nqz5dv28ch6b76vb4pnq0fu7r40000001td0000000002u0a]

The /source endpoint of super-api dumps all the HTTP headers. Note the following:

  • X-Real-Ip: is the address used for NATting by the private link service
  • X-Azure-Fdid: is the Front Door Id that allows us to verify that the request indeed passed Front Door
  • X-Azure-Clientip: my home IP address; this is the result of setting externalTrafficPolicy: Local on the ingress-nginx service; the script I used to install ingress-nginx happened to have this value set; it is not required unless you want the actual client IP address to be reported
  • X-Forwarded-Host: the host header; the original FQDN aks-agbyhedaggfpf5bs.z01.azurefd.net cannot be seen

In the real world, you would configure a custom domain in Front Door to match the configured host header.

Conclusion

In this post, we published a Kubernetes Ingress Controller (ingress-nginx) via an internal load balancer and Azure Private Link. A service like Azure Front Door can use this functionality to provide external connectivity to the internal Ingress Controller with extra security features such as Azure WAF. You do not have to use Front Door. You can provide access to the Ingress Controller from a Private Endpoint in any network and any subscription, including subscriptions you do not control.

Although this functionality is interesting, it is not automated and integrated with Kubernetes ingress functionality. For that reason alone, I would not recommend using this. It does provide the foundation to create an alternative to Application Gateway Ingress Controller. The only thing that is required is to write a controller that integrates Kubernetes ingress with Front Door instead of Application Gateway. 😉

Azure Kubernetes Service and Azure Private Link Integration

If you have done any work with Azure, you have probably come across terms such as Azure Private Link Service (PLS) and Private Endpoints (PEs). To quickly illustrate what Azure PLS is, let’s look at a diagram from the Microsoft documentation for Azure SQL database:

PLS with Azure SQL

Above, Azure SQL Database uses Azure Private Link Service (PLS) to provide connectivity to the database from inside a virtual network that you control. Without a private link, you would need to connect to Azure SQL via a public IP address over the Internet. In order to connect privately, a private endpoint connection (PE) is created inside a subnet in your virtual network. Above, that interface gets IP address 10.0.0.5. The PE can be seen as a network interface that is connected to Azure SQL Database via Azure PLS. The green arrow from the PE to Azure SQL Database can be seen as the private connection.

Azure SQL Database is not the only service offering this functionality. For example, when you deploy Azure Kubernetes Service (AKS) with a private Kubernetes API service, a private endpoint connection is created to access the Kubernetes control plane via Azure PLS.

When you go to Private Link Center in the Azure Portal, you can see all your private endpoints and their connection state. Below, a private endpoint for a private AKS cluster is shown. It shows as connected via private link.

Private endpoint to access the Microsoft managed AKS control plane

Creating your own Private link services

In the two examples above, Azure SQL Database and AKS use Azure PLS to enable a private connection. But what if you build your own service and you want to offer private connectivity to consumers such as your customers or other Azure services? That is where the creation of your own private link services comes into play. These services can be created from Private Link Center by enabling private connections to a standard load balancer:

Creating your own private link service

More information about this process can be found in the documentation.

In summary, when you have a standard load balancer that load balances traffic to an application, you can offer a private connection to that load balancer via Azure Private Link Service.

The load balancer can be in front of traditional virtual machines or Kubernetes pods. In the next section, we’ll look at the second scenario: creating a private link service from an internal load balancer (ILB) that AKS creates for a Kubernetes service.

Creating a Private Link Service from an AKS internal load balancer

Although it was technically possible to create a Private Link Service from an internal load balancer controlled by AKS in the past, it was a cumbersome process. In addition, AKS was not aware of the Private Link Service configuration. A new capability in the Azure Cloud Provider changes this.

When you create a Kubernetes service of type LoadBalancer, you can now provide annotations that instruct the AKS Azure Cloud Provider to create a private link service from the internal load balancer it creates. Here’s an example:

apiVersion: v1
kind: Service
metadata:
  name: super-api
  annotations:
    # create ILB instead of ELB; this functionality predates the PLS functionality
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-pls-create: "true"
    service.beta.kubernetes.io/azure-pls-name: myPLS
    service.beta.kubernetes.io/azure-pls-ip-configuration-subnet: YOUR SUBNET
    service.beta.kubernetes.io/azure-pls-ip-configuration-ip-address-count: "1"
    service.beta.kubernetes.io/azure-pls-ip-configuration-ip-address: 10.224.10.10
    service.beta.kubernetes.io/azure-pls-proxy-protocol: "false"
    service.beta.kubernetes.io/azure-pls-visibility: "*"
    # does not apply here because we will use Front Door later
    service.beta.kubernetes.io/azure-pls-auto-approval: "YOUR SUBSCRIPTION ID"
spec:
  selector:
    app: super-api
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080

This works with both Kubenet and the Azure CNI. You can use the subnet that your AKS nodes are in. Above, replace YOUR SUBNET with the name of your subnet, not its resource id.

When the above YAML is submitted to Kubernetes, the private link service myPLS gets created. Record the alias for later use:

Creation of the PLS
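You can also grab the alias with the CLI; the private link service lands in the AKS node resource group (the MC_ group):

az network private-link-service show -g <MC_RESOURCE_GROUP> -n myPLS \
  --query alias -o tsv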

Note that the internal load balancer (the result of the service.beta.kubernetes.io/azure-load-balancer-internal: "true" annotation) is created in the AKS node resource group.

Note that a private link service also creates a network interface in the subnet for NATting purposes. NAT ensures that the networking configuration of the consumer does not lead to IP address conflicts. The NAT IP above is 10.224.10.10. You can configure multiple NAT IP addresses to avoid port exhaustion.
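Adding NAT IP addresses is just a matter of bumping the count annotation; the extra addresses are then taken from the same subnet (a sketch, not tested here):

service.beta.kubernetes.io/azure-pls-ip-configuration-ip-address-count: "2"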

The PLS will be visible in the Private Link Center without connections. Later, when you add services that use this private link service, the number of connections will be shown as below:

myPLS with one connection (from Azure Front Door, see below 😉)

But what can we connect to this? We already know the answer: a private endpoint. You could create a private endpoint in any network, in any subscription, and link it up to myPLS. In fact, other customers from different Azure AD tenants can use myPLS as well, provided that the usage is approved by you. We will not do that in this example, and instead, wire up Azure Front Door to our AKS service.

Azure Front Door Premium

Azure Front Door Premium supports private endpoints that connect to your own private link services. Those private endpoints are not owned by you but by the Front Door service. You will not be able to see those private endpoints in your subscription(s) because they do not live there. It’s as if someone from another organization and tenant connects to your private link service. In this case, that other organization is Microsoft! 😉

With the configuration of Front Door, we get the full picture below:

AKS service via ILB with PLS consumed by Front Door Premium Private Endpoint 🧠

The configuration of the private endpoint and wiring it up to your private link service is done in the origin group configuration, as shown above. When you add an origin to the origin group, one of the options is to connect to a private link service. Below, you see an already configured origin group:

Origin group with a private link service origin

Above, the origin host name is the alias of the private link service created earlier (myPLS).

Here’s a screenshot of the Add an origin UI:

Adding an origin using private link service

The Origin type should be custom, and the Host name should be the private link service alias. Then, you can check Enable private link service and select the private link that was created by AKS based on the service annotations.

Remember that you will still have to approve the usage of the private link service by Azure Front Door! Check Pending Connections in Private Link Center.

Does it work?

In Front Door manager, you should have an endpoint and a route that uses the origin group. In my case, that is aksdemo-agfcfwgkgyctgyhs.z01.azurefd.net. The AKS service publishes a deployment of ghcr.io/gbaeke/super:1.0.7 which just prints Hello from Super API:

Tadaaa, it works!

Conclusion

This new feature makes it super easy to create Azure Private Link Services from internal load balancers created by AKS. Combined with Azure Front Door Premium, you can publish these services to the Internet without having to provide public connectivity at the AKS level. In addition, you can enable other Front Door features such as WAF (web application firewall). Maybe in the future, we’ll see some extra integration with Azure Front Door so it can act as an AKS Ingress Controller, all controlled from Kubernetes manifests? 😉

Draft 2 and Ingress with Web Application Routing

In the previous article on Draft 2, we went from source code to deployed application in a few steps:

  • az aks draft create: creates a Dockerfile and Kubernetes manifests (deployment and service manifests)
  • az aks draft setup-gh: setup GitHub OIDC
  • az aks draft generate-workflow: create a GitHub workflow that builds and pushes the container image and deploys the application to Kubernetes

If you answer the questions from the commands above correctly, you should be up and running fairly quickly! 🚀

The manifests default to a Kubernetes service that uses the type LoadBalancer to configure an Azure public load balancer to access your app. But maybe you want to test your app with TLS and you do not want to configure a certificate in your container image? That is where the ingress configuration comes in.

You will need to do two things:

  • Configure web application routing: configures Ingress Nginx Controller and relies on Open Service Mesh (OSM) and the Secret Store CSI Driver for Azure Key Vault. That way, you are shielded from having to do all that yourself. I did have some issues with web application routing as described below.
  • Use az aks draft update to configure your service to work with web application routing; this command will ask you for two things:
    • the hostname for your service: you decide this but the name should resolve to the public IP of the Nginx Ingress Controller installed by web application routing
    • a URI to a certificate on Azure Key Vault: you will need to deploy a Key Vault and upload or create the certificate

Configure web application routing

Although it should be supported, I could not enable the add-on on one of my existing clusters. On another one, it did work. I decided to create a new cluster with the add-on by running the following command:

az aks create --resource-group myResourceGroup --name myAKSCluster --enable-addons web_application_routing

⚠️ Make sure you use the most recent version of the Azure CLI aks-preview extension.

On my cluster, that gave me a namespace app-routing-system with two pods:

Nginx in app-routing-system

Although the add-on should also install Secrets Store CSI Driver, Open Service Mesh, and External DNS, that did not happen in my case. I installed the first two from the portal. I did not bother installing External DNS.

Enabling OSM
Enabling secret store CSI driver

Create a certificate

I created a Key Vault in the same resource group as my AKS cluster. I configured the access policies to use Azure RBAC (role-based access control). It did not work with the traditional access policies. I granted myself and the identity used by web application routing full access:

Key Vault Administrator for myself and the user-assigned managed id of web app routing add-on

You need to grant the user-assigned managed identity of web application routing access because a SecretProviderClass will be created automatically for that identity. The Secret Store CSI Driver uses that SecretProviderClass to grab a certificate from Key Vault and generate a Kubernetes secret for it. The secret will later be used by the Kubernetes Ingress resource to encrypt HTTP traffic. How you link the Ingress resource to the certificate is for a later step.

Now, in Key Vault, generate a certificate:

In Key Vault, click Certificates and create a new one

Above, I use nip.io with the IP address of the Ingress Controller to generate a name that resolves to the IP. For example, 10.2.3.4.nip.io will resolve to 10.2.3.4. Try it with ping. It’s truly a handy service. Use kubectl get svc -n app-routing-system to find the Ingress Controller public (external) IP.
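To find that IP and build the host name in one go, something like this works (it picks the LoadBalancer service in the namespace, whatever its name):

IP=$(kubectl get svc -n app-routing-system \
  -o jsonpath='{.items[?(@.spec.type=="LoadBalancer")].status.loadBalancer.ingress[0].ip}')
echo "$IP.nip.io"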

Now we have everything in place for draft to modify our Kubernetes service to use the ingress controller and certificate.

Using az aks draft update

Back on your machine, in the repo that you used in the previous article, run az aks draft update. You will be asked two questions:

  • Hostname: use <IP Address of Nginx>.nip.io (same as in the common name of the cert without CN=)
  • URI to the certificate in Key Vault: you can find the URI in the properties of the certificate
There will be a copy button at the right of the certificate identifier

Draft will now update your service to something like:

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubernetes.azure.com/ingress-host: IPADDRESS.nip.io
    kubernetes.azure.com/tls-cert-keyvault-uri: https://kvdraft.vault.azure.net/certificates/mycert/IDENTIFIER
  creationTimestamp: null
  name: super-api
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: super-api
  type: ClusterIP
status:
  loadBalancer: {}

The service type is now ClusterIP. The annotations will be used for several things:

  • to create a placeholder deployment that mounts the certificate from Key Vault in a volume AND creates a secret from the certificate; the Secret Store CSI Driver always needs to mount secrets and certs in a volume; rather than using your application pod, they use a placeholder pod to create the secret
  • to create an Ingress resource that routes to the service and uses the certificate in the secret created via the placeholder pod
  • to create an IngressBackend resource in Open Service Mesh

In my default namespace, I see two pods after deployment:

the placeholder pod starts with keyvault and creates the secret; the other pod is my app

Note that above, I actually used a Helm deployment instead of a manifest-based deployment. That’s why you see release-name in the pod names.

The placeholder pod creates a csi volume that uses a SecretProviderClass to mount the certificate:

SecretProviderClass

The SecretProviderClass references your Key Vault and managed identity to access the Key Vault:

spec of SecretProviderClass

If you have not assigned the correct access policy on Key Vault for the userAssignedIdentityID, the certificate cannot be retrieved and the pod will not start. The secret will not be created either.

I also have a secret with the cert inside:

Secret created by Secret Store CSI Driver; referenced by the Ingress

And here is the Ingress:

Ingress; note it says 8080 instead of the service port 80; do not change it! Never mind the app. in front of the IP; your config will not have that if you followed the instructions

All of this gets created for you but only after running az aks draft update and when you commit the changes to GitHub, triggering the workflow.

Did all this work smoothly from the first time?

The short answer is NO! 😉 At first I thought Draft would take care of installing the Ingress components for me. That is not the case. You need to install and configure web application routing on your cluster and configure the necessary access rights.

I also thought web application routing would install and configure Open Service Mesh and Secret Store CSI driver. That did not happen although that is easily fixed by installing them yourself.

I thought there would be some help with certificate generation. That is not the case. Generating a self-signed certificate with Key Vault is easy enough though.

Once you have web application routing installed and you have a Key Vault and certificate, it is simple to run az aks draft update. That changes your Kubernetes service definition. After pushing that change to your repo, the updated service with the web application routing annotations can be deployed.

I got some 502 Bad Gateway errors from Nginx at first. I removed the OSM-related annotations from the Ingress object and tried some other things. Finally, I just redeployed the entire app and then it just started working. I did not spend more time trying to find out why it did not work from the start. The fact that Open Service Mesh is used, which has extra configuration like IngressBackends, will complicate troubleshooting somewhat. Especially if you have never worked with OSM, which is what I expect for most people.

Conclusion

Although this looks promising, it’s all still a bit rough around the edges. Adding OSM to the mix makes things somewhat more complicated.

Remember that all of this is in preview and we are meant to test drive it and provide feedback. However, I fear that, because of the complexity of Kubernetes, these tools will never truly make it super simple to get started as a developer. It’s just a tough nut to crack!

My own point of view here is that Draft v2 without az aks draft update is very useful. In most cases though, it’s enough to use standard Kubernetes services. And if you do need an ingress controller, most are easy to install and configure, even with TLS.

Trying out Draft 2 on AKS

Sadly no post about good Belgian beer 🍺.

Draft 2 is an open-source project that aims 🎯 to make things easier for developers that build Kubernetes applications. It can improve the inner dev loop, where the developers code and test their apps, in the following ways:

  • Automate the creation of a Dockerfile
  • Automate the creation of Kubernetes manifests, Helm charts, or Kustomize configs
  • Generate a GitHub Action workflow to build and deploy the application when you push changes

I have worked with Draft 1 in the past, and it worked quite well. Now Microsoft has integrated Draft 2 in the Azure CLI to make it part of the Kubernetes on Azure experience. A big difference with Draft 1 is that Draft 2 makes use of GitHub Actions (Wait? No Azure DevOps? 😲) to build and push your images to the development cluster. It uses GitHub OpenID Connect (OIDC) for Azure authentication.

That is quite a change and lots of bits and pieces that have to be just right. Make sure you know about Azure AD App Registrations, GitHub, GitHub Actions, Docker, etc… when the time comes to troubleshoot.

Let’s see what we can do! 👀

Prerequisites

At this point in time (June 2022), Draft for Azure Kubernetes Service is in preview. Draft itself can be found here: https://github.com/Azure/draft

The only thing you need to do is to install or upgrade the aks-preview extension:

az extension add --name aks-preview --upgrade

Next, type az aks draft -h to check if the command is available. You should see the following options:

create           
generate-workflow
setup-gh
up
update

We will look at the first four commands in this post.

Running draft create

With az aks draft create, you can generate a Dockerfile for your app, Kubernetes manifests, Helm charts, and Kustomize configurations. You should fork the following repository and clone it to your machine: https://github.com/gbaeke/draft-super

After cloning it, cd into draft-super and run the following command (requires go version 1.16.4 or higher):

CGO_ENABLED=0 go build -installsuffix 'static' -o app cmd/app/*
./app

The executable runs a web server on port 8080 by default. If that conflicts with another app on your system, set the port with the PORT environment variable: run PORT=9999 ./app instead of just ./app. Now that we know the app works, we need a Dockerfile to containerize it.

You will notice that there is no Dockerfile. Although you could create one manually, you can use draft for this. Draft will try to recognize your code and generate the Dockerfile. We will keep it simple and just create Kubernetes manifests. When you run draft without parameters, it will ask you what you want to create. You can also use parameters to specify what you want, like a Helm chart or Kustomize configs. Run the command below:

az aks draft create

The above command will download the draft CLI for your platform and run it for you. It will ask several questions and display what it is doing.

[Draft] --- Detecting Language ---
✔ yes
[Draft] --> Draft detected Go Checksums (72.289458%)

[Draft] --> Could not find a pack for Go Checksums. Trying to find the next likely language match...
[Draft] --> Draft detected Go (23.101180%)

[Draft] --- Dockerfile Creation ---
Please Enter the port exposed in the application: 8080
[Draft] --> Creating Dockerfile...

[Draft] --- Deployment File Creation ---
✔ manifests
Please Enter the port exposed in the application: 8080
Please Enter the name of the application: super-api
[Draft] --> Creating manifests Kubernetes resources...

[Draft] Draft has successfully created deployment resources for your project 😃
[Draft] Use 'draft setup-gh' to set up Github OIDC.

In your folder you will now see extra files and folders:

  • A manifests folder with two files: deployment.yaml and service.yaml
  • A Dockerfile

The manifests are pretty basic and just get things done:

  • create a Kubernetes deployment that deploys 1 pod
  • create a Kubernetes service of type LoadBalancer; that gives you a public IP to reach the app

The app name and port you specified after running az aks draft create is used to create the deployment and service.

The Dockerfile looks like the one below:

FROM golang
ENV PORT 8080
EXPOSE 8080

WORKDIR /go/src/app
COPY . .

RUN go mod vendor
RUN go build -v -o app  
RUN mv ./app /go/bin/

CMD ["app"]

This is not terribly optimized but it gets the job done. I would highly recommend using a two-stage Dockerfile that results in a much smaller image based on alpine, scratch, or distroless (depending on your programming language).

For my code, the Dockerfile will not work because the source files are not in the root of the repo. Draft cannot know everything. Replace the line that says RUN go build -v -o app with RUN CGO_ENABLED=0 go build -installsuffix 'static' -o app cmd/app/*

To check that the Dockerfile works, if you have Docker installed, run docker build -t draft-super . It will take some time for the base Golang image to be pulled and to download all the dependencies of the app.

When the build is finished, run docker run draft-super to check. The container should run properly.
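To actually reach the web server from your machine, publish the port as well (the root endpoint should answer; the exact output depends on the app version):

docker build -t draft-super .
docker run -p 8080:8080 draft-super
# in another terminal
curl http://localhost:8080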

The az aks draft create command did a pretty good job detecting the programming language and creating the Dockerfile. As we have seen, minor adjustments might be required and the Dockerfile will probably not be production-level quality.

GitHub OIDC setup

At the end of the create command, draft suggested using setup-gh to setup GitHub. Let’s run that command:

az aks draft setup-gh

Draft will ask for the name of an Azure AD app registration to create. Make sure you are allowed to create those. I used draft-super for the name. Draft will also ask you to confirm the Azure subscription ID and a name of a resource group.

⚠️Although not entirely clear from the question, use the resource group of your AKS cluster (not the MC_ group that contains your nodes!). The setup-gh command will grant the service principal that it creates the Contributor role on the group. This ensures that the GitHub Action azure/aks-set-context@v2.0 works.

Next, draft will ask for the GitHub organization and repo. In my case, that was gbaeke/draft-super. Make sure you have admin access to the repo. GitHub secrets will need to be created. When completed, you should see something like below:

Enter app registration name: draft-super
✔ <YOUR SUB ID>
Enter resource group name: rg-aks
✔ Enter github organization and repo (organization/repoName): gbaeke/draft-super
[Draft] Draft has successfully set up Github OIDC for your project 😃
[Draft] Use 'draft generate-workflow' to generate a Github workflow to build and deploy an application on AKS.

Draft has done several things:

  • created an app registration (check Azure AD)
  • the app registration has federated credentials configured to allow a GitHub workflow to request an Azure AD token when you do pull requests, or push to main or master
  • secrets in your GitHub repo: AZURE_CLIENT_ID, AZURE_SUBSCRIPTION_ID, AZURE_TENANT_ID; these secrets are used by the workflow to request a token from Azure AD using federated credentials
  • granted the app registration contributor role on the resource group that you specified; that is why you should use the resource group of AKS!

The GitHub workflow you will create in the next step will use the OIDC configuration to request an Azure AD token. The main advantage of this is that you do not need to store Azure secrets in GitHub. The action that does the OIDC-based login is azure/login@v1.4.3.
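For reference, the login step in the generated workflow looks roughly like the snippet below (your generated file may differ in the details; the id-token permission is what makes OIDC possible):

permissions:
  id-token: write
  contents: read

steps:
  - uses: azure/login@v1.4.3
    with:
      client-id: ${{ secrets.AZURE_CLIENT_ID }}
      tenant-id: ${{ secrets.AZURE_TENANT_ID }}
      subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}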

Draft is now ready to create a GitHub workflow.

Creating the GitHub workflow

Use az aks draft generate-workflow to create the workflow file. This workflow needs the following information as shown below:

Please enter container registry name: draftsuper767
✔ Please enter container name: draft-super
Please enter cluster resource group name: rg-aks
Please enter AKS cluster name: clu-git
Please enter name of the repository branch to deploy from, usually main: master
[Draft] --> Generating Github workflow
[Draft] Draft has successfully generated a Github workflow for your project 😃

⚠️ Important: use the short name of ACR. Do not append azurecr.io!

⚠️ The container registry needs to be created. Draft does not do that. For best results, create the ACR in the resource group of the AKS cluster because that ensures the service principal created earlier has access to ACR to build images and to enable admin access.

Draft has now created the workflow. As expected, it lives in the .github/workflows local folder.

The workflow runs the following actions:

  • Login to Azure using only the client, subscription, and tenant id. No secrets required! 👏 OIDC in action here!
  • Run az acr build to build the container image. The image is not built on the GitHub runner. The workflow expects ACR to be in the AKS resource group.
  • Get a Kubernetes context to our AKS cluster and create a secret to allow pulling from ACR; it will also enable the admin user on ACR
  • Deploy the application with the Azure/k8s-deploy@v3.1 action. It uses the manifests that were generated with az aks draft create but modifies the image and tag to match the newly built image.

Now it is time to commit our code and check the workflow result:

Looks fine at first glance…

Houston, we have a problem 🚀

For this blog post, I was working in a branch called draft, not main or master. I also changed the workflow file to run on pushes to the draft branch. Of course, the federated tokens in our app registration are not configured for that branch, only master and main. You have to be specific here or you will not get a token. This is the error on GitHub:

Oops

To fix this, just modify the app registration and run the workflow again:

Quick and dirty fix: update the existing federated credential (mainfic in my case) with a subject identifier for the draft branch; you can also add a new credential

After running the workflow again, if buildImage fails, check that ACR is in the AKS resource group and that the service principal has Contributor access to the group. I ran az role assignment list -g rg-aks to see the directly assigned roles and checked that the principalName matched the client ID (application ID) of the draft-super app registration.
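A query like this makes that check a bit easier to read (rg-aks is my resource group):

az role assignment list -g rg-aks \
  --query "[].{principal:principalName, role:roleDefinitionName}" -o table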

If you used the FQDN of ACR instead of just the short name, you can update the workflow environment variable accordingly:

ACR name should be the short name

After this change, the image build should be successful.

Looking better

If you used the wrong ACR name, the deploy step will fail. The image property in deployment.yaml will be wrong. Make the following change in deployment.yaml:

- image: draftsuper767.azurecr.io.azurecr.io/draft-super

to

- image: draftsuper767.azurecr.io/draft-super

Commit to re-run the workflow. You might need to cancel the previous one because it uses kubectl rollout to check the health of the deployment.

And finally, we have a winner…

🍾🍾🍾

In k9s:

super-api deployed to default namespace

You can now make changes to your app and commit your changes to GitHub to deploy new versions or iterations of your app. Note that any change will result in a new image build.

What about the az aks draft up command? It simply combines the setup of GitHub OIDC and the creation of the workflow. So basically, all you ever need to do is:

  • create a resource group
  • deploy AKS to the resource group
  • deploy ACR to the resource group
  • Optionally run az aks update -n -g --attach-acr (this gives the kubelet on each node access to ACR; as we have seen, draft can also create a pull secret)
  • run az aks draft create followed by az aks draft up
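Put together, the whole flow could look like this (names and location are placeholders):

az group create -n rg-aks -l westeurope
az aks create -g rg-aks -n clu-git --node-count 1 --generate-ssh-keys
az acr create -g rg-aks -n draftsuper767 --sku Basic
az aks update -g rg-aks -n clu-git --attach-acr draftsuper767   # optional
az aks draft create
az aks draft up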

Conclusion

When working with Draft 2, ensure you first deploy an AKS cluster and Azure Container Registry in the same resource group. You need the Owner role because you will change role-based access control settings.

During OIDC setup, when asked for a resource group, type the AKS resource group. Draft will ensure that the service principal it creates has proper access to the resource group. With that access, it will interact with ACR and log on to AKS.

When asked for the ACR name, use the short name. Do not append azurecr.io! From that point on, it should be smooth sailing! ⛵

In a follow-up post, we will take a look at the draft update command.
