Azure API Management with public APIs on Kubernetes

In my previous blog post, I looked at Azure API Management in combination with private APIs hosted on Kubernetes. The APIs were exposed via Traefik and an internal load balancer. To make that scenario work, the Azure API Management Premium SKU is required, which is quite costly.

This post describes another approach where the APIs are exposed on the public Internet via an Ingress Controller that requires HTTPS and restricts callers to the IP address of the Azure API Management instance. Something like this:

Internet client --> Azure API Management --> Ingress Controller (with IP whitelisting per ingress) --> API service (Kubernetes) --> API pods (Kubernetes, part of a Deployment)

Let’s see how this works, shall we?

API Management

Deploy Azure API Management from the portal. In this case, you can use the other SKUs, such as Basic and Standard. Note the IP address of the Azure API Management instance on the Overview page:

IP address of API Management
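
If you prefer the CLI, you should be able to retrieve the same address with a query along these lines (a sketch; MYAPIM and RESOURCEGROUP are placeholders, and the az apim command group requires a fairly recent Azure CLI):

az apim show --name MYAPIM --resource-group RESOURCEGROUP --query publicIpAddresses --output tsv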

Ingress Controller

As usual, let’s use Traefik. When you have Helm installed, use the following command:

helm install stable/traefik --name traefik --namespace kube-system \
  --set serviceType=LoadBalancer,externalTrafficPolicy=Local,rbac.enabled=true \
  --set ssl.enabled=true,ssl.enforced=true \
  --set acme.enabled=true,acme.email=name@domain.com,onHostRule=true,acme.challengeType=tls-alpn-01,acme.staging=false \
  --set dashboard.enabled=true

Note the use of externalTrafficPolicy=Local. With the default Cluster policy, incoming traffic can be forwarded to a pod on another node and source NATed along the way, which hides the caller's IP address. Setting the policy to Local preserves the client source IP, which we need because we want to restrict access to the IP address of API Management.
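
To verify that the chart applied the setting, you can inspect the Traefik service after deployment (assuming the chart created a service named traefik in kube-system):

kubectl get svc traefik --namespace kube-system -o jsonpath='{.spec.externalTrafficPolicy}'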

Ingress object

When your API is deployed via a Deployment and a Service of type ClusterIP, use the following ingress definition:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: func
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/whitelist-source-range: "YOURIP/32"
spec:
  tls:
  - hosts:
    - api.domain.com
  rules:
    - host: api.domain.com
      http:
        paths:
        - path: /
          backend:
            serviceName: func
            servicePort: 80

The above ingress object exposes the internal service func via Traefik. The whitelist-source-range annotation limits access to this resource to the IP address of Azure API Management; replace YOURIP with that IP address. Obviously, also replace the host api.domain.com with a host name that resolves to the external IP of the load balancer that provides access to Traefik. The Let’s Encrypt configuration automatically provisions a valid certificate for the host.
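
You can test the whitelist with curl from a machine other than the API Management instance; Traefik should refuse the request with a 403 Forbidden (replace the host with your own):

curl -i https://api.domain.com/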

When I navigate to the API on my local computer, the following happens:

No access to the API if the request does not come from API management

When I test the API from API Management (after setting the back-end correctly):

API management can call the back-end API

Conclusion

What do you do when you do not want to spend money on the Premium SKU? The answer is clear: use one of the lower SKUs where possible and restrict access to the back-end APIs by other means, such as IP whitelisting. Other options include adding some form of authentication, such as basic authentication.
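
With Traefik 1.x, for instance, basic authentication can be enabled per ingress with annotations that point to a secret containing htpasswd-style users. A minimal sketch, assuming a secret named mysecret in the same namespace:

metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-secret: mysecret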

A look at Windows containers on AKS

Now that the public preview of Windows containers on AKS is available, let’s look at the basics. You need a few things to get started, including some subscription-wide settings. I recommend using a subscription that is not used to roll out production AKS clusters. Make sure the Azure CLI (az) is homed to the subscription. Use Azure Cloud Shell to make your life easier:

  • Install the aks-preview extension
  • Register the Windows preview feature
  • Check that the feature is active; this will take a few minutes
  • Register the Microsoft.ContainerService resource provider again (only if the Windows preview feature is active)

The following commands make the above happen:

az extension add --name aks-preview

az feature register --name WindowsPreview --namespace Microsoft.ContainerService

az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/WindowsPreview')].{Name:name,State:properties.state}"

az provider register --namespace Microsoft.ContainerService

With that out of the way, deploy a new AKS cluster:

az aks create \
     --resource-group RESOURCEGROUP \
     --name winclu \
     --node-count 1 \
     --kubernetes-version 1.13.5 \
     --generate-ssh-keys \
     --windows-admin-password APASSWORDHERE \
     --windows-admin-username azureuser \
     --enable-vmss \
     --enable-addons monitoring \
     --network-plugin azure

Replace RESOURCEGROUP with an ARM resource group and replace APASSWORDHERE with a complex password. If you have ever deployed clusters that support multiple node pools with virtual machine scale sets, the above command will be very familiar. The only real difference here is --windows-admin-password and --windows-admin-username, which are required to deploy the Windows hosts that will run your containers.

You can use the Windows user name and password to RDP into the Kubernetes nodes. You will need to deploy a jump host that has a route to the Kubernetes virtual network to make this happen as the Kubernetes hosts are not exposed with a public IP address. As they shouldn’t… 😉
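
To find the private IP address to RDP to, kubectl can list the internal IPs of the nodes:

kubectl get nodes -o wide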

Note that you need to deploy a node pool with Linux first (as in the above command). That is why the number of nodes has been set to the minimum. You cannot delete this node pool after adding a Windows node pool.

After deployment, you will see the cluster in the portal with the Linux node pool with one node:

node pool with one node

When you click Add node pool, you will be able to select the OS type of a new pool:

Both Linux and Windows as OS type for the node pool

We will add a Windows node pool via the CLI. The node pool will use the Standard_D2s_v3 virtual machine size by default, which is also the recommended minimum.

az aks nodepool add \
     --resource-group RESOURCEGROUP \
     --cluster-name winclu \
     --os-type Windows \
     --name winpl \
     --node-count 1 \
     --kubernetes-version 1.13.5

Note: the name of the Windows node pool cannot be longer than six characters.

The node pool is now being added and will soon be ready:

windows node pool being added
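
You can also follow up on the node pools from the CLI (this uses the aks-preview extension installed earlier):

az aks nodepool list --resource-group RESOURCEGROUP --cluster-name winclu -o table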

When ready, you will see an additional scale set in the resource group that backs this AKS deployment:

additional scale set for the Windows node pool

We can now schedule pods on the Windows node pool. To run a pod on a Windows node, add a nodeSelector to the pod spec:

nodeSelector:         
  "beta.kubernetes.io/os": windows 

To try this, let’s deploy a Windows version of my realtime-go app with the following command. The gist contains the YAML required to deploy the app and a service. It uses the gbaeke/realtime-go-win image on Docker Hub. The base image is mcr.microsoft.com/windows/nanoserver:1809. You need to use the 1809 version because Windows Server containers must match the kernel version of the host, and the hosts run 1809 as well. With Hyper-V isolation, that kernel match would not be required.

kubectl apply -f https://gist.githubusercontent.com/gbaeke/ed029e8ccbf345661ed7f07298a36c21/raw/02cedf88defa7a0a3dedff5e06f7e2fc5bbeccbe/realtime-go-win.yaml 

This should deploy the app but sadly, it will error out: it needs a running redis server. Let’s deploy one the quick and dirty way (command on one line below):

kubectl run redis --image=redis --replicas=1 --overrides='{ "spec": { "template": { "spec": { "nodeSelector": { "beta.kubernetes.io/os": "linux" } } } } }' --expose --port 6379

I realize the override is ugly, but it does the trick. The above command creates a deployment called redis that sets the nodeSelector to target Linux nodes. It also creates a service of type ClusterIP that exposes port 6379. The ClusterIP allows the realtime-go-win container to connect to redis over the Kubernetes network.
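
If you prefer to avoid the override, a plain manifest achieves the same result. A minimal sketch of the equivalent deployment and service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
  - port: 6379

Now delete the realtime-go container and recreate it: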

kubectl delete -f https://gist.githubusercontent.com/gbaeke/ed029e8ccbf345661ed7f07298a36c21/raw/02cedf88defa7a0a3dedff5e06f7e2fc5bbeccbe/realtime-go-win.yaml

kubectl apply -f https://gist.githubusercontent.com/gbaeke/ed029e8ccbf345661ed7f07298a36c21/raw/02cedf88defa7a0a3dedff5e06f7e2fc5bbeccbe/realtime-go-win.yaml 

Note that I could not get DNS resolution to work in the Windows container. Normally, the realtime-go container should be able to find the redis service via the name redis or the full FQDN redis.default.svc.cluster.local. Because that did not work, the code in the realtime-go-win container was modified to use the environment variables injected by Kubernetes:

// prefer an explicitly configured REDISHOST
redisHost := getEnv("REDISHOST", "")
if redisHost == "" {
    // fall back to the host and port variables that Kubernetes
    // injects for the redis service
    redisIP := getEnv("REDIS_SERVICE_HOST", "localhost")
    redisPort := getEnv("REDIS_SERVICE_PORT", "6379")
    redisHost = redisIP + ":" + redisPort
}
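
The getEnv function is not shown above; it is presumably a small helper that wraps os.LookupEnv with a fallback, along these lines (a sketch):

func getEnv(key, fallback string) string {
    if value, ok := os.LookupEnv(key); ok {
        return value
    }
    return fallback
}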

Conclusion

Deploying an AKS cluster with both Linux and Windows node pools is a simple matter. Because you can now deploy both Windows and Linux containers, there is some additional work to make sure Windows containers land on Windows hosts and Linux containers on Linux hosts. Using a nodeSelector is an easy way to do that. There are other methods as well, such as node taints. Sadly, I had an issue with Kubernetes DNS in the Windows container, so I switched to injected environment variables.
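
With taints, you would taint the Windows nodes and give Windows pods a matching toleration. A rough sketch (NODENAME is a placeholder for an actual Windows node name):

kubectl taint nodes NODENAME os=windows:NoSchedule

Pods that should land on those nodes would then carry a toleration like this in their spec, in addition to the nodeSelector:

tolerations:
- key: "os"
  operator: "Equal"
  value: "windows"
  effect: "NoSchedule"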