Now that the public preview of Windows containers on AKS is available, let’s look at the basics. To get started, you need a few things, including some subscription-wide settings. I recommend using a subscription that is not used to roll out production AKS clusters. Make sure the Azure CLI (az) is pointed at that subscription. Using Azure Cloud Shell makes this easier:
- Install the aks-preview extension
- Register the Windows preview feature
- Check that the feature is active; this will take a few minutes
- Register the Microsoft.ContainerService resource provider again (only if the Windows preview feature is active)
The following commands make the above happen:
az extension add --name aks-preview

az feature register --name WindowsPreview --namespace Microsoft.ContainerService

az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/WindowsPreview')].{Name:name,State:properties.state}"

az provider register --namespace Microsoft.ContainerService
With that out of the way, deploy a new AKS cluster:
az aks create \
    --resource-group RESOURCEGROUP \
    --name winclu \
    --node-count 1 \
    --kubernetes-version 1.13.5 \
    --generate-ssh-keys \
    --windows-admin-password APASSWORDHERE \
    --windows-admin-username azureuser \
    --enable-vmss \
    --enable-addons monitoring \
    --network-plugin azure
Replace RESOURCEGROUP with the name of a resource group and replace APASSWORDHERE with a complex password. If you have ever deployed clusters that support multiple node pools with virtual machine scale sets, the above command will be very familiar. The only real difference here is --windows-admin-password and --windows-admin-username, which are required to deploy the Windows hosts that will run your containers.
You can use the Windows user name and password to RDP into the Kubernetes nodes. You will need to deploy a jump host that has a route to the Kubernetes virtual network to make this happen as the Kubernetes hosts are not exposed with a public IP address. As they shouldn’t… 😉
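From the jump host, you RDP to a node's internal IP. A quick way to find those addresses (assuming you have cluster credentials on the machine you run this from; the jump host deployment itself is not shown here):

```shell
# List the cluster nodes with their internal IPs and OS images.
# RDP from the jump host targets the INTERNAL-IP of a Windows node,
# since the nodes are not exposed with a public IP address.
kubectl get nodes -o wide
```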
Note that you need to deploy a node pool with Linux first (as in the above command). That is why the number of nodes has been set to the minimum. You cannot delete this node pool after adding a Windows node pool.
After deployment, you will see the cluster in the portal with the Linux node pool with one node:

When you click Add node pool, you will be able to select the OS type of a new pool:

We will add a Windows node pool via the CLI. The node pool will use the Standard_D2s_v3 virtual machine size by default, which is also the recommended minimum.
az aks nodepool add \
    --resource-group RESOURCEGROUP \
    --cluster-name winclu \
    --os-type Windows \
    --name winpl \
    --node-count 1 \
    --kubernetes-version 1.13.5
Note: the name of the Windows node pool cannot be longer than 6 characters.
The node pool is now being added and will soon be ready:

When ready, you will see an additional scale set in the resource group that backs this AKS deployment:

We can now schedule pods on the Windows node pool. You can schedule a pod on a Windows node by adding a nodeSelector to the pod spec:
nodeSelector:
  "beta.kubernetes.io/os": windows
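In context, a minimal pod spec that targets the Windows pool could look like the sketch below. The pod and container names are illustrative; the image matches the base image used later in this post:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample-win
spec:
  containers:
  - name: sample-win
    image: mcr.microsoft.com/windows/nanoserver:1809
  # Schedule this pod on Windows nodes only; on the Kubernetes version
  # used here (1.13), the OS label still carries the beta prefix.
  nodeSelector:
    "beta.kubernetes.io/os": windows
```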
To try this, let’s deploy a Windows version of my realtime-go app with the following command. The gist contains the YAML required to deploy the app and a service. It uses the gbaeke/realtime-go-win image on Docker Hub. The base image is mcr.microsoft.com/windows/nanoserver:1809. You need to use the 1809 version because the hosts use 1809 as well. With Hyper-V isolation, the kernel match would not be required.
kubectl apply -f https://gist.githubusercontent.com/gbaeke/ed029e8ccbf345661ed7f07298a36c21/raw/02cedf88defa7a0a3dedff5e06f7e2fc5bbeccbe/realtime-go-win.yaml
This should deploy the app, but sadly it will error out: it needs a running redis server. Let’s deploy that the quick and dirty way (command on one line below):
kubectl run redis --image=redis --replicas=1 --overrides='{ "spec": { "template": { "spec": { "nodeSelector": { "beta.kubernetes.io/os": "linux" } } } } }' --expose --port 6379
I realize it’s ugly with the override but it does the trick. The above command creates a deployment called redis that sets the nodeSelector to target Linux nodes. It also creates a service of type ClusterIP that exposes port 6379. The ClusterIP allows the realtime-go-win container to connect to redis over the Kubernetes network. Now delete the realtime-go container and recreate it:
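For reference, roughly the same thing expressed as plain YAML (a sketch, not generated from the command above; `kubectl run` on this Kubernetes version uses a `run: redis` label where this sketch uses `app: redis`):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      # Pin redis to the Linux node pool
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: ClusterIP
  selector:
    app: redis
  ports:
  - port: 6379
```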
kubectl delete -f https://gist.githubusercontent.com/gbaeke/ed029e8ccbf345661ed7f07298a36c21/raw/02cedf88defa7a0a3dedff5e06f7e2fc5bbeccbe/realtime-go-win.yaml

kubectl apply -f https://gist.githubusercontent.com/gbaeke/ed029e8ccbf345661ed7f07298a36c21/raw/02cedf88defa7a0a3dedff5e06f7e2fc5bbeccbe/realtime-go-win.yaml
Note that I could not get DNS resolution to work in the Windows container. Normally, the realtime-go container should be able to find the redis service via the name redis or the complete FQDN of redis.default.svc.cluster.local. Because that did not work, the code in the realtime-go-win container was modified to use environment variables injected by Kubernetes:
redisHost := getEnv("REDISHOST", "")
if redisHost == "" {
    redisIP := getEnv("REDIS_SERVICE_HOST", "localhost")
    redisPort := getEnv("REDIS_SERVICE_PORT", "6379")
    redisHost = redisIP + ":" + redisPort
}
Conclusion
Deploying an AKS cluster with both Linux and Windows node pools is straightforward. Because you can now deploy both Windows and Linux containers, you have some additional work to make sure Windows containers are scheduled on Windows hosts and Linux containers on Linux hosts. Using a nodeSelector is an easy way to do that; there are other methods as well, such as node taints and tolerations. Sadly, I had an issue with Kubernetes DNS in the Windows container, so I switched to injected environment variables.