Although I use Kubernetes a lot, I had not yet gotten around to trying the virtual nodes support. Virtual nodes is basically the AKS implementation of the virtual kubelet project. Virtual kubelet allows Kubernetes nodes to be backed by other services that support containers, such as AWS Fargate, IoT Edge, Hyper.sh or Microsoft’s ACI (Azure Container Instances). The idea is that you spin up containers using the familiar Kubernetes API, but on services like Fargate and ACI that can scale instantly and only charge you for the seconds the containers are running.
As expected, the virtual nodes support built into AKS uses ACI as the backing service. To use it, you need to deploy a Kubernetes cluster with virtual nodes enabled. Use either the CLI or the Azure Portal:
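If you go the CLI route, the commands below are a minimal sketch of what that could look like. The resource group, VNET, subnet and cluster names are placeholders, and the exact parameters of the virtual-node add-on may change while it is in preview:

# 1) Create a dedicated subnet for the virtual nodes (ACI);
#    resource group, VNET name and address range are examples
az network vnet subnet create \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name virtual-node-subnet \
  --address-prefixes 10.240.1.0/24

# 2) Create the AKS cluster with Azure CNI (advanced networking)
#    and the virtual-node add-on pointed at that subnet
az aks create \
  --resource-group my-rg \
  --name my-aks \
  --network-plugin azure \
  --vnet-subnet-id <id of the subnet for the regular nodes> \
  --enable-addons virtual-node \
  --aci-subnet-name virtual-node-subnet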
Note that virtual nodes for AKS are currently in preview. Virtual nodes require AKS to be configured with the advanced networking option, and you will need to provide a subnet for the virtual nodes that is dedicated to ACI. Advanced networking gives you additional control over IP ranges and also allows you to deploy a cluster in an existing virtual network. Note that advanced networking results in the use of the Azure Virtual Network container network interface (Azure CNI). Each pod on a regular host gets its own IP address on the virtual network. You will see them in the network as connected devices:

In contrast, the containers you will create in the steps below will not show up as connected devices, since they are managed by ACI, which works differently.
Ok, go ahead and deploy a Kubernetes cluster or just follow along. After deployment, kubectl get nodes will show a result similar to the screenshot below:

With the virtual node online, we can deploy containers to it. Let’s deploy the ONNX ResNet50v2 container from an earlier post and scale it up. Create a YAML file like the one below and use kubectl apply -f path_to_yaml to create the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: resnet
spec:
  replicas: 1
  selector:
    matchLabels:
      app: resnet
  template:
    metadata:
      labels:
        app: resnet
    spec:
      containers:
      - name: onnxresnet50v2
        image: gbaeke/onnxresnet50v2
        ports:
        - containerPort: 5001
        resources:
          requests:
            cpu: 1
          limits:
            cpu: 1
      nodeSelector:
        kubernetes.io/role: agent
        beta.kubernetes.io/os: linux
        type: virtual-kubelet
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Exists
      - key: azure.com/aci
        effect: NoSchedule
With the nodeSelector, you constrain the pod to run on particular nodes in your cluster. In this case, we want to deploy on a node of type virtual-kubelet. With the tolerations, you specify that the pod may be scheduled on nodes that carry the matching taints. There are two taints here, virtual-kubelet.io/provider and azure.com/aci, which are applied to the virtual kubelet node.
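If you want to check these labels and taints on your own cluster, you can describe the virtual node. On AKS the virtual node is typically called virtual-node-aci-linux, but the name may differ on your cluster:

# Describe the virtual node and inspect the Labels and Taints
# sections of the output (the node name is an assumption)
kubectl describe node virtual-node-aci-linux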
After applying the above YAML, kubectl get pods -o wide gives me the following result:

After a while, the pod will be running, but it’s actually just a container on ACI.
Let’s expose the deployment with a public IP via an Azure load balancer:
kubectl expose deployment resnet --port=80 --target-port=5001 --type=LoadBalancer
The above command creates a service of type LoadBalancer that maps port 80 of the Azure load balancer to, eventually, port 5001 of the container. Just use kubectl get svc to see the external IP address. Configuring the load balancer usually takes around one minute.
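For reference, here is a declarative sketch of roughly the same service. kubectl expose copies the deployment’s selector, which in our case is app: resnet:

apiVersion: v1
kind: Service
metadata:
  name: resnet
spec:
  type: LoadBalancer
  selector:
    app: resnet
  ports:
  # port 80 on the Azure load balancer forwards to
  # port 5001 on the resnet containers
  - port: 80
    targetPort: 5001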
Now let’s try to scale the deployment to 100 containers:
kubectl scale --replicas=100 deployments/resnet
Instantly, the containers will be provisioned on ACI via the virtual kubelet:
NAME                      READY   STATUS     RESTARTS   AGE
resnet-6d7954d5d7-26n6l   0/1     Waiting    0          30s
resnet-6d7954d5d7-2bjgp   0/1     Creating   0          30s
resnet-6d7954d5d7-2jsrs   0/1     Creating   0          30s
resnet-6d7954d5d7-2lvqm   0/1     Pending    0          27s
resnet-6d7954d5d7-2qxc9   0/1     Creating   0          30s
resnet-6d7954d5d7-2wnn6   0/1     Creating   0          28s
resnet-6d7954d5d7-44rw7   0/1     Creating   0          30s
.... repeat ....
When you run kubectl get endpoints, you will see all the endpoints “behind” the resnet service:
NAME     ENDPOINTS
resnet   40.67.216.68:5001,40.67.219.10:5001,40.67.219.22:5001 + 97 more…
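Since each pod is really a container group on ACI, you should also see them with the Azure CLI. On AKS, the ACI container groups typically end up in the cluster’s node resource group (the group with the MC_ prefix); the name below is an example:

# List the ACI container groups backing the virtual node pods
az container list --resource-group MC_my-rg_my-aks_westeurope -o table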
In container monitoring:

As you can see, it is really easy to get started with virtual nodes and to scale up a deployment. In a later post, I will take a look at auto scaling containers on a virtual node.