Yesterday, I decided to try out DigitalOcean’s Kubernetes. As always with DigitalOcean, the solution is straightforward and easy to use.
As with Azure, the managed Kubernetes service itself is free. You only pay for the compute of the agent nodes, persistent block storage and load balancers. The minimum price is $10 per month for a single-node cluster with one 2 GB, 1 vCPU node (s-1vcpu-2gb). Not bad at all!
At the moment, the product is in limited availability. The screenshot below shows a cluster in the UI:
Kubernetes cluster with one node pool and one node in the pool
Multiple node pools are supported, a feature that is coming soon to Azure’s AKS as well.
My cluster has one pod deployed, exposed via a service of type LoadBalancer. That results in the provisioning of a DigitalOcean load balancer:
DigitalOcean LoadBalancer
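For reference, a Service manifest along the lines of the sketch below is enough to trigger that load balancer; the app label and ports are illustrative placeholders, not the exact manifest I deployed:

apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  type: LoadBalancer       # triggers provisioning of a DigitalOcean load balancer
  selector:
    app: demo              # assumed label on the deployed pod
  ports:
  - port: 80               # port on the load balancer
    targetPort: 8080       # assumed container port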
Naturally, you will want to automate this deployment. DigitalOcean has an API and CLI, but I used Terraform to deploy the cluster. You need to obtain a personal access token for DigitalOcean and use it with the DigitalOcean provider. Full details can be found on GitHub: https://github.com/gbaeke/kubernetes-do. Note that this is a basic example, but it shows how easy it is to stand up a managed Kubernetes cluster on a cloud platform without breaking the bank.
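As a rough sketch of what such a Terraform configuration boils down to (cluster name, region and version slug below are illustrative; check the repo for the real thing):

variable "do_token" {}

provider "digitalocean" {
  token = var.do_token        # the personal access token
}

resource "digitalocean_kubernetes_cluster" "demo" {
  name    = "demo-cluster"
  region  = "ams3"            # pick your region
  version = "1.13.1-do.2"     # use a version slug that is currently offered

  node_pool {
    name       = "default-pool"
    size       = "s-1vcpu-2gb" # the $10/month node mentioned above
    node_count = 1
  }
}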
Although I am using Kubernetes a lot, I hadn't gotten around to trying the virtual nodes support. Virtual nodes are basically the AKS implementation of the virtual kubelet project. The virtual kubelet project allows Kubernetes nodes to be backed by other services that support containers, such as AWS Fargate, IoT Edge, Hyper.sh or Microsoft's ACI (Azure Container Instances). The idea is that you spin up containers using the familiar Kubernetes API, but on services like Fargate and ACI that can scale instantly and only charge you for the seconds the containers are running.
As expected, the virtual nodes support that is built into AKS uses ACI as the backing service. To use it, you need to deploy Kubernetes with virtual nodes support. Use either the CLI or the Azure Portal:
CLI: uses the Azure CLI on your machine or from cloud shell
Note that virtual nodes for AKS are currently in preview. Virtual nodes require AKS to be configured with the advanced networking option, and you need to provide a subnet for the virtual nodes that is dedicated to ACI. Advanced networking gives you additional control over IP ranges and also allows you to deploy the cluster in an existing virtual network. It results in the use of the Azure Virtual Network container network interface (CNI): each pod on a regular host gets its own IP address on the virtual network. You will see them in the network as connected devices:
Connected devices on the Kubernetes VNET (includes pods)
In contrast, the containers you will create in the steps below will not show up as connected devices since they are managed by ACI which works differently.
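If you go the CLI route, the create command looks roughly like the sketch below; resource group, cluster name and subnet references are placeholders, and the virtual network with both subnets must already exist:

az aks create \
  --resource-group RESOURCEGROUP \
  --name CLUSTERNAME \
  --node-count 1 \
  --network-plugin azure \
  --vnet-subnet-id <resource-id-of-the-subnet-for-the-regular-nodes> \
  --enable-addons virtual-node \
  --aci-subnet-name <name-of-the-subnet-dedicated-to-ACI>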
Ok, go ahead and deploy a Kubernetes cluster or just follow along. After deployment, kubectl get nodes will show a result similar to the screenshot below:
kubectl get nodes output with virtual node
With the virtual node online, we can deploy containers to it. Let’s deploy the ONNX ResNet50v2 container from an earlier post and scale it up. Create a YAML file like below and use kubectl apply -f path_to_yaml to create a deployment:
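A deployment along the following lines does the trick; the image reference is a placeholder for the ONNX ResNet50v2 image from the earlier post, and the nodeSelector and tolerations follow the usual virtual kubelet conventions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: resnet
spec:
  replicas: 1
  selector:
    matchLabels:
      app: resnet
  template:
    metadata:
      labels:
        app: resnet
    spec:
      containers:
      - name: resnet
        image: <your-registry>/onnxresnet50v2:latest   # placeholder for the image from the earlier post
        ports:
        - containerPort: 5001
      nodeSelector:
        kubernetes.io/role: agent
        type: virtual-kubelet
      tolerations:
      - key: virtual-kubelet.io/provider
        operator: Exists
      - key: azure.com/aci
        effect: NoSchedule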
With the nodeSelector, you constrain a pod to run on particular nodes in your cluster. In this case, we want to deploy on a host of type virtual-kubelet. With the tolerations, you specify that the pod can be scheduled on nodes that carry the matching taints. There are two taints here, virtual-kubelet.io/provider and azure.com/aci, which are applied to the virtual kubelet node.
After applying the above YAML, kubectl get pods -o wide gives the following result:
The pod is pending on node virtual-node-aci-linux
After a while, the pod will be running, but it’s actually just a container on ACI.
Let’s expose the deployment with a public IP via an Azure load balancer:
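A kubectl expose along these lines does it (a sketch, assuming the deployment is named resnet as above):

kubectl expose deployment resnet --type=LoadBalancer --port=80 --target-port=5001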
The above command creates a service of type LoadBalancer that maps port 80 of the Azure load balancer to, eventually, port 5001 of the container. Just use kubectl get svc to see the external IP address. Configuring the load balancer usually takes around one minute.
Now let’s try to scale the deployment to 100 containers:
kubectl scale --replicas=100 deployments/resnet
Instantly, the containers will be provisioned on ACI via the virtual kubelet:
When you run kubectl get endpoints you will see all the endpoints “behind” the resnet service:
NAME     ENDPOINTS
resnet   40.67.216.68:5001,40.67.219.10:5001,40.67.219.22:5001 + 97 more…
In container monitoring:
Hey, one pod has an issue! Who cares right?
As you can see, it is really easy to get started with virtual nodes and to scale up a deployment. In a later post, I will take a look at auto scaling containers on a virtual node.
In a previous post, I discussed the creation of a container image that uses the ResNet50v2 model for image classification. If you want to perform tasks such as localization or segmentation, there are other models that serve that purpose. The image was built with GPU support. Adding GPU support was pretty easy:
Use the enable_gpu flag in the Azure Machine Learning SDK or check the GPU box in the Azure Portal; the service will build an image that supports NVIDIA CUDA
Add GPU support in your score.py file and/or conda dependencies file (the scoring script uses the ONNX runtime, so we added the onnxruntime-gpu package; see the sketch after this list)
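For illustration, the relevant bit of the conda dependencies file just pulls in the GPU flavour of the runtime; a minimal sketch (the Python version and any other packages are assumptions):

dependencies:
- python=3.6          # assumed Python version
- pip:
  - onnxruntime-gpu   # GPU flavour of the ONNX runtime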
In this post, we will deploy the image to a Kubernetes cluster with GPU nodes. We will use Azure Kubernetes Service (AKS) for this purpose. Check my previous post if you want to use NVIDIA V100 GPUs. In this post, I use hosts with one V100 GPU.
To get started, make sure you have the Kubernetes cluster deployed and that you followed the steps in my previous post to create the GPU container image. Make sure you attached the cluster to the workspace as a compute target.
Deploy image to Kubernetes
Click the container image you created from the previous post and deploy it to the Kubernetes cluster you attached to the workspace by clicking + Create Deployment:
Starting the deployment from the image in the workspace
The Create Deployment screen is shown. Select AKS as deployment target and select the Kubernetes cluster you attached. Then press Create.
Azure Machine Learning now deploys the containers to Kubernetes. Note that I said containers, plural. In addition to the scoring container, a front-end container is added as well. You send your requests to the front-end container using HTTP POST. The front-end container talks to the scoring container over TCP port 5001 and passes the result back. The front-end container can be configured with certificates to support SSL.
Check the deployment and wait until it is healthy. We did not specify advanced settings during deployment so the default settings were chosen. Click the deployment to see the settings:
Deployment settings including authentication keys and scoring URI
As you can see, the deployment has authentication enabled. When you send your HTTP POST request to the scoring URI, make sure you pass an Authorization header like so: Bearer <primary-or-secondary-key>. The primary and secondary keys are in the settings above. You can regenerate those keys at any time.
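For example, with curl that boils down to something like this (the scoring URI and key are placeholders):

curl -X POST http://<scoring-uri> \
  -H "Authorization: Bearer <primary-or-secondary-key>" \
  -H "Content-Type: application/json" \
  -d @payload.json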
Checking the deployment
From the Azure Cloud Shell, issue the following commands in order to list the pods deployed to your Kubernetes cluster:
az aks list -o table
az aks get-credentials -g RESOURCEGROUP -n CLUSTERNAME
kubectl get pods
Listing the deployed pods
Azure Machine Learning has deployed three front-end replicas (the default; this can be changed via Advanced Settings during deployment) and one scoring container. Let's check the scoring pod with kubectl get pod onnxgpu-5d6c65789b-rnc56 -o yaml. Replace the pod name with yours. In the output, you should find the following:
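Roughly, an excerpt like the one below: a GPU is requested for the scoring container and the host's NVIDIA drivers are mounted into it (exact paths and values may differ from this sketch):

resources:
  requests:
    nvidia.com/gpu: "1"
  limits:
    nvidia.com/gpu: "1"
volumeMounts:
- mountPath: /usr/local/nvidia   # assumed mount path for the NVIDIA drivers
  name: nvidia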
Great! We did not have to bother with doing this ourselves. Let’s now try to recognize an image by sending requests to the front-end pods.
Recognizing images
To recognize an image, we need to POST a JSON payload to the scoring URI. The scoring URI can be found in the deployment properties in the workspace. In my case, the URI is:
The data field is a multi-dimensional array, serialized to JSON. The shape of the array is (1,3,224,224). The dimensions correspond to the batch size, channels (RGB), height and width.
You only have to read an image and put the pixel values in the array! Easy, right? Well, as usual, the answer is: “it depends”! In my opinion, the easiest way to do it is with Python and a collection of helper packages. The code is in the following GitHub gist: https://gist.github.com/gbaeke/b25849f3813e9eb984ee691659d1d05a. You need to run the code on a machine with Python 3 installed. Make sure you also install Keras and NumPy (pip3 install keras / pip3 install numpy). The code uses two images, cat.jpg and car.jpg, but you can use your own. When I run the code, I get the following result:
Using TensorFlow backend.
channels_last
Loading and preprocessing image… cat.jpg
Array shape (224, 224, 3)
Array shape afer moveaxis: (3, 224, 224)
Array shape after expand_dims (1, 3, 224, 224)
prediction time (as measured by the scoring container) 0.025304794311523438
Probably a: Egyptian_cat 0.9460222125053406
Loading and preprocessing image… car.jpg
Array shape (224, 224, 3)
Array shape afer moveaxis: (3, 224, 224)
Array shape after expand_dims (1, 3, 224, 224)
prediction time (as measured by the scoring container) 0.02526378631591797
Probably a: sports_car 0.948998749256134
It takes about 25 milliseconds to classify an image, or 40 images/second. By increasing the number of GPUs and scoring containers (we only deployed one), we can easily scale out the solution.
With a bit of help from Keras and NumPy, the code does the following (a stripped-down sketch follows the list):
check the image format reported by the Keras back-end: it reports channels_last, which means that, by default, the RGB channels form the last dimension of the image array
load the image; the resulting array has a (224,224,3) shape
our container expects the channels_first format; we use moveaxis to move the last axis to the front; the array now has a (3,224,224) shape
our container expects a first dimension with a batch size; we use expand_dims to end up with a (1,3,224,224) shape
we convert the 4D array to a list and construct the JSON payload
we send the payload to the scoring URI and pass an authorization header
we get a JSON response with two fields: result and time; we print the inference time as reported by the container
from keras.applications.resnet50, we use the decode_predictions function to process the result field; result contains the 1000 values computed by the softmax function in the container; decode_predictions knows the categories and returns the top five
we print the name and probability of the category with the highest probability (item 0)
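Put together, a stripped-down sketch of that flow looks roughly like the code below; the scoring URI and key are placeholders, it additionally uses the requests package, and the real, complete code is in the gist linked above.

import json

import numpy as np
import requests
from keras.applications.resnet50 import decode_predictions
from keras.preprocessing import image

# placeholders: take these values from the deployment settings in the workspace
scoring_uri = "http://<external-ip>/api/v1/service/<deployment-name>/score"
key = "<primary-or-secondary-key>"

# load and resize the image; the resulting array has shape (224, 224, 3)
img = image.load_img("cat.jpg", target_size=(224, 224))
arr = image.img_to_array(img)

# the container expects channels_first: move the channel axis to the front -> (3, 224, 224)
arr = np.moveaxis(arr, -1, 0)

# add the batch dimension -> (1, 3, 224, 224)
arr = np.expand_dims(arr, axis=0)

# serialize the 4D array to JSON and POST it with the authorization header
payload = json.dumps({"data": arr.tolist()})
headers = {"Content-Type": "application/json", "Authorization": "Bearer " + key}
response = requests.post(scoring_uri, data=payload, headers=headers).json()
print("prediction time", response["time"])

# result holds the 1000 softmax values; decode_predictions maps them to categories
probs = np.array(response["result"]).reshape(1, -1)
top = decode_predictions(probs, top=5)[0][0]
print("Probably a:", top[1], top[2])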
What happens when you use a scoring container that uses the CPU? In that case, you could run the container in Azure Container Instances (ACI). Using ACI is much less costly! In ACI with the default setting of 0.1 CPU, it will take around 2 seconds to score an image. Ouch! With a full CPU (in ACI), the scoring time goes down to around 180-220ms per image. To achieve better results, simply increase the number of CPUs. On the Standard_NC6s_v3 Kubernetes node with 6 cores, scoring time with CPU hovers around 60ms.
Conclusion
In this post, you have seen how Azure Machine Learning makes it straightforward to deploy GPU scoring images to a Kubernetes cluster with GPU nodes. The service automatically configures the resource requests for the GPU and maps the NVIDIA drivers to the scoring container. The only thing left to do is to start scoring images with the service. We have seen how easy that is with a bit of help from Keras and NumPy. In practice, always start with CPU scoring and scale out that solution to match your requirements. But if you do need GPUs for scoring, Azure Machine Learning makes it pretty easy to do so!
If you work with containers and Kubernetes, Draft makes it easier to deploy your code while you are in the early stages of development. You use Draft while you are working on your code, before you commit it to version control. The idea is simple:
You have some code written in something like Node.js, Go or another supported language
You then use draft create to containerize the application based on Draft packs; several packs come with the tool and provide a Dockerfile and a Helm chart depending on the development language
You then use draft up to deploy the application to Kubernetes; the application is made accessible via a public URL
Let’s demonstrate how Draft is used, based on a simple Go application that is just a bit more complex than the Go example that comes with Draft. I will use the go-data service that I blogged about earlier. You can find the source code on GitHub. The go-data service is a very simple REST API. By calling the endpoint /data/{deviceid}, it will check if a “device” exists and then actually return no data. Hey, it’s just a sample! The service uses the Gorilla router but also Go Micro to call a device service running in the Kubernetes cluster. If the device service does not run, the data service will just report that the device does not exist.
Note that this post does not cover how to install Draft and its prerequisites such as Helm and a Kubernetes Ingress Controller. You will also need a Kubernetes cluster (I used Azure ACS) and a container registry (I used Docker Hub). I installed all client-side components in the Windows 10 Linux shell (WSL), which works great!
Besides Helm and Draft, the only thing you need on your development box is main.go and an empty glide.yaml file. The first command to run is draft create.
This results in several files and folders being created, based on the Golang Draft pack. Draft detected that you use Go because of the glide.yaml file. No Docker image is built at this point.
Dockerfile: a simple Dockerfile that builds an image based on the golang:onbuild image
draft.toml: the Draft configuration file that contains the name of the application (set randomly), the namespace to deploy to, and whether the folder needs to be watched for changes after you run draft up
chart folder: contains the Helm chart for your application; you might need to make changes here if you want to modify the Kubernetes deployment as we will do soon
When you deploy, Draft does several things. The client packages up the chart and your code and sends it to the Draft server-side component running in Kubernetes. The server component then builds your container image, pushes it to the configured registry, and installs the application in Kubernetes. All of those tasks are performed by the Draft server component, not your client!
In my case, after running draft up, I get the following on my prompt (after the build, push and deploy steps):
In my case, the name of the application was set to exacerbated-ragdoll (in draft.toml). Part of what makes Draft so great is that it then makes the service available using that name and the configured domain. That works because of the following:
During installation of Draft, you need to configure an Ingress Controller in Kubernetes; you can use a Helm chart to make that easy; the Ingress Controller does the magic of mapping the incoming request to the correct application
When you configure Draft for the first time with draft init, you can pass the domain (in my case baeke.info); this requires a wildcard A record (e.g. *.baeke.info) that points to the public IP of the Ingress Controller; note that in my case, I used Azure Container Service (ACS), which makes that IP the public IP of an Azure load balancer that balances traffic between the Ingress Controller instances (nginx)
So, with only my source code and a few simple commands, the application was deployed to Kubernetes and made available on the Internet! There is only one small problem here: if you check my source code, you will see that there is no route for /. The Draft pack for Golang includes a livenessProbe and a readinessProbe on /. The probes are in deployment.yaml, the file that defines the Kubernetes deployment. You will need to change the path in livenessProbe and readinessProbe to point to /data/device, like so:
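Roughly like this, in the container section of chart/templates/deployment.yaml (the port value below is a placeholder; keep whatever the pack generated):

livenessProbe:
  httpGet:
    path: /data/device
    port: 8080          # keep the port the Draft pack generated here
readinessProbe:
  httpGet:
    path: /data/device
    port: 8080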
If you already deployed the application but Draft is still watching the folder, you can simply make the above changes and save the deployment.yaml file (in chart/templates). The container will then be rebuilt and the deployment will be updated. When you now check the service with curl, you should get something like:
curl http://exacerbated-ragdoll.baeke.info/data/device1
Device active: false
Oh and, no data for you!
To actually make the Go Micro features work, we will have to make another change to deployment.yaml. We will need to add an environment variable that instructs our code to find other services developed with Go Micro using the kubernetes registry:
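A sketch of that addition in the container spec, assuming the code picks up the standard Go Micro MICRO_REGISTRY variable:

env:
- name: MICRO_REGISTRY
  value: "kubernetes"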
You can then check if it works by running the curl command again. It should now return the following:
Device active: true
Oh and, no data for you!
Hopefully, you have seen how you can work with Draft from your development box and that you can modify the files generated by Draft to control how your application gets deployed. In our case, we had to modify the health checks to make sure the service can be reached. In addition, we had to add an environment variable because the code uses the Go Micro microservices framework.