Attaching Kubernetes clusters with NVIDIA V100 GPUs to Azure Machine Learning Service

Azure Machine Learning Service allows you to easily deploy compute for training and inference via a machine learning workspace. Although one of the compute types is Kubernetes, the workspace is a bit picky about the node VM sizes. I wanted to use two Standard_NC6s_v3 instances with NVIDIA Tesla V100 GPUs but that was not allowed. Other GPU instances, such as the Standard_NC6 type (K80 GPU) can be deployed from the workspace.

Luckily, you can deploy clusters on your own and then attach the cluster to your Azure Machine Learning workspace. You can create the cluster with the below command. Make sure you ask for a quota increase that allows 12 cores of Standard_NC6s_v3.

az aks create -g RESOURCE_GROUP --generate-ssh-keys --node-vm-size Standard_NC6s_v3 --node-count 2 --disable-rbac --name NAME --admin-username azureuser --kubernetes-version 1.11.5

Before I ran the above command, I created an Azure Machine Learning workspace in a resource group called ml-rg. The above command was run with RESOURCE_GROUP set to ml-rg and NAME set to mlkub. After a few minutes, you should have your cluster up and running. Be mindful of the price of this cluster: GPU instances are not cheap!
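
Once the cluster is running, a quick sanity check from your own machine might look like this (a sketch; it assumes the Azure CLI and kubectl are installed):

az aks get-credentials -g ml-rg -n mlkub
kubectl get nodes

Both Standard_NC6s_v3 nodes should report as Ready.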

Now we can Add Compute to the workspace. In your workspace, navigate to Compute and use the + Add Compute button. Complete the form; the compute name does not need to match the cluster name.

After a while, the Kubernetes cluster should be attached:

Manually deployed cluster attached

Note that detaching a cluster does not remove it. Be sure to remove the cluster manually!

You can now deploy container images to the cluster that take advantage of the GPU of each node. When you deploy an image marked as a GPU image, Azure Machine Learning takes care of all the parameters that allow your container to use the GPU on the Kubernetes node.

The screenshot below shows a deployment of an image that can be used for inference. It uses an ONNX ResNet50v2 model.

Deployment of container for scoring (inference; ResNet50v2)

With the below picture of a cat, the model used by the container guesses it is an Egyptian Cat (it's not, but it is close) with close to 94% certainty.

Egyptian Cat (not)

Using your own compute with the Azure Machine Learning service is very easy to do. The more interesting and somewhat more complicated parts, such as the creation of an inference container that supports GPUs, are topics I will discuss in a later post. In a follow-up post, I will also discuss how to send image data to the scoring container.

Deploying Azure Cognitive Services Containers with IoT Edge

Introduction

Azure Cognitive Services is a collection of APIs that make your applications smarter. Some of those APIs are listed below:

  • Vision: image classification, face detection (including emotions), OCR
  • Language: text analytics (e.g. key phrase or sentiment analysis), language detection and translation

To use one of the APIs you need to provision it in an Azure subscription. After provisioning, you will get an endpoint and API key. Every time you want to classify an image or detect sentiment in a piece of text, you will need to post an appropriate payload to the cloud endpoint and pass along the API key as well.
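
For example, provisioning a Text Analytics resource with the Azure CLI might look like this (a sketch; the resource and group names are illustrative and F0 is the free tier):

az cognitiveservices account create -n my-textanalytics -g my-rg --kind TextAnalytics --sku F0 -l westeurope --yes
az cognitiveservices account show -n my-textanalytics -g my-rg
az cognitiveservices account keys list -n my-textanalytics -g my-rg

The show command returns the endpoint and the keys list command returns the API keys.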

What if you want to use these services but you do not want to pass your payload to a cloud endpoint for compliance or latency reasons? In that case, the Cognitive Services containers can be used. In this post, we will take a look at the Text Analytics containers, specifically the one for Sentiment Analysis. Instead of deploying the container manually, we will deploy the container with IoT Edge.

IoT Edge Configuration

To get started, create an IoT Hub. The free tier will do just fine. When the IoT Hub is created, create an IoT Edge device. Next, configure your actual edge device to connect to IoT Hub with the connection string of the device you created in IoT Hub. Microsoft has a great tutorial that covers all of the above, using a virtual machine in Azure as the edge device. The tutorial I linked to is the one for an edge device running Linux. When finished, the device should report its status to IoT Hub.

If you want to install IoT Edge on an existing device like a laptop, follow the procedure for Linux x64.

Once you have your edge device up and running, you can use the following command to check its status: sudo systemctl status iotedge.

Deploy Sentiment Analysis container

With the IoT Edge daemon up and running, we can deploy the Sentiment Analysis container. In IoT Hub, select your IoT Edge device and select Set modules.

In Set Modules you have the ability to configure the modules for this specific device. Modules are always deployed as containers and they do not have to be specifically designed or developed for use with IoT Edge. In the three-step wizard, add the Sentiment Analysis container in the first step: click Add, select IoT Edge Module and provide the settings described below.

Although the container image can be pulled freely from the Image URI, the container needs to be configured with billing information and an API key. In the Billing environment variable, specify the endpoint URL of the API you provisioned in the cloud. In ApiKey, set your API key. Note that the container always needs to be connected to the cloud to verify that you are allowed to use the service. Remember that although your payload is not sent to the cloud, your container usage is. The full container create options are listed below:

{
  "Env": [
    "Eula=accept",
    "Billing=https://westeurope.api.cognitive.microsoft.com/text/analytics/v2.0",
    "ApiKey=<yourKEY>"
  ],
  "HostConfig": {
    "PortBindings": {
      "5000/tcp": [
        {
          "HostPort": "5000"
        }
      ]
    }
  }
}

In HostConfig we ask the container runtime (Docker) to map port 5000 of the container to port 5000 of the host. You can specify other create options as well.
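
Outside of IoT Edge, the equivalent plain Docker command would look something like the one below. Note that the image URI is an assumption on my part; check the Cognitive Services container documentation for the exact registry path.

docker run -d -p 5000:5000 -e Eula=accept -e "Billing=https://westeurope.api.cognitive.microsoft.com/text/analytics/v2.0" -e ApiKey=<yourKEY> mcr.microsoft.com/azure-cognitive-services/sentiment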

On the next page, you can configure routing between IoT Edge modules. Because we do not use actual IoT Edge modules, leave the routing configuration at its default.

Now move to the last page in the Set Modules wizard to review the configuration and click Submit.

Give the deployment some time to finish. After a while, check your edge device with the following command: sudo iotedge list. Your TextAnalytics container should be listed. Alternatively, use sudo docker ps to list the Docker containers on your edge device.

Testing the Sentiment Analysis container

If everything went well, you should be able to browse to http://localhost:5000/swagger to see the available endpoints. Open Sentiment Analysis to try out a sample.

You can use curl to test as well:

curl -X POST "http://localhost:5000/text/analytics/v2.0/sentiment" -H  "accept: application/json" -H  "Content-Type: application/json-patch+json" -d "{  \"documents\": [    {      \"language\": \"en\",      \"id\": \"1\",      \"text\": \"I really really despise this product!! DO NOT BUY!!\"    }  ]}"
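
The payload in that command, pretty-printed for readability:

{
  "documents": [
    {
      "language": "en",
      "id": "1",
      "text": "I really really despise this product!! DO NOT BUY!!"
    }
  ]
}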

As you can see, the API expects a JSON payload with a documents array. Each document object has three fields: language, id and text. When you run the above command, the result is:

{"documents":[{"id":"1","score":0.0001703798770904541}],"errors":[]}

In this case, the text I really really despise this product!! DO NOT BUY!! clearly results in a very bad score. As you might have guessed, 0 is the absolute worst and 1 is the absolute best.
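
If you script against the API, a tool like jq can pull the score out of the response. A quick sketch with a positive sentence this time (the score should be close to 1):

curl -s -X POST "http://localhost:5000/text/analytics/v2.0/sentiment" -H "Content-Type: application/json-patch+json" -d "{\"documents\": [{\"language\": \"en\", \"id\": \"1\", \"text\": \"What a wonderful product!\"}]}" | jq '.documents[0].score'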

Just for fun, I created a small Go program to test the API.

The Go program can be found here: https://github.com/gbaeke/sentiment. You can download the executable for Linux with: wget https://github.com/gbaeke/sentiment/releases/download/v0.1/ta. Make ta executable and use ./ta --help for help with the parameters, as shown below.
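
In other words:

wget https://github.com/gbaeke/sentiment/releases/download/v0.1/ta
chmod +x ta
./ta --help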

Summary

IoT Edge is a great way to deploy containers to edge devices running Linux or Windows. Besides deploying actual IoT Edge modules, you can deploy any container you want. In this post, we deployed a Cognitive Services container that does Sentiment Analysis at the edge.

Deploying Azure resources using webhookd

In the previous blog post, I discussed adding SSL to webhookd. In this post, I will briefly show how to use this solution to deploy Azure resources.

To run webhookd, I deployed a small Standard_B1s machine (1GB RAM, 1 vCPU) with a system-assigned managed identity. After deployment, information about the managed identity is available via the Identity link.
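
For reference, such a VM could be created with a single CLI command; a sketch with illustrative names (UbuntuLTS was a valid image alias at the time of writing):

az vm create -g webhookd-rg -n webhookd-vm --image UbuntuLTS --size Standard_B1s --assign-identity --admin-username azureuser --generate-ssh-keys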

Code running on a machine with a managed identity needs to obtain a token for that identity before it can call Azure APIs. With curl, you would issue the following command:

curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F' -H Metadata:true -s

The response would be JSON that contains a field called access_token. You could parse out the access_token and then use it in a call to the Azure Resource Manager APIs, passing it in the Authorization header. Full details about acquiring these tokens can be found here. On that page, you will find details about acquiring the token with Go, JavaScript and several other languages.
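
Putting that together in a shell, a quick sketch with curl and jq (the ARM api-version is just an example):

token=$(curl -s 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F' -H Metadata:true | jq -r '.access_token')
curl -s -H "Authorization: Bearer $token" "https://management.azure.com/subscriptions?api-version=2020-01-01"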

Because we are using webhookd and shell scripts, the Azure CLI is the ideal way to create Azure resources. The Azure CLI can easily authenticate with the managed identity using a simple command: az login --identity. Here's a shell script that uses it to create a virtual machine:

#!/bin/bash

echo "Authenticating...`az login --identity`"

echo "Creating the resource group...`az group create -n $rg -l westeurope`"

echo "Creating the vm...`az vm create --no-wait --size Standard_B1s --resource-group $rg --name $vmname --image win2016datacenter --admin-username azureuser --admin-password $pw`"

The script expects three parameters: rg, vmname and pw. webhookd passes HTTP query parameters to the script as variables with the same name. If the above script were saved as create.sh in the ./scripts/vm folder, I could call webhookd as follows:

curl --user api -XPOST 'https://<public_server_dns>/vm/create?vmname=myvm&rg=myrg&pw=Abcdefg$$$$!!!!'

Note the single quotes around the URL: they prevent the shell from expanding the $ and ! characters in the password.

The response to the above call would contain the output from the three az commands. The az login command would output the following:

data: {
data:   "environmentName": "AzureCloud",
data:   "id": "<id>",
data:   "isDefault": true,
data:   "name": "<subscription name>",
data:   "state": "Enabled",
data:   "tenantId": "<tenant_id>",
data:   "user": {
data:     "assignedIdentityInfo": "MSI",
data:     "name": "systemAssignedIdentity",
data:     "type": "servicePrincipal"
data:   }
data: }

Notice the user object, which clearly indicates we are using a system-assigned managed identity. In my case, the managed identity has the contributor role on an Azure subscription used for testing. With that role, the shell script has the required access rights to deploy the virtual machine.
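
For reference, such a role assignment could be created with a command like the one below (a sketch; the principal ID of the managed identity is shown on the VM's Identity page):

az role assignment create --assignee <principal_id> --role Contributor --scope /subscriptions/<subscription_id>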

As you can see, it is very easy to use webhookd to deploy Azure resources if the Azure virtual machine that runs webhookd has a managed identity with the required access rights.

Using certmagic to add SSL to webhookd

A while ago, I stumbled upon https://github.com/ncarlier/webhookd. It is a simple webhook server, written in Go, that can execute shell scripts. To use it, simply install it on a Linux box and execute it. By default, the executable looks at the ./scripts folder and maps each shell script to a URL you can call. It is well documented so do take a look at the GitHub page for full details on its configuration.

Out of the box, webhookd supports basic authentication if you supply a .htpasswd file. It does not, however, support SSL. That can be fixed in several ways though. In my case, I wanted one executable that supports SSL with Let’s Encrypt certificates. As it turns out, there is a great solution for that: https://github.com/mholt/certmagic.

To simplify using webhookd together with certmagic, I forked the webhookd repo and added certmagic support. The fork is here: https://github.com/gbaeke/webhookd. To use it, run go get github.com/gbaeke/webhookd and work from there. The fork could be improved by adding extra parameters for the e-mail address and DNS name. For now, change the code by following the steps below (a minimal build sequence follows the list):

  • In main.go, search for mail@mail.com and replace it with a valid e-mail address; although not required, it is good practice to supply an e-mail address to the folks at Let's Encrypt
  • In main.go, search for www.example.com and replace it with a valid DNS name
  • The DNS name you use needs to resolve to the IP address of the machine that runs webhookd; it should be a public IP address because the code uses the HTTP challenger
  • The machine that runs webhookd should expose port 80 and port 443
  • If you want to use the Let’s Encrypt staging servers during testing (recommended) change certmagic.LetsEncryptProductionCA to certmagic.LetsEncryptStagingCA
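
Putting it together, a minimal fetch-edit-build cycle might look like this (a sketch that assumes the GOPATH-style layout that was common at the time):

go get github.com/gbaeke/webhookd
cd $GOPATH/src/github.com/gbaeke/webhookd
# edit main.go as described above, then:
go build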

In my case, the machine that runs webhookd is a small Linux machine running on Microsoft Azure. The DNS name is actually a CNAME record that is an alias for the DNS name of the virtual machine (e.g. vmname.westeurope.cloudapp.azure.com). With the build done, just execute webhookd. The first time you run it, certmagic will notice there is no certificate and will start to talk to Let's Encrypt using the ACME protocol. By default, HTTP verification is used, which just means Let's Encrypt will tell certmagic to host a file over plain HTTP. When Let's Encrypt can retrieve that file, it concludes you must be the owner of the DNS name used in the certificate. The certificate will be issued and stored on the file system under $HOME/.local/share/certmagic/acme.

You will get some messages regarding the certificate request process. On subsequent runs, when a valid cached certificate is found, you will just get the Serving HTTP->HTTPS message.


Note that you will not be able to bind to low ports like 80 and 443 as a non-root user. So either run webhookd as root or use setcap, for instance: sudo setcap cap_net_bind_service=+ep /path/to/webhookd. After running the setcap command, you can run webhookd as a non-root user and it will be able to bind to ports 80 and 443.

I also have basic authentication enabled for a user called api. To test the configuration, I can use curl against the echo script that ships with the repository (more on that below); for instance:

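
curl --user api -XPOST https://<public_server_dns>/echo

curl will prompt for the password configured in the .htpasswd file.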

Due to the use of the Let's Encrypt production CA, there is no need to use the --insecure flag with curl. The certificate is fully trusted on my (Windows) machine. If you pulled down the complete repository, the scripts folder contains a shell script called echo.sh. That script is automatically made available as /echo. Everything the script echoes to stdout is used as output of the HTTP call. Simple but effective!

In a follow-up post, we will take a look at using webhookd to deploy Azure resources using a managed identity and the Azure CLI. Stay tuned!
