Simple Azure AD Authentication in a single page application (SPA)

Adding Azure AD integration to a website is often confusing if you are just getting started. Let’s face it, not everybody has the opportunity to dig deep into such topics. For https://deploy.baeke.info, I wanted to enable Azure AD authentication so that only a select group of users in our AD tenant can call the back-end webhooks exposed by webhookd. The architecture of the application looks like this:

Client to webhook

The process is as follows:

  • Load the client from https://deploy.baeke.info
  • Client obtains a token from Azure Active Directory; the user will have to authenticate; in our case that means that a second factor needs to be provided as well
  • When the user performs an action that invokes a webhook, the call is sent to API Management
  • API Management verifies the token and passes the request to webhookd over HTTPS with basic authentication
  • The response is received by API Management which passes it unmodified to the client

I know you are an observant reader who is probably thinking: “why not present the token to webhookd?”. That’s possible, but then I would not have had a reason to use API Management! 😉

Before we begin, you might want some background information about what we are going to do. Take a look at this excellent YouTube video that explains topics such as OAuth 2.0 and OpenID Connect in an easy-to-understand format.

Create an application in Azure AD

The first step is to create a new application registration. You can do this from https://aad.portal.azure.com. In Azure Active Directory, select App registrations or use the new App registrations (Preview) experience.

For single page applications (SPAs), the application type should be Web app / API. As the App ID URI and Home page URL, I used https://deploy.baeke.info.

In my app, a user will authenticate to Azure AD with a Login button. Clicking that button brings the user to a Microsoft hosted page that asks for credentials:

Providing user credentials

Naturally, this implies that the authentication process, when finished, needs to find its way back to the application. In that process, it will also bring along the obtained authentication token. To configure this, specify the Reply URLs. If you also develop on your local machine, include the local URL of the app as well:

Reply URLs of the registered app

For a SPA, you need to set an additional option in the application manifest (via the Manifest button):

"oauth2AllowImplicitFlow": true

This implicit flow is well explained in the above video and also here.

This is basically all you have to do for this specific application. In other cases, you might want to grant access from this application to other applications such as an API. Take a look at this post for more information about calling the Graph API or your own API.

We will just present the token obtained by the client to API Management. In turn, API Management will verify the token. If it does not pass the verification steps, a 401 error will be returned. We will look at API Management in a later post.

A bit of client code

Just browse to https://deploy.baeke.info and view the source. Authentication is performed with ADAL for JavaScript. ADAL stands for the Active Directory Authentication Library. The library is loaded from the CDN.

This is a simple Vue application, so we have a Vue instance for data and methods. In the Vue instance data, authContext is set up via a call to new AuthenticationContext. The clientId is the Application ID of the registered app we created above:

authContext: new AuthenticationContext({
  clientId: '1fc9093e-8a95-44f8-b524-45c5d460b0d8',
  postLogoutRedirectUri: window.location
})

To authenticate, the Login button’s click handler calls authContext.login(). The login method uses a redirect. It is also possible to use a pop-up window by setting popUp: true in the object passed to new AuthenticationContext() above. Personally, I do not like that approach though.

In the created lifecycle hook of the Vue instance, there is some code that handles the callback. When not in the callback, getCachedUser() is used to check if the user is logged in. If she is, the token is obtained via acquireToken() and stored in the token variable of the Vue instance. The acquireToken() method allows the application to obtain tokens silently, without prompting the user again. The first parameter of acquireToken() is the Application ID of the registered app.

Note that the token (an ID token) is not encrypted. You can paste the token in https://jwt.ms and look inside.

Calling the back-end API

In this application, the calls go to API Management. Here is an example of a call with axios:

axios.post('https://geba.azure-api.net/rg/create?rg=' + this.createrg.rg, null, this.getAxiosConfig(this.token))
  .then(function(result) {
    console.log("Got response...")
    self.response = result.data;
  })
  .catch(function(error) {
    console.log("Error calling webhook: " + error)
  })
...

The third parameter is a call to getAxiosConfig that passes the token. getAxiosConfig uses the token to create the Authorization header:

getAxiosConfig: function(token) {
  const config = {
    headers: {
      "authorization": "bearer " + token
    }
  }
  return config
}

As discussed earlier, the call goes to API Management which will verify the token before allowing a call to webhookd.

Conclusion

With the source of https://deploy.baeke.info and this post, it should be fairly straightforward to enable Azure AD Authentication in a simple single page web application. Note that the code is kept as simple as possible and does not cover any edge cases. In a next post, we will take a look at API Management.

ResNet50v2 classification in Go with a local container

To quickly go to the code, go here. Otherwise, keep reading…

In a previous blog post, I wrote about classifying images with the ResNet50v2 model from the ONNX Model Zoo. In that post, the container ran on a Kubernetes cluster with GPU nodes. The nodes had an NVIDIA V100 GPU. The actual classification was done with a simple Python script with help from Keras and NumPy. Each inference took around 25 milliseconds.

In this post, we will do two things:

  • run the scoring container (CPU) on a local machine that runs Docker
  • perform the scoring (classification) in Go

Installing the scoring container locally

I pushed the scoring container image with the ONNX ResNet50v2 model to the following location: https://cloud.docker.com/u/gbaeke/repository/docker/gbaeke/onnxresnet50v2. Run the container with the following command:

docker run -d -p 5001:5001 gbaeke/onnxresnet50v2

The container will be pulled and started. The scoring URI is at http://localhost:5001/score.

Note that in the previous post, Azure Machine Learning deployed two containers: the scoring container (the one described above) and a front-end container. In that scenario, the front-end container handles the HTTP POST requests (optionally with SSL) and routes them to the actual scoring container.

The scoring container accepts the same payload as the front-end container. That means it can be used on its own, as we are doing now.

Note that you can also use IoT Edge, as explained in an earlier post. That actually shows how easy it is to push AI models to the edge and use them locally, depending on your business case.

Scoring with Go

To classify images, I wrote a small Go program. Although there are some scientific libraries for Go, they are not really needed in this case. That does mean we have to create the 4D tensor payload and interpret the softmax result manually. If you check the code, you will see that it is not awfully difficult.

The code can be found in the following GitHub repository: https://github.com/gbaeke/resnet-score.

Remember that this model expects the input as a 4D tensor with the following dimensions:

  • dimension 0: batch (we only send one image here)
  • dimension 1: channels (one each for R, G and B)
  • dimension 2: height
  • dimension 3: width

The 4D tensor needs to be serialized to JSON in a field called data. We send that data with HTTP POST to the scoring URI at http://localhost:5001/score.

The response from the container will be JSON with two fields: a result field with the 1000 softmax values and a time field with the inference time. We can use the following two structs for marshaling and unmarshaling:

Input and output of the model
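The structs appeared as an image in the original post. Below is a minimal sketch of what they could look like, assuming the data, result and time fields described above; the exact field types in the repository may differ:

// InputData is the payload for the scoring URI: the 4D BCHW tensor,
// serialized as nested JSON arrays in a field called data
type InputData struct {
    Data [1][3][224][224]uint8 `json:"data"`
}

// OutputData is the response from the container: the softmax values
// and the inference time in seconds
type OutputData struct {
    Result [][]float64 `json:"result"`
    Time   float64     `json:"time"`
}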

Note that this model expects pictures to be scaled to 224 by 224, as reflected by the height and width dimensions of the uint8 array. The rest of the code is summarized below; a sketch that puts the steps together follows the list:

  • read the image; the path of the image is passed to the code via the -image command line parameter
  • the image is resized with the github.com/disintegration/imaging package (linear method)
  • the 4D tensor is populated by iterating over all pixels of the image, extracting r, g and b and placing them in the BCHW array; note that the r, g and b values are returned as 16-bit values and are scaled down to fit in a uint8
  • construct the input which is a struct of type InputData
  • marshal the InputData struct to JSON
  • POST the JSON to the local scoring URI
  • read the HTTP response and unmarshal the response in a struct of type OutputData
  • find the highest probability in the result and note the index where it was found
  • read the 1000 ImageNet categories from imagenet_class_index.json and marshal the JSON into a map of string arrays
  • print the category using the index with the highest probability and the map
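
Putting the steps together, a sketch of the program could look like this. It reuses the InputData and OutputData structs sketched above and omits the ImageNet category lookup from imagenet_class_index.json; the actual repository code may differ:

package main

import (
    "bytes"
    "encoding/json"
    "flag"
    "fmt"
    "log"
    "net/http"

    "github.com/disintegration/imaging"
)

func main() {
    imgPath := flag.String("image", "", "path to the image to classify")
    flag.Parse()

    // read the image and resize it to 224x224 with the linear filter
    src, err := imaging.Open(*imgPath)
    if err != nil {
        log.Fatal(err)
    }
    img := imaging.Resize(src, 224, 224, imaging.Linear)

    // populate the BCHW tensor; RGBA() returns 16-bit channel values,
    // so shift right by 8 bits to scale them down to a uint8
    var input InputData
    for y := 0; y < 224; y++ {
        for x := 0; x < 224; x++ {
            r, g, b, _ := img.At(x, y).RGBA()
            input.Data[0][0][y][x] = uint8(r >> 8)
            input.Data[0][1][y][x] = uint8(g >> 8)
            input.Data[0][2][y][x] = uint8(b >> 8)
        }
    }

    // marshal the tensor and POST it to the local scoring URI
    payload, err := json.Marshal(input)
    if err != nil {
        log.Fatal(err)
    }
    resp, err := http.Post("http://localhost:5001/score", "application/json", bytes.NewReader(payload))
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    // unmarshal the response and find the index of the highest probability
    var output OutputData
    if err := json.NewDecoder(resp.Body).Decode(&output); err != nil {
        log.Fatal(err)
    }
    best, bestIdx := 0.0, 0
    for i, p := range output.Result[0] {
        if p > best {
            best, bestIdx = p, i
        }
    }
    fmt.Printf("Highest prob is %v at %v (inference time: %v)\n", best, bestIdx, output.Time)
}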

What happens when we score the image below?

What is this thing?

Running the code gives the following result:

$ ./class -image images/cassette.jpg

Highest prob is 0.9981583952903748 at 481 (inference time: 0.3309464454650879 )
Probably [n02978881 cassette

The inference time is 1/3 of a second on my older Linux laptop with a dual-core i7.

Try it yourself by running the container and the class program. Download it from here (Linux).

Deploying Azure Cognitive Services Containers with IoT Edge

Introduction

Azure Cognitive Services is a collection of APIs that make your applications smarter. Some of those APIs are listed below:

  • Vision: image classification, face detection (including emotions), OCR
  • Language: text analytics (e.g. key phrase or sentiment analysis), language detection and translation

To use one of the APIs you need to provision it in an Azure subscription. After provisioning, you will get an endpoint and API key. Every time you want to classify an image or detect sentiment in a piece of text, you will need to post an appropriate payload to the cloud endpoint and pass along the API key as well.

What if you want to use these services but you do not want to pass your payload to a cloud endpoint for compliance or latency reasons? In that case, the Cognitive Services containers can be used. In this post, we will take a look at the Text Analytics containers, specifically the one for Sentiment Analysis. Instead of deploying the container manually, we will deploy the container with IoT Edge.

IoT Edge Configuration

To get started, create an IoT Hub. The free tier will do just fine. When the IoT Hub is created, create an IoT Edge device. Next, configure your actual edge device to connect to IoT Hub with the connection string of the device you created in IoT Hub. Microsoft has a great tutorial for all of the above, using a virtual machine in Azure as the edge device. The tutorial I linked to is the one for an edge device running Linux. When finished, the device should report its status to IoT Hub.

If you want to install IoT Edge on an existing device like a laptop, follow the procedure for Linux x64.

Once you have your edge device up and running, you can use the following command to check its status: sudo systemctl status iotedge.

Deploy Sentiment Analysis container

With the IoT Edge daemon up and running, we can deploy the Sentiment Analysis container. In IoT Hub, select your IoT Edge device and select Set modules.

In Set Modules, you have the ability to configure the modules for this specific device. Modules are always deployed as containers; they do not have to be specifically designed or developed for use with IoT Edge. In the three-step wizard, add the Sentiment Analysis container in the first step. Click Add and then select IoT Edge Module. Provide the following settings:

Although the container image can be pulled freely from the Image URI, the container needs to be configured with billing info and an API key. In the Billing environment variable, specify the endpoint URL of the API you provisioned in the cloud. In ApiKey, set your API key. Note that the container always needs a connection to the cloud to verify that you are allowed to use the service. Remember that although your payload is not sent to the cloud, your container usage is. The full container create options are listed below:

{
  "Env": [
    "Eula=accept",
    "Billing=https://westeurope.api.cognitive.microsoft.com/text/analytics/v2.0",
    "ApiKey=<yourKEY>"
  ],
  "HostConfig": {
    "PortBindings": {
      "5000/tcp": [
        {
          "HostPort": "5000"
        }
      ]
    }
  }
}

In HostConfig we ask the container runtime (Docker) to map port 5000 of the container to port 5000 of the host. You can specify other create options as well.

On the next page, you can configure routing between IoT Edge modules. Because we do not use actual IoT Edge modules, leave the default routing configuration in place.

Now move to the last page in the Set Modules wizard to review the configuration and click Submit.

Give the deployment some time to finish. After a while, check your edge device with the following command: sudo iotedge list. Your TextAnalytics container should be listed. Alternatively, use sudo docker ps to list the Docker containers on your edge device.

Testing the Sentiment Analysis container

If everything went well, you should be able to go to http://localhost:5000/swagger to see the available endpoints. Open Sentiment Analysis to try out a sample.

You can use curl to test as well:

curl -X POST "http://localhost:5000/text/analytics/v2.0/sentiment" -H  "accept: application/json" -H  "Content-Type: application/json-patch+json" -d "{  \"documents\": [    {      \"language\": \"en\",      \"id\": \"1\",      \"text\": \"I really really despise this product!! DO NOT BUY!!\"    }  ]}"

As you can see, the API expects a JSON payload with a documents array. Each document object has three fields: language, id and text. When you run the above command, the result is:

{"documents":[{"id":"1","score":0.0001703798770904541}],"errors":[]}

In this case, the text I really really despise this product!! DO NOT BUY!! clearly results in a very bad score. As you might have guessed, 0 is the absolute worst and 1 is the absolute best.

Just for fun, I created a small Go program to test the API.
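
The program itself was shown as a screenshot in the original post. A minimal sketch along the same lines could look like this; it hardcodes the sample text and the local endpoint instead of taking parameters:

package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

// request and response shapes of the sentiment endpoint
type document struct {
    Language string `json:"language"`
    ID       string `json:"id"`
    Text     string `json:"text"`
}

type sentimentRequest struct {
    Documents []document `json:"documents"`
}

type sentimentResponse struct {
    Documents []struct {
        ID    string  `json:"id"`
        Score float64 `json:"score"`
    } `json:"documents"`
}

func main() {
    // build the documents array the API expects
    req := sentimentRequest{Documents: []document{
        {Language: "en", ID: "1", Text: "I really really despise this product!! DO NOT BUY!!"},
    }}
    payload, err := json.Marshal(req)
    if err != nil {
        log.Fatal(err)
    }

    // POST the payload to the container running on the edge device
    resp, err := http.Post("http://localhost:5000/text/analytics/v2.0/sentiment",
        "application/json", bytes.NewReader(payload))
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    // print the sentiment score per document (0 = absolute worst, 1 = absolute best)
    var result sentimentResponse
    if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
        log.Fatal(err)
    }
    for _, d := range result.Documents {
        fmt.Printf("document %s: score %f\n", d.ID, d.Score)
    }
}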

The Go program can be found here: https://github.com/gbaeke/sentiment. You can download the executable for Linux with: wget https://github.com/gbaeke/sentiment/releases/download/v0.1/ta. Make ta executable and use ./ta --help for help with the parameters.

Summary

IoT Edge is a great way to deploy containers to edge devices running Linux or Windows. Besides deploying actual IoT Edge modules, you can deploy any container you want. In this post, we deployed a Cognitive Services container that does Sentiment Analysis at the edge.

Using certmagic to add SSL to webhookd

A while ago, I stumbled upon https://github.com/ncarlier/webhookd. It is a simple webhook server, written in Go, that can execute shell scripts. To use it, simply install it on a Linux box and execute it. By default, the executable looks at the ./scripts folder and maps each shell script to a URL you can call. It is well documented so do take a look at the GitHub page for full details on its configuration.

Out of the box, webhookd supports basic authentication if you supply a .htpasswd file. It does not, however, support SSL. That can be fixed in several ways though. In my case, I wanted one executable that supports SSL with Let’s Encrypt certificates. As it turns out, there is a great solution for that: https://github.com/mholt/certmagic.

To simplify using webhookd together with certmagic, I forked the webhookd repo and added certmagic support. The fork is here: https://github.com/gbaeke/webhookd. To use it, run go get github.com/gbaeke/webhookd and work from there. The fork could be improved by adding extra parameters for the e-mail address and DNS name. For now, change the code by following the steps below; a sketch of the resulting setup follows the list:

  • In main.go, search for mail@mail.com and replace it with a valid e-mail address; although not required, it is good practice to supply an e-mail address to the folks at Let’s Encrypt
  • In main.go, search for www.example.com and replace it with a valid DNS name
  • The DNS name you use needs to resolve to the IP address of the machine that runs webhookd; it should be a public IP address because the code uses the HTTP challenger
  • The machine that runs webhookd should expose port 80 and port 443
  • If you want to use the Let’s Encrypt staging servers during testing (recommended) change certmagic.LetsEncryptProductionCA to certmagic.LetsEncryptStagingCA
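
Conceptually, the certmagic setup in main.go boils down to something like the sketch below. It uses the package-level API certmagic had at the time of writing; newer releases moved these settings to certmagic.DefaultACME and the module to github.com/caddyserver/certmagic:

package main

import (
    "log"
    "net/http"

    "github.com/mholt/certmagic"
)

func main() {
    certmagic.Agreed = true                       // agree to the CA's terms of service
    certmagic.Email = "mail@mail.com"             // replace with a valid e-mail address
    certmagic.CA = certmagic.LetsEncryptStagingCA // use the staging CA while testing

    mux := http.NewServeMux()
    mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("hello over HTTPS"))
    })

    // requests a certificate for the DNS name via the HTTP challenge and
    // serves the handler on ports 80 (HTTP->HTTPS redirect) and 443
    if err := certmagic.HTTPS([]string{"www.example.com"}, mux); err != nil {
        log.Fatal(err)
    }
}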

In my case, the machine that runs webhookd is a small Linux machine running on Microsoft Azure. The DNS name is actually a CNAME record that is an alias for the DNS name of the virtual machine (e.g. vmname.westeurope.cloudapp.azure.com). You are now ready to build webhookd with go build. When it’s ready, just execute webhookd. When you run this the first time, certmagic will notice there is no certificate and will start to talk to Let’s Encrypt using the ACME protocol. By default, HTTP verification is used which just means Let’s Encrypt will tell certmagic to host a file over plain HTTP. When Let’s Encrypt can retrieve that file, it concludes you must be the owner of the DNS name used in the certificate. The certificate will be issued and stored on the file system under $home/.local/share/certmagic/acme.

You will get some messages regarding the certificate request process. When a cached certificate is found and it is still valid, you will just get the Serving HTTP->HTTPS message.


Note that you will not be able to bind to low ports like 80 and 443 as a non-root user. So either run webhookd as root or use setcap. For instance sudo setcap cap_net_bind_service=+ep /path/to/webhookd. After running the setcap command, you can run webhookd as a non-root user and it will be able to bind to port 80 and 443.

I also have basic authentication enabled for a user called api. To test the configuration, I can use curl with the credentials of that user.


Due to the use of the Let’s Encrypt production CA, there is no need to use the --insecure flag with curl. The certificate is fully trusted on my (Windows) machine. If you pulled down the complete repository, the scripts folder contains a shell script called echo.sh. That script is automatically made available as /echo. Everything the script echoes to stdout is used as output of the HTTP call. Simple but effective!

In a follow-up post, we will take a look at using webhookd to deploy Azure resources using a managed identity and the Azure CLI. Stay tuned!

Draft: a simpler way to deploy to Kubernetes during development

If you work with containers and work with Kubernetes, Draft makes it easier to deploy your code while you are in the earlier development stages. You use Draft while you are working on your code but before you commit it to version control. The idea is simple:

  • You have some code written in something like Node.js, Go or another supported language
  • You then use draft create to containerize the application based on Draft packs; several packs come with the tool and provide a Dockerfile and a Helm chart depending on the development language
  • You then use draft up to deploy the application to Kubernetes; the application is made accessible via a public URL

Let’s demonstrate how Draft is used, based on a simple Go application that is just a bit more complex than the Go example that comes with Draft. I will use the go-data service that I blogged about earlier. You can find the source code on GitHub. The go-data service is a very simple REST API. By calling the endpoint /data/{deviceid}, it will check if a “device” exists and then actually return no data. Hey, it’s just a sample! The service uses the Gorilla router but also Go Micro to call a device service running in the Kubernetes cluster. If the device service does not run, the data service will just report that the device does not exist.

Note that this post does not cover how to install Draft and its prerequisites like Helm and a Kubernetes Ingress Controller. You will also need a Kubernetes cluster (I used Azure ACS) and a container registry (I used Docker Hub). I installed all client-side components in the Windows 10 Linux shell which works great!

The only thing you need on your development box, which has Helm and Draft installed, is main.go and an empty glide.yaml file. The first command to run is draft create.

This results in several files and folders being created, based on the Golang Draft pack. Draft detected you used Go because of glide.yaml. No Docker container is created at this point.

  • Dockerfile: a simple Dockerfile that builds an image based on the golang:onbuild image
  • draft.toml: the Draft configuration file that contains the name of the application (set randomly), the namespace to deploy to and whether the folder needs to be watched for changes after you do draft up
  • chart folder: contains the Helm chart for your application; you might need to make changes here if you want to modify the Kubernetes deployment as we will do soon

When you deploy, Draft does several things. It packages up the chart and your code and sends everything to the Draft server-side component running in Kubernetes. That component then builds your container, pushes it to the configured registry and installs the application in Kubernetes. All those tasks are performed by the Draft server component, not by your client!

In my case, running draft up performs the build, push and deploy steps and then returns to the prompt.


In my case, the name of the application was set to exacerbated-ragdoll (in draft.toml). Part of what makes Draft so great is that it then makes the service available using that name and the configured domain. That works because of the following:

  • During installation of Draft, you need to configure an Ingress Controller in Kubernetes; you can use a Helm chart to make that easy; the Ingress Controller does the magic of mapping the incoming request to the correct application
  • When you configure Draft for the first time with draft init you can pass the domain (in my case baeke.info); this requires a wildcard A record (e.g. *.baeke.info) that points to the public IP of the Ingress Controller; note that in my case, I used Azure Container Services which makes that IP the public IP of an Azure load balancer that load balances traffic between the Ingress Controller instances (nginx)

So, with only my source code and a few simple commands, the application was deployed to Kubernetes and made available on the Internet! There is only one small problem here. If you check my source code, you will see that there is no route for /. The Draft pack for Golang includes a livenessProbe on / and a readinessProbe on /. The probes are in deployment.yaml which is the file that defines the Kubernetes deployment. You will need to change the path in livenessProbe and readinessProbe to point to /data/device like so:

- containerPort: {{ .Values.service.internalPort }}
livenessProbe:
  httpGet:
    path: /data/device
    port: {{ .Values.service.internalPort }}
readinessProbe:
  httpGet:
    path: /data/device
    port: {{ .Values.service.internalPort }}
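
As an aside, the probes on / fail because the go-data service only registers the /data/{deviceid} route. A minimal sketch of such a Gorilla router setup (not the actual go-data source) looks like this:

package main

import (
    "fmt"
    "log"
    "net/http"

    "github.com/gorilla/mux"
)

func main() {
    r := mux.NewRouter()
    // only /data/{deviceid} is registered, so probes on / return a 404;
    // that is why the probe paths in deployment.yaml must be changed
    r.HandleFunc("/data/{deviceid}", func(w http.ResponseWriter, req *http.Request) {
        deviceID := mux.Vars(req)["deviceid"]
        _ = deviceID // the real service checks the device via Go Micro here
        fmt.Fprintln(w, "Device active:  false")
        fmt.Fprintln(w, "Oh and, no data for you!")
    })
    log.Fatal(http.ListenAndServe(":8080", r))
}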

If you already deployed the application but Draft is still watching the folder, you can simply make the above changes and save the deployment.yaml file (in chart/templates). The container will then be rebuilt and the deployment will be updated. When you now check the service with curl, you should get something like:

curl http://exacerbated-ragdoll.baeke.info/data/device1

Device active:  false
Oh and, no data for you!

To actually make the Go Micro features work, we will have to make another change to deployment.yaml. We will need to add an environment variable that instructs our code to find other services developed with Go Micro using the kubernetes registry:

- name: {{ .Chart.Name }}
  image: "{{ .Values.image.registry }}/{{ .Values.image.org }}/{{ .Values.image.name }}:{{ .Values.image.tag }}"
  imagePullPolicy: {{ .Values.image.pullPolicy }}
  env:
  - name: MICRO_REGISTRY
    value: kubernetes

To actually test this, use the following command to deploy the device service:

kubectl create -f https://raw.githubusercontent.com/gbaeke/go-device/master/go-device-dep.yaml

You can then check if it works by running the curl command again. It should now return the following:

Device active:  true
Oh and, no data for you!

Hopefully, you have seen how you can work with Draft from your development box and that you can modify the files generated by Draft to control how your application gets deployed. In our case, we had to modify the health checks to make sure the service can be reached. In addition, we had to add an environment variable because the code uses the Go Micro microservices framework.

Adaptable IoT

On May 24, 2017 I gave a short partner session at Techorama, a technology event in Belgium for both developers and IT Pros. You can find the slides on SlideShare.

Since it was a short session and a short slide deck, this post provides a bit more background information.

First, what do I mean by Adaptable IoT? Basically, an IoT solution should be adaptable at two levels:

  1. The IoT platform: use a platform that can be easily adapted to new conditions such as changed business needs or higher scaling requirements; a platform that allows you to plug in new services
  2. The application you write on the platform: use a flexible architecture that can easily be changed according to changing business needs; and no, that does not mean you have to use microservices

The presentation mainly focuses on the first point, which deals with the platform aspects that should be adaptable end-to-end at the following levels:

  • Devices and edge: devices should not be isolated in the field which means you should provide a two-way communication channel, a way to update firmware and write robust device code as a base requirement
  • Ingestion and management: with most platforms, the service used for ingestion of telemetry also provides management
  • Processing: the platform should be easy to extend with extra processing steps with limited impact on the existing processing pipeline
  • Storage: the platform should provide flexible storage options for both structured and unstructured data
  • Analytics: the platform should provide both descriptive and predictive analytics options that can be used to answer relevant business questions

Before continuing, note that this post focuses on Microsoft Azure with its Azure IoT Suite. The concepts laid out in this post can apply to other platforms as well!

Devices and Edge

There is a lot to say about devices and edge. What we see in the field is that most tend to think that the devices are the easy part. In fact, devices tend to be the most difficult part in an end-to-end IoT solution. Prototyping is easy because you can skip many of the hard parts you encounter in production:

  • Use Arduino or platforms such as particle.io: they are easy to use but do not give you full access to the underlying hardware and speed might be an issue
  • To demonstrate that it works, you can use simple and cheap sensors. But do they work in the long run? What about calibration?
  • You can use any library you find on the net but stability and accuracy might be an issue in production and even in the prototyping phase!
  • You can store secrets to connect to your back-end application directly in the sketch. In production however, you will need to store them securely.
  • Using TLS for secure connections is easy, provided the hardware and libraries support it. But what about certificate checks and expiry of root and leaf certificates?
  • You can just use WiFi because it is easy and convenient.

When you move to production and you want to create truly adaptable devices, you will need to think about several things:

  • Drop Arduino and move to C/C++ directly on the metal; heck, maybe you even have to throw in some assembler depending on the use case (though I hope not!); your focus should be on stability, speed and power usage.
  • Provide two-way communications so that devices can send telemetry and status messages to the back-end and the back-end can send messages back.
  • Make sure you can send messages to groups of devices (e.g. based on some query)
  • Provide a firmware update mechanism. Easier said than done!
  • Make sure the device is secure. Store secrets in a crypto chip.
  • Use stable and supported libraries such as the Azure IoT device SDK for C

Take into account that many devices will not be able to connect to your back-end directly, requiring a gateway at the edge. The edge should be adaptable as well, with options to do edge processing beyond merely relaying messages. What are some of those additional edge features?

  • Inference based on a machine learning algorithm trained in the cloud (e.g. anomaly detection)
  • Aggregation of data (e.g. stream processing with windowing)
  • Launch compute tasks based on conditions (e.g. launch an Azure Function when an anomaly is detected)

Ideally, the edge components are developed and tested in the cloud and then exported to the edge. Azure IoT Edge provides that functionality and uses containers to encapsulate the functionality described above.

Ingestion and management

The central service in the Azure IoT Suite for ingestion and management is Azure IoT Hub. It is highly scalable and makes your IoT solution adaptable by providing configuration and reporting mechanisms for devices. The figure below illustrates what is possible:

iothub

Device Twin functionality provides you with several options to make the solution adaptable and highly configurable:

  • From the back-end, you set desired properties that your devices can pick up. For instance, set a reporting interval to instruct the device to send telemetry more often
  • From the device, you send reported properties like battery status or available memory so you can act accordingly (e.g. send the user an alert to charge the device)
  • From the back-end, set tags to group devices (e.g. set the device location such as building, floor, room, etc…)

In a previous post, I already talked about setting desired properties with Device Twins, and noted that, today, you need to use the MQTT protocol to make this work. You can use the MQTT protocol directly or via one of the Azure Device SDKs, where the protocol can simply be set as configuration.

The concept of jobs makes the solution even more adaptable since desired properties can be set on a group of devices using a query. By creating a query like ‘all devices where tag.building=buildingX’, you can set a desired property like the reporting interval on hundreds of devices at once.

Processing

The selected cloud platform should allow you to create an adaptable processing pipeline. With IoT Hub, the telemetry is made available to downstream components with a multi-consumer queue. An example is shown below:

processing

It is relatively easy to plug in new downstream components or to modify existing ones. As an example, Microsoft recently made Time Series Insights available, which uses an IoT Hub or an Event Hub as input. In a recent blog post, I already described that service. Even if you already have an existing pipeline, it is simple to plug in Time Series Insights and start using it to analyze your data.

IoT Hub Device Twin and MQTT

When you connect to IoT Hub with MQTT directly, you need to connect with a ClientId, username and password. Those three values need to be set according to Azure IoT Hub specifications:

  • ClientId: use the IoT Hub deviceId
  • Username: use {iothubhostname}/{deviceId}/api-version=2016-11-14
  • Password: use a SAS token

When you connect with MQTT, you will notice it also works if you just use {iothubhostname}/{device_id}. You will be able to send telemetry to the devices/{deviceId}/messages/events/ topic and receive cloud-to-device messages by subscribing to the devices/{deviceId}/messages/devicebound/# topic.

With MQTT, you can also update a reported property in the Device Twin. You should do that as follows:

  • Subscribe to $iothub/twin/res/# to receive a message after you report a property; the message will indicate success or failure, such as a 204 status when a property is updated
  • Send a message to topic $iothub/twin/PATCH/properties/reported/?$rid={rid} with the properties in the JSON payload; {rid} is a value you set to match the response with the message you sent

If I want to set a property called freeRam, I would send the following message to topic $iothub/twin/PATCH/properties/reported/?$rid={rid}:

{ "freeRam": 27364 }

Although this is easy enough, do not make the same mistake I did and forget to include api-version=2016-11-14 in the MQTT username. If you do not, IoT Hub will disconnect your client because Device Twins are only supported in recent incarnations of IoT Hub. Took me a few hours to troubleshoot… 😉
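
If you want to do the same from device code instead of the MQTT.fx client discussed below, a sketch with the Eclipse Paho MQTT client for Go (github.com/eclipse/paho.mqtt.golang) could look like this; the host name, device ID and SAS token are placeholders you need to replace:

package main

import (
    "fmt"
    "time"

    mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
    opts := mqtt.NewClientOptions()
    opts.AddBroker("tls://myhub.azure-devices.net:8883")
    opts.SetClientID("mydevice")
    // do not forget the api-version in the username or IoT Hub disconnects you
    opts.SetUsername("myhub.azure-devices.net/mydevice/api-version=2016-11-14")
    opts.SetPassword("SharedAccessSignature sr=...") // SAS token for the device

    client := mqtt.NewClient(opts)
    if token := client.Connect(); token.Wait() && token.Error() != nil {
        panic(token.Error())
    }

    // subscribe first so we receive the result of the PATCH below
    client.Subscribe("$iothub/twin/res/#", 1, func(c mqtt.Client, m mqtt.Message) {
        fmt.Printf("%s: %s\n", m.Topic(), m.Payload())
    }).Wait()

    // report the freeRam property; rid=1 will be echoed in the response topic
    client.Publish("$iothub/twin/PATCH/properties/reported/?$rid=1", 1, false,
        `{"freeRam": 27364}`).Wait()

    time.Sleep(5 * time.Second) // give the response time to arrive
}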

You can test all of this from a client such as MQTT.fx. Install that client and, in the settings, add a new connection profile. In the profile, specify the IoT Hub hostname as the broker address, set the port to 8883 and set the client ID to the Id of a device that exists in your IoT Hub. Also set the MQTT version specifically to 3.1.1. In User Credentials, specify the username and password and do not forget the api-version. In SSL/TLS, enable SSL/TLS. Note: use Device Explorer to create a SAS token for your device from the Management tab.

Next, subscribe to $iothub/twin/res/#.


Then, send a freeRam property on topic $iothub/twin/PATCH/properties/reported/?$rid={rid}, where you set {rid} to any value.


Note: to delete a property, send the null value.

In Subscribe, you will get the result of the PATCH operation, which mentions the {rid} you specified and also reports the version, indicating the number of times the property was changed. Also notice the status of 204, which means the property was updated.


By the way, if you want to retrieve the twin properties, just send an empty message to $iothub/twin/GET/?$rid={rid}. The result will be the desired and reported properties of the Device Twin in JSON.


The reported properties are also visible on the Device Twin in the Azure portal.


Hope this helps when trying to work with Device Twins from a device with MQTT directly (and not the IoT Hub Device SDKs)!