IoT Hub Device Twin and MQTT

When you connect to IoT Hub with MQTT directly, you need to connect with a ClientId, username and password. Those three values need to be set according to the Azure IoT Hub specifications:

  • ClientId: use the IoT Hub deviceId
  • Username: use {iothubhostname}/{deviceId}/api-version=2016-11-14
  • Password: use a SAS token

When you connect with MQTT, you will notice that it also works if you use just {iothubhostname}/{deviceId} as the username. You will be able to send telemetry to the devices/{deviceId}/messages/events/ topic and receive cloud-to-device messages by subscribing to the devices/{deviceId}/messages/devicebound/# topic.

With MQTT, you can also update a reported property in the Device Twin. You should do that as follows:

  • Subscribe to $iothub/twin/res/# to receive a response after you report a property; the response indicates success or failure, for example a 204 status when a property was updated
  • Send a message to topic $iothub/twin/PATCH/properties/reported/?$rid={rid} with the properties in the JSON payload; {rid} is a request id you choose so you can match the response you get back to this request

If I want to set a property called freeRam, I would send the following message to topic $iothub/twin/PATCH/properties/reported/?$rid={rid}:

{ "freeRam": 27364 }

Although this is easy enough, do not make the same mistake I did: be sure to include api-version=2016-11-14 in the MQTT username. If you don't, IoT Hub will disconnect your client because Device Twins are only supported in recent API versions. It took me a few hours to troubleshoot…
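To make the flow concrete, here is a minimal sketch in Go using the Eclipse Paho MQTT client (not the Azure IoT device SDK). The hub name, device id and SAS token are placeholders you need to replace, and error handling is kept to a bare minimum:

package main

import (
	"crypto/tls"
	"fmt"
	"time"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	// Placeholders: replace with your IoT Hub hostname, device id and a valid SAS token
	hub := "myhub.azure-devices.net"
	deviceID := "mydevice"
	sasToken := "SharedAccessSignature sr=..."

	opts := mqtt.NewClientOptions()
	opts.AddBroker("ssl://" + hub + ":8883")
	opts.SetClientID(deviceID)
	// The username must contain the api-version or twin operations will get you disconnected
	opts.SetUsername(fmt.Sprintf("%s/%s/api-version=2016-11-14", hub, deviceID))
	opts.SetPassword(sasToken)
	opts.SetTLSConfig(&tls.Config{})

	c := mqtt.NewClient(opts)
	if token := c.Connect(); token.Wait() && token.Error() != nil {
		panic(token.Error())
	}

	// Receive the status of reported property updates (e.g. 204 on success)
	c.Subscribe("$iothub/twin/res/#", 0, func(client mqtt.Client, msg mqtt.Message) {
		fmt.Printf("Response on %s: %s\n", msg.Topic(), msg.Payload())
	})

	// Report the freeRam property; $rid=1 lets us correlate the response above
	c.Publish("$iothub/twin/PATCH/properties/reported/?$rid=1", 0, false, `{"freeRam": 27364}`)

	// Regular telemetry goes to the events topic
	c.Publish("devices/"+deviceID+"/messages/events/", 0, false, `{"temperature": 21.5}`)

	time.Sleep(5 * time.Second) // give the twin response some time to arrive
}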

You can test all this from a client such as MQTT.fx. Install that client and, in the settings, add a new connection profile. In the profile, specify the IoT Hub hostname as the broker address, set the port to 8883 and set the client ID to a device id that exists in your IoT Hub. Also set the MQTT version specifically to 3.1.1. In User Credentials, specify the username and password and do not forget the api-version in the username. In SSL/TLS, enable SSL/TLS. Note: use Device Explorer to create a SAS token for your device from the Management tab.

Next, subscribe to $iothub/twin/res/#:

image

 

Then, send a freeRam property to the device like so (on topic $iothub/twin/PATCH/properties/reported/?$rid={rid} where you set {rid} to any value):

image

Note: to delete a property, send null as its value

In Subscribe, you will get the result of the PATCH operation. It mentions the {rid} you specified and reports a version, which indicates the number of times the property has been changed. Also notice the status of 204, which means the property was updated.

image

 

By the way, if you want to retrieve the twin properties, just send an empty message to $iothub/twin/GET/?$rid={rid}. The result will be the desired and reported properties of the Device Twin in JSON:

image

 

In the Azure Portal:

image

Hope this helps when trying to work with Device Twins from a device with MQTT directly (and not the IoT Hub Device SDKs)!

IoT Hub and Azure Time Series Insights

Azure Time Series Insights is a new service that makes it very easy to store and visualize time series data. In this blog post, we will create a dashboard that looks like the one below (click to enlarge):

image

The dashboard has four sections:

  • Query1: a heat map of events per device; in this case there are 20 devices sending data every 2 seconds
  • Query2: a line graph with random “temperature” data
  • Query3: a line graph with both “temperature” and “humidity” data
  • Query4: a line graph with “humidity” data

The events are sent to an IoT Hub using the following JSON shape: {temperature: x, humidity: y} where x and y are randomized floating point numbers, generated by an IoT device simulator.
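As an illustration, this is roughly what building such a payload looks like in Go. The value ranges are invented for the example; the actual simulator (see step 5) simply randomizes floating point numbers:

package main

import (
	"encoding/json"
	"fmt"
	"math/rand"
)

// telemetry mirrors the JSON shape the simulator sends
type telemetry struct {
	Temperature float64 `json:"temperature"`
	Humidity    float64 `json:"humidity"`
}

func main() {
	// illustrative ranges only
	t := telemetry{
		Temperature: 15 + rand.Float64()*10,
		Humidity:    40 + rand.Float64()*30,
	}
	payload, _ := json.Marshal(t)
	fmt.Println(string(payload)) // e.g. {"temperature":18.3,"humidity":52.1}
}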

Step 1: Create IoT Hub

Install Azure CLI 2.0, and then use az login to log in. Use az account list to list your subscriptions and use az account set --subscription name_or_id to set the default subscription. Next, issue the following commands to create a resource group and an IoT Hub (set the location to your preference):

az group create --name resource_group_name --location westeurope
az iot hub create --sku F1 --name iot_hub_name --resource-group resource_group_name

As a best practice, create a separate consumer group on the Events endpoint. In the Azure Portal, in the properties of the IoT Hub, click Endpoints. Then click Events and add a consumer group underneath $Default. Click Save.

Record the Connection string – primary key setting of the device or iothubowner shared access policy. Click Shared Access Policies and then click the policy (for example device) to find this connection string. It will be in the form of:

HostName=iot_hub_name.azure-devices.net;SharedAccessKeyName=keyname;SharedAccessKey=b5dARuGPhL6wdgHboUIhEC6LlcFalIjfEdh4aXYa1WI=

You will need this connection string later to configure the IoT Simulator.

Step 2: Create Time Series Insights Environment

In the Azure Portal, click the green + and navigate to Internet of Things. Click Time Series Insights and follow the on-screen instructions. You will end up with:

image

I selected one unit of the S1 tier which is more than enough for this example.

Step 3: Set Data Access Policy

Even though you created the Time Series Insights Environment, you still need to grant yourself access to the data. Click Data Access Policies and add your user or group and a role of Contributor.

image

Step 4: Add Event Source

We will add the IoT Hub we created earlier as an event source. Click Event Sources and then click Add. Give the event source a name and set the source to IoT Hub. Then select an IoT Hub from your available subscriptions and do not forget to set the consumer group to the one you created in step 1. If your event data has a timestamp, you can enter the timestamp property name. If you do not specify the timestamp, the event enqueue time set by the IoT Hub will be used.

Note that Azure Time Series Insights also supports Event Hubs as an event source.

Step 5: Configure the IoT simulator

Head over to https://github.com/gbaeke/iot-simulator/releases/tag/v0.3 and download iot-simulator.exe to a folder of your choice. In the same folder add a file called config.json with the following contents:

{
     "Interval":5,
     "IoTHubs":["iot_hub_name.azure-devices.net"],
     "SasTokens":["SharedAccessSignature sr=..."],
     "DevGroups":[
        {"Prefix":"ts","DeviceNum":20,"Firmware":"1.0","IoTHub": 0}
     ]
}

In the SasTokens array, replace SharedAccessSignature sr=… with a SAS token that has the necessary rights to submit events to the IoT Hub. One way of creating one is with Device Explorer. Once installed, copy the connection string from step 1 into the connection string box and click Generate SAS. Copy the SAS token into the config.json file.

image
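If you prefer to generate the SAS token in code rather than with Device Explorer, the sketch below shows the standard Azure SAS algorithm (an HMAC-SHA256 over the URL-encoded resource URI and expiry) in Go. The resource URI, key and policy name in main() are placeholders; double-check the exact values against your own hub:

package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"net/url"
	"strconv"
	"time"
)

// generateSasToken builds a token of the form:
// SharedAccessSignature sr=<uri>&sig=<signature>&se=<expiry>[&skn=<policy>]
func generateSasToken(resourceURI, key, policyName string, validFor time.Duration) (string, error) {
	encodedURI := url.QueryEscape(resourceURI)
	expiry := strconv.FormatInt(time.Now().Add(validFor).Unix(), 10)

	// The signature is an HMAC-SHA256 over "<encoded uri>\n<expiry>" with the base64-decoded key
	decodedKey, err := base64.StdEncoding.DecodeString(key)
	if err != nil {
		return "", err
	}
	mac := hmac.New(sha256.New, decodedKey)
	mac.Write([]byte(encodedURI + "\n" + expiry))
	signature := url.QueryEscape(base64.StdEncoding.EncodeToString(mac.Sum(nil)))

	token := fmt.Sprintf("SharedAccessSignature sr=%s&sig=%s&se=%s", encodedURI, signature, expiry)
	if policyName != "" {
		token += "&skn=" + policyName
	}
	return token, nil
}

func main() {
	// Placeholder values: hub URI, base64 shared access key and policy name
	token, err := generateSasToken("iot_hub_name.azure-devices.net", "base64key==", "iothubowner", time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println(token)
}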

With config.json correctly configured, start iot-simulator.exe from a command prompt. It will connect to the IoT Hub, create the devices and start sending data from every device every 5 seconds. In the sample config file, you can set the interval in seconds (Interval) and the number of devices (DeviceNum). To clean up the devices, run iot-simulator.exe -r.

Step 6: Visualize the data

Now go to https://insights.timeseries.azure.com and login with the credentials you used in step 3. You will get a screen to select data. I selected Last 60 Mins from the quick times dropdown and then clicked the search icon:

image

In the following screen, click Heatmap and then configure the box at the left with a descriptive title. Also select a split by deviceid to have an idea about the number of events per time window per device and to spot devices that stopped sending data.

image

Now, at the right top corner, click the circle with the four squares. You end up with:

image

Now click the + in the top right section. Select a time range again and then, at the left, change the measure from Events to Temperature. The temperature will automatically be averaged over the interval size. Change the term (Term 1) to Temperature and click the circle with the four squares again.

The temperature line graph has been added and you can now click the copy icon and create the same visualization for humidity.

image

Now it’s easy to create the other panel with both temperature and humidity. Give it a go and try out other visualizations. When you are finished, you can click the Save icon and save this perspective. Yep, these visualizations are called perspectives!

It’s still early days for the service and many features will be added in the near future. If you are already working with event data coming into an Event Hub and IoT Hub, it should be easy to add a new consumer group and start analyzing the data with this service.

Microservices on Kubernetes: a simple example in Go

In the previous post, Getting started with Kubernetes on Azure, we talked about creating a Kubernetes cluster and deploying a couple of services. There are basically two services:

  • Data: a service that exposes an endpoint to pick up data for an IoT device; you call it with http://service_endpoint:8080/data/devicename
  • Device: a service that can be used by the Data API to check if a device exists; if the device exists you will see that in the response

When you call the Data service, it calls the Device service using gRPC, with HTTP as the transport protocol. You define the service using Protocol Buffers. gRPC works across languages and platforms, so I could have implemented each service in a different language, for example Go for the Device service and Node.js for the Data service. In this example, I decided to use Go in both cases, together with Go Micro, a pluggable RPC framework for microservices. Go Micro uses gRPC and protocol buffers under the hood, with changes specific to Go Micro.

Ok, enough with the talk, let's take a look at how it is done. The Device service is kept extremely simple for an obvious reason: I just started with Go Micro and it is best to start with something simple. I do expect you to know a bit of Go from here on out. All the code can be found at https://github.com/gbaeke/go-device.

Let's start with the definition of the Protocol Buffers, found in proto/device.proto:

syntax = "proto3";

service DevSvc {
    rpc Get(DeviceName) returns (Device) {}
}

message DeviceName {
    string name = 1;
}

message Device {
    string name = 1;
    bool active = 2;
}

We define one RPC call here that expects a DeviceName message as input and returns a Device message. Simple enough but this does not get us very far. To actually use this in Go (or another supported language), we will generate some code from the above definition. You need a couple of things to do that though:

  • protoc compiler: download from GitHub for your platform
  • protobuf plugins for code generation for Go Micro: run go get github.com/micro/protobuf/{proto,protoc-gen-go} (if you have issues, use 2 gets, one for proto and one for protoc-gen-go)

To actually compile the proto file, use the following command:

protoc --go_out=plugins=micro:. device.proto

That compiles device.proto to device.pb.go with help from the micro plugin. You can check the generated code here. Among other things, there are Go structs for the DeviceName and Device message plus several methods you can call on these structs such as Reset() and String().

Now for main.go! You’ll need several imports: for the generated code but also for the dependencies to build the service with Go Micro. If you check the code, you will also find the following import:

_ "github.com/micro/go-plugins/registry/kubernetes"

As stated above, Go Micro is a pluggable RPC framework. Out of the box, a microservice written with Go Micro will try to register itself with Consul on localhost for service discovery and configuration. We could run the Consul service in Kubernetes, but Kubernetes supports service registration natively. Kubernetes support is something you add with the import above. That is not enough though! You still need to tell Go Micro to use Kubernetes as the registry, either with the --registry command line parameter or with the MICRO_REGISTRY environment variable. Check the https://github.com/gbaeke/go-device/blob/master/go-device-dep.yaml file, where that environment variable is set. Besides Consul and Kubernetes, there are other alternatives. One of them is multicast DNS (mdns), which is handy when you are testing services on your local machine and don't have something like Consul running.

If you want to check the information that is registered, you can do the following (after running kubectl proxy --port=8080):

curl http://localhost:8080/api/v1/pods | grep micro

Each pod will have an annotation with key micro.mu/service-<servicename> with information about the service such as its name, IP address, port, and much more.

Now really over to main.go, which is pretty self-explanatory. There's a struct called DevSvc with a field devs that holds a map of strings to Device structs. The DevSvc struct defines the service, and you write the RPC calls as methods on that struct. Check out the following code snippet:

// DevSvc defines the service
type DevSvc struct {
	devs map[string]*device.Device
}

// Get looks up a device by name and fills in the response
func (d *DevSvc) Get(ctx context.Context, req *device.DeviceName, rsp *device.Device) error {
	dev, ok := d.devs[req.Name]
	if !ok {
		fmt.Println("Device does not exist")
		return nil
	}

	fmt.Println("Will respond with ", dev)

	// copy the device fields into the response
	rsp.Name = dev.Name
	rsp.Active = dev.Active

	return nil
}

The Get function implements what was defined in the .proto file earlier and takes a pointer to a DeviceName struct as input and a pointer to a Device struct as output. The code itself is of course trivial: it just looks up a device in the map and returns it via rsp.

Of course, this handler needs to be registered and this happens in the main() function (besides setting up the service and implementing a custom flag):

// register handler and initialise devs map with a list of devices
device.RegisterDevSvcHandler(service.Server(), &DevSvc{devs: LoadDevices()})
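For context, a minimal main() for this service could look roughly like the sketch below. This is not a verbatim copy of the repository code: the import path for the generated package is an assumption, the custom run_client flag is left out for brevity, and it assumes the DevSvc type from the snippet above plus a LoadDevices helper that returns the initial map.

package main

import (
	"log"

	micro "github.com/micro/go-micro"

	// assumed import path for the code generated from proto/device.proto
	device "github.com/gbaeke/go-device/proto"
)

func main() {
	// the service name must match what clients pass to NewDevSvcClient
	service := micro.NewService(
		micro.Name("go.micro.srv.device"),
	)

	// parse command line flags (e.g. --registry) and environment variables such as MICRO_REGISTRY
	service.Init()

	// register handler and initialise devs map with a list of devices
	device.RegisterDevSvcHandler(service.Server(), &DevSvc{devs: LoadDevices()})

	if err := service.Run(); err != nil {
		log.Fatal(err)
	}
}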

If you want to test the service and call it (e.g. on the local machine), clone the repository (or go get it) and run the server as follows:

go run main.go --registry=mdns

In another terminal, run:

go run main.go --registry=mdns --run_client

When you run the code with the run_client option, the runClient function is called which looks like:

func runClient(service micro.Service) {
	// Create new client to call DevSvc service
	DevClient := device.NewDevSvcClient("go.micro.srv.device", service.Client())

	// Call Get to get a device
	rsp, err := DevClient.Get(context.TODO(), &device.DeviceName{Name: "device2"})
	if err != nil {
		fmt.Println(err)
		return
	}

	// Print response
	fmt.Println("Response: ", rsp)
}

This again shows the power of using a framework like Go Micro: you create a client for the DevSvc service and then simply perform the remote procedure call with the Get method, passing in a DeviceName struct with the Name field set to the device you want to check. The client uses the service registry to know where and how to connect. All the serialization and deserialization is handled for you as well using protocol buffers.

So great, you now have a little bit more information about the Device service and you know how to deploy it to Kubernetes. In another post, we’ll see how the Data service works and explore some other options to write that service.

Getting started with Kubernetes on Azure

As you may or may not know, at Xylos we have developed an IoT platform to support sensor networks of any kind. The back-end components are microservices running as containers on Rancher, a powerful and easy to use container orchestration tool. In the meantime, we are constantly evaluating other ways of orchestrating containers and, naturally, Azure Container Service is one of the options. Recently, Microsoft added support for Kubernetes, so we decided to check that out.

Instead of the default “look, here’s how you deploy an nginx container”, we will walk through an example of an extremely simple microservices application written in Go with the help of go-micro, a microservices toolkit. Now, I have to warn you that I am quite the newbie when it comes to Go and go-micro. If you have remarks about the code, just let me know. This post will not explain the Go services however, so let’s focus on deploying a Kubernetes cluster first and deploying the finished containers. Subsequent posts will talk about the services in more detail.

With the help of Azure CLI 2.0, deploying Kubernetes could not be simpler. You will find full details about installation on https://docs.microsoft.com/en-us/cli/azure/install-azure-cli. The CLI runs on Windows, Linux and macOS. For this post, I used macOS. If you are a bit unsure about how the Azure CLI works, check out this post: https://docs.microsoft.com/en-us/cli/azure/get-started-with-azure-cli.

After installation, use az login to authenticate and az account set to set the default subscription. After that, you are all set to deploy Kubernetes. First, create a resource group for the cluster:

az group create --name=rgname --location=westeurope

After the above command completes (use any name for the resource group), use the following command to create a Kubernetes cluster with one master and two agents, using a small virtual machine size. We do this to keep costs down while testing.

az acs create --orchestrator-type=kubernetes --resource-group=rgname --name=clustername --generate-ssh-keys --agent-count=2 --master-count=1 --agent-vm-size=Standard_A1_v2

Tip: to know the other virtual machine sizes in a region (like westeurope) use az vm list-sizes --location=westeurope

Note that in the az acs command, we auto-generate SSH keys. These are used to interact with the cluster; you can of course also provide your own. When you use --generate-ssh-keys, you will find them in the .ssh folder of your home folder (the id_rsa and id_rsa.pub files).

Now you need a way to administer the Kubernetes cluster. You do that with the kubectl command-line tool. Get kubectl with the following command:

az acs kubernetes install-cli

The kubectl tool needs a configuration file that instructs the tool where to connect and the credentials to use. Just use the following command to get this configured:

az acs kubernetes get-credentials --resource-group=rgname --name=clustername

Running the above command creates a config file in the .kube folder of your home folder. In the config file, you will see a https location that kubectl connects to, in addition to user information such as a user name and certificates.

Now, as a test, let's deploy the part of the microservices application that exposes a REST API endpoint to the outside world (I call it the data API). To do so, run the following:

kubectl create -f https://raw.githubusercontent.com/gbaeke/go-data/master/go-data-dep.yaml

The above command creates a deployment from a configuration file that makes sure two containers are running with the gbaeke/go-data image. Each container runs in its own pod. You can check this like so:

kubectl get pods

You will see something like:

image-2

Run kubectl get deployment to see the deployment. Use kubectl describe deployment dataapi to obtain more details about the deployment.

You will not yet be able to access this API from the outside world. To make it accessible, let's create a service of type LoadBalancer, which will also configure an Azure load balancer automatically (this could have been done from the YAML file as well):

kubectl expose deployments dataapi --port=8080 --type=LoadBalancer

You can check the service with kubectl get service. After a while, when you run that command again, the external IP will appear. You should then be able to hit the service with curl like so:

curl http://IP_of_service:8080/data/device1

No matter what device id you type at the end, you will always get Device active: false because the device API has not been deployed yet. How the data API talks to the device API and how they use service registration in Kubernetes will be discussed in another post.

Tip: for those that cannot wait, just run kubectl create -f https://raw.githubusercontent.com/gbaeke/go-device/master/go-device-dep.yaml and then use curl again with device1 at the end (should return true). The above command deploys the device API so that the data API can find and use it to check if a device exists.

Particle and Azure IoT Hub: forward events for storage and analysis

In a previous post about Particle published events, you saw how to publish custom events to the Particle Cloud. Other devices or applications can subscribe to these events and act upon them. What if you want to do more and connect these events to custom applications? In that case, Particle has a couple of integrations that might help:

image

In this post, I will take a look at Azure IoT Hub integration which, at the moment of writing, is still in beta. Note that this integration works with events you publish from your device with Particle.publish and not with Particle Variables or Functions. Remember that in the post about events, we published a lights on and lights out event. For simplicity, we will build upon those events here.

To configure the IoT Hub integration, you will need a few things:

  • An Azure Subscription so you can logon to the portal at https://portal.azure.com (see https://azure.microsoft.com/en-us/free/ to get started)
  • An IoT Hub that you create from the portal; to get started, use the free tier which allows you to publish 8000 events per day (give or take; depends on message size as well); in the portal, use the + button

An IoT Hub has a name and works with shared access policies and access keys to be able to control the IoT Hub and send messages. To get to the policies, just click Shared Access Policies.

image

Although considered bad practice, I will use the iothubowner policy which has all required rights. Click iothubowner to view the access keys and note the primary access key. You will need that key in a moment.

In Particle Console, click the Integrations icon and click new integration. In the configuration screen, you will see:

image

It's pretty self-explanatory once you have created your IoT Hub in Azure. Just fill in the required information and note that the event name is the name you gave the event in the call to Particle.publish. My events are called lights on and lights out, and I will use lights as the Event Name. This will catch both events!

To test this, the photoresistor was given enough light to fire the events. This is the result when you click on the integration after it was created:

image

When you click on one of the log entries, you will see more details:

image

You see the event payload that was sent to IoT Hub plus details about the call to IoT Hub using HTTP POST.

In IoT Hub, you will see a couple of things as well. First of all, the events:

image

In the list of devices, you will find a device with the id of the Particle Photon:

image

Note: Azure IoT Hub requires devices to authenticate but this is taken care of automatically by Particle Cloud

What you do now with these messages is up to you. You can use the new endpoints and routes feature of IoT Hub to forward events to Event Hubs or Service Bus. Or you could connect Stream Analytics to IoT Hub and save your events to Azure Storage, Data Lake, SQL, Document DB or stream the data to a real-time Power BI dashboard.

Note that although an Azure Subscription is free, not all services have free tiers. For instance, IoT Hub has a free tier but Stream Analytics does not. And although IoT Hub's free tier is great to get started with, it can only process a limited number of events. It's up to you to control the rate of events sent from your devices. For home use or small PoCs you should not run into issues though!

IoT Hub Scaling

When you work with Azure IoT Hub, it is not always easy to tell what will happen when you reach the limits of IoT Hub and what to do when you do. As a reminder, recall that the scale of IoT Hub is defined by its tier and the number of units in that tier. There are three paid tiers besides the free tier:

image

Although these tiers make it clear how many messages you can send, other limits such as the number of messages per second cannot be seen here. To get an idea of the number of messages you can send and the sustained throughput, see https://azure.microsoft.com/en-us/documentation/articles/iot-hub-scaling/#device-to-cloud-and-cloud-to-device-message-throughput

The specific burst performance numbers can be found here: https://azure.microsoft.com/en-us/documentation/articles/iot-hub-devguide-quotas-throttling/. Typically, the limit you are concerned with is the number of device-to-cloud sends, which is as follows:

  • S1: 12/sec/unit (but you get at least 100/sec in total; not per unit obviously); 10 units give you 120/sec and not 100+120/sec
  • S2: 120/sec/unit
  • S3: 6000/sec/unit

Now suppose you are thinking about deploying 300 devices which send data every half second. What tier should you use and how many units? It is clear that you need to be able to send 600 messages per second, so 5 units of S2 will suffice. You could also take 50 units of S1 for the same performance and price. With 5 units of S2 though, you can send more messages per day.
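The arithmetic is easy to capture in a few lines of Go; the hypothetical helper below reproduces the numbers above (it ignores the 100/sec minimum for S1 and the daily message quotas):

package main

import (
	"fmt"
	"math"
)

// requiredUnits returns the number of IoT Hub units needed to sustain a
// device-to-cloud message rate, given a tier's throughput per unit.
func requiredUnits(devices int, sendIntervalSec, perUnitPerSec float64) int {
	msgsPerSec := float64(devices) / sendIntervalSec
	return int(math.Ceil(msgsPerSec / perUnitPerSec))
}

func main() {
	// 300 devices sending every half second = 600 messages per second
	fmt.Println("S1 units needed:", requiredUnits(300, 0.5, 12))  // 50
	fmt.Println("S2 units needed:", requiredUnits(300, 0.5, 120)) // 5
}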

Now it would be nice to test the above in advance. At ThingTank we use Docker containers for this and we schedule them with Rancher, a great and easy to use Docker orchestration tool. If you want to try it, just use the container you can find on Docker Hub or the new Docker Store (still in beta). Just search for gbaeke and you will find the following container:

image

If you want to check out the code (warning: written hastily!), you can find it on GitHub: https://github.com/xyloscloudservices/docker-itproceed. It is a simple Node.js script that uses the Azure IoT Hub libraries to create a new device in the registry with a GUID as its name. Afterwards, the code sends a simple JSON payload to IoT Hub every half second.

To use the script, start it as follows with three parameters:

app.js IoT_Hub_Short_Name IoT_Hub_Connection_String millis

Note: the millis parameter is the number of milliseconds to wait between each send

Now you can run the containers in Rancher (for instance). I won’t go into the details how to add Docker Hosts to Rancher and how to create a new Stack (as they call it). Alternatively, you can run the containers on Azure Container Service or similar solutions.

In the Power BI chart below, you see the event count every five seconds, which is around 420-440 events. That is a bit lower than expected for one S1 unit:

image

Note: the spike you see happens after the launch of 300 containers; throttling quickly kicks in

When switched to 5 S2 units, the graph looks as follows:

image

You see the event count jump to 3000 (near the end), which is what you would expect: 300 containers sending 600 messages per second amounts to 3000 messages per 5 seconds, which is possible with 5 S2 units that deliver 120 messages/sec/unit.

You really need to think about whether you want to send data every half second or every second. For our ThingTank Air Quality solution, we take measurements every second but aggregate them over a minute at the edge. Sending every minute with 5 S2 units would allow for thousands of devices before you reach the limits of IoT Hub!

Adding natural language to your Bot

In the last post, Bots in an IoT context, I created a very simple bot to request the air quality in a room. To change the room, you had to type change room and then type the room name when requested. It would be much nicer to be able to give commands like change room to <roomname>, set the room to <roomname> or switch room to <roomname>. Instead of using regular expressions, you should use the Language Understanding Intelligent Service, or LUIS for short.

In LUIS, you first create an app. In the app, you define things like intents and entities. In this case, I only need one intent which I called ChangeRoom. Because I am going to specify the name of the room in phrases I type, I also defined an entity called room.

image

Next, you need to specify utterances and tell LUIS what the intent of each utterance is (if LUIS does not match your intent to the utterance automatically). Below is an example utterance:

image

When you type the utterance and click the orange arrow, LUIS will analyze the utterance. In the case above, LUIS automatically matched the utterance to the ChangeRoom intent and also marked the word asterix as an entity. If you hover over the entity, you will see the entity name, in this case room.

You should enter several utterances that make sense for your scenario and fix the intent and entity if needed. After adding several utterances, it is time to train the model with the tiny link in the bottom left of the browser.

After training, it is time to publish the application. You will get a URL that you need to supply in your bot. You can also test some queries from the publish dialog. For instance:

image

If you click the link for the query above, you get a JSON response like below:

image

What you see in the response above, is that LUIS matched the query to intent ChangeRoom and that the room entity is set to Asterix with a score of 0.948. Great!!

Now it is time to use this in your bot. You will need the following code to be able to use the LUIS app from your code and use the LUIS recognizer in the intents:

image

Obviously, you set the URL of the recognizer to the URL you received after publishing the LUIS app. Next, use the LUIS intent (ChangeRoom remember), in your code as follows:

image

In the above code, the important part is extracting the room entity. We make sure that, when a room entity is not found, we give the user a message. Otherwise, we set the room in userData.roomName.

Now it is time to test the code in Slack or another service. Everything was set up in the previous post, so we just need to push the code changes to Azure App Services (git push azure commit). Just for fun, I will show the results in Skype:

image

As you can see, many different phrases can be used. Not all of them were entered in LUIS. Of course, not all phrases will work. It's clear that “new place is Obelix” does not work. However, it's very simple to go back to the LUIS app and add extra utterances, train the model and publish it again.

To sum things up:

  • Regular expressions are great to get started
  • Use LUIS to add natural language processing to your bot in a simple and intuitive way