From MQTT to InfluxDB with Dapr

In a previous post, we looked at using the Dapr InfluxDB component to write data to InfluxDB Cloud. In this post, we will take a look at reading data from an MQTT topic and storing it in InfluxDB. We will use Dapr 0.10, which includes both components.

To get up to speed with Dapr, please read the previous post and make sure you have an InfluxDB instance up and running in the cloud.

If you want to see a video instead:

MQTT to Influx with Dapr

Note that the video sends output to both InfluxDB and Azure SignalR. In addition, the video uses Dapr 0.8 with a custom compiled Dapr because I was still developing and testing the InfluxDB component.

MQTT Server

Although there are cloud-based MQTT servers you can use, let’s mix it up a little and run the MQTT server from Docker. If you have Docker installed, type the following:

docker run -it -p 1883:1883 -p 9001:9001 eclipse-mosquitto

The above command runs Mosquitto and exposes port 1883 on your local machine. You can use a tool such as MQTT Explorer to send data. Install MQTT Explorer on your local machine and run it. Create a connection like the one in the screenshot below:

MQTT Explorer connection

Now, click Connect to connect to Mosquitto. With MQTT, you send data to topics of your choice. Publish a JSON message to a topic called test as shown below:

Publish json data to the test topic

You can now click the topic in the list of topics and see its most recent value:

Subscribing to the test topic
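
If you prefer publishing from code instead of MQTT Explorer, the sketch below does the same with the mqtt package from npm (an assumption on my part; install it with npm install mqtt). The room and temperature fields anticipate the JSON we will process later:

const mqtt = require('mqtt');

// connect to the local Mosquitto instance exposed by Docker
const client = mqtt.connect('mqtt://localhost:1883');

client.on('connect', () => {
    // publish a JSON message to the test topic
    const payload = JSON.stringify({ room: "living room", temperature: 21.5 });
    client.publish('test', payload, () => client.end());
});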

Using MQTT with Dapr

You are now ready to read data from an MQTT topic with Dapr. If you have Dapr installed, you can run the following code to read from the test topic and store the data in InfluxDB:

const express = require('express');
const bodyParser = require('body-parser');

const app = express();
app.use(bodyParser.json());

const port = 3000;

// the mqtt component will post messages from the test topic here
app.post('/mqtt', (req, res) => {
    console.log("MQTT Binding Trigger");
    console.log(req.body);

    // body is expected to contain room and temperature
    let room = req.body.room;
    const temperature = req.body.temperature;

    // room should not contain spaces
    room = room.split(" ").join("_");

    // create message for influx component
    const message = {
        "measurement": "stat",
        "tags": `room=${room}`,
        "values": `temperature=${temperature}`
    };

    // send the message to influx output binding
    res.send({
        "to": ["influx"],
        "data": message
    });
});

app.listen(port, () => console.log(`Node App listening on port ${port}!`));

In this example, we use Node.js instead of Python to illustrate that Dapr works with any language. You will also need the package.json below; run npm install to install the dependencies:

{
  "name": "mqttapp",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "body-parser": "^1.18.3",
    "express": "^4.16.4"
  }
}

In the previous post about InfluxDB, we used an output binding. You use an output binding by posting data to a Dapr HTTP URI.

To use an input binding like MQTT, you will need to create an HTTP server. Above, we create an HTTP server with Express, and listen on port 3000 for incoming requests. Later, we will instruct Dapr to listen for messages on an MQTT topic and, when a message arrives, post it to our server. We can then retrieve the message from the request body.

To tell Dapr what to do, we’ll create a components folder in the same folder that holds the Node.js code. Put a file in that folder with the following contents:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mqtt
spec:
  type: bindings.mqtt
  metadata:
  - name: url
    value: mqtt://localhost:1883
  - name: topic
    value: test

Above, we configure the MQTT component to listen to topic test on mqtt://localhost:1883. The name we use (in metadata) is important because it needs to correspond to our HTTP handler (/mqtt).

Like in the previous post, there’s another file that configures the InfluxDB component:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: influx
spec:
  type: bindings.influx
  metadata:
  - name: Url
    value: http://localhost:9999
  - name: Token
    value: ""
  - name: Org
    value: ""
  - name: Bucket
    value: ""

Replace the parameters in the file above with your own.

Saving the MQTT request body to InfluxDB

If you look at the Node.js code, you have probably noticed that we send a response body in the /mqtt handler:

res.send({
    "to": ["influx"],
    "data": message
});

Dapr accepts responses that include a to and a data field in the JSON body. The above response simply tells Dapr to send the message in the data field to the configured influx component.

Does it work?

Let’s run the code with Dapr to see if it works:

dapr run --app-id mqttinflux --app-port 3000 --components-path=./components node app.js

In dapr run, we also need to specify the port our app uses. Remember that Dapr will post JSON data to our /mqtt handler!
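
Before publishing anything to MQTT, you can also sanity-check the handler by posting directly to the Express server, mimicking what Dapr does (a quick test, not a required step):

curl -X POST http://localhost:3000/mqtt \
  -H "Content-Type: application/json" \
  -d '{"room": "living room", "temperature": 21.5}'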

Let’s post some JSON with the expected fields of temperature and room to our MQTT server:

Posting data to the test topic

The Dapr logs show the following:

Logs from the APP (appear alongside the Dapr logs)

In InfluxDB Cloud table view:

Data stored in InfluxDB Cloud (posted some other data points before)

Conclusion

Dapr makes it really easy to retrieve data with input bindings and send that data somewhere else with output bindings. There are many other input and output bindings so make sure you check them out on GitHub!

Update to IoT Simulator

Quite a while ago, I wrote a small IoT Simulator in Go that creates or deletes multiple IoT devices in IoT Hub and sends telemetry at a preset interval. However, when you use version 0.4 of the simulator, you will encounter issues in the following cases:

  • You create a route to store telemetry in an Azure Storage account: the telemetry will be Base64-encoded
  • You create an Event Grid subscription that forwards the telemetry to an Azure Function or other target: the telemetry will be Base64-encoded

For example, in Azure Storage, when you store telemetry in JSON format, you will see something like this with versions 0.4 and older:

{"EnqueuedTimeUtc":"2020-02-10T14:13:19.0770000Z","Properties":{},"SystemProperties":{"connectionDeviceId":"dev35","connectionAuthMethod":"{\"scope\":\"hub\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}","connectionDeviceGenerationId":"637169341138506565","contentType":"application/json","contentEncoding":"","enqueuedTime":"2020-02-10T14:13:19.0770000Z"},"Body":"eyJUZW1wZXJhdHVyZSI6MjYuNjQ1NjAwNTMyMTg0OTA0LCJIdW1pZGl0eSI6NDQuMzc3MTQxODcxODY5OH0="}

Note that the body is Base64-encoded. The encoding stems from the fact that UTF-8 was not specified, as you can see in the JSON: contentEncoding is empty and contentType does not mention the character set.

To fix that, a small code change was required. Note that the code uses HTTP to send telemetry, not MQTT or AMQP:

Setting the character set as part of the content type
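
The simulator itself is written in Go, but the essence of the change is easy to sketch in Node.js. Below, telemetry is sent to IoT Hub over HTTPS with the charset included in the content type; the hostname, device ID and SAS token are placeholders, and the api-version is the one current at the time of writing:

const https = require('https');

const body = JSON.stringify({ Temperature: 20.8, Humidity: 49.9 });

const req = https.request({
    host: 'yourhub.azure-devices.net',    // placeholder IoT Hub hostname
    path: '/devices/dev15/messages/events?api-version=2016-11-14',
    method: 'POST',
    headers: {
        'Authorization': process.env.SAS_TOKEN,    // a device SAS token
        // the fix: include the charset so the body is stored as plain text
        'Content-Type': 'application/json; charset=utf-8'
    }
}, (res) => console.log('status:', res.statusCode));

req.write(body);
req.end();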

With the character set as UTF-8, the telemetry in the Storage Account will look like this:

{"EnqueuedTimeUtc":"2020-02-11T15:02:07.9520000Z","Properties":{},"SystemProperties":{"connectionDeviceId":"dev15","connectionAuthMethod":"{\"scope\":\"hub\",\"type\":\"sas\",\"issuer\":\"iothub\",\"acceptingIpFilterRule\":null}","connectionDeviceGenerationId":"637169341138088841","contentType":"application/json; charset=utf-8","contentEncoding":"","enqueuedTime":"2020-02-11T15:02:07.9520000Z"},"Body":{"Temperature":20.827852028684607,"Humidity":49.95058826575425}}

Note that contentEncoding is still empty here, but contentType includes the charset. That is enough for the body to be in plain text.

The change will also allow you to use queries on the body in IoT Hub message routing filters or Event Grid subscription filters.
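
For example, a body filter along these lines (a hypothetical query in the IoT Hub routing query syntax) only evaluates correctly when the body is plain-text JSON:

$body.Temperature > 25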

Enjoy the new version 0.5! All three of you… 😉😉😉

Adaptable IoT

On May 24, 2017 I gave a short partner session at Techorama, a technology event in Belgium for both developers and IT Pros. You can find the slides on SlideShare:

Since it was a short session and a short slide deck, this post provides a bit more background information.

First, what do I mean by Adaptable IoT? Basically, an IoT solution should be adaptable at two levels:

  1. The IoT platform: use a platform that can be easily adapted to new conditions such as changed business needs or higher scaling requirements; a platform that allows you to plug in new services
  2. The application you write on the platform: use a flexible architecture that can easily be changed according to changing business needs; and no, that does not mean you have to use microservices

The presentation mainly focuses on the first point, which deals with the platform aspects that should be adaptable end-to-end at the following levels:

  • Devices and edge: devices should not be isolated in the field, which means you should provide a two-way communication channel and a way to update firmware, and write robust device code as a base requirement
  • Ingestion and management: with most platforms, the service used for ingestion of telemetry also provides management
  • Processing: the platform should be easy to extend with extra processing steps with limited impact on the existing processing pipeline
  • Storage: the platform should provide flexible storage options for both structured and unstructured data
  • Analytics: the platform should provide both descriptive and predictive analytics options that can be used to answer relevant business questions

Before continuing, note that this post focuses on Microsoft Azure with its Azure IoT Suite. The concepts laid out in this post can apply to other platforms as well!

Devices and Edge

There is a lot to say about devices and edge. What we see in the field is that most tend to think that the devices are the easy part. In fact, devices tend to be the most difficult part in an end-to-end IoT solution. Prototyping is easy because you can skip many of the hard parts you encounter in production:

  • Use Arduino or platforms such as particle.io: they are easy to use but do not give you full access to the underlying hardware and speed might be an issue
  • To demonstrate that it works, you can use simple and cheap sensors. But do they work in the long run? What about calibration?
  • You can use any library you find on the net but stability and accuracy might be an issue in production and even in the prototyping phase!
  • You can store secrets to connect to your back-end application directly in the sketch. In production however, you will need to store them securely.
  • Using TLS for secure connections is easy, provided the hardware and libraries support it. But what about certificate checks and expiry of root and leaf certificates?
  • You can just use WiFi because it is easy and convenient.

When you move to production and you want to create truly adaptable devices, you will need to think about several things:

  • Drop Arduino and move to C/C++ directly on the metal; heck, maybe you even have to throw in some assembler depending on the use case (though I hope not!); your focus should be on stability, speed and power usage.
  • Provide two-way communications so that devices can send telemetry and status messages to the back-end and the back-end can send messages back.
  • Make sure you can send messages to groups of devices (e.g. based on some query)
  • Provide a firmware update mechanism. Easier said than done!
  • Make sure the device is secure. Store secrets in a crypto chip.
  • Use stable and supported libraries such as the Azure IoT device SDK for C

Take into account that many devices will not be able to connect to your back-end directly, requiring a gateway at the edge. The edge should be adaptable as well, with options to do edge processing beyond merely relaying messages. What are some of those additional edge features?

  • Inference based on a machine learning algorithm trained in the cloud (e.g. anomaly detection)
  • Aggregation of data (e.g. stream processing with windowing)
  • Launch compute tasks based on conditions (e.g. launch an Azure Function when an anomaly is detected)

Ideally, the edge components are developed and tested in the cloud and then exported to the edge. Azure IoT Edge provides that functionality and uses containers to encapsulate the functionality described above.

Ingestion and management

The central service in the Azure IoT Suite for ingestion and management is Azure IoT Hub. It is highly scalable and makes your IoT solution adaptable by providing configuration and reporting mechanisms for devices. The figure below illustrates what is possible:

IoT Hub ingestion and management capabilities

Device Twin functionality provides you with several options to make the solution adaptable and highly configurable:

  • From the back-end, you set desired properties that your devices can pick up. For instance, set a reporting interval to instruct the device to send telemetry more often
  • From the device, you send reported properties like battery status or available memory so you can act accordingly (e.g. send the user an alert to charge the device)
  • From the back-end, set tags to group devices (e.g. set the device location such as building, floor, room, etc…)

In a previous post, I already talked about setting desired properties with Device Twins and that today, you need to use the MQTT protocol to make this work. You can use the MQTT protocol directly or as part of one of the Azure Device SDKs where the protocol can simply be set as configuration.
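
As a sketch, this is roughly what Device Twin handling looks like with the Node.js device SDK (the azure-iot-device and azure-iot-device-mqtt packages; the connection string and the freeRam property are placeholders):

const { Client } = require('azure-iot-device');
const { Mqtt } = require('azure-iot-device-mqtt');

// the protocol (MQTT here) is simply passed in as configuration
const client = Client.fromConnectionString(process.env.DEVICE_CONNECTION_STRING, Mqtt);

client.open((err) => {
    if (err) return console.error(err.message);

    client.getTwin((err, twin) => {
        if (err) return console.error(err.message);

        // pick up desired properties set from the back-end
        twin.on('properties.desired', (desired) => {
            console.log('desired properties changed:', JSON.stringify(desired));
        });

        // report properties such as battery status or free memory
        twin.properties.reported.update({ freeRam: 27364 }, (err) => {
            if (err) console.error(err.message);
        });
    });
});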

The concept of jobs makes the solution even more adaptable, since desired properties can be set on a group of devices using a query. By creating a query like 'all devices where tags.building = buildingX', you can set a desired property like the reporting interval on hundreds of devices at once.
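
With the Node.js service SDK (the azure-iothub package), scheduling such a twin update job could look roughly like this; the job ID, tag and desired property are made up for the example:

const { JobClient } = require('azure-iothub');

const jobClient = JobClient.fromConnectionString(process.env.SERVICE_CONNECTION_STRING);

// desired property patch applied to every device matching the query
const twinPatch = {
    etag: '*',
    properties: {
        desired: { reportingInterval: 30 }    // hypothetical desired property
    }
};

jobClient.scheduleTwinUpdate(
    'set-reporting-interval',          // job ID
    "tags.building = 'buildingX'",     // query condition selecting the devices
    twinPatch,
    new Date(),                        // start immediately
    3600,                              // max execution time in seconds
    (err) => {
        if (err) console.error('could not schedule job:', err.message);
        else console.log('job scheduled');
    });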

Processing

The selected cloud platform should allow you to create an adaptable processing pipeline. With IoT Hub, the telemetry is made available to downstream components with a multi-consumer queue. An example is shown below:

An adaptable processing pipeline

It is relatively easy to plug in new downstream components or modify existing ones. As an example, Microsoft recently made Time Series Insights available, which uses an IoT Hub or an Event Hub as input. In a recent blog post, I already described that service. Even if you already have an existing pipeline, it is simple to plug in Time Series Insights and start using it to analyze your data.

IoT Hub Device Twin and MQTT

When you connect to IoT Hub with MQTT directly, you need to connect with a ClientId, a username and a password. Those three values need to be set according to the Azure IoT Hub specifications:

  • ClientId: use the IoT Hub deviceId
  • Username: use {iothubhostname}/{deviceId}/api-version=2016-11-14
  • Password: use a SAS token

When you connect with MQTT, you will notice it also works if you just use {iothubhostname}/{deviceId} as the username. You will be able to send telemetry to the devices/{deviceId}/messages/events/ topic and receive cloud-to-device messages by subscribing to the devices/{deviceId}/messages/devicebound/# topic.
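
As a sketch, connecting and sending telemetry with the mqtt package from npm could look like this; the hostname, device ID and SAS token are placeholders, and note the api-version in the username, which matters for Device Twins as described below:

const mqtt = require('mqtt');

const hub = 'yourhub.azure-devices.net';    // placeholder IoT Hub hostname
const deviceId = 'dev1';                    // placeholder device ID

const client = mqtt.connect(`mqtts://${hub}:8883`, {
    clientId: deviceId,
    username: `${hub}/${deviceId}/api-version=2016-11-14`,
    password: process.env.SAS_TOKEN,        // a device SAS token
    protocolVersion: 4                      // MQTT 3.1.1
});

client.on('connect', () => {
    // receive cloud-to-device messages
    client.subscribe(`devices/${deviceId}/messages/devicebound/#`);

    // send telemetry
    client.publish(`devices/${deviceId}/messages/events/`,
        JSON.stringify({ temperature: 21.5 }));
});

client.on('message', (topic, message) => {
    console.log(topic, message.toString());
});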

With MQTT, you can also update a reported property in the Device Twin. You should do that as follows:

  • Subscribe to $iothub/twin/res/# to receive a message after you report a property; the message indicates success or failure, such as a 204 status when a property is updated
  • Send a message to topic $iothub/twin/PATCH/properties/reported/?$rid={rid} with the properties in the JSON payload; {rid} is a value you set to match the response with the message you sent

If I want to set a property called freeRam, I would send the following message to topic $iothub/twin/PATCH/properties/reported/?$rid={rid}:

{ "freeRam": 27364 }

Although this is easy enough, do not make the same mistake as I did: be sure to include the api-version=2016-11-14 in the MQTT username. If you don't, IoT Hub will disconnect your client because Device Twins are only supported in recent incarnations of IoT Hub. Took me a few hours to troubleshoot… 😉
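
Continuing the raw MQTT sketch from above, reporting the freeRam property once connected comes down to a subscribe and a publish:

// subscribe first so we receive the result of the PATCH
client.subscribe('$iothub/twin/res/#');

// report freeRam; $rid is an arbitrary value you choose to correlate the response
client.publish('$iothub/twin/PATCH/properties/reported/?$rid=1',
    JSON.stringify({ freeRam: 27364 }));

// on success, a message arrives on a topic like $iothub/twin/res/204/?$rid=1&$version=3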

You can test all this from a client such as MQTT.fx. Install that client and, in the settings, add a new connection profile. In the profile, specify the IoT Hub hostname as the broker address, set the port to 8883 and set the client ID to a device ID that exists in your IoT Hub. Also set the MQTT version to 3.1.1 specifically. In User Credentials, specify the username and password and do not forget the api-version. In SSL/TLS, enable SSL/TLS. Note: use Device Explorer to create a SAS token for your device from the Management tab.

Next, subscribe to $iothub/twin/res/#:

Subscribing to $iothub/twin/res/# in MQTT.fx

Then, report the freeRam property from the device like so (on topic $iothub/twin/PATCH/properties/reported/?$rid={rid}, where you set {rid} to any value):

Publishing the reported property in MQTT.fx

Note: to delete a property, send a null value.

In Subscribe, you will get the result of the PATCH operation, which mentions the {rid} you specified and also reports the version, which indicates the number of times the property was changed. Also notice the status of 204, which means the property was updated.

The result of the PATCH operation on the res topic

By the way, if you want to retrieve the twin properties, just send an empty message to $iothub/twin/GET/?$rid={rid}. The result will be the desired and reported properties of the Device Twin in JSON, as the screenshot below shows.
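
In the raw MQTT sketch from earlier, that is a single publish; the response arrives on the $iothub/twin/res/# subscription:

// request the full Device Twin; set the rid to any value, e.g. 2
client.publish('$iothub/twin/GET/?$rid=2', '');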

The full Device Twin returned on the res topic

In the Azure Portal:

The Device Twin in the Azure Portal

Hope this helps when trying to work with Device Twins from a device with MQTT directly (and not the IoT Hub Device SDKs)!
