On May 24, 2017 I gave a short partner session at Techorama, a technology event in Belgium for both developers and IT Pros. You can find the slides on SlideShare:
Since it was a short session and a short slide deck, this post provides a bit more background information.
First, what do I mean by Adaptable IoT? Basically, an IoT solution should be adaptable at two levels:
- The IoT platform: use a platform that can be easily adapted to new conditions such as changed business needs or higher scaling requirements; a platform that allows you to plug in new services
- The application you write on the platform: use a flexible architecture that can easily be changed according to changing business needs; and no, that does not mean you have to use microservices
The presentation mainly focuses on the first point, which deals with the platform aspects that should be adaptable end-to-end at the following levels:
- Devices and edge: devices should not be isolated in the field; as a base requirement, provide a two-way communication channel, a way to update firmware, and robust device code
- Ingestion and management: with most platforms, the service used for ingestion of telemetry also provides management
- Processing: the platform should be easy to extend with extra processing steps with limited impact on the existing processing pipeline
- Storage: the platform should provide flexible storage options for both structured and unstructured data
- Analytics: the platform should provide both descriptive and predictive analytics options that can be used to answer relevant business questions
Before continuing, note that this post focuses on Microsoft Azure with its Azure IoT Suite. The concepts laid out in this post can apply to other platforms as well!
Devices and Edge
There is a lot to say about devices and edge. What we see in the field is that most tend to think that the devices are the easy part. In fact, devices tend to be the most difficult part of an end-to-end IoT solution. Prototyping is easy because you can skip many of the hard parts you encounter in production:
- Use Arduino or platforms such as particle.io: they are easy to use, but they do not give you full access to the underlying hardware, and speed might be an issue
- To demonstrate that it works, you can use simple and cheap sensors. But do they work in the long run? What about calibration?
- You can use any library you find on the net but stability and accuracy might be an issue in production and even in the prototyping phase!
- You can store secrets to connect to your back-end application directly in the sketch. In production, however, you will need to store them securely.
- Using TLS for secure connections is easy, provided the hardware and libraries support it. But what about certificate checks and expiry of root and leaf certificates?
- You can just use WiFi because it is easy and convenient.
When you move to production and you want to create truly adaptable devices, you will need to think about several things:
- Drop Arduino and move to C/C++ directly on the metal; heck, maybe you even have to throw in some assembler depending on the use case (though I hope not!); your focus should be on stability, speed and power usage.
- Provide two-way communications so that devices can send telemetry and status messages to the back-end and the back-end can send messages back.
- Make sure you can send messages to groups of devices (e.g. based on some query)
- Provide a firmware update mechanism. Easier said than done!
- Make sure the device is secure. Store secrets in a crypto chip.
- Use stable and supported libraries such as the Azure IoT device SDK for C
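One ingredient of robust device code is never hammering the back-end after a connection failure. As a minimal sketch (the function name and parameters are my own, not part of any SDK), here is an exponential backoff schedule with optional jitter in Python:

```python
import random

def backoff_delays(max_retries=5, base=1.0, cap=60.0, jitter=0.0):
    """Compute an exponential backoff schedule (in seconds) for reconnects.

    Each retry waits roughly twice as long as the previous one, up to a
    cap; optional random jitter avoids synchronized retries across a fleet.
    """
    delays = []
    for attempt in range(max_retries):
        delay = min(cap, base * (2 ** attempt))
        delay += random.uniform(0, jitter)  # jitter=0 keeps it deterministic
        delays.append(delay)
    return delays
```

The same idea applies whatever protocol or library the device uses: stability comes from handling the failure path, not just the happy path.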
Take into account that many devices will not be able to connect to your back-end directly, requiring a gateway at the edge. The edge should be adaptable as well, with options to do edge processing beyond merely relaying messages. What are some of those additional edge features?
- Inference based on a machine learning algorithm trained in the cloud (e.g. anomaly detection)
- Aggregation of data (e.g. stream processing with windowing)
- Launch compute tasks based on conditions (e.g. launch an Azure Function when an anomaly is detected)
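To make the second point concrete, here is a minimal Python sketch of tumbling-window aggregation, the kind of stream processing an edge gateway can do before forwarding data to the cloud (the function is illustrative, not part of Azure IoT Edge):

```python
from statistics import mean

def tumbling_windows(samples, window_seconds):
    """Group (timestamp, value) samples into fixed, non-overlapping windows
    and emit (window_start, average) pairs.

    Sending one aggregate per window instead of every raw sample reduces
    bandwidth and message counts at the edge.
    """
    windows = {}
    for ts, value in samples:
        start = ts - (ts % window_seconds)  # align to window boundary
        windows.setdefault(start, []).append(value)
    return [(start, mean(values)) for start, values in sorted(windows.items())]
```

A real edge module would of course run this incrementally over a live stream rather than over a finished list, but the windowing logic is the same.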
Ideally, the edge components are developed and tested in the cloud and then exported to the edge. Azure IoT Edge provides that functionality and uses containers to encapsulate the functionality described above.
Ingestion and Management
The central service in the Azure IoT Suite for ingestion and management is Azure IoT Hub. It is highly scalable and makes your IoT solution adaptable by providing configuration and reporting mechanisms for devices. The figure below illustrates what is possible:
Device Twin functionality provides you with several options to make the solution adaptable and highly configurable:
- From the back-end, you set desired properties that your devices can pick up. For instance, set a reporting interval to instruct the device to send telemetry more often
- From the device, you send reported properties like battery status or available memory so you can act accordingly (e.g. send the user an alert to charge the device)
- From the back-end, set tags to group devices (e.g. set the device location such as building, floor, room, etc…)
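On the device side, applying a desired-properties patch amounts to merging it into the twin's desired section; setting a property to null removes it. A minimal Python sketch of that merge (my own helper, not SDK code) looks like this:

```python
def apply_desired_patch(twin, patch):
    """Merge a desired-properties patch into a twin dict, the way a device
    applies a patch pushed from the back-end.

    A value of None deletes the property, mirroring twin patch semantics
    where null removes a key.
    """
    desired = twin.setdefault("desired", {})
    for key, value in patch.items():
        if value is None:
            desired.pop(key, None)
        else:
            desired[key] = value
    return twin
```

After applying the patch, the device would typically act on the new values (e.g. change its reporting interval) and send back reported properties to confirm.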
In a previous post, I already talked about setting desired properties with Device Twins; today, you need to use the MQTT protocol to make this work. You can use MQTT directly or via one of the Azure Device SDKs, where the protocol can simply be set as configuration.
The concept of jobs makes the solution even more adaptable since desired properties can be set on a group of devices using a query. By creating a query like ‘all devices where tag.building=buildingX’, you can set a desired property like the reporting interval on hundreds of devices at once.
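The essence of a job is a query over device tags followed by a fan-out of a twin update. The sketch below simulates that with plain Python dicts (the helper names and the in-memory fleet are assumptions for illustration; real jobs run server-side in IoT Hub):

```python
def schedule_desired_update(devices, tag, tag_value, prop, new_value):
    """Apply a desired-property change to every device whose tag matches,
    the way a job fans out a twin update over a query result.

    Returns the ids of the devices that were updated.
    """
    updated = []
    for device in devices:
        if device.get("tags", {}).get(tag) == tag_value:
            device.setdefault("desired", {})[prop] = new_value
            updated.append(device["id"])
    return updated
```

With hundreds of devices tagged per building, one such call replaces hundreds of individual twin updates.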
Processing
The selected cloud platform should allow you to create an adaptable processing pipeline. With IoT Hub, the telemetry is made available to downstream components with a multi-consumer queue. An example is shown below:
It is relatively easy to plug in new downstream components or modify existing ones. As an example, Microsoft recently made Time Series Insights available, which uses an IoT Hub or an Event Hub as input. In a recent blog post, I already described that service. Even if you already have an existing pipeline, it is simple to plug in Time Series Insights and start using it to analyze your data.
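What makes this plug-in story work is that each downstream component reads the stream through its own cursor, so adding a consumer never disturbs the others. A toy Python model of a multi-consumer queue (a simplification of how Event Hubs consumer groups behave, ignoring partitions and retention limits) illustrates the idea:

```python
class MultiConsumerQueue:
    """Each consumer group keeps its own read position into the same event
    log, so a newly added component can catch up on retained events while
    existing readers continue undisturbed."""

    def __init__(self):
        self.events = []
        self.cursors = {}  # consumer group name -> next index to read

    def publish(self, event):
        self.events.append(event)

    def register(self, group):
        # New groups start at the beginning of the retained log.
        self.cursors.setdefault(group, 0)

    def read(self, group):
        start = self.cursors[group]
        batch = self.events[start:]
        self.cursors[group] = len(self.events)
        return batch
```

In this model, registering Time Series Insights as a new consumer after Stream Analytics is already running simply adds a second cursor; neither reader affects the other's position.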