Managed Identity on Azure Arc Servers


When you install the Azure Arc agent on any physical or virtual server, either Windows or Linux, the machine suddenly starts living in a cloud world:

  • it appears in the Azure Portal
  • you can apply resource tags
  • you can check for security and regulatory compliance with Azure Policy
  • you can enable Update management
  • and much, much more…

Check Microsoft’s documentation about Azure Arc-enabled servers to find out more. Below is a screenshot of such an Azure Arc-enabled Windows Server 2019 machine running on-premises with Insights enabled (on my laptop 😀):

Azure Arc-enabled Windows Server 2019

A somewhat lesser-known feature of Azure Arc is that these servers also get a system-assigned managed identity (formerly known as Managed Service Identity or MSI). After you have installed the Azure Arc agent, which normally installs to Program Files\AzureConnectedMachineAgent, two environment variables are set:

  • IMDS_ENDPOINT=http://localhost:40342
  • IDENTITY_ENDPOINT=http://localhost:40342/metadata/identity/oauth2/token

IMDS stands for Instance Metadata Service. On a regular Azure virtual machine, this service listens on the non-routable IP address of 169.254.169.254. On the virtual machine, you can make HTTP requests to that IP address without any issue. The traffic never leaves the virtual machine.

On an Azure Arc-enabled server, which can run anywhere, using the non-routable IP address is not feasible. Instead, the IMDS listens on a port on localhost as indicated by the environment variables.

The service can be used for all sorts of things. For example, I can make the following request (PowerShell):

Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -Uri http://localhost:40342/metadata/instance?api-version=2020-06-01 | ConvertTo-Json

The result will be a JSON structure with most of the fields empty. That is not surprising since this is not an Azure VM and most fields are Azure-related (vmSize, fault domain, update domain, …). But it does show that the IMDS works, just like on a regular Azure VM.

Although there are many other things you can do, one of its most useful features is providing you with an access token to access Azure Resource Manager, Key Vault, or other services.

There are many ways to obtain an access token. The documentation contains an example in PowerShell that uses the environment variables and Invoke-WebRequest to get a token for https://management.azure.com.

A common requirement is code that needs to retrieve secrets from Azure Key Vault. Now that we know we can acquire a token via the IMDS, let’s see how we can do this with the Azure SDK for Python, which has full support for the IMDS on Azure Arc-enabled machines. The code below does the trick:

from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# uses the managed identity of the Azure Arc-enabled server via the local IMDS
credentials = ManagedIdentityCredential()

# connect to the Key Vault and read the secret called "notsecret"
secret_client = SecretClient(vault_url="https://gebakv.vault.azure.net", credential=credentials)
secret = secret_client.get_secret("notsecret")
print(secret.value)

Of course, you need Python installed with the following packages (use pip install):

  • azure-identity
  • azure-keyvault (or just azure-keyvault-secrets, which is what the code above uses)

Yes, the above code is all you need to use the managed identity of the Azure Arc-enabled server to authenticate to Key Vault and obtain the secret called notsecret. The functionality that makes the Python SDK work with Azure Arc can be seen here.

Of course, you need to make sure that the managed identity has the necessary access rights to Key Vault:

Managed Identity has Get permissions on Secrets

I have not looked at MSI Azure Arc support in the other SDKs but the Python SDK sure makes it easy!

Multi-Tier Bitnami Grafana Stack on Azure

After seeing some tweets about Bitnami’s multi-tier Grafana Stack, I decided to give it a go. On the page describing the Grafana stack, there are several deployment offerings:

Grafana deployment offerings (Image: from Bitnami website)

I decided to use the multi-tier deployment, which deploys multiple Grafana nodes and a shared Azure Database for MariaDB.

On Azure, the Grafana stack is deployed via an Azure Resource Manager (ARM) template. You can easily find it via the Azure Marketplace:

Grafana multi-tier in Azure Marketplace

From the above page, click Create to start deploying the template. You will get a series of straightforward questions such as the resource group, the Grafana admin password, MariaDB admin password, virtual machine size, etc…

It will take about half an hour to deploy the template. When finished, you will find the following resources in the resource group you chose or created during deployment:

Deployed Grafana resources

Let’s take a look at the deployed resources. The database back-end is an Azure Database for MariaDB server. The deployment uses a General Purpose, 2 vCore, 50GB database. The monthly cost is around €130.

The Grafana VMs are Standard D1 v2 virtual machines (this can be changed). These two machines cost around €100 per month. By default, these virtual machines have a public IP that allows SSH access on port 22. To log on, use the password or public key you configured during deployment.

To expose the Grafana portal, Bitnami uses an Azure Application Gateway in the Standard tier (not WAF) with the Medium SKU size and three instances. The monthly cost for this setup is around €140.

The public IP address of the front-end can be found in the list of resources (e.g. in my case, mygrafanaagw-ip). The IP address will have an associated DNS name in the form of
mygrafanaRANDOMTEXT-agw-dns.westeurope.cloudapp.azure.com. Simply connect to that URL to access your Grafana instance:

Grafana instance (after logging on, showing a simple dashboard)

Naturally, you will want to access Grafana over SSL. That is something you will need to do yourself. For more information see this link.

It goes without saying that the template only takes care of deployment. Once deployed, the infrastructure is your responsibility: security, backup, patching, etc.

Note that the template does not allow you to easily select the virtual network to deploy to. By default, the template creates a virtual network with address space 10.0.0.0/16. If you have some ARM templating skills, you can download the template right after validation but before deployment and modify it:

Downloading the template for modification

Conclusion

Setting up a multi-tier Grafana stack with Bitnami is very easy. Note that the cost of this deployment is around €370 per month though. Instead of deploying and managing Grafana yourself, you can also take a look at hosted offerings such as Grafana Cloud or Aiven Grafana.

Adaptable IoT

On May 24, 2017 I gave a short partner session at Techorama, a technology event in Belgium for both developers and IT Pros. You can find the slides on SlideShare.

Since it was a short session and a short slide deck, this post provides a bit more background information.

First, what do I mean by Adaptable IoT? Basically, an IoT solution should be adaptable at two levels:

  1. The IoT platform: use a platform that can be easily adapted to new conditions such as changed business needs or higher scaling requirements; a platform that allows you to plug in new services
  2. The application you write on the platform: use a flexible architecture that can easily be changed according to changing business needs; and no, that does not mean you have to use microservices

The presentation mainly focuses on the first point, which deals with the platform aspects that should be adaptable end-to-end at the following levels:

  • Devices and edge: devices should not be isolated in the field which means you should provide a two-way communication channel, a way to update firmware and write robust device code as a base requirement
  • Ingestion and management: with most platforms, the service used for ingestion of telemetry also provides management
  • Processing: the platform should be easy to extend with extra processing steps with limited impact on the existing processing pipeline
  • Storage: the platform should provide flexible storage options for both structured and unstructured data
  • Analytics: the platform should provide both descriptive and predictive analytics options that can be used to answer relevant business questions

Before continuing, note that this post focuses on Microsoft Azure with its Azure IoT Suite. The concepts laid out in this post can apply to other platforms as well!

Devices and Edge

There is a lot to say about devices and edge. What we see in the field is that most tend to think that the devices are the easy part. In fact, devices tend to be the most difficult part in an end-to-end IoT solution. Prototyping is easy because you can skip many of the hard parts you encounter in production:

  • Use Arduino or platforms such as particle.io: they are easy to use but do not give you full access to the underlying hardware and speed might be an issue
  • To demonstrate that it works, you can use simple and cheap sensors. But do they work in the long run? What about calibration?
  • You can use any library you find on the net but stability and accuracy might be an issue in production and even in the prototyping phase!
  • You can store secrets to connect to your back-end application directly in the sketch. In production however, you will need to store them securely.
  • Using TLS for secure connections is easy, provided the hardware and libraries support it. But what about certificate checks and expiry of root and leaf certificates?
  • You can just use WiFi because it is easy and convenient.

When you move to production and you want to create truly adaptable devices, you will need to think about several things:

  • Drop Arduino and move to C/C++ directly on the metal; heck, maybe you even have to throw in some assembler depending on the use case (though I hope not!); your focus should be on stability, speed and power usage.
  • Provide two-way communications so that devices can send telemetry and status messages to the back-end and the back-end can send messages back.
  • Make sure you can send messages to groups of devices (e.g. based on some query)
  • Provide a firmware update mechanism. Easier said than done!
  • Make sure the device is secure. Store secrets in a crypto chip.
  • Use stable and supported libraries such as the Azure IoT device SDK for C

Take into account that many devices will not be able to connect to your back-end directly, requiring a gateway at the edge. The edge should be adaptable as well, with options to do edge processing beyond merely relaying messages. What are some of those additional edge features?

  • Inference based on a machine learning algorithm trained in the cloud (e.g. anomaly detection)
  • Aggregation of data (e.g. stream processing with windowing)
  • Launch compute tasks based on conditions (e.g. launch an Azure Function when an anomaly is detected)

Ideally, the edge components are developed and tested in the cloud and then exported to the edge. Azure IoT Edge provides that functionality and uses containers to encapsulate the functionality described above.

Ingestion and management

The central service in the Azure IoT Suite for ingestion and management is Azure IoT Hub. It is highly scalable and makes your IoT solution adaptable by providing configuration and reporting mechanisms for devices. The figure below illustrates what is possible:

IoT Hub capabilities

Device Twin functionality provides you with several options to make the solution adaptable and highly configurable:

  • From the back-end, you set desired properties that your devices can pick up. For instance, set a reporting interval to instruct the device to send telemetry more often
  • From the device, you send reported properties like battery status or available memory so you can act accordingly (e.g. send the user an alert to charge the device)
  • From the back-end, set tags to group devices (e.g. set the device location such as building, floor, room, etc…)

In a previous post, I already talked about setting desired properties with Device Twins and that today, you need to use the MQTT protocol to make this work. You can use the MQTT protocol directly or as part of one of the Azure Device SDKs where the protocol can simply be set as configuration.

The concept of jobs makes the solution even more adaptable since desired properties can be set on a group of devices using a query. By creating a query like ‘all devices where tag.building=buildingX’, you can set a desired property like the reporting interval on hundreds of devices at once.

Processing

The selected cloud platform should allow you to create an adaptable processing pipeline. With IoT Hub, the telemetry is made available to downstream components with a multi-consumer queue. An example is shown below:

Example processing pipeline

It is relatively easy to plug in new downstream components or modify existing ones. As an example, Microsoft recently made Time Series Insights available, which uses an IoT Hub or an Event Hub as input. In a recent blog post, I already described that service. Even if you already have an existing pipeline, it is simple to plug in Time Series Insights and start using it to analyze your data.

Controlling Sonos from a Particle Photon using a Sonos API on a Pi 3

In the previous article, Control Sonos with an easy to use API, we configured a Docker container on a Raspberry Pi 3 to run an easy-to-use Sonos API. I prefer this solution over writing code on the Photon to control Sonos. Now it is time to let the Photon talk to the API on the Pi 3 to load a playlist and start playing, or to stop playing, at the press of a button.

Just create a new app with the Particle Build IDE and call the app SonosCtrl. Then add the following library: HttpClient. After adding the library, make sure you have the following includes:

Includes for the HttpClient library

To actually use HttpClient to make requests to the Sonos API, you will need some variables of specific types:

HttpClient request, response and header variables

You will use the request variable to configure the request. When you configure request, you will need to specify a hostname or an IP address. I used the IP address of my RPi 3 (SonosController above).

To configure request:

Configuring the request in setup()

The above just sets the port and IP address for the request. We do this in the setup() function. When we press a button, we toggle between playing from a playlist or pausing the Sonos:

Toggling between loading a playlist and pausing in loop()
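Putting those pieces together, a minimal sketch could look like the one below. Treat it as an illustration only: the IP address, port, player name, playlist name and button pin are just examples to adapt to your own setup, and the exact API paths may differ (check the node-sonos-http-api README); the full code is in the Gist linked at the end of this post.

#include <HttpClient.h>

HttpClient http;
http_request_t request;
http_response_t response;
http_header_t headers[] = {
    { "Accept", "*/*" },
    { NULL, NULL }                                 // always terminate the header list with NULL
};

int buttonPin = D0;                                // Grove button on digital port D0
bool playing = false;                              // toggle state

void setup() {
    pinMode(buttonPin, INPUT);
    request.ip = IPAddress(192, 168, 1, 10);       // IP address of the Raspberry Pi 3
    request.port = 5005;                           // port exposed by the Sonos API container
}

void loop() {
    if (digitalRead(buttonPin) == HIGH) {
        if (!playing) {
            request.path = "/Living%20Room/playlist/car";   // load the car playlist and start playing
        } else {
            request.path = "/Living%20Room/pause";          // pause the Living Room player
        }
        http.get(request, response, headers);
        playing = !playing;
        delay(500);                                // crude debounce
    }
}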

By setting the request path appropriately, we can easily load a Sonos playlist or pause playback. See the GitHub page at https://github.com/jishi/node-sonos-http-api for more paths to use. There is much more you can do! Above, we target a specific Sonos player (Living Room). As you can see, this is very simple to do and keeps the Particle Photon code cleaner. The code is kept pretty simple, so there is no error handling, logging, etc. You can find the full code in the following Gist: https://gist.github.com/gbaeke/9c185e82e7f23c0c4c9d803990d3660f. Have fun!!!

Control Sonos with an easy to use API

In an earlier post, Controlling Sonos from a Particle Photon, we created a small app to do just that. The app itself contained some C++ code to interact with a Sonos player on your network. Although the code works, it does not provide you with full control over your Sonos player and it’s tedious to work with.

Wouldn’t it be great if you had an API at your disposal that is both easy to use and powerful? And even better, has Sonos discovery built-in so that there is no need to target Sonos players by their IP? Well, look no further as something like that exists: https://github.com/jishi/node-sonos-http-api. The Sonos HTTP API is written in Node.js which makes it easy to run anywhere!

And I do mean ANYWHERE!!! I wanted to run the API as a Docker container on my Raspberry Pi 3, which is very easy to do. The only real preparation on the Raspberry Pi was installing Docker.

With Docker up and running, I created a Dockerfile and built the image. Here is the Dockerfile:

FROM hypriot/rpi-node
RUN git clone -q https://github.com/jishi/node-sonos-http-api.git
WORKDIR node-sonos-http-api
RUN npm install > /dev/null
EXPOSE 5005
CMD ["npm","start"]

Note: a Raspberry Pi uses an ARM architecture which means you need to use ARM compatible images; above I used hypriot/rpi-node (see https://hub.docker.com/r/hypriot/rpi-node/)

Note 2: I’m sure there already is a Docker image for this Sonos API; I just decided to build it myself

After building the image, I tagged it sonosctrl (using docker tag). You will see the tag of this image coming back later when we run the container.

Because the API server needs to discover the Sonos devices on the network, you should not use the Docker bridge network. The command to run the container from the sonosctrl image:

docker run --net=host --restart=always -d --name SonosController sonosctrl

Now you should have a container called SonosController up and running that accepts API requests to control your Sonos:

The SonosController container up and running

Note: you also see Portainer running above; I use that to get an easy GUI for Docker on this Pi

To actually test the API, use Postman or cURL. From Postman:

Postman request to load the car playlist on the Living Room player

Above, you see a request to load the Sonos playlist called “car” on players in “Living Room”. The request was successful as can be seen in the response. This command will also start playing songs from the playlist right away. If you want to pause playing:

Postman request to pause playback

Great! We have a Sonos API running on a Raspberry Pi as a Docker container with a few simple steps. We can now more easily send commands to Sonos from devices like the Particle Photon or an Arduino. I will show you how to do that from a Particle Photon using the HttpClient library in a later article.

Temboo, Twilio and Nexmo: SMS and voice messages from your IoT device

In this post, I will provide an overview of how to use Twilio and Nexmo to send SMSs and voice messages directly from your device. I will use a Particle Photon but this also works from an Arduino, or a Raspberry Pi or basically any other system. The reason for this is that I will also use Temboo, an easy to use service that basically provides a uniform way to call a wide variety of APIs and even helps you with a code builder.

I will use the same basic sketch from earlier examples. This means there is a photoresistor which measures the amount of light, but also a button that will trigger the calls to Temboo to send an SMS and a voice message with the current sensor value from the photoresistor.

Let’s get started shall we? You will first need accounts for all three services so go ahead and sign up. They all have free accounts to get started but remember they are all paying services. It’s up to you to decide how useful you find these services.

For Temboo, you will need to provide the account name, app key name and app key. Sadly, in the free Temboo tier, this key is only valid for a month and you will need to change it manually. I added these values as #defines in a header file called TembooAccount.h. Be sure to use #include “TembooAccount.h” in your .ino file. The contents of TembooAccount.h:

Contents of TembooAccount.h
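The header holds nothing more than a few #defines, along these lines (the macro names follow Temboo’s usual conventions, so double-check them against the code Temboo generates for your account):

#define TEMBOO_ACCOUNT "your-account-name"          // your Temboo account name
#define TEMBOO_APP_KEY_NAME "your-app-key-name"     // the name of your Temboo app key
#define TEMBOO_APP_KEY "your-app-key"               // the app key itself (expires monthly on the free tier)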

In your .ino file, we’ll create two functions:

  • void runSendSMS(String body)
  • void runSendVoice(String body)

When you want to send an SMS or send a voice message, you call the appropriate function with the message you want to send or the text you want translated to speech.

The contents of the function is easy to write because you don’t have to. Temboo provides a code generator for you. When you are logged in, just go to https://temboo.com/library/ and select the Choreo you want to use. For the SMS, you select Twilio / SMSMessages / SendSMS. You will now be asked for parameters for the Choreo:

Input parameters for the Twilio SendSMS Choreo

After providing all the inputs, Temboo generates the code for you, and you can pick and choose what you need. You can find an example for SMS and Voice in the following gist: https://gist.github.com/gbaeke/15596e3e2d185eb11720c965ab33e179. The voice Choreo uses Nexmo / Voice / TextToSpeech. Tip: Nexmo can also take input from your phone (like press ‘1’ to turn on sprinklers) and send it back to your device!

To actually fire off the SMS and voice message, we’ll do that when the button is pressed:

Calling runSendSMS and runSendVoice when the button is pressed
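For illustration, the trigger logic could look something like this; the pin assignments are just examples based on the earlier posts, and runSendSMS() and runSendVoice() are the Temboo helper functions described above:

int buttonPin = D0;                     // Grove button
int photoPin = A0;                      // photoresistor

void setup() {
    pinMode(buttonPin, INPUT);
}

void loop() {
    if (digitalRead(buttonPin) == HIGH) {
        int lightValue = analogRead(photoPin);                  // current sensor value
        String message = "Current light level: " + String(lightValue);
        runSendSMS(message);                                    // Twilio / SMSMessages / SendSMS Choreo
        runSendVoice(message);                                  // Nexmo / Voice / TextToSpeech Choreo
        delay(1000);                                            // crude debounce so one press fires once
    }
}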

As you can see, Temboo and the APIs it exposes as Choreos make it really easy to work with all sorts of APIs. I have only used Twilio and Nexmo here but there are many others. Again, these are all paying services and the lowest Temboo tier is quite pricey for home users. If you find it a bit too pricey, you can always use the Particle IFTTT integration to achieve similar results.

Controlling Sonos from a Particle Photon

Now for something fun! Let’s control a Sonos from a Particle Photon and a connected button. I connected a Grove Button to the Particle with simple male-to-female wires. The SIG line on the button should go to a digital port (D0 in my case). When the button is pressed, the port will read HIGH and otherwise LOW.

Controlling Sonos is another matter though. Sonos should really make simple APIs available and/or provide access through IFTTT and similar services. Until they do that, you will need to control Sonos the hard way, by connecting directly to it from the Particle and sending commands over their HTTP interface. Luckily, the people from Hover Labs have some code on GitHub that you can build upon. I simply copied their code into my Particle app and removed references to the Hover device. By the way, the Hover is a cool device in its own right that you should definitely check out as well!

Part of the loop() code that checks for a button press

In the above snippet, you see part of the loop() code that checks for a button press. Since we want to toggle between Sonos PLAY and PAUSE, there’s some code for that. The hard work is done by the sonos() function which takes commands like PLAY, PAUSE, NEXT, PREVIOUS. You can check out the full code in the following gist: https://gist.github.com/gbaeke/240fb221204ff828dec06150014ec5fd. Note that the code also contains the LED and photoresistor code from earlier examples. The Sonos control is also very basic as it only implements PLAY and PAUSE, so you need something in the queue. But at least you have a start to create more complex interactions.
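The toggle logic itself boils down to something like the sketch below. This is an illustration only: the pin is just an example and sonos() is the helper from the Hover Labs code (see the gist above for the real thing):

int buttonPin = D0;                     // Grove button on D0
bool playing = false;                   // are we currently playing?

void setup() {
    pinMode(buttonPin, INPUT);
}

void loop() {
    if (digitalRead(buttonPin) == HIGH) {
        if (playing) {
            sonos("PAUSE");             // pause the current queue
        } else {
            sonos("PLAY");              // play from the current queue
        }
        playing = !playing;
        delay(500);                     // crude debounce
    }
}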

You could also create a Particle Function that executes the Sonos code which would enable you to control your Sonos from the cloud and even connect this with other services via IFTTT. For instance, you could start playing your Sonos when you are arriving home.

Have fun controlling Sonos from your Particle!!!

Particle and Azure IoT Hub: forward events for storage and analysis

In a previous post about publishing events with Particle, you saw how to publish custom events to the Particle Cloud. Other devices or applications can subscribe to these events and act upon them. What if you want to do more and connect these events to custom applications? In that case, Particle has a couple of integrations that might help:

Available Particle integrations

In this post, I will take a look at Azure IoT Hub integration which, at the moment of writing, is still in beta. Note that this integration works with events you publish from your device with Particle.publish and not with Particle Variables or Functions. Remember that in the post about events, we published a lights on and lights out event. For simplicity, we will build upon those events here.

To configure the IoT Hub integration, you will need a few things:

  • An Azure Subscription so you can logon to the portal at https://portal.azure.com (see https://azure.microsoft.com/en-us/free/ to get started)
  • An IoT Hub that you create from the portal; to get started, use the free tier which allows you to publish 8000 events per day (give or take; depends on message size as well); in the portal, use the + button

An IoT Hub has a name and works with shared access policies and access keys to be able to control the IoT Hub and send messages. To get to the policies, just click Shared Access Policies.

Shared access policies of the IoT Hub

Although considered bad practice, I will use the iothubowner policy which has all required rights. Click iothubowner to view the access keys and note the primary access key. You will need that key in a moment.

In Particle Console, click the Integrations icon and click new integration. In the configuration screen, you will see:

Azure IoT Hub integration configuration in the Particle Console

It’s pretty self-explanatory once you have your IoT Hub created in Azure. Just fill in the required information and note that the event name is the name of the event you have given in the call to Particle.publish. My events are called lights on and lights out and I will use lights as Event Name. This will catch both events!

To test this, the photoresistor was given enough light to fire the events. This is the result when you click on the integration after it was created:

Integration log entries

When you click on one of the log entries, you will see more details:

Details of a log entry

You see the event payload that was sent to IoT Hub plus details about the call to IoT Hub using HTTP POST.

In IoT Hub, you will see a couple of things as well. First of all, the events:

The events as seen in IoT Hub

In the list of devices, you will find a device with the id of the Particle Photon:

Device list in IoT Hub showing the Particle Photon’s device id

Note: Azure IoT Hub requires devices to authenticate but this is taken care of automatically by Particle Cloud

What you do now with these messages is up to you. You can use the new endpoints and routes feature of IoT Hub to forward events to Event Hubs or Service Bus. Or you could connect Stream Analytics to IoT Hub and save your events to Azure Storage, Data Lake, SQL, Document DB or stream the data to a real-time Power BI dashboard.

Note that although an Azure Subscription is free, not all services have free tiers. For instance, IoT Hub has a free tier but Stream Analytics does not. And although IoT Hub’s free tier is great to get started, it can only process a limited number of events. It’s up to you to control the rate of events sent from your devices. For home use or small PoCs you should not run into issues though!

IoT with Particle and Porter

In an earlier article, we took a look at Particle Functions and Variables. We wrote a simple application that can blink an LED with a function and read the value of a photoresistor with a variable.

Although you can easily call the function or read the variable with the Particle CLI or with a REST call (using cURL for instance), you might want an easy web-based experience to work with your device. Porter (http://porterapp.com/) might be the answer!

I’ll quickly describe how Porter works. It’s so easy to use though that it doesn’t need much describing. After signing up and linking to Particle, you can add your devices. In the screenshot below, you can see my device:

My device in the Porter web UI

The cool thing is that Porter automatically finds all your functions and variables and exposes them to you. Using the Customize option, you have some control over the UI elements. In the above screen, I changed the Led function to use on and off buttons instead of the default text input field where you need to type the parameter to the function (on or off). The variable is exposed as well and you can obtain the most recent value with the refresh icon.

Porter also has a mobile app that exposes the same functionality:

The Porter mobile app

You can also work with Particle events. We discussed events in a previous post where we published events based on a threshold of 2000 for the photoresistor value. The events will show up in the Events tab (Web UI shown below):

The Events tab in Porter

Based on these events, you can define all sorts of Actions:

Defining an action in Porter

In the above screen, an action is defined that sends a notification when the lights on event is received. This notification works together with the Porter app on your phone to notify you of the event. Other actions are:

  • Web Request: HTTP PUT or GET with variable request data using tokens such as [data], [time], [device_name] and so on
  • Send an e-mail
  • Send an SMS

Note that Porter is a paying service and that e-mails and SMSs require a specific plan. They have a 30-day trial.

As you can see, it’s very easy to use Porter and for quick access and control of your prototypes, it’s a great service. It’s not very difficult to build a quick web UI for your device yourself, but it all comes down to getting off the ground quickly and focusing on what matters in the early stages.

IoT with Particle: publishing events

In the two previous posts, we discussed setup and talked about triggering actions and reading sensor data. Particle also allows you to publish events. You can subscribe to these events or pass them to other systems such as Azure IoT Hub.

Let’s build on the previous example with the LED and the photoresistor. When we read a high value from the photoresistor (yes, more light) we will publish a lights on event including the value we have read. When we read a low value, we will publish a lights out event.

In code, this is easily done. The setup part:

The setup() code

This is not very different from the earlier post. I added a boolean (true/false) variable called bright to maintain the state (is it bright or not?) and initialise it depending on the amount of light we measure at the start.

In the loop() part:

The loop() code with Particle.publish

Above you see Particle.publish in action. We read the brightness every second. When it was not bright and the new reading is 2000 or higher, we send a lights on event to the Particle Cloud; likewise, a lights out event is sent when it was bright and the reading drops below 2000. This way, you only publish an event when the state changes. Particle.publish takes four parameters:

  • The name of the event
  • The data you want to send along; here it’s the brightness value converted to a string with the built in String class and its constructor which can take an integer and returns it as a string
  • 60 is the TTL (default and cannot be changed for now)
  • PRIVATE: this is a private event that only authorized subscribers can subscribe to

Lastly, we still implement the Particle Function to turn the LED on or off remotely:

The Particle Function to control the LED
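For reference, a minimal version of such a sketch could look like the following. The pin assignments and variable names are just examples; the threshold of 2000, the event names and the LED function come from the text above:

int ledPin = D1;                            // LED
int photoPin = A0;                          // photoresistor
bool bright;                                // current state: is it bright?

int ledToggle(String command) {             // Particle Function to turn the LED on or off
    if (command == "on") {
        digitalWrite(ledPin, HIGH);
        return 1;
    } else {
        digitalWrite(ledPin, LOW);
        return 0;
    }
}

void setup() {
    pinMode(ledPin, OUTPUT);
    Particle.function("led", ledToggle);    // expose the LED control to the cloud
    bright = analogRead(photoPin) >= 2000;  // initialise the state from the first reading
}

void loop() {
    int light = analogRead(photoPin);
    if (!bright && light >= 2000) {
        bright = true;
        Particle.publish("lights on", String(light), 60, PRIVATE);
    } else if (bright && light < 2000) {
        bright = false;
        Particle.publish("lights out", String(light), 60, PRIVATE);
    }
    delay(1000);                            // read the brightness every second
}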

The events can be tracked from the Particle Console:

Events in the Particle Console

The question of course is, what can you do with published events? One course of action is to use these events for communication between your IoT devices. Another Particle device can use Particle.subscribe to subscribe to the events published by other devices. Using Particle.subscribe is very simple and somewhat analogous to a Particle Function. You can find out more about it here: https://docs.particle.io/reference/firmware/photon/#particle-subscribe-
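As a quick illustration, a subscriber on a second Photon could look something like this; the handler name is just an example, and because the events are published as PRIVATE the subscription is scoped to MY_DEVICES:

void lightsHandler(const char *event, const char *data) {
    // event holds the event name ("lights on" or "lights out"), data holds the published value
    if (strcmp(event, "lights on") == 0) {
        // react to the lights coming on, e.g. switch something on as well
    }
}

void setup() {
    // "lights" is a prefix match, so it catches both "lights on" and "lights out";
    // MY_DEVICES limits the subscription to events from your own devices
    Particle.subscribe("lights", lightsHandler, MY_DEVICES);
}

void loop() {
}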

Another course of action is to use Particle’s IFTTT integration to tap into IFTTT’s rich ecosystem of connected services. Particle is one of those services, so just provide IFTTT with credentials to Particle and you are set!

Do know that the published events are not stored by Particle. If you want to do that, one way of achieving this is with the Azure IoT Hub integration. In a later post, I’ll talk more about that.
