Microservices on Kubernetes: a simple example in Go

In the previous post, Getting started with Kubernetes on Azure, we talked about creating a Kubernetes cluster and deploying a couple of services. Those two services are:

  • Data: a service that exposes an endpoint to pick up data for an IoT device; you call it with http://service_endpoint:8080/data/devicename
  • Device: a service that can be used by the Data API to check if a device exists; if the device exists you will see that in the response

When you call the Data service, it calls the Device service using gRPC, with HTTP as the transport protocol. You define the service contract using Protocol Buffers. gRPC works across languages and platforms, so I could have implemented each service in a different language, for example Go for the Device service and Node.js for the Data service. In this example, I decided to use Go in both cases and to use Go Micro, a pluggable RPC framework for microservices. Go Micro uses gRPC and protocol buffers under the hood, with changes specific to Go Micro.

OK, enough talk, let's take a look at how it is done. The Device service is kept extremely simple for an obvious reason: I just started with Go Micro, and then it is best to begin with something simple. I do expect you to know a bit of Go from here on out. All the code can be found at https://github.com/gbaeke/go-device.

Let's start with the Protocol Buffers definition, found in proto/device.proto:

syntax = "proto3";

service DevSvc {
    rpc Get(DeviceName) returns (Device) {}
}

message DeviceName {
    string name = 1;
}

message Device {
    string name = 1;
    bool active = 2;
}

We define one RPC call here that expects a DeviceName message as input and returns a Device message. Simple enough, but this does not get us very far. To actually use this from Go (or another supported language), we generate code from the above definition. You need a couple of things to do that:

  • protoc compiler: download the release for your platform from GitHub
  • protobuf plugins for code generation for Go Micro: run go get github.com/micro/protobuf/{proto,protoc-gen-go} (if you have issues, run two separate go get commands, one for proto and one for protoc-gen-go)

To actually compile the proto file, use the following command:

protoc --go_out=plugins=micro:. device.proto

That compiles device.proto to device.pb.go with help from the micro plugin. You can check the generated code here. Among other things, there are Go structs for the DeviceName and Device message plus several methods you can call on these structs such as Reset() and String().
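To give you an idea of what you get without opening the repo, here is a rough, abbreviated sketch of the kind of code the micro plugin generates (not the literal file; the exact import paths and signatures depend on the protoc-gen-go and Go Micro versions you use):

// Abbreviated sketch of device.pb.go: message structs plus client plumbing.
package device

import (
	"context"

	"github.com/micro/go-micro/client"
)

// Structs generated for the protobuf messages (Reset(), String() etc. omitted here).
type DeviceName struct {
	Name string
}

type Device struct {
	Name   string
	Active bool
}

// Client side: NewDevSvcClient(serviceName, client) returns an implementation
// of this interface that performs the remote call for you.
type DevSvcClient interface {
	Get(ctx context.Context, in *DeviceName, opts ...client.CallOption) (*Device, error)
}

// Server side: your service implements this interface and wires it into the
// Go Micro server with RegisterDevSvcHandler(server, handler).
type DevSvcHandler interface {
	Get(ctx context.Context, req *DeviceName, rsp *Device) error
}

Both NewDevSvcClient and RegisterDevSvcHandler show up again later in this post, when we register the handler and write the client.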

Now for main.go! You’ll need several imports: for the generated code but also for the dependencies to build the service with Go Micro. If you check the code, you will also find the following import:

_ "github.com/micro/go-plugins/registry/kubernetes"

As stated above, Go Micro is a pluggable RPC framework. Out of the box, a microservice written with Go Micro tries to register itself with Consul on localhost for service discovery and configuration. We could run the Consul service in Kubernetes, but Kubernetes supports service registration natively. Kubernetes support is something you add with the import above. That is not enough though! You still need to tell Go Micro to use Kubernetes as the registry, either with the --registry command line parameter or with the MICRO_REGISTRY environment variable. Check the https://github.com/gbaeke/go-device/blob/master/go-device-dep.yaml file, where that environment variable is set. Besides Consul and Kubernetes, there are other alternatives. One of them is multicast DNS (mdns), which is handy when you are testing services on your local machine and don't have something like Consul running.

If you want to check the information that is registered, you can do the following (after running kubectl proxy --port=8080):

curl http://localhost:8080/api/v1/pods | grep micro

Each pod will have an annotation with key micro.mu/service-<servicename> with information about the service such as its name, IP address, port, and much more.

Now really over to main.go, which is pretty self-explanatory. There's a struct called DevSvc with a field called devs, which holds a map of device names (strings) to pointers to Device structs. The DevSvc struct actually defines the service, and you write the RPC calls as methods of that struct. Check out the following code snippet:

// DevSvc defines the service
type DevSvc struct {
	devs map[string]*device.Device
}
func (d *DevSvc) Get(ctx context.Context, req *device.DeviceName, rsp *device.Device) error {
	device, ok := d.devs[req.Name]
	if !ok {
		fmt.Println("Device does not exist")
		return nil
	}

	fmt.Println("Will respond with ", device)

	// copy the device data into the response fields
	rsp.Name = device.Name
	rsp.Active = device.Active

	return nil
}

The Get function implements what was defined in the .proto file earlier. It takes a pointer to a DeviceName struct as input and a pointer to a Device struct as output. The code itself is trivial: it just looks up a device in the map and returns it via rsp.

Of course, this handler needs to be registered, and that happens in the main() function (besides setting up the service and implementing a custom flag):

// register handler and initialise devs map with a list of devices
device.RegisterDevSvcHandler(service.Server(), &DevSvc{devs: LoadDevices()})
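For context, the surrounding main() might look roughly like this (a sketch rather than the literal code from the repo; the flag handling API in particular differs between Go Micro versions, and the import path of the generated package may be different):

package main

import (
	"log"

	device "github.com/gbaeke/go-device/proto/device" // generated code; actual import path may differ
	"github.com/micro/cli"
	micro "github.com/micro/go-micro"
)

func main() {
	// Create the service and declare the custom --run_client flag.
	service := micro.NewService(
		micro.Name("go.micro.srv.device"),
		micro.Flags(cli.BoolFlag{Name: "run_client", Usage: "launch the client instead of the server"}),
	)

	// Parse command line flags; this also picks up --registry / MICRO_REGISTRY.
	var asClient bool
	service.Init(
		micro.Action(func(c *cli.Context) {
			asClient = c.Bool("run_client")
		}),
	)

	if asClient {
		runClient(service)
		return
	}

	// Register handler and initialise the devs map with a list of devices.
	device.RegisterDevSvcHandler(service.Server(), &DevSvc{devs: LoadDevices()})

	// Run the service; it registers itself with the configured registry and starts serving.
	if err := service.Run(); err != nil {
		log.Fatal(err)
	}
}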

If you want to test the service and call it (e.g. on the local machine), clone the repository (or go get it) and run the server as follows:

go run main.go --registry=mdns

In another terminal, run:

go run main.go --registry=mdns --run_client

When you run the code with the --run_client option, the runClient function is called, which looks like this:

func runClient(service micro.Service) {
	// Create new client to call DevSvc service
	DevClient := device.NewDevSvcClient("go.micro.srv.device", service.Client())

	// Call Get to get a device
	rsp, err := DevClient.Get(context.TODO(), &device.DeviceName{Name: "device2"})
	if err != nil {
		fmt.Println(err)
		return
	}

	// Print response
	fmt.Println("Response: ", rsp)
}

This again shows the power of a framework like Go Micro: you create a client for the DevSvc service and then simply perform the remote procedure call with the Get method, passing in a DeviceName struct with the Name field set to the device you want to check. The client uses the service registry to know where and how to connect, and all serialization and deserialization with protocol buffers is handled for you as well.

So great, you now have a little bit more information about the Device service and you know how to deploy it to Kubernetes. In another post, we’ll see how the Data service works and explore some other options to write that service.

Getting started with Kubernetes on Azure

As you may or may not know, at Xylos we have developed an IoT platform to support sensor networks of any kind. The back-end components are microservices running as containers on Rancher, a powerful and easy-to-use container orchestration tool. In the meantime, we are constantly evaluating other ways of orchestrating containers, and naturally, Azure Container Service is one of the options. Recently, Microsoft added support for Kubernetes, so we decided to check that out.

Instead of the default “look, here’s how you deploy an nginx container”, we will walk through an example of an extremely simple microservices application written in Go with the help of go-micro, a microservices toolkit. Now, I have to warn you that I am quite the newbie when it comes to Go and go-micro, so if you have remarks about the code, just let me know. This post will not explain the Go services, however; let’s focus on deploying a Kubernetes cluster first and deploying the finished containers. Subsequent posts will talk about the services in more detail.

With the help of Azure CLI 2.0, deploying Kubernetes could not be simpler. You will find full details about installation on https://docs.microsoft.com/en-us/cli/azure/install-azure-cli. The CLI runs on Windows, Linux and macOS. For this post, I used macOS. If you are a bit unsure about how the Azure CLI works, check out this post: https://docs.microsoft.com/en-us/cli/azure/get-started-with-azure-cli.

After installation, use az login to authenticate and az account set to select the default subscription. After that you are all set to deploy Kubernetes. First, create a resource group for the cluster:

az group create --name=rgname --location=westeurope

After the above command completes (use any name you like for the resource group), use the following command to create a Kubernetes cluster with one master and two agents, using a small virtual machine size to keep costs down while testing.

az acs create --orchestrator-type=kubernetes --resource-group=rgname --name=clustername --generate-ssh-keys --agent-count=2 --master-count=1 --agent-vm-size=Standard_A1_v2

Tip: to know the other virtual machine sizes in a region (like westeurope) use az vm list-sizes --location=westeurope

Note that the az acs command above auto-generates SSH keys. These are used to interact with the cluster, and you can of course supply your own. When you use --generate-ssh-keys, you will find them in the .ssh folder of your home folder (the id_rsa and id_rsa.pub files).

Now you need a way to administer the Kubernetes cluster. You do that with the kubectl command-line tool. Get kubectl with the following command:

az acs kubernetes install-cli

The kubectl tool needs a configuration file that instructs the tool where to connect and the credentials to use. Just use the following command to get this configured:

az acs kubernetes get-credentials --resource-group=rgname --name=clustername

Running the above command creates a config file in the .kube folder of your home folder. In the config file, you will see the https location that kubectl connects to, in addition to user information such as a user name and certificates.
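If you want to inspect that configuration, kubectl can show it to you (certificate data is redacted in the output):

kubectl config view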

Now, as a test, let's deploy the part of the microservices application that exposes a REST API endpoint to the outside world (I call it the data API). To do so, run the following:

kubectl create -f https://raw.githubusercontent.com/gbaeke/go-data/master/go-data-dep.yaml

The above command creates a deployment from a configuration file that makes sure two containers are running, each using the image gbaeke/go-data. Each container runs in its own pod. You can check this like so:

kubectl get pods

You will see something like:

image-2

Run kubectl get deployment to see the deployment. Use kubectl describe deployment dataapi to obtain more details about the deployment.
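Because this is a regular deployment, you can also change the number of pods whenever you like; for example, to go from two to three replicas:

kubectl scale deployment dataapi --replicas=3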

By default, you will not be able to access this API from the outside world. To make it accessible, let's create a service of type LoadBalancer, which will also configure an Azure load balancer automatically (this could have been done from the YAML file as well):

kubectl expose deployments dataapi --port=8080 --type=LoadBalancer

You can check the service with kubectl get service. After a while, when you run that command again, the external IP will appear. You should then be able to hit the service with curl like so:

curl http://IP_of_service:8080/data/device1

No matter what device ID you type at the end, you will always get Device active: false because the device API has not been deployed yet. How the data API talks to the device API and how they use service registration in Kubernetes will be discussed in another post.

Tip: for those who cannot wait, just run kubectl create -f https://raw.githubusercontent.com/gbaeke/go-device/master/go-device-dep.yaml and then use curl again with device1 at the end (it should return true). That command deploys the device API so that the data API can find and use it to check whether a device exists.

Particle and Azure IoT Hub: forward events for storage and analysis

In a previous post about Particle published events, you saw how to publish custom events to the Particle Cloud. Other devices or applications can subscribe to these events and act upon them. But what if you want to do more and connect these events to custom applications? In that case, Particle has a couple of integrations that might help:

image

In this post, I will take a look at the Azure IoT Hub integration which, at the time of writing, is still in beta. Note that this integration works with events you publish from your device with Particle.publish, not with Particle Variables or Functions. Remember that in the post about events, we published a lights on and a lights out event. For simplicity, we will build upon those events here.

To configure the IoT Hub integration, you will need a few things:

  • An Azure subscription, so you can log on to the portal at https://portal.azure.com (see https://azure.microsoft.com/en-us/free/ to get started)
  • An IoT Hub that you create from the portal; to get started, use the free tier, which allows you to publish 8,000 events per day (give or take; it depends on message size as well); in the portal, use the + button (a CLI alternative is shown below the list)
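If you prefer the Azure CLI from the previous posts over the portal, creating the IoT Hub should also work along these lines (the exact syntax and availability of the IoT commands depend on your CLI version; F1 is the free tier):

az iot hub create --resource-group rgname --name hubname --sku F1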

An IoT Hub has a name and works with shared access policies and access keys to be able to control the IoT Hub and send messages. To get to the policies, just click Shared Access Policies.

image

Although it is considered bad practice, I will use the iothubowner policy, which has all required rights. Click iothubowner to view the access keys and note the primary access key. You will need that key in a moment.

In the Particle Console, click the Integrations icon and click New Integration. In the configuration screen, you will see:

image

It’s pretty self-explanatory once you have created your IoT Hub in Azure. Just fill in the required information and note that the event name is the name you gave the event in the call to Particle.publish. My events are called lights on and lights out, and I will use lights as the Event Name. This will catch both events!

To test this, the photoresistor was given enough light to fire the events. This is the result when you click on the integration after it was created:

image

When you click on one of the log entries, you will see more details:

image

You see the event payload that was sent to IoT Hub plus details about the call to IoT Hub using HTTP POST.
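For reference, the payload the integration sends is a small JSON document built from Particle's standard webhook variables, roughly like the one below (the values here are made up for illustration):

{
  "event": "lights on",
  "data": "1",
  "published_at": "2017-05-01T09:12:30.123Z",
  "coreid": "123456789012345678901234"
}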

In IoT Hub, you will see a couple of things as well. First of all, the events:

image

In the list of devices, you will find a device with the id of the Particle Photon:

image

Note: Azure IoT Hub requires devices to authenticate, but this is taken care of automatically by the Particle Cloud.

What you do with these messages now is up to you. You can use the new endpoints and routes feature of IoT Hub to forward events to Event Hubs or Service Bus. Or you could connect Stream Analytics to IoT Hub and save your events to Azure Storage, Data Lake, SQL Database or DocumentDB, or stream the data to a real-time Power BI dashboard.

Note that although an Azure subscription is free, not all services have free tiers. For instance, IoT Hub has a free tier but Stream Analytics does not. And although IoT Hub’s free tier is great to get started with, it can only process a limited number of events. It’s up to you to control the rate of events sent from your devices. For home use or small PoCs you should not run into issues though!

IoT Hub Scaling

When you work with Azure IoT Hub, it is not always easy to tell what will happen when you reach the limits of IoT Hub, and what to do when you do reach those limits. As a reminder, recall that the scale of an IoT Hub is defined by its tier and by the number of units in that tier. Besides the free tier, there are three paid tiers:

image

Although these tiers make it clear how many messages you can send per day, other limits, such as the number of messages per second, cannot be seen here. For an idea of the number of messages you can send and the sustained throughput, see https://azure.microsoft.com/en-us/documentation/articles/iot-hub-scaling/#device-to-cloud-and-cloud-to-device-message-throughput

The specific burst performance numbers can be found here: https://azure.microsoft.com/en-us/documentation/articles/iot-hub-devguide-quotas-throttling/. Typically, the limit you are concerned with is the number of device-to-cloud sends, which is as follows:

  • S1: 12/sec/unit (but you get at least 100/sec in total; not per unit obviously); 10 units give you 120/sec and not 100+120/sec
  • S2: 120/sec/unit
  • S3: 6000/sec/unit

Now suppose you are thinking about deploying 300 devices that send data every half second. What tier should you use, and how many units? It is clear that you need to handle 600 messages per second, so 5 units of S2 will suffice. You could also take 50 units of S1 for the same throughput and price. With 5 units of S2, though, you can send more messages per day.
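If you want to redo that quick calculation for other scenarios, here is a small sketch in Go that uses the device-to-cloud numbers listed above (it ignores the 100/sec minimum of S1 for simplicity):

package main

import (
	"fmt"
	"math"
)

func main() {
	devices := 300.0
	sendsPerSecondPerDevice := 2.0                // one message every half second
	required := devices * sendsPerSecondPerDevice // 600 messages per second

	// Device-to-cloud sends per second per unit, per the limits above.
	perUnit := map[string]float64{"S1": 12, "S2": 120, "S3": 6000}

	for tier, rate := range perUnit {
		units := math.Ceil(required / rate)
		fmt.Printf("%s: %.0f msg/sec needs %.0f unit(s)\n", tier, required, units)
	}
}

For 300 devices sending twice per second this prints 50 units for S1, 5 for S2 and 1 for S3, which matches the reasoning above.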

Now it would be nice to test the above in advance. At ThingTank we use Docker containers for this, and we schedule them with Rancher, a great and easy-to-use Docker orchestration tool. If you want to try it, just use the container you can find on Docker Hub or the new Docker Store (still in beta). Just search for gbaeke and you will find the following container:

image

If you want to check out the code (warning: written hastily!), you can find it on GitHub: https://github.com/xyloscloudservices/docker-itproceed. It is a simple Node.js script that uses the Azure IoT Hub libraries to create a new device in the registry with a GUID as its name. Afterwards, the code sends a simple JSON payload to IoT Hub every half second.

To use the script, start it as follows with three parameters:

node app.js IoT_Hub_Short_Name IoT_Hub_Connection_String millis

Note: the millis parameter is the number of milliseconds to wait between each send.

Now you can run the containers in Rancher (for instance). I won’t go into the details of how to add Docker hosts to Rancher and how to create a new Stack (as they call it). Alternatively, you can run the containers on Azure Container Service or similar solutions.

In the Power BI chart below, you see the event count per five seconds, which hovers around 420-440 events; that is a bit lower than expected for one S1 unit:

image

Note: the spike you see happens after the launch of 300 containers; throttling quickly kicks in

After switching to 5 S2 units, the graph looks as follows:

image

You see the event count jump to 3,000 (near the end), which is what you would expect: 300 containers sending 600 messages per second equals 3,000 messages per 5 seconds, which is possible with 5 S2 units that deliver 120 messages/sec/unit.

You really need to think about whether you want to send data every half second or every second. For our ThingTank Air Quality solution, we take measurements every second but aggregate them over a minute at the edge. Sending every minute with 5 S2 units would allow for thousands of devices before you reach the limits of IoT Hub!

Adding natural language to your Bot

In the last post, Bots in an IoT context, I created a very simple bot to request the air quality in a room. To change the room, you had to type change room and then type the room name when requested. It would be much nicer to be able to give commands like change room to <roomname>, set the room to <roomname> or switch room to <roomname>. Instead of using regular expressions, you should use the Language Understanding Intelligent Service, or LUIS for short.

In LUIS, you first create an app. In the app, you define things like intents and entities. In this case, I only need one intent which I called ChangeRoom. Because I am going to specify the name of the room in phrases I type, I also defined an entity called room.

image

Next, you need to specify utterances and tell LUIS what the intent of each utterance is (if LUIS does not match your intent to the utterance automatically). Below is an example utterance:

image

When you type the utterance and click the orange arrow, LUIS will analyze the utterance. In the case above, LUIS automatically matched the utterance to the ChangeRoom intent and also marked the word asterix as an entity. If you hover over the entity, you will see the entity name, in this case room.

You should enter several utterances that make sense for your scenario and fix the intent and entity if needed. After adding several utterances, it is time to train the model with the tiny link in the bottom left of the browser.

After training, it is time to publish the application. You will get a URL that you need to supply in your bot. You can also test some queries from the publish dialog. For instance:

image

If you click the link for the query above, you get a JSON response like below:

image

What you see in the response above is that LUIS matched the query to the intent ChangeRoom and that the room entity is set to Asterix with a score of 0.948. Great!
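The JSON is roughly of the following shape (illustrative only; the exact fields depend on the LUIS API version, and only the entity score comes from my actual test):

{
  "query": "set the room to asterix",
  "topScoringIntent": { "intent": "ChangeRoom", "score": 0.99 },
  "entities": [
    { "entity": "asterix", "type": "room", "score": 0.948 }
  ]
}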

Now it is time to use this in your bot. You will need the following code to be able to use the LUIS app from your code and use the LUIS recognizer in the intents:

image

Obviously, you set the URL of the recognizer to the URL you received after publishing the LUIS app. Next, use the LUIS intent (ChangeRoom, remember) in your code as follows:

image

In the above code, the important part is extracting the room entity. We make sure that, when a room entity is not found, we give the user a message. Otherwise, we set the room in userData.roomName.

Now it is time to test the code in Slack or another service. Everything was set up in the previous post, so we just need to push the code changes to Azure App Services (git push azure commit). Just for fun, I will show the results in Skype:

image

As you can see, many different phrases can be used, and not all of them were entered in LUIS. Of course, not all phrases will work: it’s clear that new place is Obelix does not work. However, it’s very simple to go back to the LUIS app, add extra utterances, train the model and publish it again.

To sum things up:

  • Regular expressions are great to get started
  • Use LUIS to add natural language processing to your bot in a simple and intuitive way

Bots in an IoT context

At ThingTank (@thingtankBE), we are constantly looking at ways to expose IoT data in different ways. A chat bot can be a great way to ask for device measurements or even to instruct devices to perform actions. In this post, I will describe a bot, used from Slack, that gets air quality data for a meeting room.

I chose to write the bot in Node.js for simplicity and to publish it to Azure App Service. The basics of writing a bot with Node.js can be found in the documentation of Microsoft’s Bot Framework here: https://docs.botframework.com/en-us/node/builder/overview.

Our bot is really simple for now. After getting the basics up and running, the bot can be enhanced with a natural language interface. What we want to do now:

  • Set the room name and save it in the session (UserData)
  • Change the room name and save it in the session
  • Simple help: list the commands you can use
  • Get air quality measurements (a subset)

To achieve the above, you use dialogs, intents and some simple regular expressions. Check out the source code to see how it is done (remember, this is a basic script to get it working at a minimum). The basic logic is as follows:

  • If the intent is unknown, check if the room name is set. If not, switch to the /roomName dialog that asks for the room name and stores it in session.userData
  • If the intent matches commands, respond with a list of commands
  • If the intent matches change room, switch to the /roomName dialog that asks for the room name
  • If the intent matches air quality, get the measurements for the selected room using the getRoom function in an external module, airq.js. Our real-time air quality data comes from a pub/sub channel and the getRoom function just retrieves it from there

Writing an intent handler is very simple. The change room intent, for instance:

intents.matches(/^change room/i, [
    function (session) {
        session.send("Ok, let's change the room name...");
        session.beginDialog('/roomName');
    },
    function (session, results) {
        session.send('Changed room to %s', session.userData.roomName);
    }
]);

If you look at the source code, you will see that we use the ChatConnector. When you are writing your bot in the beginning, I recommend using the ConsoleConnector instead. You can then simply run your bot with node and interact with it from the command line. In our case, we use the ChatConnector, so you should use the Bot Framework Channel Emulator from here to interact with and test your bot.

image

To get the emulator working, you need to obtain an App ID and App Password from Microsoft and make sure you use those in both your bot source code and the emulator. In the source code, these two values come from environment variables. Note that for local testing, you can leave these values blank.

Now it’s time to publish the bot on the web so we can register it with Microsoft and then enable it on Slack. To publish the bot, use the instructions here. You will use the Azure CLI and git to make this work, so be sure to install both on your machine. After the bot is installed and running in App Service, set the environment variables for the App ID and App Password in the website properties. Next, you can test your bot using the Channel Emulator.

Important: when you test your bot in the cloud using the Channel Emulator, be sure to use ngrok as specified here: https://docs.botframework.com/en-us/tools/bot-framework-emulator/#using-the-emulator-with-ngrok-to-interact-with-your-bot-in-the-cloud.

Now that we have the bot running, it’s time to register it with Microsoft at https://dev.botframework.com/bots/new. As part of the registration process, you need to supply the URL to your bot in the cloud and obtain a new App ID and App Password. Update the website settings with these new values. After registration, you get:

image

From the above page, you can test your bot and add other channels. One of those channels is Slack. When you add Slack as a channel, you will be guided to create an app in Slack, authenticate, and of course, create a Slack bot. In Slack, you will get something like:

image

To summarize:

  • Creating a simple bot with the Bot Framework is easy; the fun starts when you want to enable things like natural language processing
  • When you deploy your bot to the cloud and want to test it with the Channel Emulator, use ngrok
  • When you want to deploy the bot to Slack, register the bot with Microsoft and simply add Slack as a channel

Azure Automation and PowerApps

One of our applications in our “test playground” is running some code in an Azure WebApp that needs to be restarted once in a while. Rather than trying to fix the underlying problem (no fun in that, right?), I decided to create a small mobile app to restart the WebApp when needed. To make it a bit more fun, I used the following “code-less” solutions to make it work:

  • Azure Automation: Graphical Runbook to restart the WebApp; use a Webhook to call the Runbook using a simple HTTP POST
  • Microsoft Flow: calls the Azure Automation Webhook when a control is selected in a PowerApp
  • PowerApp: simple app with a button that calls the above Flow

Azure Automation

I created an Azure Automation account with the option to create a service principal. This results in an account that is added as a Contributor to the subscription in which the Azure Automation account was created. This also means that a runbook that uses this account is allowed to restart a WebApp in the same subscription. In my case, the Automation account and the WebApp are in the same subscription.

Now, before you can use the Restart-AzureRmWebApp cmdlet, you need to add the AzureRM.Websites module to the Automation account. To do so, navigate to https://www.powershellgallery.com/packages/AzureRM.Websites/1.1.2 and use the Deploy to Azure Automation button. Follow the instructions to add the module to an existing Azure Automation account. When you are finished, click Assets in the Automation account’s main pane and then click Modules. You should see the following:

 image

Now you can duplicate the AzureAutomationTutorial graphical runbook. In Runbooks, click that runbook and use the Export option to export the definition to a local file on your computer. Then add a new runbook and use the Import an existing runbook option together with the export file you just created. Your copied runbook will look like the one below:

image

You can remove everything after Login to Azure (that’s the login with the service principal that has Contributor rights). Just add the Restart-AzureRmWebApp cmdlet like so:

image

The Restart-AzureRmWebApp cmdlet needs only two parameters: the name of the WebApp and the resource group of the WebApp. To be able to call the runbook using an HTTP POST, create a Webhook for it. In the properties of the runbook, click Webhooks and then add a Webhook. Note that there is no authentication for these Webhooks: it’s just a long, unique URL with an expiration date that you set. Make sure you copy the URL before you save the Webhook because it will not be shown later. I created a RunFromPowerApps webhook like so:

image

You can try the Webhook with Postman (https://www.getpostman.com/) or curl and see if a job gets started.
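With curl, for example, an empty POST to the Webhook URL you copied earlier is enough (the runbook does not take any input; substitute your own URL):

curl -d '' "<your webhook URL>"

If it works, you will see a new job appear for the runbook in the Azure portal.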

Microsoft Flow

Well, this could not be simpler. Go to https://flow.microsoft.com and log in with your credentials (the same credentials as for PowerApps; in my case, they are Azure AD organizational credentials). From My flows, create a new flow that looks like this:

image

In the URI, enter the Webhook address from Azure Automation. Save the flow. We will now use this flow in PowerApps.

PowerApps

To create a PowerApp, install the Windows PowerApps application (a Windows Store app) and log on with the same credentials you used with Flow. I created a blank app with a simple button, nothing fancy. With the button selected, click Flows from the Action menu. You should see the flow you created; just select it to link it to the button. You should see something like:

image

Note that it is possible to pass data to the flow as parameters to the Run() command. You could for instance create a list of WebApps to restart and pass the WebApp to be restarted to the Flow and the Webhook.

Test the PowerApp with the play button in the menu bar. When you click Restart, check that the Automation Job fired properly:

image

Now you can run the PowerApp on your iOS or Android device with the PowerApp app for those platforms. Enjoy!

This simple example shows that a lot can be accomplished with tools like Azure Automation, Flow and PowerApps for prototyping or even actual applications with a quick time to value.