Streamlined Kubernetes Development with Draft

A while ago, I wrote a post about draft. Draft is a tool to streamline your Kubernetes development experience. Based on your code, it automates building a container image, pushing that image to a registry and installing a container based on the image with a Helm chart. Draft is meant to be used during the development process, while you are still messing around with your code. It is not meant as a deployment mechanism for production.

The typical workflow is the following:

  • in the folder with your source files, run draft create
  • to build, push and install the container, run draft up; a Helm chart is used in the background
  • to see the logs and connect to the app in your container over an SSH tunnel, run draft connect
  • modify your code and run draft up again
  • rinse and repeat…

Let’s take a look at how it works in a bit more detail, shall we?

Prerequisites

Naturally, you need a Kubernetes cluster with kubectl, the Kubernetes CLI, configured to use that cluster.

Next, install Helm on your system and install Tiller, the server-side component of Helm, on the cluster. Full installation instructions are here. If your cluster uses RBAC, check out how to configure the proper service account and role binding. Run helm init to initialize Helm locally and install Tiller at the same time.
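
On an RBAC-enabled cluster, one common way to do that is shown below. Note that cluster-admin is used for simplicity, which is fine for a development cluster but way too broad for anything else:

# service account and role binding for Tiller (development cluster only!)
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller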

Now install draft on your system. Check out the quickstart for installation instructions. Run draft init to initialize it.

Getting some source code

Let’s use a small Go program to play with draft. You can use the realtime-go repository. Clone it to your system and check out the httponly branch:

git clone https://github.com/gbaeke/realtime-go.git
cd realtime-go
git checkout httponly

You will need a redis server as a back-end for the realtime server. Let’s install that the quick and dirty way:

kubectl run redis --image=redis --replicas=1
kubectl expose deploy/redis --port 6379

Running draft create

In the realtime-go folder, run draft create. You should get the following output:

draft create output

The command tries to detect the language and it found several. In this case, because there is no pack for Coq (what is that? πŸ˜‰) and HTML, it used Go. Knowing the language, draft creates a simple Dockerfile if there is no such file in the folder:

FROM golang
ENV PORT 8080
EXPOSE 8080

WORKDIR /go/src/app
COPY . .

RUN go get -d -v ./...
RUN go install -v ./...

CMD ["app"] 

Usually, I do not use the Dockerfile created by draft. If there already is a Dockerfile in the folder, draft will use that one. That’s what happened in our case because the folder contains a 2-stage Dockerfile.
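
For reference, a 2-stage Dockerfile for a Go program typically looks something like the one below. Take it as a sketch, not the exact Dockerfile from the repository:

# build stage: compile the Go binary
FROM golang:1.12 as build
WORKDIR /go/src/app
COPY . .
RUN go get -d -v ./...
RUN CGO_ENABLED=0 GOOS=linux go build -o realtime .

# final stage: small image with just the binary
FROM alpine:3.9
COPY --from=build /go/src/app/realtime /app/realtime
EXPOSE 8080
CMD ["/app/realtime"]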

Draft created some other files as well:

  • draft.toml: configuration file (more info); can be used to create environments like staging and production with different settings such as the Kubernetes namespace to deploy to or the Dockerfile to use; an example follows after this list
  • draft.tasks.toml: run commands before or after you deploy your container with draft (more info); we could have used this to install and remove the redis container
  • .draftignore: yes, to ignore stuff
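
As mentioned above, a draft.toml for this project could look something like the snippet below. This is a sketch; the exact keys depend on your draft version:

# sketch - keys and defaults depend on your draft version
[environments]
  [environments.development]
    name = "realtime-go"
    namespace = "default"
    wait = true
    watch = false
    auto-connect = false
    dockerfile = "Dockerfile"
    chart = ""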

Draft also created a charts folder that contains the Helm chart that draft will use to deploy your container. It can be modified to suit your particular needs as we will see later.

Helm charts folder and a partial view on the deployment.yaml file in the chart

Setting the container registry

In older versions of draft, the source files were compressed and sent to a server-side component that created the container image. At present though, the container is built locally and then pushed to a registry of your choice. If you want to use Azure Container Registry (ACR), run the following commands (set the registry and log in):

draft config set registry REGISTRYNAME.azurecr.io
az acr login -n REGISTRYNAME

Note that you need the Azure CLI for the last command. You also need to set the subscription to the one that contains the registry you reference.

With this configuration, you need Docker on your system. Docker will build and push the container. If you want to build in the cloud, you can use ACR Build Tasks. To do that, use these commands:

draft config set container-builder acrbuild
draft config set registry REGISTRYNAME.azurecr.io
draft config set resource-group-name RESOURCEGROUPNAME

Make sure you are logged in to the subscription (az login) and log in to ACR as well before continuing. In this example, I used ACR build tasks.

Note: because ACR build tasks do not cache intermediate layers, this approach can lead to longer build times; when the image is small as in this case, doing a local build and push is preferred!

Running draft up

We are now ready to run draft up. Let’s do so and see what happens:

results of draft up

YES!!!! Draft built the container image and released it. Run helm ls to check the release. It did not have to push the image because it was built in ACR and pushed from there. Let’s check the ACR build logs in the portal (you can also use the draft logs command):

acr build log for the 2-stage Docker build

Fixing issues

Although the container is properly deployed (check it with helm ls), if you run kubectl get pods you will notice an error:

container error

In this case, the container errors out because it cannot find the redis host, which is a dependency. We can tell the container to look for redis via a REDISHOST environment variable. You can add it to deployment.yaml in the chart like so:

environment variable in deployment.yaml
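
In the chart’s deployment.yaml, the environment variable goes under the container spec, along these lines. This is a sketch; the value depends on how the app expects the host to be specified:

        containers:
          - name: {{ .Chart.Name }}
            image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
            env:
              - name: REDISHOST
                value: "redis:6379"   # assumption: the app expects host:port; adjust to match the code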

After this change, just run draft up again and hope for the best!

Running draft connect

With the realtime-go container up and running, run draft connect:

output of draft connect

This maps a local port on your system to the remote port over an SSH tunnel. In addition, it streams the logs from the container. You can now connect to http://localhost:18181 (or whatever port you get):

Great success! The app is running

If you want a public IP for your service, you can modify the Helm chart. In values.yaml, set service.type to LoadBalancer instead of ClusterIP and run draft up again. You can verify the external IP by running kubectl get svc.
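
The change in values.yaml is as simple as it sounds; only the type value changes and the rest of the file stays as it is:

service:
  type: LoadBalancer   # was ClusterIP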

Conclusion

Working with draft while you are developing one or more containers and still hacking away at your code really is smooth sailing. If you are not using it yet, give it a go and see if you like it. I bet you will!

Update on restricting egress traffic on Azure Kubernetes Service

In an earlier post, I discussed the combination of Azure Firewall and Azure Kubernetes Service (AKS) to secure ingress and egress AKS traffic.

A few days ago, Microsoft added documentation that describes the ports and URLs to allow when you route traffic through Azure Firewall or a virtual appliance. Some of the allowed ports and addresses are required for the operation of the cluster, while some others are optional. It’s highly recommended to allow the optional ports and addresses though.

The top of the document mentions registering an additional feature called AKSLockingDownEgressPreview:

az feature register --name AKSLockingDownEgressPreview --namespace Microsoft.ContainerService
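
Registration of a preview feature can take a while. You can check the state and, once registered, refresh the resource provider with the commands below (the standard procedure for AKS preview features):

az feature show --name AKSLockingDownEgressPreview --namespace Microsoft.ContainerService --query properties.state
az provider register --namespace Microsoft.ContainerService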

The document is not very clear on what the feature does but the comments contain the following:

The feature registration tells the cluster to only pull core system images from container image repositories housed in the Microsoft Container Registry (MCR). Otherwise, clusters could try to pull container images for the core components from external repositories. There is some additional routing that also occurs for the cluster to do this. The list of ports and addresses are then what's required for you to permit when the egress traffic is restricted. You can't simply limit the egress traffic to only those address without the feature being enabled for the cluster. 

In summary, to limit egress traffic, use Azure Firewall or a Network Virtual Appliance, allow the listed ports and URLs AND register the AKSLockingDownEgressPreview feature.

Creating and deploying a model with Azure Machine Learning Service

In this post, we will take a look at creating a simple machine learning model for text classification and deploying it as a container with Azure Machine Learning service. This post is not intended to discuss the finer details of creating a text classification model. In fact, we will use the Keras library and its Reuters newswire dataset to create a simple dense neural network. You can find many online examples based on this dataset. For further information, be sure to check out and buy πŸ‘ Deep Learning with Python by FranΓ§ois Chollet, the creator of Keras and now at Google. It contains a section that explains using this dataset in much more detail!

Machine Learning service workspace

To get started, you need an Azure subscription. Once you have the subscription, create a Machine Learning service workspace. Below, you see such a workspace:

My Machine Learning service workspace (gebaml)

Together with the workspace, you also get a storage account, a key vault, application insights and a container registry. In later steps, we will create a container and store it in this registry. That all happens behind the scenes though. You will just write a few simple lines of code to make that happen!

Note the Authoring (Preview) section! These were added just before Build 2019 started. For now, we will not use them.

Azure Notebooks

To create the model and interact with the workspace, we will use a free Jupyter notebook in Azure Notebooks. At this point in time (8 May 2019), Azure Notebooks is still in preview. To get started, find the link below in the Overview section of the Machine Learning service workspace:

Getting Started with Notebooks

To quickly get the notebook, you can clone my public project: ⏩⏩⏩ https://notebooks.azure.com/geba/projects/textclassificationblog.

Creating the model

When you open the notebook, you will see the following first four cells:

Getting the dataset

It’s always simple if a prepared dataset is handed to you like in the above example. You simply use the reuters module of keras.datasets and its load_data method to get the data and directly assign it to variables that hold the train and test data plus labels.

In this case, the data consists of newswires with a corresponding label that indicates the category of the newswire (e.g. an earnings call newswire). There are 46 categories in this dataset. In the real world, you would have the newswire in text format. Here, each newswire has already been converted (preprocessed) into an array of integers, with each integer corresponding to a word in a dictionary.
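
In code, getting the data boils down to something like the snippet below (a sketch with my own variable names; the notebook uses similar code):

from keras.datasets import reuters

# keep only the 10000 most frequent words, as specified in the first cell of the notebook
(train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words=10000)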

A bit further in the notebook, you will find a Vectorization section:

Vectorization

In this section, the train and test data is vectorized using a one-hot encoding method. Because we specified, in the very first cell of the notebook, to only use the 10000 most important words, each article can be converted to a vector with 10000 values. Each value is either 1 or 0, indicating whether the word is in the text or not.

This bag-of-words approach is one of the ways to represent text in a data structure that can be used in a machine learning model. Besides vectorizing the training and test samples, the categories are also one-hot encoded.
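
A rough sketch of that vectorization step, assuming the variable names from the snippet above:

import numpy as np
from keras.utils import to_categorical

def vectorize_sequences(sequences, dimension=10000):
    # one row per newswire, one column per word in the dictionary
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.0  # set the columns for the words that occur in the newswire
    return results

x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)

# one-hot encode the 46 category labels as well
y_train = to_categorical(train_labels)
y_test = to_categorical(test_labels)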

Now the dense neural network model can be created:

Dense neural net with Keras

The above code defines a very simple dense neural network. A dense neural network is not necessarily the best type but that’s ok for this post. The specifics are not that important. Just note that the nn variable is our model. We will use this variable later when we convert the model to the ONNX format.
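
Condensed into a sketch, the model definition and training cells look roughly like the code below; the exact layer sizes and parameters in the notebook may differ:

from keras import models, layers

nn = models.Sequential()
nn.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
nn.add(layers.Dense(64, activation='relu'))
nn.add(layers.Dense(46, activation='softmax'))  # one output per newswire category

nn.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
nn.fit(x_train, y_train, epochs=9, batch_size=512, validation_data=(x_test, y_test))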

The last cell (16 above) does the actual training in 9 epochs. Training will be fast because the dataset is relatively small and the neural network is simple. Using the Azure Notebooks compute is sufficient. After 9 epochs, this is the result:

Training result

Not exactly earth-shattering: 78% accuracy on the test set!

Saving the model in ONNX format

ONNX is an open format to store deep learning models. When your model is in that format, you can use the ONNX runtime for inference.

Converting the Keras model to ONNX is easy with the onnxmltools:

Converting the Keras model to ONNX

The result of the above code is a file called reuters.onnx in your notebook project.
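
For reference, the conversion only takes a few lines (a sketch; onnxmltools needs to be installed in the notebook environment):

import onnxmltools

onnx_model = onnxmltools.convert_keras(nn)
onnxmltools.utils.save_model(onnx_model, 'reuters.onnx')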

Predict with the ONNX model

Let’s try to predict the category of the first newswire in the test set. Its real label is 3, which means it’s a newswire about an earnings call (earn class):

Inferencing with the ONNX model

We will use similar code later in score.py, a file that will be used in a container we will create to expose the model as an API. The code is pretty simple: start an inference session based on the reuters.onnx file, grab the input and output and use run to predict. The resulting array is the output of the softmax layer and we use argmax to extract the category with the highest probability.
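
A sketch of that inference code (variable names are mine; the input must be float32 and shaped the way the converted model expects):

import numpy as np
import onnxruntime

session = onnxruntime.InferenceSession('reuters.onnx')
input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name

# predict the category of the first newswire in the test set
pred = session.run([output_name], {input_name: x_test[0:1].astype(np.float32)})
print(np.argmax(pred[0]))  # 3 corresponds to the earn class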

Saving the model to the workspace

With the model in reuters.onnx, we can add it to the workspace:

Saving the model in the workspace

You will need a file in your Azure Notebook project called config.json with the following contents:

{
     "subscription_id": "<subscription-id>",
     "resource_group": "<resource-group>",
     "workspace_name": "<workspace-name>" 
} 

With that file in place, when you run cell 27 (see above), you will need to authenticate to Azure to be able to interact with the workspace. The code is pretty self-explanatory: the reuters.onnx model will be added to the workspace:

Models added to the workspace

As you can see, you can save multiple versions of the model. This happens automatically when you save a model with the same name.
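
The registration code itself is short. With the azureml SDK it looks roughly like this (a sketch; the model name and description are examples):

from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.from_config()  # reads config.json and prompts for authentication

model = Model.register(workspace=ws,
                       model_path='reuters.onnx',
                       model_name='reuters',
                       description='Reuters newswire classifier (ONNX)')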

Creating the scoring container image

The scoring (or inference) container image is used to expose an API to predict categories of newswires. Obviously, you will need to provide some instructions on how scoring needs to be done. This is done via score.py:

score.py

The code is similar to the code we wrote earlier to test the ONNX model. score.py needs an init() and run() function. The other functions are helper functions. In init(), we need to grab a reference to the ONNX model. The ONNX model file will be placed in the container during the build process. Next, we start an InferenceSession via the ONNX runtime. In run(), the code is similar to our earlier example. It predicts via session.run and returns the result as JSON. We do not have to worry about the rest of the code that runs the API. That is handled by Machine Learning service.
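
Below is a minimal sketch of such a score.py; the actual file in the notebook project differs in its helper functions and input handling:

import json
import numpy as np
import onnxruntime
from azureml.core.model import Model

def init():
    global session, input_name, output_name
    # the registered model is copied into the image; get_model_path resolves its location
    model_path = Model.get_model_path('reuters')
    session = onnxruntime.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name
    output_name = session.get_outputs()[0].name

def run(raw_data):
    try:
        # assumption: the caller posts {"data": [[...one-hot encoded article...]]}
        data = np.array(json.loads(raw_data)['data']).astype(np.float32)
        result = session.run([output_name], {input_name: data})
        return json.dumps({'category': int(np.argmax(result[0]))})
    except Exception as e:
        return json.dumps({'error': str(e)})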

Note: using ONNX is not a requirement; we could have persisted and used the native Keras model for instance

In this post, we only need score.py since we do not train our model via Azure Machine learning service. If you train a model with the service, you would create a train.py file to instruct how training should be done based on data in a storage account for instance. You would also provision compute resources for training. In our case, that is not required so we train, save and export the model directly from the notebook.

Training and scoring with Machine Learning service

Now we need to create an environment file to indicate the required Python packages and start the image build process:

Create an environment yml file via the API and build the container

The build process is handled by the service and makes sure the model file is in the container, in addition to score.py and myenv.yml. The result is a fully functional container that exposes an API that takes an input (a newswire) and outputs an array of probabilities. Of course, it is up to you to define what the input and output should be. In this case, you are expected to provide a one-hot encoded article as input.
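
With the (2019-era) azureml SDK, the environment file and image build steps look roughly like the sketch below; the package list and image name are examples:

from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.image import ContainerImage

# declare the Python packages the scoring container needs
myenv = CondaDependencies.create(pip_packages=['numpy', 'onnxruntime', 'azureml-core'])
with open('myenv.yml', 'w') as f:
    f.write(myenv.serialize_to_string())

image_config = ContainerImage.image_configuration(execution_script='score.py',
                                                  runtime='python',
                                                  conda_file='myenv.yml')

image = ContainerImage.create(name='reuters-image',
                              models=[model],
                              image_config=image_config,
                              workspace=ws)
image.wait_for_creation(show_output=True)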

The container image will be listed in the workspace, potentially multiple versions of it:

Container images for the reuters ONNX model

Deploy to Azure Container Instances

When the image is ready, you can deploy it via the Machine Learning service to Azure Container Instances (ACI) or Azure Kubernetes Service (AKS). To deploy to ACI:

Deploying to ACI
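
The cell shown above roughly corresponds to the sketch below (service name and sizing are examples):

from azureml.core.webservice import AciWebservice, Webservice

aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

aci_service = Webservice.deploy_from_image(workspace=ws,
                                           name='reuters-svc',
                                           image=image,
                                           deployment_config=aci_config)
aci_service.wait_for_deployment(show_output=True)
print(aci_service.scoring_uri)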

When the deployment is finished, the deployment will be listed:

Deployment (ACI)

When you click on the deployment, the scoring URI will be shown (e.g. http://IPADDRESS:80/score). You can now use Postman or any other method to score an article. To quickly test the service from the notebook:

Testing the service

The helper method run of aci_service will post the JSON in test_sample to the service. It knows the scoring URI from the deployment earlier.
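
In other words, testing boils down to something like this (a sketch; the exact payload format has to match what score.py expects):

import json

# a one-hot encoded newswire, wrapped the way our score.py sketch expects it
test_sample = json.dumps({'data': x_test[0:1].tolist()})
print(aci_service.run(input_data=test_sample))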

Conclusion

Containerizing a machine learning model and exposing it as an API is made surprisingly simple with Azure Machine learning service. It saves time so you can focus on the hard work of creating a model that performs well in the field. In this post, we used a sample dataset and a simple dense neural network to illustrate how you can build such a model, convert it to ONNX format and use the ONNX runtime for scoring.

Azure Kubernetes Service and Azure Firewall

Deploying Azure Kubernetes Service (AKS) is, like most other Kubernetes-as-a-service offerings such as those from DigitalOcean and Google, very straightforward. It’s either a few clicks in the portal or one or two command lines and you are finished.

Using these services properly and in a secure fashion is another matter though. I am often asked how to secure access to the cluster and its applications. In addition, customers also want visibility and control of incoming and outgoing traffic. Combining Azure Firewall with AKS is one way of achieving those objectives.

This post will take a look at the combination of Azure Firewall and AKS. It is inspired by this post by Dennis Zielke. In that post, Dennis provides all the necessary Azure CLI commands to get to the following setup:

AKS and Azure Firewall (from
https://medium.com/@denniszielke/setting-up-azure-firewall-for-analysing-outgoing-traffic-in-aks-55759d188039 by Dennis Zielke)

In what follows, I will keep referring to the subnet names and IP addresses as in the above diagram.

Azure Firewall

Azure Firewall is a stateful firewall, provided as a service with built-in high availability. You deploy it in a subnet of a virtual network. The subnet should have the name AzureFirewallSubnet. The firewall will get two IP addresses:

  • Internal IP: the first IP address in the subnet (here 10.0.3.4)
  • Public IP: a public IP address; in the above setup we will use it to provide access to a Kubernetes Ingress controller via a DNAT rule

As in the physical world, you will need to instruct systems to route traffic through the firewall. In Azure, this is done via a route table. The following route table was created:

Route table

In (1) a route to 0.0.0.0/0 is defined that routes to the private IP of the firewall. The route will be used when no other route applies! The route table is associated with just the aks-5-subnet (2), which is the subnet where AKS (with advanced networking) is deployed. It’s important to note that now, all external traffic originating from the Kubernetes cluster passes through the firewall.
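
In CLI terms, such a route table can be created and associated with the AKS subnet roughly as follows. This is a sketch: the resource group and VNET names are placeholders and 10.0.3.4 is the firewall’s private IP from the diagram:

az network route-table create -g RESOURCEGROUP -n aks-route-table
az network route-table route create -g RESOURCEGROUP --route-table-name aks-route-table \
  -n to-firewall --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.3.4
az network vnet subnet update -g RESOURCEGROUP --vnet-name VNETNAME \
  -n aks-5-subnet --route-table aks-route-table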

When you compare Azure Firewall to the Network Virtual Appliances (NVAs) from vendors such as CheckPoint, you will notice that the capabilities are somewhat limited. On the flip side though, Azure Firewall is super simple to deploy when compared with a highly available NVA setup.

Before we look at the firewall rules, let’s take a look at the Kubernetes Ingress Controller.

Kubernetes Ingress Controller

In this example, I will deploy nginx-ingress as an Ingress Controller. It will provide access to HTTP-based workloads running in the cluster and it can route to various workloads based on the URL. I will deploy the nginx-ingress with Helm.

Think of an nginx-ingress as a reverse proxy. It receives http requests, looks at the hostname and path (e.g. mydomain.com/api/user) and routes the request to the appropriate Kubernetes service (e.g. the user service).

Diagram showing Ingress traffic flow in an AKS cluster
Ingress in Kubernetes (from Microsoft:
https://docs.microsoft.com/en-us/azure/aks/operator-best-practices-network#distribute-ingress-traffic )

Normally, the nginx-ingress service is accessed via an Azure external load balancer. Behind the scenes, this is the result of the service object having spec.type set to the value LoadBalancer. If we want external traffic to nginx-ingress to pass through the firewall, we will need to tell Kubernetes to create an internal load balancer via an annotation. Let’s do that with Helm. First, you will need to install tiller, the server-side component of Helm. Use the following procedure from the Microsoft documentation:

  • Create a service account for tiller: link
  • Configure tiller: link

With tiller installed, issue the following two commands:

kubectl create ns ingress 

helm install stable/nginx-ingress --namespace ingress --set controller.replicaCount=2 --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"=true --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal-subnet"=ing-4-subnet

The second command installs nginx-ingress in the ingress namespace. The two --set parameters add the following annotations to the service object (yes I know, the Helm annotation parameters are ugly 🤢):

service.beta.kubernetes.io/azure-load-balancer-internal: "true"
service.beta.kubernetes.io/azure-load-balancer-internal-subnet: ing-4-subnet

This ensures an internal load balancer gets created. It gets created in the mc-* resource group that backs your AKS deployment:

Internal load balancer created by the Kubernetes cloud integration components

Note that Kubernetes creates the load balancer, including the rules and probes for ports 80 and 443 as defined in the service object that comes with the Helm chart. The load balancer is created in the ing-4-subnet as instructed by the service annotation. Its private IP address is 10.0.4.4, as in the diagram at the top of this post.

DNAT Rule to Load Balancer

To provide access to internal resources, Azure Firewall uses DNAT (destination network address translation) rules. The concept is simple: traffic to the firewall’s public IP on some port can be forwarded to an internal IP on the same or another port. In our case, traffic to the firewall’s public IP on ports 80 and 443 is forwarded to the internal load balancer’s private IP on ports 80 and 443. The load balancer then forwards the request to nginx-ingress:

DNAT rule forwarding port 80 and 443 traffic to the internal load balancer
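
With the Azure CLI (and the azure-firewall extension), a similar DNAT rule for port 80 could be created like this. Take it as a sketch; firewall name, resource group and public IP are placeholders:

az network firewall nat-rule create -g RESOURCEGROUP -f FIREWALLNAME \
  --collection-name ingress-dnat --priority 100 --action Dnat \
  -n http --protocols TCP --source-addresses '*' \
  --destination-addresses FIREWALLPUBLICIP --destination-ports 80 \
  --translated-address 10.0.4.4 --translated-port 80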

If the installation of nginx-ingress was successful, you should end up at the default back-end when you go to http://firewallPublicIP.

nginx-ingress default backend when browsing to public IP of firewall

If you configured Log Analytics and installed the Azure Firewall solution, you can look at the firewall logs. DNAT actions are logged and can be inspected:

Firewall logs via Log Analytics

Application and Network Rules

Azure Firewall application rules are rules that allow or deny outgoing HTTP/HTTPS traffic based on the URL. The following rules were defined:

Application rules

The above rules allow http and https traffic to destinations such as docker.io, cloudflare and more.
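
Such an application rule can also be created from the CLI, roughly like below (a sketch; names and the source address range are placeholders):

az network firewall application-rule create -g RESOURCEGROUP -f FIREWALLNAME \
  --collection-name aks-egress --priority 200 --action Allow \
  -n allow-docker --protocols Http=80 Https=443 \
  --source-addresses 10.0.5.0/24 --target-fqdns '*.docker.io'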

Note that another Azure Firewall rule type, network rules, is evaluated first. If a match is found, rule evaluation stops. Suppose you have these network rules:

Network rules

The above network rule allows ports 22 and 443 for all sources and destinations. This means that Kubernetes can actually connect to any HTTPS-enabled site on the default port, regardless of the defined application rules. See rule processing for more information.

Threat Intelligence

This feature alerts on and/or denies network traffic coming from known bad IP addresses or domains. You can track this via Log Analytics:

Threat Intelligence Alerts and Denies on Azure Firewall

Above, you see denied port scans, traffic from botnets or brute force credentials attacks all being blocked by Azure Firewall. This feature is currently in preview.

Best Practices

The AKS documentation has a best practices section that discusses networking. It contains useful information about the networking model (Kubenet vs Azure CNI), ingresses and WAF. It does not, at this point in time (May 2019), describe how to use Azure Firewall with AKS. It would be great if that were added in the near future.

Here are a couple of key points to think about:

  • WAF (Web Application Firewall): Azure Firewall threat intelligence is not WAF; to enable WAF, there are several options:
    • you can enable ModSecurity in nginx-ingress
    • you can use Azure WAF or a 3rd party WAF
    • you can use cloud-native WAFs such as TwistLock (WAF is one of the features of this product; it also provides firewall and vulnerability assessment)
  • remote access to Kubernetes API: today, the API server is exposed via a public IP address; having the API server on a local IP will be available soon
  • remote access to Kubernetes hosts using SSH: only allow SSH on the private IP addresses; use a bastion host to enable connectivity

Conclusion

Azure Kubernetes Service (AKS) can be combined with Azure Firewall to control network traffic to and from your Kubernetes cluster. Log Analytics provides the dashboard and logs to report and alert on traffic patterns. Features such as threat intelligence provide an extra layer of defense. For HTTP/HTTPS workloads (so most workloads), you should complement the deployment with a WAF such as Azure Application Gateway or 3rd party.

Securing access to and from Azure Functions

I am often asked how to secure access to and from Azure Functions that are not running in an App Service Environment (ASE). An App Service Environment allows you to safeguard your apps in a subnet of your Azure Virtual Network. In a sense, it gives you a private deployment of Azure App Service that you can secure with Azure Firewall, Network Security Groups (NSGs) or Network Virtual Appliances (NVAs).

When you use Azure Functions in a regular App Service Plan or Premium plan, you will need to rely on Virtual Network Service Endpoints and App Service network integration to achieve similar results.

In this post, we will look at an example of an Azure Function, running in a Premium plan, that queries CosmosDB. We will restrict incoming traffic to the Azure Function from a subnet and only allow CosmosDB to be queried by the same Azure Function. Here’s a diagram:

Incoming Traffic

To restrict incoming traffic to the Azure Function, navigate to the Function App in the portal and select Networking in Platform Features. You will see the following screen:

Azure Functions network features

We will configure the inbound restrictions via Configure Access Restrictions. You can configure restrictions for both the Function App itself and the scm site:

From the moment you add rules, a Deny All rule will appear. In the above rules, I allowed my private IP and the default subnet in the virtual network. The second rule configures the service endpoints. When you open the properties of the subnet, you will see:

Service Endpoint of type Microsoft.Web

Great! When you try to access the function from any other location, you will get a 403 error from the Azure Functions front-end. So don’t expect a connection timeout like with regular network security rules.

Outgoing traffic

The example Azure Function uses an HTTP trigger and a Cosmos DB input (cosmos). Documents contain a name property. The query outputs the name found on the first document:

module.exports = async function (context, req, cosmos) {
    context.log(cosmos);
    context.res = {
        body: "hello " + cosmos[0].name
    };
};

In order to secure access to Cosmos DB, two features were used:

  1. Azure Functions VNet Integration (VNet integration is currently in preview)
  2. Cosmos DB network service endpoints to restrict access to the subnet that provides the Azure Function hosts with an IP address

Configuring the VNet integration is straightforward, especially when compared to the old style of integration which required a VPN tunnel:

App Service (including Azure Functions) VNet integration

As you can see in the above screenshot, you delegate a subnet to the App Service hosts. In my case, that is subnet func-sec:

Subnet delegated to a service (Microsoft.Web/serverFarms)

The bottom of the screenshot shows the subnet is delegated to the Microsoft.Web/serverFarms service. That is the result of the VNet integration.

You can also see the subnet has service endpoints configured for Cosmos DB. That is the result of the Cosmos DB configuration below:

Service endpoint config in Cosmos DB

In Cosmos DB, an existing virtual network was added. I did not enable the Accept connections from within public Azure datacenters option.
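
The same restriction can be scripted. With the CLI, adding the virtual network rule to the Cosmos DB account looks roughly like this (a sketch; account, resource group, VNET and subnet names are placeholders):

az cosmosdb network-rule add -g RESOURCEGROUP -n COSMOSACCOUNT \
  --vnet-name VNETNAME --subnet func-sec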

When you remove the service endpoint and you run the Azure Function, the following error is thrown:

Unable to proceed with the request. Please check the authorization claims to ensure the required permissions to process the request. ActivityId: 03b2c11f-2b21-44c9-ab44-61b4864539fe, Microsoft.Azure.Documents.Common/2.2.0.0, Windows/10.0.14393 documentdb-netcore-sdk/2.2.0

Does it work from a VM in the default subnet?

If all went well, I should be able to call the Azure Function from the virtual machine in the default subnet. Let’s try with curl:

Yes, itsme!

The name field in the first document is set to itsme so it worked! Great, the function can be called from the default subnet. In case you are wondering about the use of -p in the ssh command: this virtual machine sat behind an Azure Firewall and the VM ssh port was exposed via a DNAT rule over a random port.

From another location, the following error is shown (wrapped around some HTML but this is the main error):

Error 403 - This web app is stopped

Conclusion

With virtual network service endpoints now available for most Azure PaaS (platform as a service) components, you can ensure those services are only accessed from intended locations. In this example, you saw how to secure access to Azure Functions and Cosmos DB. Service endpoints combined with the App Service VNet integration make it straightforward to secure a Function App end-to-end.

Querying Postgres with GraphQL

I wanted a quick and easy way to build an API that retrieves the ten latest events from a stream of data sent to a TimescaleDB hypertable. Since such a table can be queried by any means supported by Postgres, I decided to use Postgraphile, which automatically provides a GraphQL server for a database.

If you have Node.js installed, just run the following command:

npm install -g postgraphile

Then run the following command to start the GraphQL server:

postgraphile -c "postgres://USER@SERVER:PASSWORD@SERVER.postgres.database.azure.com/DATABASE?ssl=1" --simple-collections only --enhance-graphiql

Indeed, I am using Azure Database for PostgreSQL. Replace the strings in UPPERCASE with your values. I used simple-collections only to, eh, only use simple collections which makes it, well, simpler. πŸ‘πŸ‘πŸ‘

Note: the maintainer of Postgraphile provided a link to what simple-collections actually does; take a look there for a more thorough explanation πŸ˜‰

The result of the above command looks like the screenshot below:

GraphQL Server started

You can now navigate to http://localhost:5000/graphiql to try some GraphQL queries in an interactive environment:

GraphiQL, enhanced with the --enhance-graphiql flag when we started the server

In the Explorer on the left, you can easily put the query together by clicking. In this case, that is easy to do since I only want to query a single table and obtain the last ten events for a single device. The resulting query looks like so:

{
  allConditionsList(condition: {device: "pg-1"}, orderBy: TIME_DESC, first: 10) {
    time
    device
    temperature
  }
}

allConditionsList gets created by the GraphQL server by looking at the tables of the database. Indeed, my database contains a conditions table with time, device, temperature and humidity columns.

To finish off, let’s try to obtain the data with a regular POST method to http://localhost:5000/graphql. This is the command to use:

curl -X POST -H "Content-Type: application/json" -d '{"query":"{\n allConditionsList(condition: {device: \"pg-1\"}, orderBy: TIME_DESC, first: 10) {\n time\n device\n temperature\n }\n}\n","variables":null}' http://localhost:5000/graphql

Ugly but it works. To be honest, there is some noise in the above command because of the \n escapes. They are the result of me grabbing the body from the network traffic sent by GraphiQL:

Yes, lazy me grabbing the request payload from GraphiQL and not cleaning it up πŸ˜‰
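
If you do want to clean it up: the \n escapes are not required, so something like the command below should work just as well (same endpoint, query collapsed to a single line):

curl -X POST -H "Content-Type: application/json" \
  -d '{"query":"{ allConditionsList(condition: {device: \"pg-1\"}, orderBy: TIME_DESC, first: 10) { time device temperature } }"}' \
  http://localhost:5000/graphql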

There is much, much, much more you can do with GraphQL in general and PostGraphile in particular but this was all I needed for now. Hopefully this can help you if you have to throw something together quickly. In a production setting, there is of course much more to think about: hosting the API (preferably in a container), authentication, authorization, performance, etc…

Further improvements to the IoT Hub to TimescaleDB Azure Function

In the post Improving an Azure Function that writes IoT Hub data to TimescaleDB, we added some improvements to an Azure Function that uses the Event Hub trigger to write messages from IoT Hub to TimescaleDB:

  • use of the Event Hub enqueuedTime timestamp instead of NOW() in the INSERT statement (yes, I know, using NOW() did not make sense πŸ˜‰)
  • make the code idempotent to handle duplicates (basically do nothing when a unique constraint is violated)

In general, I prefer to use application time (time at the event publisher) versus the time the message was enqueued. If you don’t have that timestamp, enqueuedTime is the next best thing.

How can we optimize the function even further? Read on about the cardinality setting!

Event Hub trigger cardinality setting

Our JavaScript Azure Function has its settings in function.json. For reference, here is its content:

{
  "bindings": [
    {
      "type": "eventHubTrigger",
      "name": "IoTHubMessages",
      "direction": "in",
      "eventHubName": "hub-pg",
      "connection": "EH",
      "cardinality": "one",
      "consumerGroup": "pg"
    }
  ]
}

Clearly, the function uses the eventHubTrigger for an Event Hub called hub-pg. In connection, EH refers to an Application Setting which contains the connection string to the Event Hub. Yes, I excel at naming stuff! The Event Hub has a consumer group called pg that we are using in this function.

The cardinality setting is currently set to “one”, which means that the function processes a single message per invocation. As a best practice, you should use a cardinality of “many” in order to process batches of messages. A setting of “many” is the default.

To make the required change, modify function.json and set cardinality to “many”. You will also have to modify the Azure Function to process a batch of messages versus only one:

Processing batches of messages

With cardinality set to many, the IoTHubMessages parameter of the function is now an array. To retrieve the enqueuedTime from the messages, grab it from the enqueuedTimeUtcArray array using the index of the current message. Notice I also switched to JavaScript template literals to make the query a bit more readable.
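
For completeness, the binding in function.json then looks like below; identical to the earlier file, apart from the cardinality value:

{
  "bindings": [
    {
      "type": "eventHubTrigger",
      "name": "IoTHubMessages",
      "direction": "in",
      "eventHubName": "hub-pg",
      "connection": "EH",
      "cardinality": "many",
      "consumerGroup": "pg"
    }
  ]
}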

The number of messages in a batch is controlled by maxBatchSize in host.json. By default, it is set to 64. Another setting, prefetchCount, determines how many messages are retrieved and cached before being sent to your function. When you change maxBatchSize, it is recommended to set prefetchCount to twice the maxBatchSize setting. For instance:

{
  "version": "2.0",
  "extensions": {
    "eventHubs": {
      "batchCheckpointFrequency": 1,
      "eventProcessorOptions": {
        "maxBatchSize": 128,
        "prefetchCount": 256
      }
    }
  }
}

It’s great to have these options but how should you set them? As always, the answer is in this book:

The obligatory “it depends” joke

A great resource to get a feel for what these settings do is this article. It also comes with a Power BI report that allows you to set the parameters to see the results of load tests.

Conclusion

In this post, we used the function.json cardinality setting of “many” to process a batch of messages per function call. By default, Azure Functions will use batches of 64 messages without prefetching. With the host.json settings of maxBatchSize and prefetchCount, that can be changed to better handle your scenario.