Azure AD pod-managed identities in AKS revisited

A long time ago, I wrote a blog post about assigning managed identities to pods in Azure Kubernetes Services (AKS) to authenticate to Azure Storage. The implementation was based on the aad-pod-identity project on GitHub. You can look at the walkthrough to see how it worked.

Microsoft recently released a preview that enables you to turn on pod identity during cluster creation. It uses the same building blocks as before but makes it a fully supported part of AKS (although it is in preview for now). To create a basic cluster with pod identity enabled, you can use the following commands:

az group create -n RESOURCEGROUP -l LOCATION
az aks create -g RESOURCEGROUP -n CLUSTERNAME --enable-managed-identity --enable-pod-identity --network-plugin azure

Note: you need to use Azure CNI networking here; kubenet will not work

Before you deploy the cluster, make sure you follow the prerequisites in the documentation (Before you begin). At the time of writing (December 2020), the section in the documentation that tells you how to create the AKS cluster does not use the Azure CNI plugin. Make sure you add that!

What does --enable-pod-identity do?

When you use --enable-pod-identity, you should see NMI pods on your cluster in the kube-system namespace:

NMI pods

These pods are created from a DaemonSet so you will have one pod per cluster node (Linux nodes only). When your application wants to use a managed identity, it makes a request to the Instance Metadata Service (IMDS) endpoint, which is 169.254.169.254. Requests to that IP address are intercepted by the NMI pods via iptables rules. The NMI pod that intercepts the request then makes an Azure AD Authentication Library (ADAL) request to Azure AD to obtain a token for the managed identity and returns it to your application.
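To make the mechanism concrete, the sketch below shows roughly what such a token request looks like when made by hand (my own illustration; the api-version and resource values are typical examples, not taken from this post):

import requests

# Token request to the IMDS endpoint; inside an AKS pod with pod identity,
# iptables rules redirect this call to the NMI pod on the node
token_url = "http://169.254.169.254/metadata/identity/oauth2/token"
params = {
    "api-version": "2018-02-01",                  # commonly used IMDS api-version
    "resource": "https://management.azure.com/",  # audience you want a token for
}
headers = {"Metadata": "true"}                    # required by IMDS

response = requests.get(token_url, params=params, headers=headers)
access_token = response.json()["access_token"]
print(access_token[:20] + "...")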

Next to the NMI pods, other things are added as well, such as custom resource definitions. Some of those are discussed below.

How to request the token?

It’s great to know that the NMI pods intercept requests to the IMDS endpoint but how do you make such a request? I put together a small example in Python in the following git repository: https://github.com/gbaeke/python-msi. The code is in the rg-api folder in server.py:

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient, SubscriptionClient
from fastapi import FastAPI

app = FastAPI()

try:
    credentials = DefaultAzureCredential()
    subscription_client = SubscriptionClient(credentials)
    subscription = next(subscription_client.subscriptions.list())
    subscription_id = subscription.subscription_id
    resource_client = ResourceManagementClient(credentials, subscription_id)
except:
    print("error obtaining credentials")

@app.get("/")
def read_root():
    groups=[]
    try:
        for resource_group in resource_client.resource_groups.list():
            groups.append(resource_group.name)
    except:
        print("error obtaining groups")
    
    return groups

The code does the following:

  • use the azure-identity Python library to obtain credentials via the DefaultAzureCredential() function. Note that this function tries multiple authentication options. If you run the code on your local computer and you are logged on to Azure with the Azure CLI, it will also work (see the sketch after this list for pinning the credential to one specific identity)
  • use the azure-mgmt-resource Python library to enumerate resource groups in the current subscription
  • create a very simple API with FastAPI to ask for the list of resource groups; we can use a kubectl port forward later to obtain the JSON response; if authentication fails, the call returns an empty list instead of the HTTP error you would normally return
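If you want to target one specific user-assigned identity instead of relying on the default credential chain, azure-identity also has ManagedIdentityCredential. A minimal sketch (the client ID and subscription ID are placeholders, not values from this post):

from azure.identity import ManagedIdentityCredential
from azure.mgmt.resource import ResourceManagementClient

# pin the credential to one user-assigned identity by its client ID (placeholder value)
credentials = ManagedIdentityCredential(client_id="00000000-0000-0000-0000-000000000000")
resource_client = ResourceManagementClient(credentials, "<subscription id>")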

On my system, this is the result of the call when pod identity is working:

A bunch of resource groups in my test subscription… messy as usual

The repo also contains a Dockerfile to build a container with the app. I built and pushed that container to Docker Hub as gbaeke/rgapi.

Creating and using the identity

If we want the pod that runs the above code to use a specific identity, we have to create the identity and then tell the pod to use it. To create the managed identity, use the following command:

az identity create --resource-group rg-clu-msi --name rgapi

The output of this command contains an id field that we need in another command later. The result of the above command is a User Assigned Managed Identity called rgapi. I already granted the Contributor role at the subscription level.

User Assigned Managed Identity rgapi

Note that this has nothing to do with AKS. To create a pod identity to use in AKS, you will need to run another command:

az aks pod-identity add --resource-group rg-clu-msi --cluster-name clu-msi --namespace rgapi --name rgapi --identity-resource-id "id field from previous command"

The above command creates a pod identity called rgapi in the namespace rgapi. This namespace will be created if it does not exist. You can see the pod identity by running the below command:

 kubectl get azureidentities.aadpodidentity.k8s.io

If you look inside such an object, you would find the reference to the managed identity by its resource id (the id field from earlier). There are other custom resource definitions used by pod identity that we will not bother with now.

Now we need to create a pod and associate it with the pod identity. You can do so with the following YAML:

apiVersion: v1
kind: Pod
metadata:
  name: rgapi
  namespace: rgapi
  labels:
    aadpodidbinding: rgapi
spec:
  containers:
  - name: rgapi
    image: gbaeke/rgapi
  nodeSelector:
    kubernetes.io/os: linux

The important bit above is the aadpodidbinding label which refers to the pod identity we created earlier. When the above pod gets scheduled, it will call out to the IMDS endpoint. You should see that in the logs of the NMI pod on the same node as your application pod. For example:

no clientID or resourceID in request. rgapi/rgapi has been matched with azure identity rgapi/rgapi
status (200) took 12677813 ns for req.method=GET reg.path=/metadata/identity/oauth2/token req.remote=10.240.0.36

The first line indicates that I did not specifically set a clientID in my request but that the request was matched to the rgapi identity. The second line shows the intercepted token request completing with HTTP 200, meaning the NMI pod obtained a token for the identity from Azure AD and returned it to the application.

Great! We now have a pod running that can retrieve resource groups with our custom managed identity. We did not have to add credentials manually or grab them from Key Vault. Our pod automatically picks up the pod identity. 🎉

Conclusion

Although it is still not super simple (is identity ever simple really?), the new method to enable pod identities is a definite improvement. It is currently in preview so it should not be used in production. Once it goes GA however, you will have a fully supported way to use user-assigned managed identities with your pods and to assign specific identities per pod, following the principle of least privilege.

Azure Key Vault Provider for Secrets Store CSI Driver

In the previous post, I talked about akv2k8s. akv2k8s is a Kubernetes controller that synchronizes secrets and certificates from Key Vault. Besides synchronizing to a regular secret, it can also inject secrets into pods.

Instead of akv2k8s, you can also use the secrets store CSI driver with the Azure Key Vault provider. As a CSI driver, its main purpose is to mount secrets and certificates as storage volumes. Next to that, it can also create regular Kubernetes secrets that can be used with an ingress controller or mounted as environment variables. That might be required if the application was not designed to read the secret from the file system.

In the previous post, I used akv2k8s to grab a certificate from Key Vault, create a Kubernetes secret and use that secret with nginx ingress controller:

certificate in Key Vault ------ akv2k8s periodic sync ------> Kubernetes secret ------> nginx ingress controller

Let’s briefly look at how to do this with the secrets store CSI driver.

Installation

Follow the guide to install the Helm chart with Helm v3:

helm repo add csi-secrets-store-provider-azure https://raw.githubusercontent.com/Azure/secrets-store-csi-driver-provider-azure/master/charts
helm install csi-secrets-store-provider-azure/csi-secrets-store-provider-azure --generate-name

This will install the components in the current Kubernetes namespace.

Easy no?

Syncing the certificate

Following the same example as with akv2k8s, we need to point at the certificate in Key Vault, set the right permissions, and bring the certificate down to Kubernetes.

You will first need to decide how to access Key Vault. You can use the managed identity of your AKS cluster or be more granular and use pod identity. If you have set up AKS with a managed identity, that is the simplest solution. You just need to grab the clientId of the managed identity like so:

az aks show -g <resource group> -n <aks cluster name> --query identityProfile.kubeletidentity.clientId -o tsv

Next, create a file with the content below and apply it to your cluster in a namespace of your choosing.

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: azure-gebakv
  namespace: YOUR NAMESPACE
spec:
  provider: azure
  secretObjects:
  - secretName: nginx-cert
    type: kubernetes.io/tls
    data:
    - objectName: nginx
      key: tls.key
    - objectName: nginx
      key: tls.crt
  parameters:
    useVMManagedIdentity: "true"
    userAssignedIdentityID: "CLIENTID YOU OBTAINED ABOVE" 
    keyvaultName: "gebakv"         
    objects:  |
      array:
        - |
          objectName: nginx
          objectType: secret        
    tenantId: "ID OF YOUR AZURE AD TENANT"

Compared to the akv2k8s controller, the above configuration is a bit more complex. In the parameters section, in the objects array, you specify the name of the certificate in Key Vault and its object type. Yes, you saw that correctly: the objectType has to be secret for this to work, because retrieving the certificate via the Key Vault secrets API is what returns both the certificate and its private key.

The other settings are self-explanatory: we use the managed identity, set its clientId and in keyvaultName we set the short name of our Key Vault.

The settings in the parameters section are actually sufficient to mount the secret/certificate in a pod. With the secretObjects section though, we can also ask for the creation of regular Kubernetes secrets. Here, we ask for a secret of type kubernetes.io/tls with name nginx-cert to be created. You need to explicitly set both the tls.key and the tls.crt value and correctly reference the objectName in the array.

The akv2k8s controller is simpler to use as you only need to point it to your certificate in Key Vault (and specify it’s a certificate, not a secret) and set a secret name. There is no need to set the different values in the secret.

Using the secret

The advantage of the secrets store CSI driver is that the secret is only mounted/created when an application requires it. That also means we have to instruct our application to mount the secret explicitly. You do that via a volume as the example below illustrates (part of a deployment):

spec:
      containers:
      - name: realtimeapp
        image: gbaeke/fluxapp:1.0.2
        volumeMounts:
          - mountPath: "/mnt/secrets-store"
            name: secrets-store-inline
            readOnly: true
        env:
        - name: REDISHOST
          value: "redis:6379"
        resources:
          requests:
            cpu: 25m
            memory: 50Mi
          limits:
            cpu: 150m
            memory: 150Mi
        ports:
        - containerPort: 8080
      volumes:
      - name: secrets-store-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "azure-gebakv"

In the above YAML, the following happens:

  • in volumes: we create a volume called secrets-store-inline and use the csi driver to mount the secrets we specified in the SecretProviderClass we created earlier (azure-gebakv)
  • in volumeMounts: we mount the volume on /mnt/secrets-store

Because we used secretObjects in our SecretProviderClass, this mount is accompanied by the creation of a regular Kubernetes secret as well.

When you remove the deployment, the Kubernetes secret will be removed instead of lingering behind for all to see.

Of course, the pods in my deployment do not need the mounted volume. It was not immediately clear to me how to avoid the mount but still create the Kubernetes secret (not exactly the point of a CSI driver 😀). On the other hand, there is a way to have the secret created as part of ingress controller creation. That approach is more useful in this case because we want our ingress controller to use the certificate. More information can be found here. In short, it roughly works as follows:

  • instead of creating and mounting a volume in your application pod, a volume should be created and mounted on the ingress controller
  • to do so, you modify the deployment of your ingress controller (e.g. ingress-nginx) with extraVolumes: and extraVolumeMounts: sections; depending on the ingress controller you use, other settings might be required

Be aware that you need to enable auto rotation of secrets manually and that it is an alpha feature at this point (December 2020). The akv2k8s controller does that for you out of the box.

Conclusion

Both the akv2k8s controller and the Secrets Store CSI driver (for Azure) can be used to achieve the same objective: syncing secrets, keys and certificates from Key Vault to AKS. In my experience, the akv2k8s controller is easier to use. The big advantage of the Secrets Store CSI driver is that it is a broader solution (not just for AKS) and supports multiple secret stores. Next to Azure Key Vault, it also supports Hashicorp’s Vault for example. My recommendation: for Azure Key Vault and AKS, keep it simple and try akv2k8s first!

An introduction to Flux v2

If you have read my blog and watched my Youtube channel, you know I have worked with Flux in the past. Flux, by weaveworks, is a GitOps Kubernetes Operator that ensures that your cluster state matches the desired state described in a git repository. There are other solutions as well, such as Argo CD.

With Flux v2, GitOps on Kubernetes became a lot more powerful and easier to use. Flux v2 is built on a set of controllers and APIs called the GitOps Toolkit. The toolkit contains the following components:

  • Source controller: allows you to create sources such as a GitRepository or a HelmRepository; the source controller acts on several custom resource definitions (CRDs) as defined in the docs
  • Kustomize controller: runs continuous delivery pipelines defined with Kubernetes manifests (YAML) files; although you can use kustomize and define kustomization.yaml files, you do not have to; internally though, Flux v2 uses kustomize to deploy your manifests; the kustomize controller acts on Kustomization CRDs as defined here
  • Helm controller: deploy your workloads based on Helm charts but do so declaratively; there is no need to run helm commands; see the docs for more information
  • Notification controller: responds to incoming events (e.g. from a git repo) and sends outgoing events (e.g. to Teams or Slack); more info here

If you throw it all together, you get something like this:

GitOps Toolkit components that make up Flux v2 (from https://toolkit.fluxcd.io/)

Getting started

To get started, you should of course look at the documentation over at https://toolkit.fluxcd.io. I also created a series of videos about Flux v2. The first one talks about Flux v2 in general and shows how to bootstrap a cluster.

Part 1 in the series about Flux v2

Although Flux v2 works with other source control systems than GitHub, for instance GitLab, I use GitHub in the above video. I also use kind, to make it easy to try out Flux v2 on your local machine. In subsequent videos, I use Azure Kubernetes Services (AKS).

In Flux v2, it is much easier to deploy Flux on your cluster with the flux bootstrap command. Flux v2 itself is basically installed and managed via GitOps principles by pushing all Flux v2 manifests to a git repository and running reconciliations to keep the components running as intended.

Kustomize

Flux v1 already supported kustomize but v2 takes it to another level. Whenever you want to deploy to Kubernetes with YAML manifests, you will create a kustomization, which is based on the Kustomization CRD. A kustomization is defined as below:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: realtimeapp-dev
  namespace: flux-system
spec:
  healthChecks:
  - kind: Deployment
    name: realtime-dev
    namespace: realtime-dev
  - kind: Deployment
    name: redis-dev
    namespace: realtime-dev
  interval: 1m0s
  path: ./deploy/overlays/dev
  prune: true
  sourceRef:
    kind: GitRepository
    name: realtimeapp-infra
  timeout: 2m0s
  validation: client

A kustomization requires a source. In this case, the source is a git repository called realtimeapp-infra that was already defined in advance. The source just points to a public git repository on Github: https://github.com/gbaeke/realtimeapp-infra.

The source contains a deploy folder, which contains a bases and an overlays folder. The kustomization points to the ./deploy/overlays/dev folder as set in path. That folder contains a kustomization.yaml file that deploys an application in a development namespace and uses the base from ./deploy/bases/realtimeapp as its source. If you are not sure what kustomize exactly does, I made a video that tries 😉 to explain it:

An introduction to kustomize

It is important to know that you do not need to use kustomize in your source files. If you point a Flux v2 kustomization to a path that just contains a bunch of YAML files, it will work equally well. You do not have to create a kustomization.yaml file in that folder that lists the resources (YAML files) that you want to deploy. Internally though, Flux v2 will use kustomize to deploy the manifests and uses the deployment order that kustomize uses: first namespaces, then services, then deployments, etc…

The interval in the kustomization (above set at 1 minute) means that your YAML files are applied at that interval, even if the source has not changed. This ensures that, if you modified resources on your cluster, the kustomization will reset the changes to the state as defined in the source. The source itself has its own interval. If you set a GitRepository source to 1 minute, the source is checked every 1 minute. If the source has changes, the kustomizations that depend on the source will be notified and proceed to deploy the changes.

A GitRepository source can refer to a specific branch, but can also refer to a semantic versioning tag if you use a semver range in the source. See checkout strategies for more information.

Deploying YAML manifests

If the above explanation of sources and kustomizations does not mean much to you, I created a video that illustrates these aspects more clearly:

In the above video, the source that points to https://github.com/gbaeke/realtimeapp-infra gets created first (see it at this mark). Next, I create two kustomizations, one for development and one for production. I use a kustomize base for the application plus two overlays, one for dev and one for production.

What to do when the app container image changes?

Flux v1 has a feature that tracks container images in a container registry and updates your cluster resources with a new image based on a filter you set. This requires read/write access to your git repository because Flux v1 writes the new image tags to your source files. Flux v2 does not have this feature yet (November 2020, see https://toolkit.fluxcd.io/roadmap).

In my example, I use a GitHub Action in the application source code repository to build and push the application image to Docker Hub. The GitHub action triggers a build job on two events:

  • push to main branch: build a container image with a short sha as the tag (e.g. gbaeke/flux-rt:sha-94561cb)
  • published release: build a container image with the release version as the tag (e.g. gbaeke/flux-rt:1.0.1)

When the build is caused by a push to main, the update-dev-image job runs. It modifies kustomization.yaml in the dev overlay with kustomize edit:

update-dev-image:
    runs-on: ubuntu-latest
    if: contains(github.ref, 'heads')
    needs:
    - build
    steps:
    - uses: imranismail/setup-kustomize@v1
      with:
        kustomize-version: 3.8.6
    - run: git clone https://${REPO_TOKEN}@github.com/gbaeke/realtimeapp-infra.git .
      env:
        REPO_TOKEN: ${{secrets.REPO_TOKEN}}
    - run: kustomize edit set image gbaeke/flux-rt:sha-$(git rev-parse --short $GITHUB_SHA)
      working-directory: ./deploy/overlays/dev
    - run: git add .
    - run: |
        git config user.email "$EMAIL"
        git config user.name "$GITHUB_ACTOR"
      env:
        EMAIL: ${{secrets.EMAIL}}
    - run: git commit -m "Set dev image tag to short sha"
    - run: git push

Similarly, when the build is caused by a release, the image is updated in the production overlay’s kustomization.yaml file.

Conclusion

If you are interested in GitOps as an alternative for continuous delivery to Kubernetes, do check out Flux v2 and see if it meets your needs. I personally like it a lot and believe that they are setting the standard for GitOps on Kubernetes. I have not covered Helm deployments, monitoring and alerting features yet. I will create additional videos and posts about those features in the near future. Stay tuned!

Docker without Docker: a look at Podman

I have been working with Docker for quite some time. More and more however, I see people switching to tools like Podman and Buildah and decided to give that a go.

I installed a virtual machine in Azure with the following Azure CLI command:

az vm create \
  --resource-group RESOURCEGROUP \
  --name VMNAME \
  --image UbuntuLTS \
  --authentication-type password \
  --admin-username azureuser \
  --admin-password PASSWORD \
  --size Standard_B2ms

Just replace RESOURCEGROUP, VMNAME and PASSWORD with the values you want to use and you are good to go. Note that the above command results in Ubuntu 18.04 at the time of writing.

SSH into that VM for the following steps.

Installing Podman

Installation of Podman is easy enough. The commands below do the trick:

. /etc/os-release
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/Release.key | sudo apt-key add -
sudo apt-get update
sudo apt-get -y upgrade 
sudo apt-get -y install podman

You can find more information at https://podman.io/getting-started/installation.

Where Docker uses a client/server model, with a privileged Docker daemon and a docker client that communicates with it, Podman uses a fork/exec model. The container process is a child of the Podman process. This also means you do not require root to run a container which is great from a security and auditing perspective.

You can now just use the podman command. It supports the same arguments as the docker command. If you want, you can even create a docker alias for the podman command.

To check if everything is working, run the following command:

podman run hello-world

It will pull down the hello-world image from Docker Hub and display a message.

I wanted to start my gbaeke/nasnet container with podman, using the following command:

podman run  -p 80:9090 -d gbaeke/nasnet

Of course, the above command will fail. I am not running as root, which means I cannot bind a process to a port below 1024. There are ways to fix that but I changed the command to:

podman run  -p 9090:9090 -d gbaeke/nasnet

The gbaeke/nasnet container is large, close to 3 GB. Pulling the container from Docker Hub went fast but Podman took a very long time during the Storing signatures phase. While the command was running, I checked disk space on the VM with df and noticed that the machine’s disk was quickly filling up.

On WSL2 (Windows Subsystem for Linux), I did not have trouble with pulling large images. With the docker info command, I found that it was using overlay2 as the storage driver:

Docker on WSL2 uses overlay2

You can find more information about Docker and overlay2 at https://docs.docker.com/storage/storagedriver/overlayfs-driver/.

With podman, run podman info to check the storage driver it uses. Look for graphDriverName in the output. In my case, podman used vfs. Although vfs is well supported and runs anywhere, it makes full copies of each layer in the image (layers are represented by directories on your filesystem), which consumes a lot of disk space. If the disk is not very fast, this also results in long wait times when pulling an image.

Without getting bogged down in the specifics of the storage drivers and their pros and cons, I decided to switch Podman from vfs to fuse-overlayfs. Fuse stands for Filesystem in Userspace, so fuse-overlayfs is the implementation of overlayfs in userspace (using FUSE). It supports deduplication of layers which will result in less consumption of disk space. This should be very noticeable when pulling a large image.

IMPORTANT: remove the containers folder in ~/.local/share to clear out container storage before installing fuse-overlayfs. Use the command below:

rm -rf ~/.local/share/containers

Installing fuse-overlayfs

The installation instructions are at https://github.com/containers/fuse-overlayfs. I needed to use the static build because I am running Ubuntu 18.04. On newer versions of Ubuntu, you can use apt install libfuse3-dev.

There is no point in repeating the static build steps here. Just head over to the GitHub repo and follow the steps. When asked to clone the git repo, use the following command:

git clone https://github.com/containers/fuse-overlayfs.git

The final step in the instructions is to copy fuse-overlayfs (which was just built with buildah) to /usr/bin.

If you now run podman info, graphDriverName should be overlay. There is nothing else you need to do to make that happen:

overlay storage driver with /usr/bin/fuse-overlayfs as the executable

When you now run the gbaeke/nasnet container, or any sufficiently large container, the process should be much smoother. It can still take a couple of minutes though. Note that at the end, your output will be somewhat like below:

Output from podman run -p 9090:9090 -d gbaeke/nasnet

Now you can run podman ps and you should see the running container:

gbaeke/nasnet container is running

Go to http://localhost:9090 and you should see the UI. Go ahead and classify an image! 😉

Conclusion

Installing and using Podman is easy, especially if you are somewhat familiar with Docker. I did have trouble with performance and disk usage for large images, but that can be fixed by replacing vfs with something like fuse-overlayfs. Be aware that there are many other options and that it is quite complex under the hood. But with the above steps, you should be good to go.

Will I use Podman from now on? Probably not, as Docker provides all I need for now and a lot of the tools I use depend on it.

From MQTT to InfluxDB with Dapr

In a previous post, we looked at using the Dapr InfluxDB component to write data to InfluxDB Cloud. In this post, we will take a look at reading data from an MQTT topic and storing it in InfluxDB. We will use Dapr 0.10, which includes both components.

To get up to speed with Dapr, please read the previous post and make sure you have an InfluxDB instance up and running in the cloud.

If you want to see a video instead:

MQTT to Influx with Dapr

Note that the video sends output to both InfluxDB and Azure SignalR. In addition, the video uses Dapr 0.8 with a custom compiled Dapr because I was still developing and testing the InfluxDB component.

MQTT Server

Although there are cloud-based MQTT servers you can use, let’s mix it up a little and run the MQTT server from Docker. If you have Docker installed, type the following:

docker run -it -p 1883:1883 -p 9001:9001 eclipse-mosquitto

The above command runs Mosquitto and exposes port 1883 on your local machine. You can use a tool such as MQTT Explorer to send data. Install MQTT Explorer on your local machine and run it. Create a connection like in the below screenshot:

MQTT Explorer connection

Now, click Connect to connect to Mosquitto. With MQTT, you send data to topics of your choice. Publish a JSON message to a topic called test as shown below:

Publish json data to the test topic

You can now click the topic in the list of topics and see its most recent value:

Subscribing to the test topic

Using MQTT with Dapr

You are now ready to read data from an MQTT topic with Dapr. If you have Dapr installed, you can run the following code to read from the test topic and store the data in InfluxDB:

const express = require('express');
const bodyParser = require('body-parser');

const app = express();
app.use(bodyParser.json());

const port = 3000;

// mqtt component will post messages from the test topic here
app.post('/mqtt', (req, res) => {
    console.log("MQTT Binding Trigger");
    console.log(req.body)

    // body is expected to contain room and temperature
    let room = req.body.room
    const temperature = req.body.temperature

    // room should not contain spaces
    room = room.split(" ").join("_")

    // create message for influx component
    const message = {
        "measurement": "stat",
        "tags": `room=${room}`,
        "values": `temperature=${temperature}`
    };
    
    // send the message to influx output binding
    res.send({
        "to": ["influx"],
        "data": message
    });
});

app.listen(port, () => console.log(`Node App listening on port ${port}!`));

In this example, we use Node.js instead of Python to illustrate that Dapr works with any language. You will also need this package.json and run npm install:

{
  "name": "mqttapp",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "body-parser": "^1.18.3",
    "express": "^4.16.4"
  }
}

In the previous post about InfluxDB, we used an output binding. You use an output binding by posting data to a Dapr HTTP URI.

To use an input binding like MQTT, you will need to create an HTTP server. Above, we create an HTTP server with Express, and listen on port 3000 for incoming requests. Later, we will instruct Dapr to listen for messages on an MQTT topic and, when a message arrives, post it to our server. We can then retrieve the message from the request body.
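As an aside, the handler does not have to be written in Node.js: any framework that can expose an HTTP route works. A minimal Python sketch of the same /mqtt handler, assuming Flask is installed (my own illustration, not code from the accompanying repo):

from flask import Flask, request, jsonify

app = Flask(__name__)

# Dapr posts messages from the MQTT binding to this route (must match the component name)
@app.route("/mqtt", methods=["POST"])
def mqtt_handler():
    body = request.get_json()
    room = str(body["room"]).replace(" ", "_")   # room should not contain spaces
    temperature = body["temperature"]

    message = {
        "measurement": "stat",
        "tags": f"room={room}",
        "values": f"temperature={temperature}"
    }

    # returning "to" and "data" tells Dapr to forward the message to the influx binding
    return jsonify({"to": ["influx"], "data": message})

if __name__ == "__main__":
    app.run(port=3000)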

To tell Dapr what to do, we’ll create a components folder in the same folder that holds the Node.js code. Put a file in that folder with the following contents:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mqtt
spec:
  type: bindings.mqtt
  metadata:
  - name: url
    value: mqtt://localhost:1883
  - name: topic
    value: test

Above, we configure the MQTT component to listen to topic test on mqtt://localhost:1883. The name we use (in metadata) is important because it needs to correspond to our HTTP handler (/mqtt).

Like in the previous post, there’s another file that configures the InfluxDB component:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: influx
spec:
  type: bindings.influx
  metadata:
  - name: Url
    value: http://localhost:9999
  - name: Token
    value: ""
  - name: Org
    value: ""
  - name: Bucket
    value: ""

Replace the parameters in the file above with your own.

Saving the MQTT request body to InfluxDB

If you look at the Node.js code, you have probably noticed that we send a response body in the /mqtt handler:

res.send({
        "to": ["influx"],
        "data": message
    });

Dapr is written to accept responses that include a to and a data field in the JSON response. The above response simply tells Dapr to send the message in the data field to the configured influx component.

Does it work?

Let’s run the code with Dapr to see if it works:

dapr run --app-id mqqtinflux --app-port 3000 --components-path=./components node app.js

In dapr run, we also need to specify the port our app uses. Remember that Dapr will post JSON data to our /mqtt handler!

Let’s post some JSON with the expected fields of temperature and room to our MQTT server:

Posting data to the test topic

The Dapr logs show the following:

Logs from the APP (appear alongside the Dapr logs)

In InfluxDB Cloud table view:

Data stored in InfluxDB Cloud (posted some other data points before)

Conclusion

Dapr makes it really easy to retrieve data with input bindings and send that data somewhere else with output bindings. There are many other input and output bindings so make sure you check them out on GitHub!

Using the Dapr InfluxDB component

A while ago, I created a component that can write to InfluxDB 2.0 from Dapr. This component is now included in the 0.10 release. In this post, we will briefly look at how you can use it.

If you do not know what Dapr is, take a look at https://dapr.io. I also have some videos on Youtube about Dapr. And be sure to check out the video below as well:

Let’s jump in and use the component.

Installing Dapr

You can install Dapr on Windows, Mac and Linux by following the instructions on https://dapr.io/. Just click the Download link and select your operating system. I installed Dapr on WSL 2 (Windows Subsystem for Linux) on Windows 10 with the following command:

wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash

The above command just installs the Dapr CLI. To initialize Dapr, you need to run dapr init.

Getting an InfluxDB database

InfluxDB is a time-series database. You can easily run it in a container on your local machine but you can also use InfluxDB Cloud. In this post, we will simply use a free cloud instance. Just head over to https://cloud2.influxdata.com/signup and sign up for an account. Follow the steps and use a free account. It stores data for a maximum of 30 days and has some other limits as well.

You will need the following information to write data to InfluxDB:

  • Organization: this will be set to the e-mail account you signed up with; it can be renamed if you wish
  • Bucket: your data is stored in a bucket; by default you get a bucket called e-mail-prefix’s Bucket (e.g. geert.baeke’s Bucket)
  • Token: you need a token that provides the necessary access rights such as read and/or write

Let’s rename the bucket to get a feel for the user interface. Click Data, Buckets and then Settings as shown below:

Getting to the bucket settings

Click Rename and follow the steps to rename the bucket:

Renaming the bucket

Now, let’s create a token. In the Load Data screen, click Tokens. Click Generate and then click Read/Write Token. Describe the token and create it like below:

Creating a token

Now click the token you created and copy it to the clipboard. You now have the organization name, a bucket name and a token. You still need a URL to connect to, but that is just the URL you see in the browser (the yellow part):

URL to send your data

Your URL will depend on the cloud you use.

Python code to write to InfluxDB with Dapr

The code below requires Python 3. I used version 3.6.9 but it will work with more recent versions of course.

import time
import requests
import os

dapr_port = os.getenv("DAPR_HTTP_PORT", 3500)

dapr_url = "http://localhost:{}/v1.0/bindings/influx".format(dapr_port)
n = 0.0
while True:
    n += 1.0
    payload = { 
        "data": {
            "measurement": "temp",
            "tags": "room=dorm,building=building-a",
            "values": "sensor=\"sensor X\",avg={},max={}".format(n, n*2)
            }, 
        "operation": "create" 
    }
    print(payload, flush=True)
    try:
        response = requests.post(dapr_url, json=payload)
        print(response, flush=True)

    except Exception as e:
        print(e, flush=True)

    time.sleep(1)

The code above is just an illustration of using the InfluxDB output binding from Dapr. It is crucial to understand that a Dapr process needs to be running, either locally on your system or as a Kubernetes sidecar, that the above program communicates with. To that end, we get the Dapr port number from an environment variable or use the default port 3500.

The Python program uses the InfluxDB output binding simply by posting data to an HTTP endpoint. The endpoint is constructed as follows:

dapr_url = "http://localhost:{}/v1.0/bindings/influx".format(dapr_port)

The dapr_url above is set to a URI that uses localhost over the Dapr port and then uses the influx binding by appending /v1.0/bindings/influx. All bindings have a specific name like influx, mqtt, etc… and that name is then added to /v1.0/bindings/ to make the call work.

So far so good, but how does the binding know where to connect and what organization, bucket and token to use? That's where the component .yaml file comes in. In the same folder where you save your Python code, create a folder called components. In that folder, create a file called influx.yaml (you can give it any name you want). The contents of influx.yaml are shown below:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: influx
spec:
  type: bindings.influx
  metadata:
  - name: Url
    value: YOUR URL
  - name: Token
    value: "YOUR TOKEN HERE"
  - name: Org
    value: "YOUR ORG"
  - name: Bucket
    value: "YOUR BUCKET"

Of course, replace the uppercase values above with your own. We will later tell Dapr to look for files like this in the components folder. Because you use the influx binding in your Python code, Dapr automatically looks for a component of type bindings.influx (the file above) and retrieves the required metadata. If any of the metadata is not set, or if the file is missing or improperly formatted, you will get an error.

To actually use the binding, we need to post some data to the URI we constructed. The data we send is in the payload variable as shown below:

 payload = { 
        "data": {
            "measurement": "temp",
            "tags": "room=dorm,building=building-a",
            "values": "sensor=\"sensor X\",avg={},max={}".format(n, n*2)
            }, 
        "operation": "create" 
    }

It requires a measurement field, a tags field and a values field, and uses the InfluxDB line protocol to send the data. You can find more information about that here.

The data field in the payload is specific to the Influx component. The operation field is required by this Dapr component as it is written to listen for create operations.
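To make that mapping concrete, the component presumably combines these fields into a single line protocol entry, roughly like the illustration below (my own sketch, not output from the component):

payload_data = {
    "measurement": "temp",
    "tags": "room=dorm,building=building-a",
    "values": "sensor=\"sensor X\",avg=1.0,max=2.0"
}

# line protocol shape: <measurement>,<tag set> <field set>
line = "{},{} {}".format(payload_data["measurement"], payload_data["tags"], payload_data["values"])
print(line)
# temp,room=dorm,building=building-a sensor="sensor X",avg=1.0,max=2.0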

Running the code

On your local machine, you will need to run Dapr together with your code to make it work. You use dapr run for this. To run the Python code (saved to app.py in my case), run the command below from the folder that contains the code and the components folder:

dapr run --app-id influx -d ./components python3 app.py

This starts Dapr and our application with app id influx. With -d, we point to the components folder.

When you run the code, Dapr logs and your logs will be printed to the screen. In InfluxDB Cloud, we can check the data from the user interface:

Data Explorer (Note: other organization and bucket than the one used in this post)

Conclusion

Dapr can be used in the cloud and at the edge, in containers or without. In both cases, you often have to write data to databases. With Dapr, you can now easily write data as time series to InfluxDB. Note that Dapr also has an MQTT input and output binding. Using the same simple technique you learned in this post, you can easily read data from an MQTT topic and forward it to InfluxDB. In a later post, we will take a look at that scenario as well. Or check this video instead: https://youtu.be/2vCT79KG24E. Note that the video uses a custom compiled Dapr 0.8 with the InfluxDB component because this video was created during development.

Dapr Service Invocation between an HTTP Python client and a GRPC Go server

Recently, I published several videos about Dapr on my Youtube channel. The videos cover the basics of state management, PubSub and service invocation.

The Getting Started video covers state management and service invocation:

Let’s take a closer look at service invocation with HTTP, Python and Node.

Service Invocation with HTTP

Service invocation (from Dapr docs)

The services you write (here Service A and B) talk to each other using the Dapr runtime. On Kubernetes, you talk to a Dapr sidecar deployed alongside your service container. On your development machine, you run your services via dapr run.

If you want to expose a method on Service B and you use HTTP, you just need to expose an HTTP handler or route. For example, with Express in Node you would use something like:

const express = require('express');
const app = express();
app.post('/neworder', (req, res) => { /* your code */ });

You then run your service and annotate it with the proper Dapr annotations (Kubernetes):

annotations:
        dapr.io/enabled: "true"
        dapr.io/id: "node"
        dapr.io/port: "3000"

On your local machine, you would just run the service via dapr run:

dapr run --app-id node --app-port 3000 node app.js

In the last example, the Dapr id is node and we indicate that the service is listening on port 3000. To invoke the method from service A, it can use the following code (Python example shown):

dapr_port = os.getenv("DAPR_HTTP_PORT", 3500)
dapr_url = "http://localhost:{}/v1.0/invoke/node/method/neworder".format(dapr_port)
message = {"data": {"orderId": 1234}}
response = requests.post(dapr_url, json=message, timeout=5)

As you can see, service A does not contact service B directly. It just talks to its Dapr sidecar on localhost (or Dapr on your dev machine) and asks it to invoke the neworder method via a service that uses Dapr id node. It is also clear that both service A and B use HTTP only. Because you just use HTTP to expose and invoke methods, you can use any language or framework.

You can find a complete example here with Node and Python.

Service Invocation with HTTP and GRPC

Dapr has SDKs available for C#, Go and other languages. You might prefer those over the generic HTTP approach. In the case of Go, the SDK uses GRPC to interface with the Dapr runtime. With Dapr in between, one service can use HTTP while another uses GRPC.

Let’s take a look at a service that exposes a method (HelloFromGo) from a Go application. The full example is here. Instead of creating an HTTP route with the name of your method, you use an OnInvoke handler that looks like this (only the start is shown, see the full code):

func (s *server) OnInvoke(ctx context.Context, in *commonv1pb.InvokeRequest) (*commonv1pb.InvokeResponse, error) {
	var response string

	switch in.Method {
	case "HelloFromGo":

		response = s.HelloFromGo()

Naturally, you also have to implement the HelloFromGo() method:

// HelloFromGo is a simple demo method to invoke
func (s *server) HelloFromGo() string {
	return "Hello"

}

Another service can use any language or framework and invoke the above method with a POST to the following URL if the Dapr id of the Go service is goserver:

http://localhost:3500/v1.0/invoke/goserver/method/HelloFromGo

A POST to the above URL tells Dapr to execute the OnInvoke method via GRPC, which will run the HelloFromGo function. It is perfectly possible to include a payload in your POST and have the OnInvoke handler process that payload. The full example is here, which also includes sending and processing a JSON payload and sending back a text response. You will need some understanding of how GRPC and protocol buffers work. A good book on GRPC is the following one: https://learning.oreilly.com/library/view/grpc-up-and/9781492058328/.
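To tie this back to the invocation URL above, a caller written in Python could look like the sketch below (the JSON payload is made up for illustration; the OnInvoke handler in the repo decides what it accepts):

import os
import requests

dapr_port = os.getenv("DAPR_HTTP_PORT", 3500)
dapr_url = "http://localhost:{}/v1.0/invoke/goserver/method/HelloFromGo".format(dapr_port)

# illustrative payload; the Go OnInvoke handler decides how to interpret it
message = {"message": "hello from Python"}

response = requests.post(dapr_url, json=message, timeout=5)
print(response.text)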

Conclusion

Dapr allows you to choose between HTTP and GRPC interfaces to interact with the runtime. You can choose whatever is most comfortable to you. One team can use HTTP with Python, JavaScript etc… while other teams use GRPC with their language of choice. Whatever you choose, the Dapr runtime will make sure service invocation just works allowing you to focus on the code.

Integrate an Azure Storage Account with Active Directory

A customer with a Windows Virtual Desktop deployment needed access to several file shares for one of their applications. The integration of Azure Storage Accounts with Active Directory allows us to provide this functionality without having to deploy and maintain file services on a virtual machine.

A sketch of the environment looks something like this:

Azure File Share integration with Active Directory

Based on the sketch above, you should think about the requirements to make this work:

  • Clients that access the file share need to be joined to a domain. This can be an Azure Active Directory Domain Services (AADDS) managed domain or just plain old Active Directory Domain Services (ADDS). The steps to integrate the storage account with AADDS or ADDS are different as we will see later. I will only look at ADDS integration via a PowerShell script. In this case, the WVD virtual machines are joined to ADDS and domain controllers are available on Azure.
  • Users that access the file share need to have an account in ADDS that is synced to Azure AD (AAD). This is required because users or groups are given share-level permissions at the storage account level to their AAD identity. The NTFS-level permissions are given to the ADDS identity. Since this is a Windows Virtual Desktop deployment, that is already the case.
  • You should think about how the clients (WVD here) connect to the file share. If you only need access from Azure subnets, then VNET Service Endpoints are a good choice. This will configure direct routing to the storage account in the subnet’s route table and also provides the necessary security as public access to the storage account is blocked. You could also use Private Link or just access the storage account via public access. I do not recommend the latter so either use service endpoints or private link.

Configuring the integration

In the configuration of the storage account, you will see the following options:

Storage account AD integration options

Integration with AADDS is just a click on Enabled. For ADDS integration however, you need to follow another procedure from a virtual machine that is joined to the ADDS domain.

On that virtual machine, log on with an account that can create a computer object in ADDS in an OU that you set in the script. For best results, the account you use should be synced to AAD and should have at least the Contributor role on the storage account.

Next, download the Microsoft provided scripts from here and unzip them in a folder like C:\scripts. You should have the following scripts and modules in there:

Scripts and PowerShell module for Azure Storage Account integration

Next, add a script to the folder with the following contents and replace the <PLACEHOLDERS>:

Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope CurrentUser

.\CopyToPSPath.ps1 

Import-Module -Name AzFilesHybrid

Connect-AzAccount

$SubscriptionId = "<YOUR SUB ID>"
$ResourceGroupName = "<YOUR RESOURCE GROUP>"
$StorageAccountName = "<YOUR STORAGE ACCOUNT NAME>"

Select-AzSubscription -SubscriptionId $SubscriptionId 

Join-AzStorageAccountForAuth `
        -ResourceGroupName $ResourceGroupName `
        -Name $StorageAccountName `
        -DomainAccountType "ComputerAccount" `
        -OrganizationalUnitName "<OU DISTINGUISHED NAME>"

Debug-AzStorageAccountAuth -StorageAccountName $StorageAccountName -ResourceGroupName $ResourceGroupName -Verbose

Run the script from the C:\scripts folder so it can execute CopyToPSPath.ps1 and import the AzFilesHybrid module. The Join-AzStorageAccountForAuth cmdlet does the actual work. When you are asked to rerun the script, do so!

The result of the above script should be a computer account in the OU that you chose. The computer account has the name of the storage account.

In the storage account configuration, you should see the following:

The blurred section will show the domain name

Now we can proceed to granting “share-level” access rights, similar to share-level rights on a Windows file server.

Granting share-level access

Navigate to the file share and click IAM. You will see the following:

IAM on the file share level

Use + Add to add AAD users or groups. You can use the following roles:

  • Storage File Data SMB Share Reader: read access
  • Storage File Data SMB Share Contributor: read, write and delete
  • Storage File Data SMB Share Elevated Contributor: read, write and delete + modify ACLs at the NTFS level

For example, if I needed to grant read rights to the group APP_READERS in ADDS, I would grant the Storage File Data SMB Share Reader role to the APP_READERS group in Azure AD (synced from ADDS).

Like on a Windows File Server, share permissions are not sufficient. Let’s add some good old NTFS rights…

Granting NTFS Access Rights

For a smooth experience, log on to a domain-joined machine with an ADDS account that is synced to an AAD account that has at least the Contributor role on the storage account.

To grant the NTFS rights, map a drive with the storage account key. Use the following command:

net use <desired-drive-letter>: \\<storage-account-name>.file.core.windows.net\<share-name> /user:Azure\<storage-account-name> <storage-account-key>

Get the storage account key from here:

Storage account access keys

Now you can open the mapped drive in Explorer and set NTFS rights. Alternatively, you can use icacls.exe or other tools.

Mapping the drive for users

Now that the storage account is integrated with ADDS, a user can log on to a domain-joined machine and mount the share without having to provide credentials. As long as the user has the necessary share and NTFS rights, she can access the data.

Mapping the drive can be done in many ways but a simple net use Z: \\storageaccountname.file.core.windows.net\share will suffice.

Securing the connection

You should configure the storage account in such a way that it only allows access from selected clients. In this case, because the clients are Windows Virtual Desktops in a specific Azure subnet, we can use Virtual Network Service Endpoints. They can be easily configured from Firewalls and Virtual Networks:

Access from selected networks only: 3 subnets in this case

Granting access to specific subnets results in the configuration of virtual network service endpoints and a modification of the subnet route table with a direct route to the storage account on the Microsoft network. Note that you are still connecting to the public IP of the storage account.

If you decide to use Private Link instead, you would get a private IP in your subnet that is mapped to the storage account. In that case, even on-premises clients could connect to the storage account over the VPN or ExpressRoute private peerings. Of course, using private link would require extra DNS configuration as well.

Some extra notes

  • when you cannot configure the integration with the PowerShell script, follow the steps to enable the integration manually; do not forget to set the Kerberos password properly
  • it is recommended to put the AD computer accounts that represent the storage accounts in a separate OU; enable a Group Policy on that OU that prevents password resets on the computer accounts

Conclusion

Although there is some work to be done upfront and there are requirements such as Azure AD and Azure AD Connect, using an Azure Storage Account to host Active Directory integrated file shares is recommended. Remember that it works with both AADDS and ADDS. In this post, we looked at ADDS only and integration via the Microsoft-provided PowerShell scripts.

Adding SMB1 protocol support to Windows Server 2019

I realize this is not a very exciting post, especially compared to my other wonderful musing on this site, but I felt I really had to write it to share the pain!

A colleague I work with needed to enable this feature on an Azure Windows Server 2019 machine to communicate with some old system that only supports Server Message Block version 1 (SMB1). Easy enough to add that right?

Trying the installation

Let’s first get some information about the feature:

Get-WindowsOptionalFeature -Online -FeatureName SMB1Protocol

The output:

Notice the State property? The feature is disabled and the payload (the installation files) is not present on the Azure virtual machine.

When you try to install the feature, you get:

I guess I need the Windows Server 2019 sources!

Downloading Windows Server 2019

I downloaded Windows Server 2019 (November 2019 version) from https://my.visualstudio.com/Downloads?q=SQL%20Server%202019. I am not sure if you can use the evaluation version of Windows Server 2019 because I did not try that. I downloaded the ISO to the Azure virtual machine.

Mount ISO and copy install.wim

On recent versions of Windows, you can right click an ISO and mount it. In the mounted ISO, search for install.wim and copy that file to a folder on your C: disk like c:\wim. Under c:\wim, create a folder called mount and run the following command:

dism /mount-wim /wimfile:c:\wim\install.wim /index:4 /mountdir:c:\wim\mount /readonly

The contents of install.wim is now available in c:\wim\mount. Now don’t try to enable the feature by pointing to the sources with the -source parameter of Enable-WindowsOptionalFeature. It will not work yet!

Patch the mounted files

The Azure Windows Server 2019 image (at the time of writing, June 2020) has a cumulative update installed: https://support.microsoft.com/en-us/help/4551853/windows-10-update-kb4551853. For this installation to work, I needed to download the update from https://www.catalog.update.microsoft.com/home.aspx and put it somewhere like c:\patches.

Now we can update the mounted files offline with the following command:

Dism /Add-Package /Image:"C:\wim\mount" /PackagePath="c:\patches\windows10.0-kb4551853-x64_ce1ea7def481ee2eb8bba6db49ddb42e45cba54f.msu" 

It will take a while to update! You need to do this because the files mounted from the downloaded ISO do not match the version of the Windows Server 2019 image. Without this update, the installation of the SMB1 feature will not succeed.

Enable the SMB1 feature

Now we can enable the feature with the following command:

dism /online /Enable-Feature /FeatureName:SMB1Protocol /All /source:c:\wim\mount\windows\winsxs /limitaccess 

You will need to reboot. After rebooting, from a PowerShell prompt run the command below to check the installation:

Get-WindowsOptionalFeature -Online -FeatureName SMB1Protocol

The State property should say: Enabled

Conclusion

Something this trivial took me way too long. Is there a simpler way? Let me know! 👍

And by the way, don’t enable SMB1. It’s not secure at all. But in some cases, there’s just no way around it.

Adding Authentication and Authorization to an Azure Static Web App

In a previous post, we created a static web app that retrieves documents from Cosmos DB via an Azure Function. The Azure Function got deployed automatically and runs off the same domain as your app. In essence, that frees you from having to setup Azure Functions separately and configuring CORS in the process.

Instead of allowing anonymous users to call the api at https://yourwebapp/api/device, I only want to allow specific users to do so. In this post, we will explore how that works.

You can find the source code of the static web app and the API on GitHub: https://github.com/gbaeke/az-static-web-app.

More into video tutorials? Then check out the video below. I recommend 1.2x speed! 😉

Full version about creating the app and protecting the API

Create a routes.json

To define the protected routes, you need routes.json in the root of your project:

routes.json to protect /api/*

The routes.json file serves multiple purposes. Check out how it works here. In my case, I just want to protect the /api/* routes and allow users in the Authenticated role. The Authenticated role is a built-in role, but you should create custom roles to protect sensitive data (more info near the end of this post). For our purposes, the platform error overrides are not needed and can be removed. These overrides are useful though, as they allow you to catch errors and act accordingly.

Push the above change to your repository for routes.json to go in effect. Once you do, access to /api/* requires authentication. Without it, you will get a 401 Unauthorized error. To fix that, invite your users and define roles.

Inviting Users

In Role Management, you can invite individual users to your app:

User gbaeke (via GitHub) user identity added

Just click Invite and fill in the blanks. Inviting a user results in an invitation link you should send the user. Below is an example for my Twitter account:

Let’s invite myself via my Twitter account

When I go to the invite link, I can authorize the app:

Authorizing Static Web Apps to access my account

After this, you will also get a Consent screen:

Granting Consent (users can always remove their data later; yeah right 😉)

When consent is given, the application will open with authentication. I added some code to the HTML page to display when the user is authenticated. The user name can be retrieved with a call to .auth/me (see later).

App with Twitter handle shown

In the Azure Portal, the Twitter account is now shown as well.

User added to roles of the web app

Note: anyone can actually authenticate to your app; you do not have to invite them; you invite users only when you want to assign them custom roles

Simple authentication code

The HTML code in index.html contains some links to login and logout:

  • To login: a link to /.auth/login/github
  • To logout: a link to /.auth/logout

Microsoft provides these paths under /.auth automatically to support the different authentication scenarios. In my case, I only have a GitHub login. To support Twitter or Facebook logins, I would need to provide some extra logic for the user to choose the provider.

In the HTML, the buttons are shown/hidden depending on the existence of user.UserDetails. The user information is retrieved via a call to the system-provided /.auth/me with the code below that uses fetch:

async getUser() {
    const response = await fetch("/.auth/me");
    const payload = await response.json();
    const { clientPrincipal } = payload;
    this.user = clientPrincipal;
}
user.UserDetails is just the username on the platform: gbaeke on GitHub, geertbaeke on Twitter, etc…

The combination of the routes.json file that protects /api/* and the authentication logic above results in the correct retrieval of the Cosmos DB documents. Note that when you are not authorized, the list is just empty with a 401 error in the console. In reality, you should catch the error and ask the user to authenticate.

One way of doing so is redirecting to a login page. Just add logic to routes.json that serves the path you want to use when the errorType is Unauthenticated as shown below:

"platformErrorOverrides": [
    {
      "errorType": "NotFound",
      "serve": "/custom-404.html"
    },
    {
      "errorType": "Unauthenticated",
      "serve": "/login"
    }
  ]

The danger of the Authenticated role

Above, we used the Authenticated role to provide access to the /api/* routes. That is actually not a good idea once you realize that non-invited users can authenticate to your app as well. As a general rule: always use a custom role to allow access to sensitive resources. Below, I changed the role in routes.json to reader. Now you can invite users and set their role to reader to make sure that only invited users can access the API!

"routes": [
      {
        "route": "/api/*",
        "allowedRoles": ["reader"]
      }

      
    ]

Below you can clearly see the effect of this. I removed GitHub user gbaeke from the list of users but I can still authenticate with the account. Because I am missing the reader role, the drop down list is not populated and a 401 error is shown:

Authenticated but not in the reader role

Conclusion

In this post, we looked at adding authentication and authorization to protect calls to our Azure Functions API. Azure Static Web Apps tries to make that process as easy as possible, and we all know how difficult authentication and authorization can be in reality! And remember: protect sensitive API calls with custom roles instead of the built-in Authenticated role.