Deploy and bootstrap your Kubernetes cluster with Azure DevOps and GitOps

A while ago, I published a post about deploying AKS with Azure DevOps, including extras like Nginx Ingress, cert-manager and several others. An Azure Resource Manager (ARM) template is used to deploy Azure Kubernetes Service (AKS). The extras are installed with Helm charts and Helm installer tasks. I mainly use it for demo purposes, but I often refer to it in my daily work as well.

Although this works, there is another approach that combines an Azure DevOps pipeline with GitOps. From a high-level point of view, it works as follows:

  • Deploy AKS with an Azure DevOps pipeline: declarative and idempotent thanks to the ARM template; the deployment is driven from an Azure DevOps pipeline but other solutions such as GitHub Actions will do as well (push)
  • Use a GitOps tool to deploy the GitOps agents on AKS and bootstrap the cluster by pointing the GitOps tool to a git repository (pull)

In this post, I will use Flux v2 as the GitOps tool of choice. Other tools, such as Argo CD, are capable of achieving the same goal. Note that there are ways to deploy Kubernetes using GitOps in combination with the Cluster API (CAPI). CAPI is quite a beast so let’s keep this post a bit more approachable. 😉

Let’s start with the pipeline (YAML):

# AKS deployment pipeline
trigger: none

variables:
  CLUSTERNAME: 'CLUSTERNAME'
  RG: 'CLUSTER_RESOURCE_GROUP'
  GITHUB_REPO: 'k8s-bootstrap'
  GITHUB_USER: 'GITHUB_USER'
  KEY_VAULT: 'KEYVAULT_SHORTNAME'

stages:
- stage: DeployGitOpsCluster
  jobs:
  - job: 'Deployment'
    pool:
      vmImage: 'ubuntu-latest'
    steps: 
    # DEPLOY AKS
    - task: AzureResourceGroupDeployment@2
      inputs:
        azureSubscription: 'SUBSCRIPTION_REF'
        action: 'Create Or Update Resource Group'
        resourceGroupName: '$(RG)'
        location: 'YOUR LOCATION'
        templateLocation: 'Linked artifact'
        csmFile: 'aks/deploy.json'
        csmParametersFile: 'aks/deployparams.gitops.json'
        overrideParameters: '-clusterName $(CLUSTERNAME)'
        deploymentMode: 'Incremental'
        deploymentName: 'aks-gitops-deploy'
       
    # INSTALL KUBECTL
    - task: KubectlInstaller@0
      name: InstallKubectl
      inputs:
        kubectlVersion: '1.18.8'

    # GET CREDS TO K8S CLUSTER WITH ADMIN AND INSTALL FLUX V2
    - task: AzureCLI@1
      name: RunAzCLIScripts
      inputs:
        azureSubscription: 'SUBSCRIPTION_REF'
        scriptLocation: 'inlineScript'
        inlineScript: |
          export GITHUB_TOKEN=$(GITHUB_TOKEN)
          az aks get-credentials -g $(RG) -n $(CLUSTERNAME) --admin
          msi="$(az aks show -n CLUSTERNAME -g CLUSTER_RESOURCE_GROUP | jq .identityProfile.kubeletidentity.objectId -r)"
          az keyvault set-policy --name $(KEY_VAULT) --object-id $msi --secret-permissions get
          curl -s https://toolkit.fluxcd.io/install.sh | sudo bash
          flux bootstrap github --owner=$(GITHUB_USER) --repository=$(GITHUB_REPO) --branch=main --path=demo-cluster --personal

A couple of things to note here:

  • The above pipeline contains several strings in UPPERCASE; replace them with your own values
  • GITHUB_TOKEN is a secret defined in the Azure DevOps pipeline and set as an environment variable in the last task; it is required for the flux bootstrap command to configure the GitHub repo (e.g. deploy key); see the snippet after this list for an alternative way to pass the secret
  • The AzureResourceGroupDeployment task deploys the AKS cluster based on parameters defined in deployparams.gitops.json; that file is in a private Azure DevOps git repo, but I have also added the parameters to the gbaeke/k8s-bootstrap repository for reference
  • The AKS deployment uses a managed identity instead of a service principal with a manually set client id and secret (recommended)
  • The flux bootstrap command results in the deployment of the Azure Key Vault to Kubernetes Secrets controller (akv2k8s), which requires access to Key Vault; the script in the last task retrieves the kubelet managed identity object id and uses az keyvault set-policy to grant it get permissions on secrets; if you delete and recreate the cluster many times, you will end up with several orphaned (UNKNOWN) access policies at the Key Vault level
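
Note that secret pipeline variables are not automatically exposed as environment variables. The pipeline above relies on macro expansion of $(GITHUB_TOKEN) inside the inline script; an alternative is to map the secret explicitly with an env block on the task. Below is a minimal, hedged sketch of that approach (task inputs abbreviated):

    - task: AzureCLI@1
      inputs:
        azureSubscription: 'SUBSCRIPTION_REF'
        scriptLocation: 'inlineScript'
        inlineScript: |
          flux bootstrap github --owner=$(GITHUB_USER) --repository=$(GITHUB_REPO) --branch=main --path=demo-cluster --personal
      env:
        GITHUB_TOKEN: $(GITHUB_TOKEN) # secret pipeline variable mapped into the task environment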

The pipeline is short because nginx-ingress, cert-manager, dapr, KEDA, etc… are all deployed via the gbaeke/k8s-bootstrap repo. The demo-cluster folder in that repo contains a source and four kustomizations (a sketch of one of those kustomizations follows the list below):

  • source: reference to another git repo that contains the actual deployments
  • k8s-akv2k8s-kustomize.yaml: deploys the Azure Key Vault to Kubernetes Secrets controller (akv2k8s)
  • k8s-secrets-kustomize.yaml: deploys secrets via custom resources picked up by the akv2k8s controller; depends on akv2k8s
  • k8s-common-kustomize.yaml: deploys all components in the ./deploy folder of gbaeke/k8s-common (nginx-ingress, external-dns, cert-manager, KEDA, dapr, …)
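
As an illustration, a kustomization such as k8s-common-kustomize.yaml could look roughly like the manifest below. This is a sketch: the exact names, intervals and dependencies in the actual repo may differ.

---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: k8s-common
  namespace: flux-system
spec:
  interval: 10m0s
  path: ./deploy
  prune: true
  sourceRef:
    kind: GitRepository
    name: k8s-common      # the source defined in the same folder
  dependsOn:
    - name: k8s-secrets   # deploy the secrets before the common components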

Overall, the big picture looks like this:

Note that the kustomizations that point to ./akv2k8s and ./deploy actually deploy HelmReleases to the cluster. For instance in ./akv2k8s, you will find the following manifest:

---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: akv2k8s
  namespace: flux-system
spec:
  chart:
    spec:
      chart: akv2k8s
      sourceRef:
        kind: HelmRepository
        name: akv2k8s-repo
  interval: 5m0s
  releaseName: akv2k8s
  targetNamespace: akv2k8s

This manifest tells Flux to deploy a Helm chart, akv2k8s, from the HelmRepository source akv2k8s-repo that is defined as follows:

---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: akv2k8s-repo
  namespace: flux-system
spec:
  interval: 1m0s
  url: http://charts.spvapi.no/

It is perfectly valid to use a kustomization that deploys manifests that contain resources of kind HelmRelease and HelmRepository. In fact, you can even patch those via a kustomization.yaml file if you wish.
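For example, a kustomization.yaml in the ./akv2k8s folder could list the HelmRelease and HelmRepository manifests and apply a patch on top of them. The sketch below is illustrative; the file names and the patch are assumptions:

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - akv2k8s-helmrepository.yaml
  - akv2k8s-helmrelease.yaml
patchesStrategicMerge:
  - helmrelease-patch.yaml   # e.g. override spec.interval or spec.targetNamespace of the HelmRelease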

You might wonder why I deploy the akv2k8s controller first, and then deploy a secret with the following manifest (uppercase strings to be replaced):

apiVersion: spv.no/v1
kind: AzureKeyVaultSecret
metadata:
  name: secret-sync 
  namespace: flux-system
spec:
  vault:
    name: KEYVAULTNAME # name of key vault
    object:
      name: SECRET # name of the akv object
      type: secret # akv object type
  output: 
    secret: 
      name: SECRET # kubernetes secret name
      dataKey: values.yaml # key to store object value in kubernetes secret

The external-dns chart I deploy in later steps requires configuration to be able to change DNS settings in Cloudflare. Obviously, I do not want to store the Cloudflare secret in the k8s-common git repo. One way to solve that is to store the secrets in Azure Key Vault and then grab those secrets and convert them to Kubernetes secrets. The external-dns HelmRelease can then reference the secret to override values.yaml of the chart. Indeed, that requires storing a file in Key Vault which is easy to do like so (replace uppercase strings):

az keyvault secret set --name SECRETNAME --vault-name VAULTNAME --file ./YOURFILE.YAML

You can call the secret what you want but the Kubernetes secret dataKey should be values.yaml for the HelmRelease to work properly.
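
For reference, the external-dns HelmRelease can then pull in that secret via valuesFrom. The sketch below shows the idea; the chart source, namespaces and secret name are assumptions and depend on your setup and helm-controller version:

---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: external-dns
  namespace: flux-system
spec:
  interval: 5m0s
  chart:
    spec:
      chart: external-dns
      sourceRef:
        kind: HelmRepository
        name: bitnami
  releaseName: external-dns
  targetNamespace: external-dns
  valuesFrom:
    - kind: Secret
      name: SECRET            # the Kubernetes secret created by akv2k8s
      valuesKey: values.yaml  # must match the dataKey set above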

There are other ways to work with secrets in GitOps. The Flux v2 documentation mentions SealedSecrets and SOPS and you are of course welcome to use those.

Take a look at the different repos I outlined above to see the actual details. I think this approach makes deploying and bootstrapping a cluster much easier compared to using a bunch of Helm install tasks and manifest deployments in the pipeline. What do you think?

An introduction to Flux v2

If you have read my blog and watched my YouTube channel, you know I have worked with Flux in the past. Flux, by Weaveworks, is a GitOps Kubernetes operator that ensures that your cluster state matches the desired state described in a git repository. There are other solutions as well, such as Argo CD.

With Flux v2, GitOps on Kubernetes became a lot more powerful and easier to use. Flux v2 is built on a set of controllers and APIs called the GitOps Toolkit. The toolkit contains the following components:

  • Source controller: allows you to create sources such as a GitRepository or a HelmRepository; the source controller acts on several custom resource definitions (CRDs) as defined in the docs
  • Kustomize controller: runs continuous delivery pipelines defined with Kubernetes manifests (YAML) files; although you can use kustomize and define kustomization.yaml files, you do not have to; internally though, Flux v2 uses kustomize to deploy your manifests; the kustomize controller acts on Kustomization CRDs as defined here
  • Helm controller: deploy your workloads based on Helm charts but do so declaratively; there is no need to run helm commands; see the docs for more information
  • Notification controller: responds to incoming events (e.g. from a git repo) and sends outgoing events (e.g. to Teams or Slack); more info here; a hedged example follows this list
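
To make the notification controller a bit more concrete, below is a sketch of a Provider and an Alert that forward Flux events to Slack. The secret name, channel and event sources are assumptions:

---
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Provider
metadata:
  name: slack
  namespace: flux-system
spec:
  type: slack
  channel: general
  secretRef:
    name: slack-webhook   # secret containing the webhook URL in the address field
---
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Alert
metadata:
  name: on-call
  namespace: flux-system
spec:
  providerRef:
    name: slack
  eventSeverity: info
  eventSources:
    - kind: GitRepository
      name: '*'
    - kind: Kustomization
      name: '*'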

If you throw it all together, you get something like this:

GitOps Toolkit components that make up Flux v2 (from https://toolkit.fluxcd.io/)

Getting started

To get started, you should of course look at the documentation over at https://toolkit.fluxcd.io. I also created a series of videos about Flux v2. The first one talks about Flux v2 in general and shows how to bootstrap a cluster.

Part 1 in the series about Flux v2

Although Flux v2 works with source control systems other than GitHub, for instance GitLab, I use GitHub in the above video. I also use kind to make it easy to try out Flux v2 on your local machine. In subsequent videos, I use Azure Kubernetes Service (AKS).

In Flux v2, it is much easier to deploy Flux on your cluster with the flux bootstrap command. Flux v2 itself is basically installed and managed via GitOps principles by pushing all Flux v2 manifests to a git repository and running reconciliations to keep the components running as intended.

Kustomize

Flux v1 already supported kustomize but v2 takes it to another level. Whenever you want to deploy to Kubernetes with YAML manifests, you will create a kustomization, which is based on the Kustomization CRD. A kustomization is defined as below:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: realtimeapp-dev
  namespace: flux-system
spec:
  healthChecks:
  - kind: Deployment
    name: realtime-dev
    namespace: realtime-dev
  - kind: Deployment
    name: redis-dev
    namespace: realtime-dev
  interval: 1m0s
  path: ./deploy/overlays/dev
  prune: true
  sourceRef:
    kind: GitRepository
    name: realtimeapp-infra
  timeout: 2m0s
  validation: client

A kustomization requires a source. In this case, the source is a git repository called realtimeapp-infra that was defined in advance. The source just points to a public git repository on GitHub: https://github.com/gbaeke/realtimeapp-infra.
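
Such a source is defined with a GitRepository resource. A minimal sketch, assuming the repository is tracked on its main branch at a one-minute interval:

---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: realtimeapp-infra
  namespace: flux-system
spec:
  interval: 1m0s
  url: https://github.com/gbaeke/realtimeapp-infra
  ref:
    branch: main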

The source contains a deploy folder, which contains a bases and an overlays folder. The kustomization points to the ./deploy/overlays/dev folder as set in path. That folder contains a kustomization.yaml file that deploys an application in a development namespace and uses the base from ./deploy/bases/realtimeapp as its source. If you are not sure what kustomize exactly does, I made a video that tries 😉 to explain it:

An introduction to kustomize

It is important to know that you do not need to use kustomize in your source files. If you point a Flux v2 kustomization to a path that just contains a bunch of YAML files, it will work equally well. You do not have to create a kustomization.yaml file in that folder that lists the resources (YAML files) that you want to deploy. Internally though, Flux v2 will use kustomize to deploy the manifests and uses the deployment order that kustomize uses: first namespaces, then services, then deployments, etc…

The interval in the kustomization (above set at 1 minute) means that your YAML files are applied at that interval, even if the source has not changed. This ensures that, if you modified resources on your cluster, the kustomization will reset the changes to the state as defined in the source. The source itself has its own interval. If you set a GitRepository source to 1 minute, the source is checked every 1 minute. If the source has changes, the kustomizations that depend on the source will be notified and proceed to deploy the changes.

A GitRepository source can refer to a specific branch, but can also refer to a semantic versioning tag if you use a semver range in the source. See checkout strategies for more information.
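
For instance, the ref block of a GitRepository could track tags in a semver range instead of a branch (the range below is illustrative):

spec:
  ref:
    # instead of branch: main, track the latest tag within a semver range
    semver: ">=1.0.0 <2.0.0"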

Deploying YAML manifests

If the above explanation of sources and kustomizations does not mean much to you, I created a video that illustrates these aspects more clearly:

In the above video, the source that points to https://github.com/gbaeke/realtimeapp-infra gets created first (see it at this mark). Next, I create two kustomizations, one for development and one for production. I use a kustomize base for the application plus two overlays, one for dev and one for production.

What to do when the app container image changes?

Flux v1 has a feature that tracks container images in a container registry and updates your cluster resources with a new image based on a filter you set. This requires read/write access to your git repository because Flux v1 sets the image references in your source files. Flux v2 does not have this feature yet (November 2020, see https://toolkit.fluxcd.io/roadmap).

In my example, I use a GitHub Action in the application source code repository to build and push the application image to Docker Hub. The GitHub Action triggers a build job on two events (the trigger section is sketched after this list):

  • push to main branch: build a container image with a short sha as the tag (e.g. gbaeke/flux-rt:sha-94561cb)
  • published release: build a container image with the release version as the tag (e.g. gbaeke/flux-rt:1.0.1)
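
The trigger section of such a workflow could look like this (a minimal sketch of the on block only):

on:
  push:
    branches:
      - main          # builds tagged with the short commit sha
  release:
    types:
      - published     # builds tagged with the release version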

When the build is caused by a push to main, the update-dev-image job runs. It modifies kustomization.yaml in the dev overlay with kustomize edit:

update-dev-image:
    runs-on: ubuntu-latest
    if: contains(github.ref, 'heads')
    needs:
    - build
    steps:
    - uses: imranismail/setup-kustomize@v1
      with:
        kustomize-version: 3.8.6
    - run: git clone https://${REPO_TOKEN}@github.com/gbaeke/realtimeapp-infra.git .
      env:
        REPO_TOKEN: ${{secrets.REPO_TOKEN}}
    - run: kustomize edit set image gbaeke/flux-rt:sha-$(git rev-parse --short $GITHUB_SHA)
      working-directory: ./deploy/overlays/dev
    - run: git add .
    - run: |
        git config user.email "$EMAIL"
        git config user.name "$GITHUB_ACTOR"
      env:
        EMAIL: ${{secrets.EMAIL}}
    - run: git commit -m "Set dev image tag to short sha"
    - run: git push

Similarly, when the build is caused by a release, the image is updated in the production overlay’s kustomization.yaml file.
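
A hedged sketch of that production job is shown below; the actual repository may use a different condition or tag expression:

update-prod-image:
    runs-on: ubuntu-latest
    if: github.event_name == 'release'
    needs:
    - build
    steps:
    - uses: imranismail/setup-kustomize@v1
      with:
        kustomize-version: 3.8.6
    - run: git clone https://${REPO_TOKEN}@github.com/gbaeke/realtimeapp-infra.git .
      env:
        REPO_TOKEN: ${{secrets.REPO_TOKEN}}
    - run: kustomize edit set image gbaeke/flux-rt:${{ github.event.release.tag_name }}
      working-directory: ./deploy/overlays/prod
    - run: |
        git config user.email "$EMAIL"
        git config user.name "$GITHUB_ACTOR"
        git add .
        git commit -m "Set prod image tag to release version"
        git push
      env:
        EMAIL: ${{secrets.EMAIL}}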

Conclusion

If you are interested in GitOps as an alternative for continuous delivery to Kubernetes, do check out Flux v2 and see if it meets your needs. I personally like it a lot and believe that they are setting the standard for GitOps on Kubernetes. I have not covered Helm deployments, monitoring and alerting features yet. I will create additional videos and posts about those features in the near future. Stay tuned!

Docker without Docker: a look at Podman

I have been working with Docker for quite some time. More and more however, I see people switching to tools like Podman and Buildah and decided to give that a go.

I installed a virtual machine in Azure with the following Azure CLI command:

az vm create \
  	--resource-group RESOURCEGROUP \
  	--name VMNAME \
  	--image UbuntuLTS \
	--authentication-type password \
  	--admin-username azureuser \
  	--admin-password PASSWORD \
	--size Standard_B2ms

Just replace RESOURCEGROUP, VMNAME and PASSWORD with the values you want to use and you are good to go. Note that the above command results in Ubuntu 18.04 at the time of writing.

SSH into that VM for the following steps.

Installing Podman

Installation of Podman is easy enough. The commands below do the trick:

. /etc/os-release
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/Release.key | sudo apt-key add -
sudo apt-get update
sudo apt-get -y upgrade 
sudo apt-get -y install podman

You can find more information at https://podman.io/getting-started/installation.

Where Docker uses a client/server model, with a privileged Docker daemon and a docker client that communicates with it, Podman uses a fork/exec model. The container process is a child of the Podman process. This also means you do not require root to run a container which is great from a security and auditing perspective.

You can now just use the podman command. It supports the same arguments as the docker command. If you want, you can even create a docker alias for the podman command.

To check if everything is working, run the following command:

podman run hello-world

It will pull down the hello-world image from Docker Hub and display a message.

I wanted to start my gbaeke/nasnet container with podman, using the following command:

podman run  -p 80:9090 -d gbaeke/nasnet

Of course, the above command will fail. I am not running as root, which means I cannot bind a process to a port below 1024. There are ways to fix that but I changed the command to:

podman run  -p 9090:9090 -d gbaeke/nasnet

The gbaeke/nasnet container is large, close to 3 GB. Pulling the container from Docker Hub went fast but Podman took a very long time during the Storing signatures phase. While the command was running, I checked disk space on the VM with df and noticed that the machine’s disk was quickly filling up.

On WSL2 (Windows Subsystem for Linux), I did not have trouble with pulling large images. With the docker info command, I found that it was using overlay2 as the storage driver:

Docker on WSL2 uses overlay2

For more information about Docker and overlay2, see https://docs.docker.com/storage/storagedriver/overlayfs-driver/.

With Podman, run podman info to check the storage driver it uses. Look for graphDriverName in the output. In my case, Podman used vfs. Although vfs is well supported and runs anywhere, it makes full copies of the layers (represented by directories on your filesystem) in the image, which consumes a lot of disk space. If the disk is not super fast, this results in long wait times when pulling an image, as well as wasted disk space.

Without getting bogged down in the specifics of the storage drivers and their pros and cons, I decided to switch Podman from vfs to fuse-overlayfs. FUSE stands for Filesystem in Userspace, so fuse-overlayfs is an implementation of overlayfs in userspace (using FUSE). It supports deduplication of layers, which results in far less disk consumption. This should be very noticeable when pulling a large image.

IMPORTANT: remove the containers folder in ~/.local/share to clear out container storage before switching to fuse-overlayfs. Use the command below:

rm -rf ~/.local/share/containers

Installing fuse-overlayfs

The installation instructions are at https://github.com/containers/fuse-overlayfs. I needed to use the static build because I am running Ubuntu 18.04. On newer versions of Ubuntu, you can use apt install libfuse3-dev.

It’s of no use here to repeat the static build steps. Just head over to the GitHub repo and follow the steps. When asked to clone the git repo, use the following command:

git clone https://github.com/containers/fuse-overlayfs.git

The final step in the instructions is to copy fuse-overlayfs (which was just built with buildah) to /usr/bin.

If you now run podman info, the graphDriverName should be overlay. There's nothing you need to do to make that happen:

overlay storage driver with /usr/bin/fuse-overlayfs as the executable

When you now run the gbaeke/nasnet container, or any sufficiently large container, the process should be much smoother. It can still take a couple of minutes though. Note that at the end, your output will be somewhat like below:

Output from podman run -p 9090:9090 -d gbaeke/nasnet

Now you can run podman ps and you should see the running container:

gbaeke/nasnet container is running

Go to http://localhost:9090 and you should see the UI. Go ahead and classify an image! 😉

Conclusion

Installing and using Podman is easy, especially if you are somewhat familiar with Docker. I did have trouble with performance and disk usage with large images, but that can be fixed by swapping out vfs for something like fuse-overlayfs. Be aware that there are many other options and that it is quite complex under the hood. But with the above steps, you should be good to go.

Will I use Podman from now on? Probably not, as Docker provides all I need for now and a lot of the tools I use depend on it.

Azure Application Gateway and Cloudflare

I often work with customers that build web applications on cloud platforms like Azure, AWS or Digital Ocean. The web application is usually built by a third party that specializes in e-commerce, logistics or industrial applications in a wide range of industries. More often than not, these applications use Cloudflare for DNS, caching, and security.

In this post, we will take a look at such a case with the application running in containers on Azure Kubernetes Service (AKS). I have substituted the application with one of my own, the go-realtime app.

There’s also a video:

The big picture

Sketch of the “architecture”

The application runs in containers on an AKS cluster. Although we could expose the application using an Azure load balancer, a layer 7 load balancer such as Azure Application Gateway, referred to as AG below, is more appropriate here because it allows routing based on URLs and paths and much more.

Because Kubernetes is a dynamic environment, a component is required that configures AG automatically. Application Gateway Ingress Controller (AGIC) plays that part. AGIC configures the AG based on the ingresses we create in the cluster. In essence, that will result in a listener on the public IP that is associated with AG.

In Cloudflare, we will need to configure DNS records that use proxying. The records will point to the IP address of the AG. Below is an example of a DNS record with proxying turned on (orange cloud):

A record at Cloudflare with proxying; blurred out address of AG

Let’s look at these components in a bit more detail.

Application Gateway

Microsoft has a lot of documentation on AG, including the AGIC component. There are many options and approaches when it comes to using AG together with AKS. Some are listed below:

  • Install AKS, AG and AGIC in one step: see the docs for more information; in general, I would not follow this approach and use the next option
  • Install AKS and AG separately: you can find an example here; this allows you to deploy AKS and AG (plus its public IP) using your automation tools of choice such as ARM, Terraform or Pulumi

In most cases, we deploy AKS with Azure CNI networking. This requires a virtual network (VNet) with a subnet specifically for your AKS cluster. Only one cluster should be in the subnet.

AG also requires a subnet. You can create that subnet in the same VNet and size it according to the documentation. In virtually all cases, you should go for AG v2.

In the video above, I install AG with Azure CLI. Once AKS and AG are deployed, you will need to deploy the AGIC component.

Application Gateway Ingress Controller

You basically have two options to install AGIC:

  • Install via an AKS addon: discussed further
  • Install with a Helm chart: see Helm greenfield and Helm brownfield deployment for more information

Although the installation via an AKS addon is preferred, at the time of writing (October 2020), this method is in preview. After configuring your subscription to enable this feature and after installing the aks-preview addon for Azure CLI, you can use the following command to install AGIC:

appgwId=$(az network application-gateway show -n AGname -g AGresourcegroup -o tsv --query "id")
az aks enable-addons -n AKSclustername -g AKSResourcegroup -a ingress-appgw --appgw-id $appgwId

Indeed, you first need to find the id of the AG you deployed. This id can be found in the portal or with the first command above, which saves the result in a variable (Linux shell). The az aks enable-addons command is the command to install any addon in AKS, including the AGIC addon. The AGIC addon is called ingress-appgw.

Installation via the addon is preferred because it makes the AGIC installation part of AKS and part of the managed service for maintenance and upgrades. If you install AGIC via Helm, you are responsible for maintaining and upgrading it. In addition, the Helm deployment requires AAD pod identity, which complicates matters further. From the moment the addon is GA (generally available), I would recommend using it exclusively, as long as your scenario supports it.

That last sentence is important because there are quite some differences between AGIC installed with Helm and AGIC installed with the addon. Those differences should disappear over time though.

Required access rights for AGIC

AGIC configures AG via ARM (Azure Resource Manager). As such, AGIC requires read and write access to AG. To check whether AGIC has the correct access, take a look at the AGIC pod logs.

Indeed, the AGIC installation results in a pod in the kube-system namespace. On my system, it looks like this (from kubectl get pods -n kube-system):

ingress-appgw-deployment-7dd969fddb-jfps5 1/1 Running 0 6h50m

When you check the logs of that pod, you should see output like below:

AGIC logs displayed via the wonderful K9S tool 👍

The logs show that AGIC can connect to AG properly. If however, you get 403 errors, AGIC does not have the correct access rights. That can easily be fixed by granting the Contributor role on your AG to the user managed identity used by AGIC (if the AKS addon was used). In my case, that is the following account:

User Assigned Managed Identity ingressapplicationgateway-clustername

Configuring AG via Ingresses

Now that AG and AGIC are installed and AGIC has read and write access to AG, we can create Kubernetes Ingress objects like we usually do. Below is an example:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: realtimeapp-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    appgw.ingress.kubernetes.io/appgw-ssl-certificate: "origin"
spec:
  rules:
  - host: rt.baeke.info
    http:
      paths:
      - path: /
        backend:
          serviceName: realtimeapp
          servicePort: 80

This is a regular Ingress definition. The ingress.class annotation is super important because it tells AGIC to do its job. The second annotation is part of our use case because we want AG to create an HTTPS listener and we want to use a certificate that is already installed on AG. That certificate contains a Cloudflare origin certificate valid for *.baeke.info and expiring somewhere in 2035. I must make sure I update that certificate at that time! 😉

Note that this is just one way of configuring the certificate. You can also save the certificate as a Kubernetes secret and refer to it in your Ingress definition. AGIC will then push that certificate to AG. AGIC also supports Let’s Encrypt, with some help from cert-manager. I will let you have some fun with that though! Tell me how it went!
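
If you go the Kubernetes secret route, the idea is sketched below: a TLS secret plus a tls section in the Ingress. The secret name and base64 placeholders are illustrative; AGIC picks up the certificate and uploads it to AG:

apiVersion: v1
kind: Secret
metadata:
  name: origin-cert
type: kubernetes.io/tls
data:
  tls.crt: BASE64_ENCODED_CERT   # replace with your base64-encoded certificate
  tls.key: BASE64_ENCODED_KEY    # replace with your base64-encoded private key
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: realtimeapp-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  tls:
  - hosts:
    - rt.baeke.info
    secretName: origin-cert
  rules:
  - host: rt.baeke.info
    http:
      paths:
      - path: /
        backend:
          serviceName: realtimeapp
          servicePort: 80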

From the moment we create the Ingress, AGIC will pick it up and configure AG. Here’s the listener for instance:

AG listener as created by AGIC

By the way, to create the certificate in AG, use the command below with a cert.pfx file containing the certificate and private key in the same folder:

az network application-gateway ssl-cert create -g resourcegroupname --gateway-name AGname -n origin --cert-file cert.pfx --cert-password SomePassword123

Of course, you can choose any name you like for the -n parameter.

Cloudflare Configuration

As mentioned before, you need to create proxied A or CNAME records. The user connection will go to Cloudflare, Cloudflare will do its thing and then connect to the public IP of AG, returning the results to the user.

To enforce end-to-end encryption, set the mode to Full (strict):

Cloudflare Full (strict) SSL/TLS encryption

As your edge certificate (used at the Cloudflare edge locations), you have several options. One of those options is to use a Cloudflare Universal SSL certificate, which is free. Another option is to use the Advanced Certificate Manager, which comes at an extra cost. On higher plans, you can upload your own certificates. In my case, I have Universal SSL applied, but we mostly use the other two options in production scenarios:

Cloudflare Universal SSL

Via the edge certificate, users can connect securely to a Cloudflare edge location. Cloudflare itself needs to connect securely to AG. We now need to generate an origin certificate we can install on AG:

Creating the origin certificate

The questions that follow are straightforward and not discussed here. Here they are:

Generating the origin cert

After clicking next, you will get your certificate and private key in PEM format (default), which you can use to create the .pfx file. You can use the openssl tool as discussed here. Just copy and paste the certificate and private key to separate text files, for example cert.pem and cert.key, and use them as input to the openssl command. Once you have the .pfx file, use the command shown earlier to upload it to AG.

In the Edge Certificates section, it is recommended to also enable Always use HTTPS which redirects HTTP to HTTPS traffic.

Redirect HTTP to HTTPS

Restricting AG to Cloudflare traffic

Application Gateway v2 is automatically deployed with a public IP. You can restrict access to that IP address with an NSG.

It is important to understand how the NSG works before you start creating it. The documentation provides all the information you need, but be aware the steps are different for AG v2 compared to AG v1.

Here is a screenshot of my NSG inbound rules, outbound rules were left at the default:

NSG on the AG subnet

Note that the second rule only allows access on port 443 from Cloudflare addresses as found here.

Let’s check if only Cloudflare has access with curl. First I run the following command without the NSG applied:

curl --header "Host: rt.baeke.info" https://52.224.72.167 --insecure

The above command responds with:

Response from curl command

When I apply the NSG and give it some time, the curl command times out.

Conclusion

In this post, we looked at using Application Gateway Ingress Controller, which configures Application Gateway based on Kubernetes Ingress definitions. We have also looked at combining Application Gateway with Cloudflare, by using Cloudflare proxying in combination with an Azure Network Security Group that only allows access to Application Gateway from well-known IP addresses. I hoped you liked this and if you have any remarks or spotted errors, let me know!

HashiCorp Waypoint Image Tagging

Recently (October, 2020) I posted an introduction to HashiCorp Waypoint on my YouTube channel. It shows how to build, push, deploy and release applications to Kubernetes with a single waypoint up command. If you want to check out that video first, see below ⬇⬇⬇

After watching that video, it should be clear that you drive the process from a file called waypoint.hcl. The waypoint.hcl to deploy the Azure Function app in the video, is shown below:

project = "wptest-hello"

app "wptest-hello" {
  labels = {
    "service" = "wptest-hello",
    "env" = "dev"
  }

  build {
    use "docker" {}
    registry {
        use "docker" {
          image = "gbaeke/wptest-hello"
          tag = "latest"
          local = false
        }
    }
  }

  deploy {
    use "kubernetes" {
        service_port = 80
        probe_path = "/"
    }
  }

  release {
    use "kubernetes" {
       load_balancer =  true
    }
  }
}

In the build stanza, use “docker” tells Waypoint to build the container image from a local Dockerfile. With registry, we push that image to, in this case, Docker Hub. Instead of Docker Hub, other registries can be used as well. Before the image is pushed to the registry, it is first tagged with the tag you specify. Here, that is the latest tag. Although that is easy, you should not use that tag in your workflow because you will not get different images per application version. And you certainly want that when you do multiple deploys based on different code.

To make the tag unique, you can replace “latest” with the gitrefpretty() function, as shown below:

build {
    use "docker" {}
    registry {
        use "docker" {
          image = "gbaeke/wptest-hello"
          tag = gitrefpretty()
          local = false
        }
    }
  }

Assuming you work with git and commit your code changes 😉, gitrefpretty() will return the git commit sha at the time of build.

You can check the commit sha of each commit with git log:

git log showing each commit with its sha-1 checksum

When you use gitrefpretty() and you issue the waypoint build command, the images will be tagged with the sha-1 checksum. In Docker Hub, that is clearly shown:

Image with commit sha tag pushed to Docker Hub

That’s it for this quick post. If you have further questions, just hit me up on Twitter or leave a comment!

Azure Private Link and DNS

When you are just starting out with Azure Private Link, it can be hard figuring out how name resolution works and how DNS has to be configured. In this post, we will take a look at some of the internals and try to clear up some of the confusion. If you end up even more confused then I’m sorry in advance. Drop me your questions in the comments if that happens. 😉 I will illustrate the inner workings with a Cosmos DB account. It is similar for other services.

Wait! What is Private Link?

Azure Private Link provides private IP addresses for services such as Cosmos DB, Azure SQL Database and many more. You choose where the private IP address comes from by specifying a VNET and subnet. Without private link, these services are normally accessed via a public IP address or via Network Service Endpoints (also the public IP but over the Azure network and restricted to selected subnets). There are several issues or shortcomings with those options:

  • for most customers, accessing databases and other services over the public Internet is just not acceptable
  • although network service endpoints provide a solution, this only works for systems that run inside an Azure Virtual Network (VNET)

When you want to access a service like Cosmos DB from on-premises networks and keep the traffic limited to your on-premises networks and Azure virtual networks, Azure Private Link is the way to go. In addition, you can filter the traffic with Azure Firewall or a virtual appliance, typically installed in a hub site. Now let’s take a look at how this works with Cosmos DB.

Azure Private Link for Cosmos DB

I deployed a Cosmos DB account in East US and called it geba-cosmos. To access this account and work with collections, I can use the following name: https://geba-cosmos.documents.azure.com:443/. As explained before, geba-cosmos.documents.azure.com resolves to a public IP address. Note that you can still control who can connect to this public IP address. Below, only my home IP address is allowed to connect:

Cosmos DB configured to allow access from selected networks

In order to connect to Cosmos DB using a private IP address in your Azure Virtual Network, just click Private Endpoint Connections below Firewall and virtual networks:

Private Endpoint Connections for a Cosmos DB account with one private endpoint configured

To create a new private endpoint, click + Private Endpoint and follow the steps. The private endpoint is a resource on its own which needs a name and region. It should be in the same region as the virtual network you want to grab an IP address from. In the second screen, you can select the resource you want the private IP to point to (can be in a different region):

Private endpoint that will connect to a Cosmos DB account in my directory (target sub-resource indicates the Cosmos DB API, here the Core SQL API is used)

In the next step, you select the virtual network and subnet you want to grab an IP address from:

VNET and subnet to grab the IP address for the private endpoint

In this third step (Configuration), you will be asked if you want Private DNS integration. The default is Yes but I will select No for now.

Note: it is not required to use a Private DNS zone with Private Link

When you finish the wizard and look at the created private endpoint, it will look similar to the screenshot below:

Private endpoint configured

In the background, a network interface was created and attached to the selected virtual network. Above, the network interface is pe-geba-cosmos.nic.a755f7ad-9d54-4074-996c-8a14e9434898. The network interface screen will look like the screenshot below:

Network interface attached to subnet servers in VNET vnet-us1; it grabbed the next available IP of 10.1.0.5 as primary (but also 10.1.0.6 as secondary; click IP configurations to see that)

The interesting part is the Custom DNS Settings. How can you resolve the name geba-cosmos.documents.azure.com to 10.1.0.5 when a client (either in Azure or on-premises) requests it? Let’s look at DNS resolution next…

DNS Resolution

Let’s use dig to check what a request for a Cosmos DB account returns without private link. I have another account, geba-test, that I can use for that:

dig with a Cosmos DB account without private link

The above DNS request was made on my local machine, using public DNS servers. The response from Microsoft DNS servers for geba-test.documents.azure.com is a CNAME to a cloudapp.net name which results in IP address 40.78.226.8.

The response from the DNS server will be different when private link is configured. When I resolve geba-cosmos.documents.azure.com, I get the following:

Resolving the Cosmos DB hostname with private link configured

As you can see, the Microsoft DNS servers respond with a CNAME of accountname.privatelink.documents.azure.com. but by default that CNAME goes to a cloudapp.net name that resolves to a public IP.

This means that, if you don’t take specific action to resolve accountname.privatelink.documents.azure.com to the private IP, you will just end up with the public IP address. In most cases, you will not be able to connect because you will restrict public access to Cosmos DB. It’s important to note that you do not have to restrict public access and that you can enable both private and public access. Most customers I work with though, restrict public access.

Resolving to the private IP address

Before continuing, it’s important to state that developers should connect to https://accountname.documents.azure.com (if they use the gateway mode). In fact, Cosmos DB expects you to use that name. Don’t try to connect with the IP address or some other name because it will not work. This is similar for services other than Cosmos DB. In the background though, we will make sure that accountname.documents.azure.com goes to the internal IP. So how do we make that happen? In what follows, I will list a couple of solutions. I will not discuss using a hosts file on your local pc, although it is possible to make that work.

Create privatelink DNS zones on your DNS servers
In this case, we create a zone for privatelink.documents.azure.com on our own DNS servers and add the following records:

  • geba-cosmos.privatelink.documents.azure.com. IN A 10.1.0.5
  • geba-cosmos-eastus.privatelink.documents.azure.com. IN A 10.1.0.6

Note: use a low TTL like 10s (similar to Azure Private DNS; see below)

When the DNS server has to resolve geba-cosmos.documents.azure.com, it will get the CNAME response of geba-cosmos.privatelink.documents.azure.com and will be able to answer authoritatively with 10.1.0.5.

If you use this solution, you need to make sure that you register the custom DNS settings listed by the private endpoint resource manually. If you want to try this yourself, you can easily do this with a Windows virtual machine with the DNS role or a Linux VM with bind.

Use Azure Private DNS zones
If you do not want to register the custom DNS settings of the private endpoint manually in your own DNS servers, you can use Azure Private DNS. You can create the private DNS zone during the creation of the private endpoint. An internal zone for privatelink.documents.azure.com will be created and Azure will automatically add the required DNS configuration the private endpoint requires:

Azure Private DNS with automatic registration of the required Cosmos DB A records

This is great for systems running in Azure virtual networks that are associated with the private DNS zone and that use the DNS servers provided by Azure but you still need to integrate your on-premises DNS servers with these private DNS zones. The way to do that is explained in the documentation. In particular, the below diagram is important:

On-premises forwarding to Azure DNS
Source: Microsoft docs

The example above is for Azure SQL Database but it is similar to our Cosmos DB example. In essence, you need the following:

  • DNS forwarder in the VNET (above, that is 10.5.0.254): this is an extra (!!!) Windows or Linux VM configured as a DNS forwarder; it should forward to 168.63.129.16 which points to the Azure-provided DNS servers; if the virtual network of the VM is integrated with the private DNS zone that hosts privatelink.documents.azure.com, the A records in that zone can be resolved properly
  • To allow the on-premises server to return the privatelink A records, set up conditional forwarding for documents.azure.com to the DNS forwarder in the virtual network

What should you do?

That’s always difficult to answer, but most customers I work with tend to go for the first option. They create a zone for privatelink.x.y.z and register the records manually. Although that could be automated, it’s often a manual step.

I actually prefer the private DNS method because of the automatic registration of the records. Although I don’t like the extra DNS server, it will not be needed most of the time because customers tend to work with the hub/spoke model and the hub already contains DNS servers. Those DNS servers can then be configured to enable the resolution of the privatelink zones.

How to delete a stubborn Azure Virtual Hub

A while ago, I created an Azure Virtual WAN (Standard) and added a virtual hub. For some reason, the virtual hub ended up in the state below:

Hub and routing status: Failed (Ouch!)

I tried to reset the router and virtual hub but to no avail. Next, I tried to delete the hub. In the portal, this resulted in a validating state that did not end. In the Azure CLI, an error was thrown.

Because this is a Microsoft Partner Network (MPN) subscription, I also did not have technical support or an easy way to enable it. I ended up buying Developer Support for a month just to open a service request.

The (helpful) support engineer asked me to do the following:

  • Open Azure Resource Explorer
  • Navigate to subscriptions and the resource group that contains the hub
  • Under providers, navigate to Microsoft.Network
  • Locate the virtual hub and do GET, EDIT, PUT (set Read/Write mode first)
After clicking GET and EDIT, PUT can be clicked

At first it did not seem to work but in my case, the PUT operation just took a very long time. After the PUT operation finished, I could delete the virtual hub from the portal.

Long story short: if you ever have a resource you cannot delete, give Azure Resource Explorer and the above procedure a try. Your mileage may vary though!

From MQTT to InfluxDB with Dapr

In a previous post, we looked at using the Dapr InfluxDB component to write data to InfluxDB Cloud. In this post, we will take a look at reading data from an MQTT topic and storing it in InfluxDB. We will use Dapr 0.10, which includes both components.

To get up to speed with Dapr, please read the previous post and make sure you have an InfluxDB instance up and running in the cloud.

If you want to see a video instead:

MQTT to Influx with Dapr

Note that the video sends output to both InfluxDB and Azure SignalR. In addition, the video uses Dapr 0.8 with a custom compiled Dapr because I was still developing and testing the InfluxDB component.

MQTT Server

Although there are cloud-based MQTT servers you can use, let’s mix it up a little and run the MQTT server from Docker. If you have Docker installed, type the following:

docker run -it -p 1883:1883 -p 9001:9001 eclipse-mosquitto

The above command runs Mosquitto and exposes port 1883 on your local machine. You can use a tool such as MQTT Explorer to send data. Install MQTT Explorer on your local machine and run it. Create a connection like in the below screenshot:

MQTT Explorer connection

Now, click Connect to connect to Mosquitto. With MQTT, you send data to topics of your choice. Publish a json message to a topic called test as shown below:

Publish json data to the test topic

You can now click the topic in the list of topics and see its most recent value:

Subscribing to the test topic

Using MQTT with Dapr

You are now ready to read data from an MQTT topic with Dapr. If you have Dapr installed, you can run the following code to read from the test topic and store the data in InfluxDB:

const express = require('express');
const bodyParser = require('body-parser');

const app = express();
app.use(bodyParser.json());

const port = 3000;

// the mqtt component will post messages from the test topic here
app.post('/mqtt', (req, res) => {
    console.log("MQTT Binding Trigger");
    console.log(req.body)

    // body is expected to contain room and temperature
    room = req.body.room
    temperature = req.body.temperature

    // room should not contain spaces
    room = room.split(" ").join("_")

    // create message for influx component
    message = {
        "measurement": "stat",
        "tags": `room=${room}`,
        "values": `temperature=${temperature}`
    };
    
    // send the message to influx output binding
    res.send({
        "to": ["influx"],
        "data": message
    });
});

app.listen(port, () => console.log(`Node App listening on port ${port}!`));

In this example, we use Node.js instead of Python to illustrate that Dapr works with any language. You will also need this package.json and run npm install:

{
  "name": "mqttapp",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "body-parser": "^1.18.3",
    "express": "^4.16.4"
  }
}

In the previous post about InfluxDB, we used an output binding. You use an output binding by posting data to a Dapr HTTP URI.

To use an input binding like MQTT, you will need to create an HTTP server. Above, we create an HTTP server with Express, and listen on port 3000 for incoming requests. Later, we will instruct Dapr to listen for messages on an MQTT topic and, when a message arrives, post it to our server. We can then retrieve the message from the request body.

To tell Dapr what to do, we’ll create a components folder in the same folder that holds the Node.js code. Put a file in that folder with the following contents:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: mqtt
spec:
  type: bindings.mqtt
  metadata:
  - name: url
    value: mqtt://localhost:1883
  - name: topic
    value: test

Above, we configure the MQTT component to listen to the test topic on mqtt://localhost:1883. The name we use (in metadata) is important because it needs to correspond to our HTTP handler (/mqtt).

Like in the previous post, there’s another file that configures the InfluxDB component:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: influx
spec:
  type: bindings.influx
  metadata:
  - name: Url
    value: http://localhost:9999
  - name: Token
    value: ""
  - name: Org
    value: ""
  - name: Bucket
    value: ""

Replace the parameters in the file above with your own.

Saving the MQTT request body to InfluxDB

If you look at the Node.js code, you have probably noticed that we send a response body in the /mqtt handler:

res.send({
        "to": ["influx"],
        "data": message
    });

Dapr is written to accept responses that include a to and a data field in the JSON response. The above response simply tells Dapr to send the message in the data field to the configured influx component.

Does it work?

Let’s run the code with Dapr to see if it works:

dapr run --app-id mqqtinflux --app-port 3000 --components-path=./components node app.js

In dapr run, we also need to specify the port our app uses. Remember that Dapr will post JSON data to our /mqtt handler!

Let’s post some JSON with the expected fields of temperature and room to our MQTT server:

Posting data to the test topic

The Dapr logs show the following:

Logs from the APP (appear alongside the Dapr logs)

In InfluxDB Cloud table view:

Data stored in InfluxDB Cloud (posted some other data points before)

Conclusion

Dapr makes it really easy to retrieve data with input bindings and send that data somewhere else with output bindings. There are many other input and output bindings so make sure you check them out on GitHub!

Using the Dapr InfluxDB component

A while ago, I created a component that can write to InfluxDB 2.0 from Dapr. This component is now included in the 0.10 release. In this post, we will briefly look at how you can use it.

If you do not know what Dapr is, take a look at https://dapr.io. I also have some videos about Dapr on YouTube. And be sure to check out the video below as well:

Let’s jump in and use the component.

Installing Dapr

You can install Dapr on Windows, Mac and Linux by following the instructions on https://dapr.io/. Just click the Download link and select your operating system. I installed Dapr on WSL 2 (Windows Subsystem for Linux) on Windows 10 with the following command:

wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash

The above command just installs the Dapr CLI. To initialize Dapr, you need to run dapr init.

Getting an InfluxDB database

InfluxDB is a time-series database. You can easily run it in a container on your local machine, but you can also use InfluxDB Cloud. In this post, we will simply use a free cloud instance. Just head over to https://cloud2.influxdata.com/signup and sign up for a free account. It stores data for a maximum of 30 days and has some other limits as well.

You will need the following information to write data to InfluxDB:

  • Organization: this will be set to the e-mail account you signed up with; it can be renamed if you wish
  • Bucket: your data is stored in a bucket; by default you get a bucket called e-mail-prefix’s Bucket (e.g. geert.baeke’s Bucket)
  • Token: you need a token that provides the necessary access rights such as read and/or write

Let’s rename the bucket to get a feel for the user interface. Click Data, Buckets and then Settings as shown below:

Getting to the bucket settings

Click Rename and follow the steps to rename the bucket:

Renaming the bucket

Now, let’s create a token. In the Load Data screen, click Tokens. Click Generate and then click Read/Write Token. Describe the token and create it like below:

Creating a token

Now click the token you created and copy it to the clipboard. You now have the organization name, a bucket name and a token. You still need a URL to connect to, but that is just the URL you see in the browser (the yellow part):

URL to send your data

Your URL will depend on the cloud you use.

Python code to write to InfluxDB with Dapr

The code below requires Python 3. I used version 3.6.9 but it will work with more recent versions of course.

import time
import requests
import os

dapr_port = os.getenv("DAPR_HTTP_PORT", 3500)

dapr_url = "http://localhost:{}/v1.0/bindings/influx".format(dapr_port)
n = 0.0
while True:
    n += 1.0
    payload = { 
        "data": {
            "measurement": "temp",
            "tags": "room=dorm,building=building-a",
            "values": "sensor=\"sensor X\",avg={},max={}".format(n, n*2)
            }, 
        "operation": "create" 
    }
    print(payload, flush=True)
    try:
        response = requests.post(dapr_url, json=payload)
        print(response, flush=True)

    except Exception as e:
        print(e, flush=True)

    time.sleep(1)

The code above is just an illustration of using the InfluxDB output binding from Dapr. It is crucial to understand that a Dapr process that the above program communicates with needs to be running, either locally on your system or as a Kubernetes sidecar. To that end, we get the Dapr port number from an environment variable or use the default port 3500.

The Python program uses the InfluxDB output binding simply by posting data to an HTTP endpoint. The endpoint is constructed as follows:

dapr_url = "http://localhost:{}/v1.0/bindings/influx".format(dapr_port)

The dapr_url above is set to a URI that uses localhost over the Dapr port and then uses the influx binding by appending /v1.0/bindings/influx. All bindings have a specific name like influx, mqtt, etc… and that name is then added to /v1.0/bindings/ to make the call work.

So far so good, but how does the binding know where to connect and what organization, bucket and token to use? That’s where the component .yaml file comes in. In the same folder where you save your Python code, create a folder called components. In that folder, create a file called influx.yaml (you can give it any name you want). The contents of influx.yaml are shown below:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: influx
spec:
  type: bindings.influx
  metadata:
  - name: Url
    value: YOUR URL
  - name: Token
    value: "YOUR TOKEN HERE"
  - name: Org
    value: "YOUR ORG"
  - name: Bucket
    value: "YOUR BUCKET"

Of course, replace the uppercase values above with your own. We will later tell Dapr to look for files like this in the components folder. Automatically, because you use the influx binding in your Python code, Dapr will go look for the file above (type: bindings.influx) and retrieve the required metadata. If any of the metadata is not set or if the file is missing or improperly formatted, you will get an error.

To actually use the binding, we need to post some data to the URI we constructed. The data we send is in the payload variable as shown below:

 payload = { 
        "data": {
            "measurement": "temp",
            "tags": "room=dorm,building=building-a",
            "values": "sensor=\"sensor X\",avg={},max={}".format(n, n*2)
            }, 
        "operation": "create" 
    }

It requires a measurement, a tags and a values field and uses the InfluxDB line protocol to send the data. You can find more information about that here.

The data field in the payload is specific to the Influx component. The operation field is required by this Dapr component as it is written to listen for create operations.
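
To make that concrete, the payload above roughly corresponds to a line protocol write like the one below (an illustration of the format, not literal component output):

temp,room=dorm,building=building-a sensor="sensor X",avg=1.0,max=2.0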

Running the code

On your local machine, you will need to run Dapr together with your code to make it work. You use dapr run for this. To run the Python code (saved to app.py in my case), run the command below from the folder that contains the code and the components folder:

dapr run --app-id influx -d ./components python3 app.py

This starts Dapr and our application with app id influx. With -d, we point to the components folder.

When you run the code, Dapr logs and your logs will be printed to the screen. In InfluxDB Cloud, we can check the data from the user interface:

Data Explorer (Note: other organization and bucket than the one used in this post)

Conclusion

Dapr can be used in the cloud and at the edge, in containers or without. In both cases, you often have to write data to databases. With Dapr, you can now easily write data as time series to InfluxDB. Note that Dapr also has an MQTT input and output binding. Using the same simple technique you learned in this post, you can easily read data from an MQTT topic and forward it to InfluxDB. In a later post, we will take a look at that scenario as well. Or check this video instead: https://youtu.be/2vCT79KG24E. Note that the video uses a custom compiled Dapr 0.8 with the InfluxDB component because this video was created during development.

Dapr Service Invocation between an HTTP Python client and a GRPC Go server

Recently, I published several videos about Dapr on my YouTube channel. The videos cover the basics of state management, PubSub and service invocation.

The video below covers getting started with state management and service invocation:

Let’s take a closer look at service invocation with HTTP, Python and Node.

Service Invocation with HTTP

Service invocation (image from Dapr docs)

The services you write (here Service A and B) talk to each other using the Dapr runtime. On Kubernetes, you talk to a Dapr sidecar deployed alongside your service container. On your development machine, you run your services via dapr run.

If you want to expose a method on Service B and you use HTTP, you just need to expose an HTTP handler or route. For example, with Express in Node you would use something like:

const express = require('express');
const app = express();
app.post('/neworder', (req, res) => { /* your code */ });

You then run your service and annotate it with the proper Dapr annotations (Kubernetes):

annotations:
        dapr.io/enabled: "true"
        dapr.io/id: "node"
        dapr.io/port: "3000"

On your local machine, you would just run the service via dapr run:

dapr run --app-id node --app-port 3000 node app.js

In the last example, the Dapr id is node and we indicate that the service is listening on port 3000. To invoke the method from service A, it can use the following code (Python example shown):

dapr_port = os.getenv("DAPR_HTTP_PORT", 3500)
dapr_url = "http://localhost:{}/v1.0/invoke/node/method/neworder".format(dapr_port)
message = {"data": {"orderId": 1234}}
response = requests.post(dapr_url, json=message, timeout=5)

As you can see, service A does not contact service B directly. It just talks to its Dapr sidecar on localhost (or Dapr on your dev machine) and asks it to invoke the neworder method via a service that uses Dapr id node. It is also clear that both service A and B use HTTP only. Because you just use HTTP to expose and invoke methods, you can use any language or framework.

You can find a complete example here with Node and Python.

Service Invocation with HTTP and GRPC

Dapr has SDKs available for C#, Go and other languages. You might prefer those over the generic HTTP approach. In the case of Go, the SDK uses GRPC to interface with the Dapr runtime. With Dapr in between, one service can use HTTP while another uses GRPC.

Let’s take a look at a service that exposes a method (HelloFromGo) from a Go application. The full example is here. Instead of creating an HTTP route with the name of your method, you use an OnInvoke handler that looks like this (only the start is shown, see the full code):

func (s *server) OnInvoke(ctx context.Context, in *commonv1pb.InvokeRequest) (*commonv1pb.InvokeResponse, error) {
	var response string

	switch in.Method {
	case "HelloFromGo":

		response = s.HelloFromGo()

Naturally, you also have to implement the HelloFromGo() method:

// HelloFromGo is a simple demo method to invoke
func (s *server) HelloFromGo() string {
	return "Hello"

}

Another service can use any language or framework and invoke the above method with a POST to the following URL if the Dapr id of the Go service is goserver:

http://localhost:3500/v1.0/invoke/goserver/method/HelloFromGo

A POST to the above URL tells Dapr to execute the OnInvoke method via GRPC, which will run the HelloFromGo function. It is perfectly possible to include a payload in your POST and have the OnInvoke handler process that payload. The full example is here, which also includes sending and processing a JSON payload and sending back a text response. You will need a basic understanding of how GRPC works, as well as of protocol buffers. A good book on GRPC is the following one: https://learning.oreilly.com/library/view/grpc-up-and/9781492058328/.

Conclusion

Dapr allows you to choose between HTTP and GRPC interfaces to interact with the runtime. You can choose whatever is most comfortable to you. One team can use HTTP with Python, JavaScript etc… while other teams use GRPC with their language of choice. Whatever you choose, the Dapr runtime will make sure service invocation just works, allowing you to focus on the code.