Giving Argo CD a spin

If you have followed my blog a little, you have seen a few posts about GitOps with Flux CD. This time, I am taking a look at Argo CD which, like Flux CD, is a GitOps tool to deploy applications from manifests in a git repository.

Don’t want to read this whole thing?

Here’s the video version of this post

There are several differences between the two tools:

  • At first glance, Flux appears to use a single git repo for your cluster, whereas Argo immediately introduces the concept of apps. Each app can be connected to a different git repo. However, Flux can also use multiple git repositories in the same cluster. See https://github.com/fluxcd/multi-tenancy for more information.
  • Flux has the concept of workloads which can be automated. This means that image repositories are scanned for updates. When an update is available (say from tag v1.0.0 to v1.0.1), Flux will update your application based on filters you specify. As far as I can see, Argo requires you to drive the update from your CI process, which might be preferred.
  • By default, Argo deploys an administrative UI (in addition to a CLI) with a full view of your deployment and its dependencies
  • Argo supports RBAC and integrates with external identity providers (e.g. Azure Active Directory)

The Argo CD admin interface is shown below:

Argo CD admin interface… not too shabby

Let’s take a look at how to deploy Argo and deploy the app you see above. The app is deployed using a single yaml file. Nothing fancy yet such as kustomize or jsonnet.

Deployment

The getting started guide is pretty clear, so do have a look over there as well. To install, just run (with a deployed Kubernetes cluster and kubectl pointing at the cluster):

kubectl create namespace argocd 

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Note that I installed Argo CD on Azure (AKS).

Next, install the CLI. On a Mac, that is simple (with Homebrew):

brew tap argoproj/tap

brew install argoproj/tap/argocd

You will need access to the API server, which is not exposed over the Internet by default. For testing, port forwarding is easiest. In a separate shell, run the following command:

kubectl port-forward svc/argocd-server -n argocd 8080:443

You can now connect to https://localhost:8080 to get to the UI. You will need the admin password which, at the time of writing, is the name of the argocd-server pod. Retrieve it by running:

kubectl get pods -n argocd -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f 2

You can now log in to the UI with the user admin and the displayed password. You should also log in from the CLI and change the password with the following commands:

argocd login localhost:8080

argocd account update-password

Great! You are all set now to deploy an application.

Deploying an application

We will deploy an application that has a couple of dependencies. Normally, you would install those dependencies with Argo CD as well but since I am using a cluster that has these dependencies installed via Azure DevOps, I will just list what you need (Helm commands):

helm upgrade --namespace kube-system --install --set controller.service.loadBalancerIP=<IPADDRESS>,controller.publishService.enabled=true --wait nginx stable/nginx-ingress 

helm upgrade --namespace kube-system --install --values /home/vsts/work/1/s/externaldns/values.yaml --set cloudflare.apiToken=<CF_SECRET> --wait externaldns stable/external-dns

kubectl create ns cert-manager

helm upgrade --namespace cert-manager --install --wait --version v0.12.0 cert-manager jetstack/cert-manager

To know more about these dependencies and use an Azure DevOps YAML pipeline to deploy them, see this post. If you want, you can skip the externaldns installation and create a DNS record yourself that resolves to the public IP address of Nginx Ingress. If you do not want to use an Azure static IP address, you can remove the loadBalancerIP parameter from the first command.

The manifests we will deploy with Argo CD can be found in the following public git repository: https://github.com/gbaeke/argo-demo. The application is in three YAML files:

  • Two YAML files that create a certificate cluster issuer based on custom resource definitions (CRDs) from cert-manager
  • realtime.yaml: Redis deployment, Redis service (ClusterIP), realtime web app deployment (based on this), realtime web app service (ClusterIP), ingress resource for https://real.baeke.info (record automatically created by externaldns)

It’s best that you fork my repo and modify realtime.yaml’s ingress resource with your own DNS name.
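
For reference, the part of realtime.yaml you need to change is the host in the ingress resource. A rough sketch (my DNS name shown; replace it with yours):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: realtime-ingress
spec:
  rules:
  - host: real.baeke.info   # change this to your own DNS name
    http:
      paths:
      - path: /
        backend:
          serviceName: realtime
          servicePort: 80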

Create the Argo app

Now you can create the Argo app based on my forked repo. I used the following command with my original repo:

argocd app create realtime \
--repo https://github.com/gbaeke/argo-demo.git \
--path manifests \
--dest-server https://kubernetes.default.svc \
--dest-namespace default

The command above creates an app called realtime based on the specified repo. The app should use the manifests folder and apply (kubectl apply) all the manifests in that folder. The manifests are deployed to the cluster that Argo CD runs in. Note that you can run Argo CD in one cluster and deploy to totally different clusters.

The above command does not configure the repository to be synced automatically, although that is an option. To sync manually, use the following command:

argocd app sync realtime

The application should now be synced and viewable in the UI:

Application installed and synced

In my case, this results in the following application at https://real.baeke.info:

Not Secure because we use Let’s Encrypt staging for this app

Set up auto-sync

Let’s set up this app to automatically sync with the repo (default = every 3 minutes). This can be done from both the CLI and the UI. Let’s do it from the UI. Click the app and then click App Details. Under Sync Policy, you can enable auto-sync.

Setting up auto-sync from the UI
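
Auto-sync can also be enabled from the CLI. A quick sketch (check argocd app set --help for the exact options):

argocd app set realtime --sync-policy automated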

You can now make changes to the git repo like changing the image tag for gbaeke/fluxapp (yes, I used this image with the Flux posts as well 😊) to 1.0.6 and wait for the sync to happen. Or sync manually from the CLI or the UI.

Conclusion

This was a quick tour of Argo CD. There is much more you can do but the above should get you started quickly. I must say I quite like the solution and am eager to see what the collaboration of Flux CD, Argo CD and Amazon comes up with in the future.

Kustomize and Flux

Flux has a feature called manifest generation that works together with Kustomize. Instead of just picking YAML files from a git repo and applying them, customisation is performed with the kustomize build command. The resulting YAML then gets applied to your cluster.

If you don’t know how Kustomize works (without Flux), take a look at the article I wrote earlier. Or look at the core docs.

You need to be aware of a few things before you get started. In order for Flux to use this method, you need to turn on manifest generation. With the Flux Helm chart, just pass the following parameter:

--set manifestGeneration=true

In my case, I have plain YAML files without customisation in a config folder. I want the files that use customisation in a different folder, say kustomize, like so:

Two folders to pass as git.path

To pass these folders to the Helm chart, use the following parameter:

--set git.path="config\,kustomize"
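
Put together, the Flux installation might look something like the sketch below, modelled on the Flux Helm install command used later in this document (the repo URL is a placeholder):

helm upgrade -i flux fluxcd/flux --wait \
  --namespace flux \
  --set git.url=git@github.com:you/your-config-repo \
  --set git.path="config\,kustomize" \
  --set manifestGeneration=true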

The kustomize folder contains the following files:

base files with environments dev and prod

There is nothing special about the base folder here. It is as explained in my previous post. The dev and prod folders are similar so I will focus only on dev.

The dev folder contains a .flux.yaml file, which is required by Flux. In this simple example, it contains the following:

version: 1
patchUpdated:
  generators:
    - command: kustomize build .
  patchFile: flux-patch.yaml

The file specifies the generator to use, in this case Kustomize. The kustomize executable is in the Flux image. I specify one patchFile which contains patches for several resources separated by ---:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    flux.weave.works/automated: "true"
    flux.weave.works/tag.realtime: semver:~1
  name: realtime
  namespace: realtime-dev
spec:
  template:
    spec:
      $setElementOrder/containers:
      - name: realtime
      containers:
      - image: gbaeke/fluxapp:1.0.6
        name: realtime
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: realtime-ingress
  namespace: realtime-dev
spec:
  rules:
  - host: realdev.baeke.info
    http:
      paths:
      - backend:
          serviceName: realtime
          servicePort: 80
        path: /
  tls:
  - hosts:
    - realdev.baeke.info
    secretName: real-dev-baeke-info-tls

Above, you see the patches for the dev environment:

  • the workload should be automated by Flux, installing new images based on the semantic version filter ~1
  • the ingress should use host realdev.baeke.info with a different name for the secret as well (the secret will be created by cert-manager)

The prod folder contains a similar configuration. Perhaps naively, I thought that specifying the kustomize folder in git.path was sufficient for Flux to scan the folders and run customisation wherever a .flux.yaml file was found. Sadly, that is not the case. ☹️ With just the kustomize folder specified, Flux finds conflicts between the base, dev and prod folders because they contain similar files. That is expected behaviour for regular YAML files but, in my opinion, should not happen in this case. There is a bit of a clunky way to make this work though. Just specify the following as git.path:

--set git.path="config\,kustomize/dev\,kustomize/prod"

With the above parameter, Flux will find no conflicts and will happily apply the customisations.

As a side note, you should also specify the namespace in the patch file explicitly. It is not added automatically even though kustomization.yaml contains the namespace.
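
For clarity, the kustomization.yaml in the dev folder is a simple overlay that points at the base and sets the namespace. Roughly (a sketch, not the literal file from my repo):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: realtime-dev
bases:
- ../base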

Let’s look at the cluster when Flux has applied the changes.

Namespaces for dev and prod created via Flux & Kustomize

And here is the deployed “production app”:

Who chose that ugly colour!

The way customisations are handled could be improved. It’s unwieldy to specify every “customisation” folder in the git.path parameter. Just give me a --git-kustomize-path parameter and scan the paths in that parameter for .flux.yaml files. On the other hand, maybe I am missing something here so remarks are welcome.

Creating Kubernetes secrets from Key Vault

If you do any sort of development, you often have to deal with secrets. There are many ways to deal with secrets, one of them is retrieving the secrets from a secure system from your own code. When your application runs on Kubernetes and your code (or 3rd party code) cannot be configured to retrieve the secrets directly, you have several options. This post looks at one such solution: Azure Key Vault to Kubernetes from Sparebanken Vest, Norway.

In short, the solution connects to Azure Key Vault and does one of two things:

  • sync a Key Vault secret to a regular Kubernetes secret
  • inject a Key Vault secret directly into your pods as environment variables (the Env Injector)

In my scenario, I just wanted regular secrets to use in a KEDA project that processes IoT Hub messages. The following secrets were required:

  • Connection string to a storage account: AzureWebJobsStorage
  • Connection string to IoT Hub’s event hub: EventEndpoint

In the YAML that deploys the pods that are scaled by KEDA, the secrets are referenced as follows:

env:
 - name: AzureFunctionsJobHost__functions__0
   value: ProcessEvents
 - name: FUNCTIONS_WORKER_RUNTIME
   value: node
 - name: EventEndpoint
   valueFrom:
     secretKeyRef:
       name: kedasample-event
       key: EventEndpoint
 - name: AzureWebJobsStorage
   valueFrom:
     secretKeyRef:
       name: kedasample-storage
       key: AzureWebJobsStorage

Because the YAML above is deployed with Flux from a git repo, we need to get the secrets from an external system. That external system, in this case, is Azure Key Vault.

To make this work, we first need to install the controller that makes this happen. This is very easy to do with the Helm chart. By default, this Helm chart will work well on Azure Kubernetes Service as long as you give the AKS service principal read access to Key Vault:

Access policies in Key Vault (azure-cli-2019-… is the AKS service principal here)
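
If you prefer to script the access policy instead of clicking through the portal, something like the following Azure CLI command should do it (the service principal appId is a placeholder; the controller needs at least get on secrets):

az keyvault set-policy --name gebakv \
  --spn <AKS_SERVICE_PRINCIPAL_APPID> \
  --secret-permissions get list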

Next, define the secrets in Key Vault:

Secrets in Key Vault

With the access policies in place and the secrets defined in Key Vault, the controller installed by the Helm chart can do its work with the following YAML:

apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: eventendpoint
  namespace: default
spec:
  vault:
    name: gebakv
    object:
      name: EventEndpoint
      type: secret
  output:
    secret: 
      name: kedasample-event
      dataKey: EventEndpoint
      type: opaque
---
apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: azurewebjobsstorage
  namespace: default
spec:
  vault:
    name: gebakv
    object:
      name: AzureWebJobsStorage
      type: secret
  output:
    secret: 
      name: kedasample-storage
      dataKey: AzureWebJobsStorage
      type: opaque     

The above YAML defines two objects of kind AzureKeyVaultSecret. In each object we specify the Key Vault secret to read (vault) and the Kubernetes secret to create (output). The above YAML results in two Kubernetes secrets:

Two regular secrets

When you look inside such a secret, you will see:

Inside the secret

To double check the secret, just do echo RW5K… | base64 -d to see the decoded secret and verify that it matches the secret stored in Key Vault. You can now reference the secret with valueFrom as shown earlier in this post.
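
Alternatively, you can pull the decoded value straight from the cluster with a kubectl one-liner:

kubectl get secret kedasample-event -o jsonpath='{.data.EventEndpoint}' | base64 -d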

Conclusion

If you want to turn Azure Key Vault secrets into regular Kubernetes secrets for use in your manifests, give the solution from Sparebanken Vest a go. It is very easy to use. If you do not want regular Kubernetes secrets, opt for the Env Injector instead, which injects the environment variables directly in your pod.

Deploy AKS with Nginx, External DNS, Helm Operator and Flux

A while ago, I blogged about an Azure YAML pipeline to deploy AKS together with Traefik. As a variation on that theme, this post talks about deploying AKS together with Nginx, External DNS, a Helm Operator and Flux CD. I blogged about Flux before if you want to know what it does.

Video version (1.5x speed recommended)

I added the Azure DevOps pipeline to the existing GitHub repo, in the nginx-dns-helm-flux folder.

Let’s break the pipeline down a little. In what follows, replace AzureMPN with a reference to your own subscription. The first two tasks, AKS deployment and IP address deployment, are ARM template deployments that create these resources in Azure. Nothing too special there. Note that the AKS cluster is one with default networking, no Azure AD integration and without VMSS (so no multiple node pools either).

Note: I modified the pipeline to deploy a VMSS-based cluster with a standard load balancer, which is recommended instead of a cluster based on an availability set with a basic load balancer.

The third task takes the output of the IP address deployment and parses out the IP address using jq (last echo statement on one line):

- task: Bash@3
      name: GetIP
      inputs:
        targetType: 'inline'
        script: |
          echo "##vso[task.setvariable variable=test-ip;]$(echo '$(armoutputs)' | jq .ipaddress.value -r)"

The IP address is saved in a variable test-ip for easy reuse later.

Next, we install kubectl and Helm v3. Indeed, Azure DevOps now supports installation of Helm v3 with:

- task: HelmInstaller@1
      inputs:
        helmVersionToInstall: 'latest'

Next, we need to run a script to achieve a couple of things:

  • Get AKS credentials with Azure CLI
  • Add Helm repositories
  • Install a custom resource definition (CRD) for the Helm operator

This is achieved with the following inline Bash script:

- task: AzureCLI@1
      inputs:
        azureSubscription: 'AzureMPN'
        scriptLocation: 'inlineScript'
        inlineScript: |
          az aks get-credentials -g $(aksTestRG) -n $(aksTest) --admin
          helm repo add stable https://kubernetes-charts.storage.googleapis.com/
          helm repo add fluxcd https://charts.fluxcd.io
          helm repo update
          kubectl apply -f https://raw.githubusercontent.com/fluxcd/helm-operator/master/deploy/flux-helm-release-crd.yaml

Next, we create a Kubernetes namespace called fluxcd. I create the namespace with some inline YAML in the Kubernetes@1 task:

- task: Kubernetes@1
      inputs:
        connectionType: 'None'
        command: 'apply'
        useConfigurationFile: true
        configurationType: 'inline'
        inline: |
          apiVersion: v1
          kind: Namespace
          metadata:
            name: fluxcd

It’s best to use the approach above instead of kubectl create ns. If the namespace already exists, you will not get an error.
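
If you prefer plain kubectl, an idempotent alternative is to pipe a dry-run through kubectl apply (flag syntax depends on your kubectl version; newer versions use --dry-run=client):

kubectl create namespace fluxcd --dry-run=client -o yaml | kubectl apply -f -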

Now we are ready to deploy Nginx, External DNS, the Helm Operator and Flux CD.

Nginx

This is a pretty basic installation with the Azure DevOps Helm task:

- task: HelmDeploy@0
      inputs:
        connectionType: 'None'
        namespace: 'kube-system'
        command: 'upgrade'
        chartType: 'Name'
        chartName: 'stable/nginx-ingress'
        releaseName: 'nginx'
        overrideValues: 'controller.service.loadBalancerIP=$(test-ip),controller.publishService.enabled=true,controller.metrics.enabled=true'

For External DNS to work, I found I had to set controller.publishService.enabled=true. As you can see, the Nginx service is configured to use the IP we created earlier. Azure will create a load balancer with a front end IP configuration that uses this address. This all happens automatically.

Note: controller.metrics.enabled enables a Prometheus scraping endpoint; that is not discussed further in this blog

External DNS

External DNS can automatically add DNS records for ingresses and services you add to Kubernetes. For instance, if I create an ingress for test.baeke.info, External DNS can create this record in the baeke.info zone and use the IP address of the Ingress Controller (nginx here). Installation is pretty straightforward but you need to provide credentials to your DNS provider. In my case, I use CloudFlare. Many others are available. Here is the task:

- task: HelmDeploy@0
      inputs:
        connectionType: 'None'
        namespace: 'kube-system'
        command: 'upgrade'
        chartType: 'Name'
        chartName: 'stable/external-dns'
        releaseName: 'externaldns'
        overrideValues: 'cloudflare.apiToken=$(CFAPIToken)'
        valueFile: 'externaldns/values.yaml'

On CloudFlare, I created a token that has the required access rights to my zone (read, edit). I provide that token to the chart via the CFAPIToken variable defined as a secret on the pipeline. The valueFile looks like this:

rbac:
  create: true

provider: cloudflare

logLevel: debug

cloudflare:
  apiToken: CFAPIToken
  email: email address
  proxied: false

interval: "1m"

policy: sync # or upsert-only

domainFilters: [ 'baeke.info' ]

In the beginning, it’s best to set the logLevel to debug in case things go wrong. With interval 1m, External DNS checks for ingresses and services every minute and syncs with your DNS zone. Note that External DNS only touches the records it created. It does so by creating TXT records that mark External DNS as the owner of the corresponding DNS records.

With External DNS in place, you just need to create an ingress like below to have the A record real.baeke.info created:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: realtime-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: real.baeke.info
    http:
      paths:
      - path: /
        backend:
          serviceName: realtime
          servicePort: 80

Helm Operator

The Helm Operator allows us to install Helm charts by simply using a YAML file. First, we install the operator:

- task: HelmDeploy@0
      name: HelmOp
      displayName: Install Flux CD Helm Operator
      inputs:
        connectionType: 'None'
        namespace: 'kube-system'
        command: 'upgrade'
        chartType: 'Name'
        chartName: 'fluxcd/helm-operator'
        releaseName: 'helm-operator'
        overrideValues: 'extraEnvs[0].name=HELM_VERSION,extraEnvs[0].value=v3,image.repository=docker.io/fluxcd/helm-operator-prerelease,image.tag=helm-v3-dev-53b6a21d'
        arguments: '--namespace fluxcd'

This installs the latest version of the operator at the time of this writing (image.repository and image.tag) and also sets Helm to v3. With this installed, you can install a Helm chart by submitting files like below:

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: influxdb
  namespace: default
spec:
  releaseName: influxdb
  chart:
    repository: https://charts.bitnami.com/bitnami
    name: influxdb
    version: 0.2.4

You can create files that use kind HelmRelease (HR) because we installed the Helm Operator CRD before. To check installed Helm releases in a namespace, you can run kubectl get hr.

The Helm operator is useful if you want to install Helm charts from a git repository with the help of Flux CD.

Flux CD

Deploy Flux CD with the following task:

- task: HelmDeploy@0
      name: FluxCD
      displayName: Install Flux CD
      inputs:
        connectionType: 'None'
        namespace: 'fluxcd'
        command: 'upgrade'
        chartType: 'Name'
        chartName: 'fluxcd/flux'
        releaseName: 'flux'
        overrideValues: 'git.url=git@github.com:$(gitURL),git.pollInterval=1m'

The gitURL variable should be set to a git repo that contains your cluster configuration. For instance: gbaeke/demo-clu-flux. Flux will check the repo for changes every minute. Note that we are using a public repo here. Private repos and systems other than GitHub are supported.

Take a look at GitOps with Weaveworks Flux for further instructions. Some things you need to do:

  • Install fluxctl
  • Use fluxctl identity to obtain the public key from the key pair created by Flux (when you do not use your own); see the command sketch after this list
  • Set the public key as a deploy key on the git repo
GitHub deploy key
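
The fluxctl part boils down to the commands below; they are the same commands used in the GitOps with Weaveworks Flux post further down, with the namespace adjusted to fluxcd:

curl -sL https://fluxcd.io/install | sh
export PATH=$PATH:$HOME/.fluxcd/bin
export FLUX_FORWARD_NAMESPACE=fluxcd
fluxctl identity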

By connecting the https://github.com/gbaeke/demo-clu-flux repo to Flux CD (as done here), the following is done based on the content of the repo (the complete repo is scanned):

  • Install InfluxDB Helm chart
  • Add a simple app that uses a Go socket.io implementation to provide realtime updates based on Redis channel content; this app is published via nginx and real.baeke.info is created in DNS (by External DNS)
  • Add a ConfigMap that is used to configure Azure Monitor to enable Prometheus endpoint scraping (to show that this can be used for any object you need to add to Kubernetes)

Note that the ingress of the Go app has an annotation (in realtime.yaml, in the git repo) to issue a certificate via cert-manager. If you want to make that work, add an extra task to the pipeline that installs cert-manager:

- task: HelmDeploy@0
      inputs:
        connectionType: 'None'
        namespace: 'cert-manager'
        command: 'upgrade'
        chartType: 'Name'
        chartName: 'jetstack/cert-manager'
        releaseName: 'cert-manager'
        arguments: '--version v0.12.0'

You will also need to create another namespace, cert-manager, just like we created the fluxcd namespace.
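
That task is identical to the fluxcd namespace task, just with a different name. For completeness, it might look like this:

- task: Kubernetes@1
      inputs:
        connectionType: 'None'
        command: 'apply'
        useConfigurationFile: true
        configurationType: 'inline'
        inline: |
          apiVersion: v1
          kind: Namespace
          metadata:
            name: cert-manager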

In order to make the above work, you will need Issuers or ClusterIssuers. The repo used by Flux CD contains two ClusterIssuers, one for Let’s Encrypt staging and one for production. The ingress resource uses the production issuer due to the following annotation:

cert-manager.io/cluster-issuer: "letsencrypt-prod" 
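
For reference, a production ClusterIssuer for cert-manager v0.12 looks roughly like the sketch below (the e-mail address is a placeholder; the actual files live in the git repo):

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx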

The Go app that is deployed by Flux now has TLS enabled by default:

https on the Go app

I often use this deployment in demos of all sorts. I hope it is helpful for you too in that way!

Front Door with WordPress on Azure App Service

Here’s a quick overview of the steps you need to take to put Front Door in front of an Azure Web App. In this case, the web app runs a WordPress site.

Step 1: DNS

Suppose you deployed the Web App and its name is gebawptest.azurewebsites.net and you want to reach the site via wp.baeke.info. Traffic will flow like this:

user types wp.baeke.info ---CNAME to xyz.azurefd.net--> Front Door --- connects to gebawptest.azurewebsites.net using wp.baeke.info host header

It’s clear that later, in Front Door, you will have to specify the host header (wp.baeke.info in this case). More on that later…

If you have worked with Azure Web App before, you probably know you need to configure the host header sent by the browser as a custom domain on the web app. Something like this:

Custom domain in Azure Web App (no https configured – hence the red warning)

In this case, we do not want to resolve wp.baeke.info to the web app but to Front Door. To make the custom domain assignment work (because the web app will verify the custom name), add the following TXT record to DNS:

TXT awverify.wp gebawptest.azurewebsites.net 

For example in CloudFlare:

awverify txt record in CloudFlare DNS

With the above TXT record, I could easily add wp.baeke.info as a custom domain to the gebawptest.azurewebsites.net web app.

Note: wp.baeke.info is a CNAME to your Front Door domain (see below)

Step 2: Front Door

My Front Door designer looks like this:

Front Door designer

When you create a Front Door, you need to give it a name. In my case that is gebafd.azurefd.net. With wp.baeke.info as a CNAME for gebafd.azurefd.net, you can easily add wp.baeke.info as an additional Frontend host.

The backend pool is the Azure Web App. It’s configured as follows:

Front Door backend host (only one in the pool); could also have used the Azure App Service backend type

You should connect to the web app using its original name but send wp.baeke.info as the host header. This allows Front Door to connect to the web app correctly.

The last part of the Front Door config is a simple rule that connects the frontend wp.baeke.info to the backend pool using HTTP only.

Step 3: WordPress config

With the default Azure WordPress templates, you do not need to modify anything because wp-config.php contains the following settings:

define('WP_SITEURL', 'http://' . $_SERVER['HTTP_HOST'] . '/');
define('WP_HOME', 'http://' . $_SERVER['HTTP_HOST'] . '/');

If you want, you can change this to:

define('WP_SITEURL', 'http://wp.baeke.info/');
define('WP_HOME', 'http://wp.baeke.info/');

Step 4: Blocking access from other locations

In general, you want users to only connect to the site via Front Door. To achieve this, add the following access restrictions to the Web App:

Access restrictions to only allow traffic from Front Door and Azure basic infrastructure services
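
If you want to script this, the Azure CLI has commands for access restrictions. A sketch with placeholders (the Front Door backend IP range is documented by Microsoft and can change):

az webapp config access-restriction add \
  --resource-group <RESOURCE_GROUP> \
  --name gebawptest \
  --rule-name FrontDoor \
  --action Allow \
  --ip-address 147.243.0.0/16 \
  --priority 100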

GitOps with Weaveworks Flux

If you have ever deployed applications to Kubernetes or other platforms, you are probably used to the following approach:

  • developers check in code which triggers CI (continuous integration) and eventually results in deployable artifacts
  • a release process deploys the artifacts to one or more environments such as a development and a production environment

In the case of Kubernetes, the artifact is usually a combination of a container image and a Helm chart. The release process then authenticates to the Kubernetes cluster and deploys the artifacts. Although this approach works, I have always found this deployment process overly complicated with many release pipelines configured to trigger on specific conditions.

What if you could store your entire cluster configuration in a git repository as the single source of truth and use simple git operations (is there such a thing? 😁) to change your configuration? Obviously, you would need some extra tooling that synchronizes the configuration with the cluster, which is exactly what Weaveworks Flux is designed to do. Also check the Flux git repo.

In this post, we will run through a simple example to illustrate the functionality. We will do the following over two posts:

Post one:

  • Create a git repo for our configuration
  • Install Flux and use the git repo as our configuration source
  • Install an Ingress Controller with a Helm chart

Post two:

  • Install an application using standard YAML (including ingress definition)
  • Update the application automatically when a new version of the application image is available

Let’s get started!

Create a git repository

To keep things simple, make sure you have an account on GitHub and create a new repository. You can also clone my demo repository. To clone it, use the following command:

git clone https://github.com/gbaeke/gitops-sample.git

Note: if you clone my repo and use it in later steps, the resources I defined will get created automatically; if you want to follow the steps, use your own empty repo

Install Flux

Flux needs to be installed on Kubernetes, so make sure you have a cluster at your disposal. In this post, I use Azure Kubernetes Services (AKS). Make sure kubectl points to that cluster. If you have kubectl installed, obtain the credentials to the cluster with the Azure CLI and then run kubectl get nodes or kubectl cluster-info to make sure you are connected to the right cluster.

az aks get-credentials -n CLUSTER_NAME -g RESOURCE_GROUP

It is easy to install Flux with Helm and in this post, I will use Helm v3 which is currently in beta. You will need to install Helm v3 on your system. I installed it in Windows 10’s Ubuntu shell. Use the following command to download and unpack it:

curl -sSL "https://get.helm.sh/helm-v3.0.0-beta.3-linux-amd64.tar.gz" | tar xvz

This results in a folder linux-amd64 which contains the helm executable. Make the file executable with chmod +x and copy it to your path as helmv3. Next, run helmv3. You should see the help text:

The Kubernetes package manager
 
Common actions for Helm:

- helm search:    search for charts
- helm fetch:     download a chart to your local directory to view
- helm install:   upload the chart to Kubernetes
- helm list:      list releases of charts 
...
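
The “make executable and copy to your path” step mentioned above comes down to something like this (target path assumed):

chmod +x linux-amd64/helm
sudo cp linux-amd64/helm /usr/local/bin/helmv3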

Now you are ready to install Flux. First, add the Flux Helm repository to allow helmv3 to find the chart:

helmv3 repo add fluxcd https://charts.fluxcd.io

Create a namespace for Flux:

kubectl create ns flux

Install Flux in the namespace with Helm v3:

helmv3 upgrade -i flux fluxcd/flux --wait \
 --namespace flux \
 --set registry.pollInterval=1m \
 --set git.pollInterval=1m \
 --set git.url=git@github.com:GITHUBUSERNAME/gitops-sample

The above command upgrades Flux but installs it if it is missing (-i). The chart to install is fluxcd/flux. With --wait, we wait until the installation is finished. We will not go into the first two --set options for now. The last option defines the git repository Flux should use to sync the configuration to the cluster. Currently, Flux supports one repository. Because we use a public repository, Flux can easily read its contents. At times, Flux needs to update the git repository. To support that, you can add a deploy key to the repository. First, install the fluxctl tool:

curl -sL https://fluxcd.io/install | sh
export PATH=$PATH:$HOME/.fluxcd/bin

Now run the following commands to obtain the public key to use as deploy key:

export FLUX_FORWARD_NAMESPACE=flux
fluxctl identity

The output of the command is something like:

ssh-rsa AAAAB3NzaC1yc2EAAAA...

Copy and paste this key as a deploy key for your github repo:

git repo deploy key

Phew… Flux should now be installed on your cluster. Time to install some applications to the cluster from the git repo.

Note: Flux also supports private repos; it just so happens I used a public one here

Install an Ingress Controller

Let’s try to install Traefik via its Helm chart. Since I am not using traditional CD with pipelines that run helm commands, we will need something else. Luckily, there’s a Flux Helm Operator that allows us to declaratively install Helm charts. The Helm Operator installs a Helm chart when it detects a custom resource of type helm.fluxcd.io/v1 (kind HelmRelease). Let’s first create the CRD for Helm v3:

kubectl apply -f https://raw.githubusercontent.com/fluxcd/helm-operator/master/deploy/flux-helm-release-crd.yaml

Next, install the operator:

helmv3 upgrade -i helm-operator fluxcd/helm-operator --wait \
 --namespace flux \
 --set git.ssh.secretName=flux-git-deploy \
 --set git.pollInterval=1m \
 --set chartsSyncInterval=1m \
 --set configureRepositories.enable=true \
 --set configureRepositories.repositories[0].name=stable \
 --set configureRepositories.repositories[0].url=https://kubernetes-charts.storage.googleapis.com \
 --set extraEnvs[0].name=HELM_VERSION \
 --set extraEnvs[0].value=v3 \
 --set image.repository=docker.io/fluxcd/helm-operator-prerelease \
 --set image.tag=helm-v3-71bc9d62

You didn’t think I found the above myself, did you? 😁 It’s from an excellent tutorial here.

When the operator is installed, you should be able to install Traefik with the following YAML:

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: traefik
  namespace: default
  annotations:
    fluxcd.io/ignore: "false"
spec:
  releaseName: traefik
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com/
    name: traefik
    version: 1.78.0
  values:
    serviceType: LoadBalancer
    rbac:
      enabled: true
    dashboard:
      enabled: true   

Just add the above YAML to the GitHub repository. I added it to the ingress folder:

traefik.yaml added to the GitHub repo

If you wait a while, or run fluxctl sync, the repo gets synced and the resources created. When the helm.fluxcd.io/v1 object is created, the Helm Operator will install the chart in the default namespace. Traefik will be exposed via an Azure Load Balancer. You can check the release with the following command:

kubectl get helmreleases.helm.fluxcd.io

NAME      RELEASE   STATUS     MESSAGE                  AGE
traefik   traefik   deployed   helm install succeeded   15m

Also check that the Traefik pod is created in the default namespace (only 1 replica; the default):

kubectl get po

NAME                       READY   STATUS    RESTARTS   AGE
traefik-86f4c5f9c9-gcxdb   1/1     Running   0          21m

Also check the public IP of Traefik:

kubectl get svc
 
NAME                TYPE           CLUSTER-IP     EXTERNAL-IP 
traefik             LoadBalancer   10.0.8.59      41.44.245.234   

We will later use that IP when we define the ingress for our web application.

Conclusion

In this post, you learned a tiny bit about GitOps with Weaveworks Flux. The concept is simple enough: store your cluster config in a git repo as the single source of truth and use git operations to initiate (or roll back) cluster operations. To start, we simply installed Traefik via the Flux Helm Operator. In a later post, we will add an application and look at image management. There’s much more you can do so stay tuned!

Back to basics: DNS ALIAS records

A few days ago, I had to map the domain inity.io to a Netlify domain. If you have only worked with DNS once in your life, you probably know about these two types of records:

  • A record: maps a name to an IP address
  • CNAME record: maps a name to another name

With that knowledge in your bag, it would seem that a CNAME record is the way to map inity.io to somedomain.netlify.com. Sadly, that is not the case because CNAMEs cannot coexist with other records for the domain. In the case of the root or apex domain, there are existing records for the root domain such as the NS records.

An ALIAS record is one way of solving the issue. But before reading on, be sure to read this post: https://www.netlify.com/blog/2017/02/28/to-www-or-not-www/.

ALIAS record to the rescue

If your DNS provider supports ALIAS records, you are in luck. From a high level, an ALIAS record works like a CNAME record, although there are several lower-level differences that we won’t go into here.

Since I use namecheap.com and they support ALIAS records, it was easy to map inity.io to somedomain.netlify.com:

Namecheap ALIAS record

The ALIAS record only supports a 1 or 5 minute TTL. The host is @ which represents the root domain. Notice I also point www.inity.io to the Netlify domain with a regular CNAME.

What does dig say?

Let’s look at what dig returns for both the ALIAS and CNAME record. Here’s the dig output for ALIAS (with some lines removed):

λ geba:~  dig inity.io


;; ANSWER SECTION:
inity.io.               300     IN      A       167.99.129.42

The authoritative server does all the work here and returns the IP address directly to you. That does not happen for the CNAME:

λ geba:~  dig www.inity.io

;; ANSWER SECTION:
www.inity.io.           1799    IN      CNAME   optimistic-panini-9caddc.netlify.com.
optimistic-panini-9caddc.netlify.com. 20 IN A   167.99.129.42

Some more work needs to be done here since you get back the CNAME record which then needs to be resolved to the IP address.

What about Azure and Front Door?

If you work with Front Door and want to map the root or apex domain to a Front Door frontend such as my.azurefd.net, the same issue arises. The Microsoft docs contain a good article explaining the concepts: https://docs.microsoft.com/en-us/azure/frontdoor/front-door-how-to-onboard-apex-domain. From that document, you will learn that Azure DNS also supports “aliases” with an easy dropdown list to select your Front Door frontend host. If you want to use SSL for the frontend host, you will need to bring your own certificate because automatic certificates are not supported with APEX domains.

Note that you do not have to use Azure DNS. An ALIAS record at NameCheap or other providers would work equally well. CloudFlare also supports APEX domains via CNAME Flattening. Just don’t use GoDaddy. 😲

Deploy AKS and Traefik with an Azure DevOps YAML pipeline

This post is a companion to the following GitHub repository: https://github.com/gbaeke/aks-traefik-azure-deploy. The repository contains ARM templates to deploy an AD integrated Kubernetes cluster and an IP address plus a Helm chart to deploy Traefik. Traefik is configured to use the deployed IP address. In addition to those files, the repository also contains the YAML pipeline, ready to be imported in Azure DevOps.

Let’s take a look at the different building blocks!

AKS ARM Template

The aks folder contains the template and a parameters file. You will need to modify the parameters file because it requires settings to integrate the AKS cluster with Azure AD. You will need to specify:

  • clientAppID: the ID of the client app registration
  • serverAppID: the ID of the server app registration
  • tenantID: the ID of your AD tenant

Also specify clientId, which is the ID of the service principal for your cluster. Both the serverAppID and the clientID require a password. The passwords have been set via a pipeline secret variable.

The template configures a fairly standard AKS cluster that uses Azure networking (versus kubenet). It also configures Log Analytics for the cluster (container insights).

Deploying the template from the YAML file is done with the task below. You will need to replace YOUR SUBSCRIPTION with an authorized service connection:

 # DEPLOY AKS IN TEST   
 - task: AzureResourceGroupDeployment@2
   inputs:
     azureSubscription: 'YOUR SUBSCRIPTION'
     action: 'Create Or Update Resource Group'
     resourceGroupName: '$(aksTestRG)'
     location: 'West Europe'
     templateLocation: 'Linked artifact'
     csmFile: 'aks/deploy.json'
     csmParametersFile: 'aks/deployparams.t.json'
     overrideParameters: '-serverAppSecret $(serverAppSecret) -clientIdsecret $(clientIdsecret) -clusterName $(aksTest)'
     deploymentMode: 'Incremental'
     deploymentName: 'CluTest'

The task uses several variables like $(aksTestRG) etc… If you check azure-pipelines.yaml, you will notice that most are configured at the top of the file in the variables section:

variables:
  aksTest: 'clu-test'
  aksTestRG: 'rg-clu-test'
  aksTestIP: 'clu-test-ip' 

The two secrets are the secret 🔐 variables. Naturally, they are configured in the Azure DevOps UI. Note that there are other means to store and obtain secrets, such as Key Vault. In Azure DevOps, the secret variables can be found here:

Azure DevOps secret variables

IP Address Template

The ip folder contains the ARM template to deploy the IP address. We need to deploy the IP address resource to the resource group that holds the AKS agents. With the names we have chosen, that name is MC_rg-clu-test_clu-test_westeurope. It is possible to specify a custom name for the resource group.

Because we want to obtain the IP address after deployment, the ARM template contains an output:

 "outputs": {
        "ipaddress": {
            "type": "string",
            "value": "[reference(concat('Microsoft.Network/publicIPAddresses/', parameters('ipName')), '2017-10-01').ipAddress]"
        }
     } 

The output ipaddress is of type string. Via the reference template function we can extract the IP address.

The ARM template is deployed like the AKS template but we need to capture the ARM outputs. The last line of the AzureResourceGroupDeployment@2 that deploys the IP address contains:

deploymentOutputs: 'armoutputs'
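
For reference, the IP address deployment task might look like the sketch below. The file names and override parameters are assumptions based on the repo layout; the important part is the deploymentOutputs setting:

 - task: AzureResourceGroupDeployment@2
   inputs:
     azureSubscription: 'YOUR SUBSCRIPTION'
     action: 'Create Or Update Resource Group'
     resourceGroupName: 'MC_rg-clu-test_clu-test_westeurope'
     location: 'West Europe'
     templateLocation: 'Linked artifact'
     csmFile: 'ip/deploy.json'
     csmParametersFile: 'ip/deployparams.json'
     overrideParameters: '-ipName $(aksTestIP)'
     deploymentMode: 'Incremental'
     deploymentOutputs: 'armoutputs'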

Now we need to extract the IP address and set it as a variable in the pipeline. One way of doing this is via a bash script:

 - task: Bash@3
      inputs:
        targetType: 'inline'
        script: |
          echo "##vso[task.setvariable variable=test-ip;]$(echo '$(armoutputs)' | jq .ipaddress.value -r)" 

You can set a variable in Azure DevOps with echo ##vso[task.setvariable variable=variable_name;]value. In our case, the “value” should be the raw string of the IP address output. The $(armoutputs) variable contains the output of the IP address ARM template as follows:

{"ipaddress":{"type":"String","value":"IP ADDRESS"}}

To extract IP ADDRESS, we pipe the output of “echo $(armoutputs)” to jq .ipaddress.value -r which extracts the IP ADDRESS from the JSON. The -r parameter removes double quotes from the IP ADDRESS to give us the raw string. For more info about jq, check https://stedolan.github.io/jq/.
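
You can test the jq extraction locally with a mock output:

echo '{"ipaddress":{"type":"String","value":"1.2.3.4"}}' | jq .ipaddress.value -r
# outputs: 1.2.3.4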

We now have the IP address in the test-ip variable, to be used in other tasks via $(test-ip).

Taking care of the prerequisites

In a later phase, we install Traefik via Helm. So we need kubectl and helm on the build agent. In addition, we need to install tiller on the cluster. Because the cluster is RBAC-enabled, we need a cluster account and a role binding as well. The following tasks take care of all that:

- task: KubectlInstaller@0
   inputs:
     kubectlVersion: '1.13.5'


- task: HelmInstaller@1
   inputs:
     helmVersionToInstall: '2.14.1'

- task: AzureCLI@1
  inputs:
    azureSubscription: 'YOUR SUB'
    scriptLocation: 'inlineScript'
    inlineScript: 'az aks get-credentials -g $(aksTestRG) -n $(aksTest) --admin'

 - task: Bash@3
   inputs:
     filePath: 'tiller/tillerconfig.sh'
     workingDirectory: 'tiller/' 

Note that we use the AzureCLI built-in task to easily obtain the cluster credentials for kubectl on the build agent. We use the --admin flag to gain full access. Note that this downloads sensitive information to the build agent temporarily.

The last task just runs a shell script to configure the service account and role binding and install tiller. Check the repository to see the contents of this simple script. Note that this is the quick and easy way to install tiller, not the most secure way! 🙇‍♂️

Install Traefik and use the IP address

The repository contains the downloaded chart (helm fetch stable/traefik --untar). The values.yaml file was modified to set the ingressClass to traefik-ext. We could have used the chart from the Helm repository but I prefer having the chart in source control. Here’s the pipeline task:

 - task: HelmDeploy@0
   inputs:
     connectionType: 'None'
     namespace: 'kube-system'
     command: 'upgrade'
     chartType: 'FilePath'
     chartPath: 'traefik-ext/.'
     releaseName: 'traefik-ext'
     overrideValues: 'loadBalancerIP=$(test-ip)'
     valueFile: 'traefik-ext/values.yaml' 

kubectl is configured to use the cluster so connectionType can be set to ‘None’. We simply specify the IP address we created earlier by setting loadBalancerIP to $(test-ip) with the overrides for values.yaml. This sets the loadBalancerIP setting in Traefik’s service definition. Service.yaml in the chart’s templates folder contains the following section:

 spec:
  type: {{ .Values.serviceType }}
  {{- if .Values.loadBalancerIP }}
  loadBalancerIP: {{ .Values.loadBalancerIP }}
  {{- end }} 

Conclusion

Deploying AKS together with one or more public IP addresses is a common scenario. Hopefully, this post together with the GitHub repo gave you some ideas about automating these deployments with Azure DevOps. All you need to do is create a pipeline from the repo. Azure DevOps will read the azure-pipelines.yml file automatically.

Quick Tip: deploying multiple Traefik ingresses

For a customer that is developing a microservices application, the proposed architecture contains two Kubernetes ingresses:

  • internal ingress: exposed via an Azure internal load balancer, deployed in a separate subnet in the customer’s VNET; no need for SSL
  • external ingress: exposed via an external load balancer; SSL via Let’s Encrypt

The internal ingress exposes API endpoints via Azure API Management and its ability to connect to internal subnets. The external ingress exposes web applications via Azure Front Door.

The Ingress Controller of choice is Traefik. We use the Helm chart to deploy Traefik in the cluster. The example below uses Azure Kubernetes Service so I will refer to Azure objects such as VNETs, subnets, etc… Let’s get started!

Internal Ingress

In values.yaml, use ingressClass to set a custom class. For example:

 kubernetes:
  ingressClass: traefik-int 

When you do not set this value, the default ingressClass is traefik. When you define the ingress object, you refer to this class in your manifest via the annotation below:

 annotations:
    kubernetes.io/ingress.class: traefik-int

When we deploy the internal ingress, we need to tell Traefik to create an internal load balancer. Optionally, you can specify a subnet to deploy to. You can add these options under the service section in values.yaml:

service:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "traefik" 

The above setting makes sure that the annotations are set on the service that the Helm chart creates to expose Traefik to the “outside” world. The settings are not Traefik specific.

Above, we want Kubernetes to deploy the Azure internal load balancer to a subnet called traefik. That subnet needs to exist in the VNET that contains the Kubernetes subnet. Make sure that the AKS service principal has the necessary access rights to deploy the load balancer in the subnet. If it takes a long time to deploy the load balancer, use kubectl get events in the namespace where you deploy Traefik (typically kube-system).
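
Granting that access typically means a role assignment on the subnet (or the VNET). A sketch with placeholders:

az role assignment create \
  --assignee <AKS_SERVICE_PRINCIPAL_APPID> \
  --role "Network Contributor" \
  --scope /subscriptions/<SUB_ID>/resourceGroups/<VNET_RG>/providers/Microsoft.Network/virtualNetworks/<VNET_NAME>/subnets/traefik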

If you want to provide a static IP address to the internal load balancer, you can do so via the loadBalancerIP setting near the top of values.yaml. You can use any free address in the subnet where you deploy the load balancer.

loadBalancerIP: 172.20.3.10 

All done! You can now deploy the internal ingress with:

helm install . --name traefik-int --namespace kube-system

Note that we install the Helm chart from our local file system and that we are in the folder that contains the chart and values.yaml. Hence the dot (.) in the command.

TIP: if you want to use a private DNS zone to resolve the internal services, see the private DNS section in Azure API Management and Azure Kubernetes Service. Private DNS zones are still in preview.

External ingress

The external ingress is simple now. Just set the ingressClass to traefik-ext (or leave it at the default of traefik although that’s not very clear) and remove the other settings. If you want a static public IP address, you can create such an address first and specify it in values.yaml. In an Azure context, you would create a public IP object in the resource group that contains your Kubernetes nodes.
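
In other words, the external values.yaml boils down to something like this (a sketch; the IP address is optional and just an example):

kubernetes:
  ingressClass: traefik-ext

loadBalancerIP: 1.2.3.4

and the install command becomes:

helm install . --name traefik-ext --namespace kube-system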

Conclusion

If you need multiple ingresses of the same type or brand, use distinct values for ingressClass and reference the class in your ingress manifest file. Naturally, when you use two different solutions, say Kong for APIs and Traefik for web sites, you do not need to do that since they use different ingressClass values by default (kong and traefik). Hope this quick tip was useful!

Publishing and securing your API with Kong and Azure Front Door

In the post, Securing your API with Kong and CloudFlare, I exposed a dummy API on Kubernetes with Kong and published it securely with CloudFlare. The breadth of features and its ease of use made CloudFlare a joy to work with. It didn’t take long before I got the question: “can’t you do that with Azure only?”. The answer is obvious: “Of course you can!”

In this post, the traffic flow is as follows:

Consumer -- HTTPS --> Azure Front Door with WAF policy -- HTTPS --> Kong (exposed with Azure Load Balancer) -- HTTP --> API Kubernetes service --> API pods

Similarly to CloudFlare, Azure Front Door provides a fully trusted certificate for consumers of the API. In contrast to CloudFlare, Azure Front Door does not provide origin certificates which are trusted by Front Door. That’s easy to solve though by using a fully trusted Let’s Encrypt certificate which is stored as a Kubernetes secret and used in the Kubernetes Ingress definition. For this post, I requested a wildcard certificate for *.baeke.info via https://www.sslforfree.com/

Let’s take it step-by-step, starting at the API and Kong level.

APIs and Kong

Just like in the previous posts, we have a Kubernetes service called func and back-end pods that host the API implemented via Azure Functions in a container. Below you see the API pods in the default namespace. For convenience, Kong is also deployed in that namespace (not recommended in production):

A view on the API pods and Kong via k9s

The ingress definition is shown below:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: func
  namespace: default
  annotations:
    kubernetes.io/ingress.class: kong
    plugins.konghq.com: http-auth
spec:
  tls:
  - hosts:
    - api-o.baeke.info
    secretName: wildcard-baeke.info.tls
  rules:
    - host: api-o.baeke.info
      http:
        paths:
        - path: /users
          backend:
            serviceName: func
            servicePort: 80 

Kong will pick up the above definition and configure itself accordingly.

The API is exposed publicly via https://api-o.baeke.info where the o stands for origin. The secret wildcard-baeke.info.tls refers to a secret which contains the wildcard certificate for *.baeke.info:

apiVersion: v1
kind: Secret
metadata:
  name: wildcard-baeke.info.tls
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: certificate
  tls.key: key

Naturally, certificate and key should be replaced with the base64-encoded strings of the certificate and key you have obtained (in this case from https://www.sslforfree.com).

At the DNS level, api-o.baeke.info should refer to the external IP address of the exposed Kong Ingress Controller (proxy):

The service kong-kong-proxy is exposed via a public IP address (service of type LoadBalancer)
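
To find that external IP address, check the proxy service (Kong runs in the default namespace here):

kubectl get svc kong-kong-proxy -n default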

For the rest, the Kong configuration is not very different from the configuration in Securing your API with Kong and CloudFlare. I did remove the whitelisting configuration, which needs to be updated for Azure Front Door.

Great, we now have our API listening on https://api-o.baeke.info but it is not exposed via Azure Front Door and it does not have a WAF policy. Let’s change that.

Web Application Firewall (WAF) Policy

You can create a WAF policy from the portal:

WAF Policy

The above policy is set to detection only. No custom rules have been defined, but a managed rule set is activated:

Managed rule set for OWASP

The WAF policy was saved as baekeapiwaf. It will be attached to an Azure Front Door frontend. When a policy is attached to a frontend, it will be shown in the policy:

Associated frontends (Front Door front-ends)

Azure Front Door

We will now add Azure Front Door to obtain the following flow:

Consumer ---> https://api.baeke.info (Front Door + WAF) --> https://api-o.baeke.info

The final configuration in Front Door Designer looks like this:

Front Door Designer

When a request comes in for api.baeke.info, the response from api-o.baeke.info is served. Caching was not enabled. The frontend and backend are tied together via the routing rule.

The first thing you need to do is to add the azurefd.net frontend which is baeke-api.azurefd.net in the above config. There’s not much to say about that. Just click the blue plus next to Frontend hosts and follow the prompts. I did not attach a WAF policy to that frontend because it will not forward requests to the backend. We will use a custom domain for that.

Next, click the blue plus again to add the custom domain (here api.baeke.info). In your DNS zone, create a CNAME record that maps api.yourdomain.com to the azurefd.net name:

Mapping of custom domain to azurefd.net domain in CloudFlare DNS

I attached the WAF policy baekeapiwaf to the front-end domain:

WAF policy with OWASP rules to protect the API

Next, I added a certificate. When you select Front Door managed, you will get a Digicert-managed certificate. If the CNAME mapping is not complete, you will get an e-mail from Digicert to approve certificate issuance. Make sure you check your e-mails if it takes long to issue the certificate. It will take a long time either way so be patient! 💤💤💤

Now that we have the frontend, specify the backend that Front Door needs to connect to:

Backend pool

The backend pool uses the API exposed at api-o.baeke.info as defined earlier. With only one backend, priority and weight are of no importance. It should be clear that you can add multiple backends, potentially in different regions, and load balance between them.

You will also need a health probe to check for healthy and unhealthy backends:

Health probes of the backend

Note that the above health check does NOT return a 200 OK status code. That is the only status code that would result in a healthy endpoint. With the above config, Kong will respond with a “no Route matched” 404 Not Found error instead. That does not mean that Front Door will not route to this endpoint though! When all endpoints are in a failed state, Front Door considers them healthy anyway 😲😲😲 and routes traffic using round-robin. See the documentation for more info.

Now that we have the frontend and the backend, let’s tie the two together with a rule:

First part of routing rule

In the first part of the rule, we specify that we listen for requests to api.baeke.info (and not the azurefd.net domain) and that we only accept https. The pattern /* basically forwards everything to the backend.

In the route details, we specify the backend to route to:

Backend to route to

Clearly, we want to route to the api-o backend we defined earlier. We only connect to the backend via HTTPS. It only accepts HTTPS anyway, as defined at the Kong level via a KongIngress resource.
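
The HTTPS-only behaviour at the Kong level comes from a KongIngress resource roughly like the one below (a sketch; the name is a placeholder and it gets linked to the ingress or service via the configuration.konghq.com/override annotation):

apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: https-only
  namespace: default
route:
  protocols:
  - https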

Note that it is possible to create a HTTP to HTTPS redirect rule. See the post Azure Front Door Revisited for more information. Without the rule, you will get the following warning:

Please disregard this warning 😎

Test, test, test

Let’s call the API via the http tool:

Clearly, Azure Front Door has served this request as indicated by the X-Azure-Ref header. Let’s try http:

Azure Front Door throws the above error because the routing rule only accepts https on api.baeke.info!

White listing Azure Front Door

To restrict calls to the backend to Azure Front Door, I used the following KongPlugin definition:

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: whitelist-fd
  namespace: default
config:
  whitelist: 
  - 147.243.0.0/16
plugin: ip-restriction 

The IP range is documented here. Note that the IP range can and probably will change in the future.

In the ingress definition, I added the plugin via the annotations:

annotations:
  kubernetes.io/ingress.class: kong
  plugins.konghq.com: http-auth, whitelist-fd 

Calling the backend API directly will now fail:

That’s a no no! Please use the Front Door!

Conclusion

Publishing APIs (or any web app), whether they are running on Kubernetes or other systems, is easy to do with the combination of Azure Front Door and Web Application Firewall policies. Do take pricing into account though. It’s a mixture of relatively low fixed prices with variable pricing per GB and requests processed. In general, CloudFlare has the upper hand here, from both a pricing and features perspective. On the other hand, Front Door has advantages when it comes to automating its deployment together with other Azure resources. As always: plan, plan, plan and choose wisely! 🦉