Kustomize and Flux

Flux has a feature called manifest generation that works together with Kustomize. Instead of just picking up YAML files from a git repo and applying them, Flux runs the kustomize build command and applies the resulting YAML to your cluster.

If you don’t know how Kustomize works (without Flux), take a look at the article I wrote earlier, or check the core docs.

You need to be aware of a few things before you get started. In order for Flux to use this method, you need to turn on manifest generation. With the Flux Helm chart, just pass the following parameter:

--set manifestGeneration=true
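
For context, a full chart install with this option enabled might look like the sketch below; the chart repo is the one used later in this blog and git.url is just a placeholder:

helm repo add fluxcd https://charts.fluxcd.io

helm upgrade -i flux fluxcd/flux \
  --namespace fluxcd \
  --set git.url=git@github.com:your-org/your-config-repo \
  --set manifestGeneration=true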

In my case, I have plain YAML files without kustomization in a config folder. I want the files that use Kustomize in a different folder, say kustomize, like so:

Two folders to pass as git.path

To pass these folders to the Helm chart, use the following parameter:

--set git.path="config\,kustomize"

The kustomize folder contains the following files:

base files with environments dev and prod

There is nothing special about the base folder here. It is as explained in my previous post. The dev and prod folders are similar so I will focus only on dev.

The dev folder contains a .flux.yaml file, which is required by Flux. In this simple example, it contains the following:

version: 1
patchUpdated:
  generators:
    - command: kustomize build .
  patchFile: flux-patch.yaml

The file specifies the generator to use, in this case Kustomize. The kustomize executable is in the Flux image. I specify one patchFile, which contains patches for several resources separated by ---:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    flux.weave.works/automated: "true"
    flux.weave.works/tag.realtime: semver:~1
  name: realtime
  namespace: realtime-dev
spec:
  template:
    spec:
      $setElementOrder/containers:
      - name: realtime
      containers:
      - image: gbaeke/fluxapp:1.0.6
        name: realtime
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: realtime-ingress
  namespace: realtime-dev
spec:
  rules:
  - host: realdev.baeke.info
    http:
      paths:
      - backend:
          serviceName: realtime
          servicePort: 80
        path: /
  tls:
  - hosts:
    - realdev.baeke.info
    secretName: real-dev-baeke-info-tls

Above, you see the patches for the dev environment:

  • the workload should be automated by Flux, installing new images based on the semantic version filter ~1
  • the ingress should use host realdev.baeke.info with a different name for the secret as well (the secret will be created by cert-manager)

The prod folder contains a similar configuration. Perhaps naively, I thought that specifying the kustomize folder in git.path was sufficient for Flux to scan the folders and run Kustomize wherever a .flux.yaml file was found. Sadly, that is not the case. ☹️ With just the kustomize folder specified, Flux finds conflicts between the base, dev and prod folders because they contain similar files. That is expected behaviour for regular YAML files but, in my opinion, should not happen in this case. There is a somewhat clunky way to make this work though. Just specify the following as git.path:

--set git.path="config\,kustomize/dev\,kustomize/prod"

With the above parameter, Flux will find no conflicts and will happily apply the customisations.

As a side note, you should also specify the namespace in the patch file explicitly. It is not added automatically even though kustomization.yaml contains the namespace.

Let’s look at the cluster when Flux has applied the changes.

Namespaces for dev and prod created via Flux & Kustomize

And here is the deployed “production app”:

Who chose that ugly colour!

The way kustomizations are handled could be improved. It’s unwieldy to specify every “kustomization” folder in the git.path parameter. Just give me a --git-kustomize-path parameter and scan the paths in that parameter for .flux.yaml files. On the other hand, maybe I am missing something here, so remarks are welcome.

A quick tour of Kustomize

Image above from: https://kustomize.io/

When you have to deploy an application to multiple environments like dev, test and production there are many solutions available to you. You can manually deploy the app (Nooooooo! 😉), use a CI/CD system like Azure DevOps and its release pipelines (with or without Helm) or maybe even a “GitOps” approach where deployments are driven by a tool such as Flux or Argo based on a git repository.

In the latter case, you probably want to use a configuration management tool like Kustomize for environment management. Instead of explaining what it does, let’s take a look at an example. Suppose I have an app that can be deployed with the following yaml files:

  • redis-deployment.yaml: simple deployment of Redis
  • redis-service.yaml: service to connect to Redis on port 6379 (Cluster IP)
  • realtime-deployment.yaml: application that uses the socket.io library to display real-time updates coming from a Redis channel
  • realtime-service.yaml: service to connect to the socket.io application on port 80 (Cluster IP)
  • realtime-ingress.yaml: ingress resource that defines the hostname and TLS certificate for the socket.io application (works with nginx ingress controller)

Let’s call this collection of files the base and put them all in a folder:

Base files for the application

Now I would like to modify these files just a bit, to install them in a dev namespace called realtime-dev. In the ingress definition I want to change the name of the host to realdev.baeke.info instead of real.baeke.info for production. We can use Kustomize to reach that goal.

In the base folder, we can add a kustomization.yaml file like so:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- realtime-ingress.yaml
- realtime-service.yaml
- redis-deployment.yaml
- redis-service.yaml
- realtime-deployment.yaml

This lists all the resources we would like to deploy.

Now we can create a folder for our patches. The patches define the changes to the base. Create a folder called dev (next to base) and add the following files (one file in the screenshot is blurred because it’s not relevant to this post).
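
The layout then looks roughly like this (file names taken from the rest of this post):

.
├── base
│   ├── kustomization.yaml
│   ├── realtime-deployment.yaml
│   ├── realtime-ingress.yaml
│   ├── realtime-service.yaml
│   ├── redis-deployment.yaml
│   └── redis-service.yaml
└── dev
    ├── kustomization.yaml
    ├── namespace.yaml
    └── realtime-ingress.yaml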

kustomization.yaml contains the following:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: realtime-dev
resources:
- ./namespace.yaml
bases:
- ../base
patchesStrategicMerge:
- realtime-ingress.yaml
 

The namespace: realtime-dev ensures that our base resource definitions are updated with that namespace. In resources, we ensure that namespace gets created. The file namespace.yaml contains the following:

apiVersion: v1
kind: Namespace
metadata:
  name: realtime-dev 

With patchesStrategicMerge we specify the file(s) that contain(s) our patches, in this case just realtime-ingress.yaml to modify the hostname:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
  name: realtime-ingress
spec:
  rules:
  - host: realdev.baeke.info
    http:
      paths:
      - backend:
          serviceName: realtime
          servicePort: 80
        path: /
  tls:
  - hosts:
    - realdev.baeke.info
    secretName: real-dev-baeke-info-tls

Note that we also use cert-manager here to issue a certificate to use on the ingress. For dev environments, it is better to use the Let’s Encrypt staging issuer instead of the production issuer.
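
For the dev overlay, that just means patching the annotation. The sketch below assumes a ClusterIssuer named letsencrypt-staging exists in the cluster:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: realtime-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging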

We are now ready to generate the manifests for the dev environment. From the parent folder of base and dev, run the following command:

kubectl kustomize dev

The above command generates the patched manifests like so:

apiVersion: v1 
kind: Namespace
metadata:      
  name: realtime-dev
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: realtime
  name: realtime
  namespace: realtime-dev
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: realtime
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis
  name: redis
  namespace: realtime-dev
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: realtime
  name: realtime
  namespace: realtime-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: realtime
  template:
    metadata:
      labels:
        app: realtime
    spec:
      containers:
      - env:
        - name: REDISHOST
          value: redis:6379
        image: gbaeke/fluxapp:1.0.5
        name: realtime
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 150m
            memory: 150Mi
          requests:
            cpu: 25m
            memory: 50Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: redis
  name: redis
  namespace: realtime-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - image: redis:4-32bit
        name: redis
        ports:
        - containerPort: 6379
        resources:
          requests:
            cpu: 200m
            memory: 100Mi
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
  name: realtime-ingress
  namespace: realtime-dev
spec:
  rules:
  - host: realdev.baeke.info
    http:
      paths:
      - backend:
          serviceName: realtime
          servicePort: 80
        path: /
  tls:
  - hosts:
    - realdev.baeke.info
    secretName: real-dev-baeke-info-tls

Note that namespace realtime-dev is used everywhere and that the Ingress resource uses realdev.baeke.info. The original Ingress resource looked like below:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: realtime-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - real.baeke.info
    secretName: real-baeke-info-tls
  rules:
  - host: real.baeke.info
    http:
      paths:
      - path: /
        backend:
          serviceName: realtime
          servicePort: 80 

As you can see, Kustomize has updated the host in tls: and rules: and also modified the secret name (which will be created by cert-manager).

You have probably seen that Kustomize is integrated with kubectl. It’s also available as a standalone executable.
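
With the standalone binary, the equivalent commands look like this:

kustomize build dev
kustomize build dev | kubectl apply -f -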

To directly apply the patched manifests to your cluster, run kubectl apply -k dev. The result:

namespace/realtime-dev created
service/realtime created
service/redis created
deployment.apps/realtime created
deployment.apps/redis created
ingress.extensions/realtime-ingress created

In another post, we will look at using Kustomize with Flux. Stay tuned!

Creating Kubernetes secrets from Key Vault

If you do any sort of development, you often have to deal with secrets. There are many ways to handle them; one is retrieving the secrets from a secure store in your own code. When your application runs on Kubernetes and your code (or 3rd party code) cannot be configured to retrieve the secrets directly, you have several options. This post looks at one such solution: Azure Key Vault to Kubernetes from Sparebanken Vest, Norway.

In short, the solution connects to Azure Key Vault and does one of two things:

  • create regular Kubernetes secrets from Key Vault secrets
  • inject the secrets directly into your pods as environment variables (the Env Injector)

In my scenario, I just wanted regular secrets to use in a KEDA project that processes IoT Hub messages. The following secrets were required:

  • Connection string to a storage account: AzureWebJobsStorage
  • Connection string to IoT Hub’s event hub: EventEndpoint

In the YAML that deploys the pods that are scaled by KEDA, the secrets are referenced as follows:

env:
 - name: AzureFunctionsJobHost__functions__0
   value: ProcessEvents
 - name: FUNCTIONS_WORKER_RUNTIME
   value: node
 - name: EventEndpoint
   valueFrom:
     secretKeyRef:
       name: kedasample-event
       key: EventEndpoint
 - name: AzureWebJobsStorage
   valueFrom:
     secretKeyRef:
       name: kedasample-storage
       key: AzureWebJobsStorage

Because the YAML above is deployed with Flux from a git repo, we need to get the secrets from an external system. That external system, in this case, is Azure Key Vault.

To make this work, we first need to install the controller that makes this happen. This is very easy to do with the Helm chart. By default, this Helm chart will work well on Azure Kubernetes Service as long as you give the AKS service principal read access to Key Vault:

Access policies in Key Vault (azure-cli-2019-… is the AKS service principal here)
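
The install itself is a standard Helm deployment. From memory, it was something along the lines below, but the chart repository URL and chart name may have changed since, so double-check the project’s documentation:

helm repo add spv-charts http://charts.spvapi.no
helm repo update
helm install azure-key-vault-controller spv-charts/azure-key-vault-controller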

Next, define the secrets in Key Vault:

Secrets in Key Vault

With the access policies in place and the secrets defined in Key Vault, the controller installed by the Helm chart can do its work with the following YAML:

apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: eventendpoint
  namespace: default
spec:
  vault:
    name: gebakv
    object:
      name: EventEndpoint
      type: secret
  output:
    secret: 
      name: kedasample-event
      dataKey: EventEndpoint
      type: opaque
---
apiVersion: spv.no/v1alpha1
kind: AzureKeyVaultSecret
metadata:
  name: azurewebjobsstorage
  namespace: default
spec:
  vault:
    name: gebakv
    object:
      name: AzureWebJobsStorage
      type: secret
  output:
    secret: 
      name: kedasample-storage
      dataKey: AzureWebJobsStorage
      type: opaque     

The above YAML defines two objects of kind AzureKeyVaultSecret. In each object we specify the Key Vault secret to read (vault) and the Kubernetes secret to create (output). The above YAML results in two Kubernetes secrets:

Two regular secrets

When you look inside such a secret, you will see:

Inside the secret

To double check the secret, just run echo RW5K… | base64 -d to see the decoded secret and verify that it matches the secret stored in Key Vault. You can now reference the secret with valueFrom as shown earlier in this post.
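
Alternatively, you can pull the value from the cluster and decode it in one go:

kubectl get secret kedasample-event -o jsonpath='{.data.EventEndpoint}' | base64 -d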

Conclusion

If you want to turn Azure Key Vault secrets into regular Kubernetes secrets for use in your manifests, give the solution from Sparebanken Vest a go. It is very easy to use. If you do not want regular Kubernetes secrets, opt for the Env Injector instead, which injects the environment variables directly in your pod.

Creating a Kubernetes operator on Windows and WSL

I have always wanted to create a Kubernetes operator with the operator framework and tried to give that a go on my Windows 10 system. Note that the emphasis is on creating an operator, not necessarily writing a useful one 😁. All I am doing is using the boilerplate that is generated by the framework. If you have never even seen how this is done, then this post is for you. 👍

An operator is an application-specific controller. A controller is a piece of software that implements a control loop, watching the state of the Kubernetes cluster via the API. It makes changes to the state to drive it towards the desired state.

An operator uses Kubernetes to create and manage complex applications. Many operators can be found here: https://operatorhub.io/. The Cassandra operator, for instance, has domain-specific knowledge embedded in it: it knows how to deploy and configure this database. That’s great because it means some of the burden is shifted from you to the operator.

Installation

I installed the Operator SDK CLI from the GitHub releases in WSL, Windows Subsystem for Linux. I am using WSL 1, not WSL 2 as I am not running a Windows Insiders release. The commands to run:

RELEASE_VERSION=v0.13.0 

curl -LO https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu 

chmod +x operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu && sudo mkdir -p /usr/local/bin/ && sudo cp operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu /usr/local/bin/operator-sdk && rm operator-sdk-${RELEASE_VERSION}-x86_64-linux-gnu 

You should now be able to run operator-sdk in WSL 1.

Creating an operator

In WSL, you should have installed Go. I am using version 1.13.5. Although not required, I used my Go path on Windows to generate the operator and not the %GOPATH set in WSL. My working directory was:

/mnt/c/Users/geert/go/src/github.com/baeke.info

To create the operator, I ran the following commands (the operator-sdk command is a single line):

export GO111MODULE=on

operator-sdk new fun-operator --repo github.com/baeke.info/fun-operator

This creates a folder, fun-operator, under baeke.info and sets up the project:

Project structure in VS Code

Before continuing, cd into fun-operator and run go mod tidy. Now we can run the following command:

operator-sdk add api --api-version=fun.baeke.info/v1alpha1 --kind FunOp

This creates a new CRD (Custom Resource Definition) API called FunOp. The API version is fun.baeke.info/v1alpha1, which you choose yourself. With the above, you can create custom resources like the one below for the operator to act upon:

apiVersion: fun.baeke.info/v1alpha1
kind: FunOp
metadata:
  name: example-funop 

Now we can add a controller that watches for the above CRD resource:

operator-sdk add controller --api-version=fun.baeke.info/v1alpha1 --kind=FunOp

The above will generate a file, funop_controller.go, that contains some boilerplate code that creates a busybox pod. The Reconcile function is responsible for doing this work:

Reconcile function in the controller (incomplete)

As stated above, I will just use the boilerplate code and build the project:

operator-sdk build gbaeke/fun-operator

In WSL 1, you cannot run Docker so the above command will build the operator from the Go code but fail while building the container image. Can’t wait for WSL 2! The build creates the following artifact:

fun-operator in _output/bin

The supplied Dockerfile can be used to build the container images in Windows. In Windows, copy the Dockerfile from the build folder to the root of the operator project (in my case C:\Users\geert\go\src\github.com\baeke.info\fun-operator) and run docker build and push:

docker build -t gbaeke/fun-operator .

docker push gbaeke/fun-operator

Deploying the operator

The project folder structure contains a bunch of yaml in the deploy folder:

Great! Some YAML to deploy

The service account, role and role binding make sure your code can create (or delete/update) resources in the cluster. The operator.yaml actually deploys the operator on your cluster. You just need to update the container spec with the name of your image (here gbaeke/fun-operator).

Before you deploy the operator, make sure you deploy the CRD manifest (here fun.baeke.info_funops_crd.yaml).

As always, just use kubectl apply -f with the above YAML files.
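
Putting it together, the deployment boils down to a handful of kubectl apply commands. The file names below are the operator-sdk defaults generated for this project, so adjust them if yours differ:

kubectl apply -f deploy/crds/fun.baeke.info_funops_crd.yaml
kubectl apply -f deploy/service_account.yaml
kubectl apply -f deploy/role.yaml
kubectl apply -f deploy/role_binding.yaml
kubectl apply -f deploy/operator.yaml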

Testing the operator

With the operator deployed, create a resource based on the CRD. For instance:

apiVersion: fun.baeke.info/v1alpha1
kind: FunOp
metadata:
  name: example-funop  

From the moment you create this resource with kubectl apply, a pod will be created by the operator.

pod created upon submitting the custom resource

When you delete example-funop, the pod will be removed by the operator.

That’s it! We created a Kubernetes operator with the boilerplate code supplied by the operator-sdk cli. Another time, maybe we’ll create an operator that actually does something useful! 😉

Use a Power Automate Button to start an Azure DevOps build on the go

In a previous post, we built a pipeline to deploy AKS using Azure DevOps. Because it can take a while to deploy, it can be handy to start the deployment at any time without having to log on to Azure DevOps. There are many ways to achieve this, but one of the easiest is Power Automate.

Microsoft has made it easy to create such a flow because Azure DevOps is supported out of the box. The flow looks like this:

Flow to trigger an Azure DevOps build

The flow uses a manual trigger which allows you to start the flow from the iOS app using a button:

Button in the iOS app that triggers the flow

As simple as that… 👍

Check your yaml files with kubeval and GitHub Actions

In the previous post, I deployed AKS, Nginx, External DNS, Helm Operator and Flux with a YAML pipeline in Azure DevOps. Flux got linked to a git repo that contains a bunch of yaml files that deploy applications to the cluster but also configure Azure Monitor. Flux essentially synchronizes your cluster with the configuration in the git repository.

In production, it is not a good idea to simply drop in some yaml and let Flux do its job. Similar to traditional software development, you want to run some tests before you deploy. For Kubernetes yaml files, kubeval is a tool that can run those tests.

I refactored the git repository to have all yaml files in a config folder. To check all yaml files in that folder, the following command can be used:

kubeval -d config --strict --ignore-missing-schemas 

With -d you specify the folder (and all its subfolders) where kubeval should look for yaml files. The --strict option checks for properties in your yaml file that are not part of the official schema. If you know you need those, you can leave out --strict. With --ignore-missing-schemas, kubeval will ignore yaml files that use custom schemas not in the Kubernetes OpenAPI spec. In my case for instance, the yaml file that deploys a Helm chart (of kind HelmRelease) is such a file. You can also instruct kubeval to ignore specific “kinds” with --skip-kinds. Here’s the result of running the command:

Result of kubeval
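
If you would rather skip the HelmRelease files explicitly instead of ignoring all missing schemas, something like this works too:

kubeval -d config --strict --skip-kinds HelmRelease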

Using a GitHub action

To automate the testing of your files, you can use any CI system like Azure DevOps, CircleCI, etc… In my case, I decided to use a GitHub action. See the getting started for more information about the basics of GitHub Actions. The action I created is easy (hey, it’s my first time using Actions 😊):

GitHub Action to validate YAML with kubeval

An action is defined in yaml 😉 and consists of jobs and steps, similar to Azure DevOps and the like. The action runs on Ubuntu (hosted by GitHub) and uses an action from the marketplace called Kubernetes toolset. You can easily search for actions in the editor:

Actions in the marketplace

The first step uses an action to check out your code. Indeed, you need to do that explicitly. Then we use the Kubernetes toolset to give us access to all kinds of Kubernetes-related tools such as kubectl and kubeval. The toolset is just a container, which you’ll see getting pulled at runtime. After that, we simply run kubeval in the container, which has the working directory, including your checked-out code, mounted.
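
The workflow itself is only shown as a screenshot, so here is a rough text equivalent based on the description above. Note that it swaps the marketplace toolset container for a direct kubeval download, and the download URL is an assumption:

name: validate-yaml
on: [push, pull_request]

jobs:
  kubeval:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run kubeval
        run: |
          curl -sL https://github.com/instrumenta/kubeval/releases/latest/download/kubeval-linux-amd64.tar.gz | tar xz
          ./kubeval -d config --strict --ignore-missing-schemas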

In the repository settings, I added a branch protection rule that requires a pull request review before merging plus a status check that must pass (the action):

Branch protection rules

The pull request below shows a check that did not pass, a violation of the --strict setting in error.yaml:

Failed status check before merging

There are many other tools and techniques that can be used to validate your configuration but this should get you started with some simple checks on yaml files.

As a last note, know that kubeval generates schemas from the Kubernetes OpenAPI specs. You can set the version of Kubernetes with the -v option.

Happy validating!!!

Deploy AKS with Nginx, External DNS, Helm Operator and Flux

A while ago, I blogged about an Azure YAML pipeline to deploy AKS together with Traefik. As a variation on that theme, this post talks about deploying AKS together with Nginx, External DNS, a Helm Operator and Flux CD. I blogged about Flux before if you want to know what it does.

Video version (1.5x speed recommended)

I added the Azure DevOps pipeline to the existing GitHub repo, in the nginx-dns-helm-flux folder.

Let’s break the pipeline down a little. In what follows, replace AzureMPN with a reference to your own subscription. The first two tasks, AKS deployment and IP address deployment are ARM templates that deploy these resources in Azure. Nothing too special there. Note that the AKS cluster is one with default networking, no Azure AD integration and without VMSS (so no multiple node pools either).

Note: I modified the pipeline to deploy a VMSS-based cluster with a standard load balancer, which is recommended instead of a cluster based on an availability set with a basic load balancer.

The third task takes the output of the IP address deployment and parses out the IP address using jq (last echo statement on one line):

- task: Bash@3
      name: GetIP
      inputs:
        targetType: 'inline'
        script: |
          echo "##vso[task.setvariable variable=test-ip;]$(echo '$(armoutputs)' | jq .ipaddress.value -r)"

The IP address is saved in a variable test-ip for easy reuse later.

Next, we install kubectl and Helm v3. Indeed, Azure DevOps now supports installation of Helm v3 with:

- task: HelmInstaller@1
      inputs:
        helmVersionToInstall: 'latest'

Next, we need to run a script to achieve a couple of things:

  • Get AKS credentials with Azure CLI
  • Add Helm repositories
  • Install a custom resource definition (CRD) for the Helm operator

This is achieved with the following inline Bash script:

- task: AzureCLI@1
      inputs:
        azureSubscription: 'AzureMPN'
        scriptLocation: 'inlineScript'
        inlineScript: |
          az aks get-credentials -g $(aksTestRG) -n $(aksTest) --admin
          helm repo add stable https://kubernetes-charts.storage.googleapis.com/
          helm repo add fluxcd https://charts.fluxcd.io
          helm repo update
          kubectl apply -f https://raw.githubusercontent.com/fluxcd/helm-operator/master/deploy/flux-helm-release-crd.yaml

Next, we create a Kubernetes namespace called fluxcd. I create the namespace with some inline YAML in the Kubernetes@1 task:

- task: Kubernetes@1
      inputs:
        connectionType: 'None'
        command: 'apply'
        useConfigurationFile: true
        configurationType: 'inline'
        inline: |
          apiVersion: v1
          kind: Namespace
          metadata:
            name: fluxcd

It’s best to use the approach above instead of kubectl create ns. If the namespace already exists, you will not get an error.

Now we are ready to deploy Nginx, External DNS, the Helm operator and Flux CD.

Nginx

This is a pretty basic installation with the Azure DevOps Helm task:

- task: HelmDeploy@0
      inputs:
        connectionType: 'None'
        namespace: 'kube-system'
        command: 'upgrade'
        chartType: 'Name'
        chartName: 'stable/nginx-ingress'
        releaseName: 'nginx'
        overrideValues: 'controller.service.loadBalancerIP=$(test-ip),controller.publishService.enabled=true,controller.metrics.enabled=true'

For External DNS to work, I found I had to set controller.publishService.enabled=true. As you can see, the Nginx service is configured to use the IP we created earlier. Azure will create a load balancer with a front end IP configuration that uses this address. This all happens automatically.

Note: controller.metrics.enabled enables a Prometheus scraping endpoint; that is not discussed further in this blog.

External DNS

External DNS can automatically add DNS records for ingresses and services you add to Kubernetes. For instance, if I create an ingress for test.baeke.info, External DNS can create this record in the baeke.info zone and use the IP address of the Ingress Controller (nginx here). Installation is pretty straightforward but you need to provide credentials to your DNS provider. In my case, I use CloudFlare. Many others are available. Here is the task:

- task: HelmDeploy@0
      inputs:
        connectionType: 'None'
        namespace: 'kube-system'
        command: 'upgrade'
        chartType: 'Name'
        chartName: 'stable/external-dns'
        releaseName: 'externaldns'
        overrideValues: 'cloudflare.apiToken=$(CFAPIToken)'
        valueFile: 'externaldns/values.yaml'

On CloudFlare, I created a token that has the required access rights to my zone (read, edit). I provide that token to the chart via the CFAPIToken variable defined as a secret on the pipeline. The valueFile looks like this:

rbac:
  create: true

provider: cloudflare

logLevel: debug

cloudflare:
  apiToken: CFAPIToken
  email: email address
  proxied: false

interval: "1m"

policy: sync # or upsert-only

domainFilters: [ 'baeke.info' ]

In the beginning, it’s best to set the logLevel to debug in case things go wrong. With interval 1m, External DNS checks for ingresses and services every minute and syncs with your DNS zone. Note that External DNS only touches the records it created. It does so by creating TXT records that mark External DNS as the owner of those records.

With External DNS in place, you just need to create an ingress like below to have the A record real.baeke.info created:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: realtime-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: real.baeke.info
    http:
      paths:
      - path: /
        backend:
          serviceName: realtime
          servicePort: 80

Helm Operator

The Helm Operator allows us to install Helm charts by simply using a yaml file. First, we install the operator:

- task: HelmDeploy@0
      name: HelmOp
      displayName: Install Flux CD Helm Operator
      inputs:
        connectionType: 'None'
        namespace: 'kube-system'
        command: 'upgrade'
        chartType: 'Name'
        chartName: 'fluxcd/helm-operator'
        releaseName: 'helm-operator'
        overrideValues: 'extraEnvs[0].name=HELM_VERSION,extraEnvs[0].value=v3,image.repository=docker.io/fluxcd/helm-operator-prerelease,image.tag=helm-v3-dev-53b6a21d'
        arguments: '--namespace fluxcd'

This installs the latest version of the operator at the time of this writing (image.repository and image.tag) and also sets Helm to v3. With this installed, you can install a Helm chart by submitting files like below:

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: influxdb
  namespace: default
spec:
  releaseName: influxdb
  chart:
    repository: https://charts.bitnami.com/bitnami
    name: influxdb
    version: 0.2.4

You can create files that use kind HelmRelease (HR) because we installed the Helm Operator CRD before. To check installed Helm releases in a namespace, you can run kubectl get hr.

The Helm operator is useful if you want to install Helm charts from a git repository with the help of Flux CD.

Flux CD

Deploy Flux CD with the following task:

- task: HelmDeploy@0
      name: FluxCD
      displayName: Install Flux CD
      inputs:
        connectionType: 'None'
        namespace: 'fluxcd'
        command: 'upgrade'
        chartType: 'Name'
        chartName: 'fluxcd/flux'
        releaseName: 'flux'
        overrideValues: 'git.url=git@github.com:$(gitURL),git.pollInterval=1m'

The gitURL variable should be set to a git repo that contains your cluster configuration. For instance: gbaeke/demo-clu-flux. Flux will check the repo for changes every minute. Note that we are using a public repo here. Private repos and systems other than GitHub are supported.

Take a look at GitOps with Weaveworks Flux for further instructions. Some things you need to do:

  • Install fluxctl
  • Use fluxctl identity to obtain the public key from the key pair created by Flux (when you do not use your own); see the command after this list
  • Set the public key as a deploy key on the git repo
GitHub deploy key
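
For the fluxctl identity step, the command looks like this when Flux runs in the fluxcd namespace created earlier:

fluxctl identity --k8s-fwd-ns fluxcd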

By connecting the https://github.com/gbaeke/demo-clu-flux repo to Flux CD (as done here), the following is done based on the content of the repo (the complete repo is scanned):

  • Install the InfluxDB Helm chart
  • Add a simple app that uses a Go socket.io implementation to provide realtime updates based on Redis channel content; this app is published via nginx and real.baeke.info is created in DNS (by External DNS)
  • Add a ConfigMap that is used to configure Azure Monitor to enable Prometheus endpoint scraping (to show this can be used for any object you need to add to Kubernetes)

Note that the ingress of the Go app has an annotation (in realtime.yaml, in the git repo) to issue a certificate via cert-manager. If you want to make that work, add an extra task to the pipeline that installs cert-manager:

- task: HelmDeploy@0
      inputs:
        connectionType: 'None'
        namespace: 'cert-manager'
        command: 'upgrade'
        chartType: 'Name'
        chartName: 'jetstack/cert-manager'
        releaseName: 'cert-manager'
        arguments: '--version v0.12.0'

You will also need to create another namespace, cert-manager, just like we created the fluxcd namespace.

In order to make the above work, you will need Issuers or ClusterIssuers. The repo used by Flux CD contains two ClusterIssuers, one for Let’s Encrypt staging and one for production. The ingress resource uses the production issuer due to the following annotation:

cert-manager.io/cluster-issuer: "letsencrypt-prod" 
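
For reference, a production ClusterIssuer for cert-manager v0.12 looks roughly like the sketch below; the e-mail address and secret name are placeholders:

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - http01:
        ingress:
          class: nginx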

The Go app that is deployed by Flux now has TLS enabled by default:

https on the Go app

I often use this deployment in demos of all sorts. I hope it is helpful for you too in that way!

Front Door with WordPress on Azure App Service

Here’s a quick overview of the steps you need to take to put Front Door in front of an Azure Web App. In this case, the web app runs a WordPress site.

Step 1: DNS

Suppose you deployed the Web App and its name is gebawptest.azurewebsites.net and you want to reach the site via wp.baeke.info. Traffic will flow like this:

user types wp.baeke.info ---CNAME to xyz.azurefd.net--> Front Door --- connects to gebawptest.azurewebsites.net using wp.baeke.info host header

It’s clear that later, in Front Door, you will have to specify the host header (wp.baeke.info in this case). More on that later…

If you have worked with Azure Web App before, you probably know you need to configure the host header sent by the browser as a custom domain on the web app. Something like this:

Custom domain in Azure Web App (no https configured – hence the red warning)

In this case, we do not want to resolve wp.baeke.info to the web app but to Front Door. To make the custom domain assignment work (because the web app will verify the custom name), add the following TXT record to DNS:

TXT awverify.wp gebawptest.azurewebsites.net 

For example in CloudFlare:

awverify txt record in CloudFlare DNS

With the above TXT record, I could easily add wp.baeke.info as a custom domain to the gebawptest.azurewebsites.net web app.

Note: wp.baeke.info is a CNAME to your Front Door domain (see below)

Step 2: Front Door

My Front Door designer looks like this:

Front Door designer

When you create a Front Door, you need to give it a name. In my case that is gebafd.azurefd.net. With wp.baeke.info as a CNAME for gebafd.azurefd.net, you can easily add wp.baeke.info as an additional Frontend host.

The backend pool is the Azure Web App. It’s configured as follows:

Front Door backend host (only one in the pool); could also have used the Azure App Service backend type

You should connect to the web app using its original name but send wp.baeke.info as the host header. This allows Front Door to connect to the web app correctly.

The last part of the Front Door config is a simple rule that connects the frontend wp.baeke.info to the backend pool using HTTP only.

Step 3: WordPress config

With the default Azure WordPress templates, you do not need to modify anything because wp-config.php contains the following settings:

define('WP_SITEURL', 'http://' . $_SERVER['HTTP_HOST'] . '/');
define('WP_HOME', 'http://' . $_SERVER['HTTP_HOST'] . '/');

If you want, you can change this to:

define('WP_SITEURL', 'http://wp.baeke.info/');
define('WP_HOME', 'http://wp.baeke.info/');

Step 4: Blocking access from other locations

In general, you want users to only connect to the site via Front Door. To achieve this, add the following access restrictions to the Web App:

Access restrictions to only allow traffic from Front Door and Azure basic infrastructure services

Trying Civo’s Kubernetes Service

In a previous post I talked about k3sup, a tool to easily install k3s on any system available over SSH. If you don’t know what k3s is, it’s a lightweight version of Kubernetes. It also runs on ARMv7 and ARM64 processors. That means it’s also compatible with a Raspberry Pi.

If I am not mistaken, Civo is the first cloud provider that offers a managed k3s service. Just like the other Civo services it is very easy to use. At this point in time, the service is in beta and you need to be accepted to participate.

Deploying the cluster

The cluster can be deployed via the portal, CLI or the REST API. Portal deployment is very simple:

  • set a name
  • set the size of the nodes
  • set the number of nodes
Creating a new cluster

After deployment, you will see the cluster as follows:

Yes, a deployed cluster

Marketplace

Kubernetes on Civo comes with a marketplace of Kubernetes apps to install during or after cluster deployment. By default, Traefik is selected but you can add other apps. I added Helm for instance:

My installed apps plus a view on the marketplace

Getting your Kubeconfig

You can use the portal to grab the Kubeconfig file:

Downloading Kubeconfig

Then, in your shell, set the KUBECONFIG environment variable to the path where you downloaded the file. Alternatively, you can use the Civo CLI to obtain the Kubeconfig file.
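
For example (the file name of the downloaded Kubeconfig is just an assumption):

export KUBECONFIG=$HOME/Downloads/civo-kubeconfig
kubectl get nodes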

Deploying an application

Let’s install my image classifier app on the cluster and expose it via Traefik. First, take a look at the Traefik service in the cluster:

Traefik service in the cluster

If you look closely, you will see that the Traefik service is exposed on each node. Currently, there is no integration with Civo’s load balancers. You do get a DNS name that uses round robin over the IP addresses of the nodes. The DNS name is something like 232b548e-897f-41d3-86f6-1a2a38516a58.k8s.civo.com.

Let’s install and expose my image classifier with the following basic YAML:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nasnet-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: IPADDRESS.nip.io
    http:
      paths:
      - path: /
        backend:
          serviceName: nasnet-svc
          servicePort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: nasnet-svc
spec:
  selector:
    app: nasnet
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9090
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nasnet-app
  labels:
    app: nasnet
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nasnet
  template:
    metadata:
      labels:
        app: nasnet
    spec:
      containers:
      - name: nasnet
        image: gbaeke/nasnet
        resources:
          limits:
            cpu: "0.5"
          requests:
            cpu: "0.2"
        ports:
        - containerPort: 9090

In the above YAML, replace IPADDRESS with one of the IP addresses of your nodes. With a little help of nip.io, that name will resolve to the IP address that you specify.
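
To look up the node IP addresses, you can run:

kubectl get nodes -o wide

If a node lists, say, 91.211.152.100 as its address, the host in the ingress becomes 91.211.152.100.nip.io.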

The result:

Creepy but it works!

Conclusion

This was just a quick look at Civo’s Kubernetes service. It is easy to set up and comes with an easy-to-use marketplace to get you started quickly. They got this service up and running in a relatively short time, and I am sure it will rapidly evolve into a great contender to the other managed Kubernetes services out there.

Trying out k3sup

k3sup is a utility created by Alex Ellis to easily deploy k3s to any local or remote VM. In this post, I am giving the tool a try on a Civo cloud Ubuntu VM. You can of course pick any cloud provider you want or use a local system.

Deploying a VM on Civo Cloud

There’s not much to say here. Civo cloud is super simple to use and deploys VMs very fast. Just get an account and launch a new instance. Make sure you can access the VM over SSH. I deployed a simple Ubuntu 18.04 VM with 2 GBs of RAM:

VM deployed on Civo Cloud

Note: make sure you enable SSH via private/public key pair; use ssh-keygen to create the key pair and upload the contents of id_rsa.pub to Civo (SSH Keys section)

After deployment, check that you can access the VM with ssh chosen-user@IP-of-VM

Getting k3sup

On my Windows box, I used the Ubuntu shell to install k3sup:

curl -sLS https://get.k3sup.dev | sh 
sudo install k3sup /usr/local/bin/ 

You can now run the k3sup command as follows:

k3sup install --ip PUBLIC-IP-OF-CLOUD-VM --user root

And off it goes…

k3s installation via k3sup over SSH

At the end of the installation, you will see:

Saving file to: /home/gbaeke/kubeconfig

This means you can now use kubectl to interact with k3s. Just make sure kubectl knows where to find your kubeconfig file with (in my case in /home/gbaeke):

export KUBECONFIG=/home/gbaeke/kubeconfig

Before continuing, make sure your cloud VM allows access to TCP port 6443!
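
Depending on where the VM runs, that may be a firewall rule in your provider’s portal; on the VM itself, with ufw, it would be:

sudo ufw allow 6443/tcp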

Now you can run something like kubectl get nodes:

kubectl running in Ubuntu shell on laptop to access k3s on remote VM

Installing applications

k3sup can also install a number of applications to k3s via the k3sup app install command.

To install OpenFaas, just run k3sup app install openfaas. And off it goes….

Installing OpenFaas via k3sup

To install other applications, just use YAML files or any other method you prefer. It’s still Kubernetes! 😊

Conclusion

This was just a quick post (or note to self 😊) about k3sup, which allows you to install k3s on any VM over SSH. It really is a great, simple-to-use tool, so it comes highly recommended. Note that Civo has a k3s service as well, which is currently in beta. That service makes it easy to provision k3s from the Civo portal, similar to how you deploy AKS or GKE!