Flux has a feature called manifest generation that works together with Kustomize. Instead of just picking YAML files from a git repo and applying them, customisation is performed with the kustomize build command. The resulting YAML then gets applied to your cluster.
If you don’t know how customisation works (without Flux), take a look at the article I wrote earlier. Or look at the core docs.
You need to be aware of a few things before you get started. In order for Flux to use this method, you need to turn on manifest generation. With the Flux Helm chart, just pass the following parameter:
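At the time of writing, the fluxcd/flux Helm chart exposes this as the manifestGeneration value (the release name and namespace below are just examples):

helm upgrade -i flux fluxcd/flux \
  --namespace flux \
  --set manifestGeneration=true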
In my case, I have plain YAML files without customisation in a config folder. I want the files that use customisation in a different folder, say kustomize, like so:
To pass these folders to the Helm chart, use the following parameter:
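Since git.path takes a comma-separated list of paths, and commas need escaping when you use --set, that boils down to something like this:

--set git.path="config\,kustomize"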
The kustomize folder contains the following files:
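Roughly this layout (reconstructed from the description; the patch file names are examples):

kustomize
├── base
│   ├── kustomization.yaml
│   └── ... (the plain resource definitions)
├── dev
│   ├── .flux.yaml
│   ├── flux-patch.yaml
│   └── kustomization.yaml
└── prod
    ├── .flux.yaml
    ├── flux-patch.yaml
    └── kustomization.yaml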
There is nothing special about the base folder here. It is as explained in my previous post. The dev and prod folders are similar so I will focus only on dev.
The dev folder contains a .flux.yaml file, which is required by Flux. In this simple example, it contains the following:
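A typical .flux.yaml for a kustomize setup looks like this (reconstructed; the patch file name is just an example):

version: 1
patchUpdated:
  generators:
    - command: kustomize build .
  patchFile: flux-patch.yaml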
The patches for the dev environment do the following (a sketch of such a patch file follows the list):
the workload should be automated by Flux, installing new images based on the semantic version filter ~1
the ingress should use host realdev.baeke.info with a different name for the secret as well (the secret will be created by cert-manager)
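A sketch of what such a patch file could contain; the workload, ingress and secret names are made up, and the annotations assume the Flux v1 automation syntax:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: realtime                # hypothetical workload name
  namespace: realtime-dev       # namespace must be set explicitly (see below)
  annotations:
    fluxcd.io/automated: "true"
    fluxcd.io/tag.realtime: semver:~1
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: realtime-ingress        # hypothetical ingress name
  namespace: realtime-dev
spec:
  tls:
    - hosts:
        - realdev.baeke.info
      secretName: realdev-baeke-info-tls   # created by cert-manager
  rules:
    - host: realdev.baeke.info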
The prod folder contains a similar configuration. Perhaps naively, I thought that specifying the kustomize folder in git.path was sufficient for Flux to scan the folders and run customisation wherever a .flux.yaml file was found. Sadly, that is not the case. ☹️ With just the kustomize folder specified, Flux finds conflicts between the base, dev and prod folders because they contain similar files. That is expected behaviour for regular YAML files but, in my opinion, should not happen in this case. There is a bit of a clunky way to make this work though. Just specify the following as git.path:
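In other words, list every folder that contains a .flux.yaml, next to the plain config folder, along these lines:

--set git.path="config\,kustomize/dev\,kustomize/prod"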
With the above parameter, Flux will find no conflicts and will happily apply the customisations.
As a side note, you should also specify the namespace in the patch file explicitly. It is not added automatically even though kustomization.yaml contains the namespace.
Let’s look at the cluster when Flux has applied the changes.
And here is the deployed “production app”:
The way customisations are handled could be improved. It’s unwieldy to specify every “customisation” folder in the git.path parameter. Just give me a --git-kustomize-path parameter and scan the paths in that parameter for .flux.yaml files. On the other hand, maybe I am missing something here so remarks are welcome.
When you have to deploy an application to multiple environments like dev, test and production there are many solutions available to you. You can manually deploy the app (Nooooooo! 😉), use a CI/CD system like Azure DevOps and its release pipelines (with or without Helm) or maybe even a “GitOps” approach where deployments are driven by a tool such as Flux or Argo based on a git repository.
In the latter case, you probably want to use a configuration management tool like Kustomize for environment management. Instead of explaining what it does, let’s take a look at an example. Suppose I have an app that can be deployed with the following yaml files:
redis-deployment.yaml: simple deployment of Redis
redis-service.yaml: service to connect to Redis on port 6379 (Cluster IP)
realtime-deployment.yaml: application that uses the socket.io library to display real-time updates coming from a Redis channel
realtime-service.yaml: service to connect to the socket.io application on port 80 (Cluster IP)
realtime-ingress.yaml: ingress resource that defines the hostname and TLS certificate for the socket.io application (works with nginx ingress controller)
Let’s call this collection of files the base and put them all in a folder:
Now I would like to modify these files just a bit, to install them in a dev namespace called realtime-dev. In the ingress definition I want to change the name of the host to realdev.baeke.info instead of real.baeke.info for production. We can use Kustomize to reach that goal.
In the base folder, we can add a kustomization.yaml file like so:
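A minimal version (reconstructed here) simply lists the resource files from above:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - redis-deployment.yaml
  - redis-service.yaml
  - realtime-deployment.yaml
  - realtime-service.yaml
  - realtime-ingress.yaml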
This lists all the resources we would like to deploy.
Now we can create a folder for our patches. The patches define the changes to the base. Create a folder called dev (next to base). We will add the following files (one file blurred because it’s not relevant to this post):
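The important one is the kustomization.yaml for dev; a sketch (the patch file name is an example):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: realtime-dev
bases:
  - ../base
resources:
  - namespace.yaml
patchesStrategicMerge:
  - realtime-ingress-patch.yaml   # changes the host to realdev.baeke.info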
The namespace: realtime-dev ensures that our base resource definitions are updated with that namespace. In resources, we ensure that namespace gets created. The file namespace.yaml contains the following:
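In essence, just the namespace definition:

apiVersion: v1
kind: Namespace
metadata:
  name: realtime-dev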
If you do any sort of development, you often have to deal with secrets. There are many ways to handle them; one of them is retrieving the secrets from a secure system in your own code. When your application runs on Kubernetes and your code (or 3rd party code) cannot be configured to retrieve the secrets directly, you have several options. This post looks at one such solution: Azure Key Vault to Kubernetes from Sparebanken Vest, Norway.
In short, the solution connects to Azure Key Vault and does one of two things:
Create a regular Kubernetes secret with the controller
Inject the secret into your pods as environment variables with the Env Injector
Because the YAML above is deployed with Flux from a git repo, we need to get the secrets from an external system. That external system, in this case, is Azure Key Vault.
To make this work, we first need to install the controller that makes this happen. This is very easy to do with the Helm chart. By default, this Helm chart will work well on Azure Kubernetes Service as long as you give the AKS service principal read access to Key Vault:
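A sketch of those two steps; the Helm repo URL and chart name are taken from the project's documentation at the time, so double-check them, and the vault name, service principal and namespace are placeholders:

# give the AKS service principal read access to secrets in the vault
az keyvault set-policy --name <vault-name> --spn <aks-sp-appid> --secret-permissions get

# install the controller (namespace must exist)
helm repo add spv-charts http://charts.spvapi.no
helm repo update
helm install azure-key-vault-controller spv-charts/azure-key-vault-controller --namespace akv2k8s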
Next, define the secrets in Key Vault:
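For example, with the Azure CLI (vault and secret names are placeholders):

az keyvault secret set --vault-name <vault-name> --name realtime-redis-pass --value "<the-password>"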
With the access policies in place and the secrets defined in Key Vault, the controller installed by the Helm chart can do its work with the following YAML:
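A sketch of two such objects; the apiVersion depends on the controller version you installed, and the vault, secret and namespace names are made up:

apiVersion: spv.no/v1
kind: AzureKeyVaultSecret
metadata:
  name: redis-pass
  namespace: realtime-dev
spec:
  vault:
    name: <vault-name>               # the Azure Key Vault
    object:
      name: realtime-redis-pass      # secret in Key Vault
      type: secret
  output:
    secret:
      name: redis-pass               # Kubernetes secret to create
      dataKey: password
---
apiVersion: spv.no/v1
kind: AzureKeyVaultSecret
metadata:
  name: api-key
  namespace: realtime-dev
spec:
  vault:
    name: <vault-name>
    object:
      name: realtime-api-key
      type: secret
  output:
    secret:
      name: api-key
      dataKey: key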
The above YAML defines two objects of kind AzureKeyVaultSecret. In each object we specify the Key Vault secret to read (vault) and the Kubernetes secret to create (output). The above YAML results in two Kubernetes secrets:
When you look inside such a secret, you will see:
To double check the secret, just do echo RW5K… | base64 -d to see the decoded secret and that it matches the secret stored in Key Vault. You can now reference the secret with ValueFrom as shown earlier in this post.
If you want to turn Azure Key Vault secrets into regular Kubernetes secrets for use in your manifests, give the solution from Sparebanken Vest a go. It is very easy to use. If you do not want regular Kubernetes secrets, opt for the Env Injector instead, which injects the environment variables directly in your pod.
I have always wanted to create a Kubernetes operator with the operator framework and tried to give that a go on my Windows 10 system. Note that the emphasis is on creating an operator, not necessarily writing a useful one 😁. All I am doing is using the boilerplate that is generated by the framework. If you have never even seen how this is done, then this post is for you. 👍
An operator is an application-specific controller. A controller is a piece of software that implements a control loop, watching the state of the Kubernetes cluster via the API. It makes changes to the state to drive it towards the desired state.
An operator uses Kubernetes to create and manage complex applications. Many operators can be found here: https://operatorhub.io/. The Cassandra operator, for instance, embeds domain-specific knowledge about how to deploy and configure this database. That’s great because it means some of the burden is shifted from you to the operator.
I installed the Operator SDK CLI from the GitHub releases in WSL, Windows Subsystem for Linux. I am using WSL 1, not WSL 2 as I am not running a Windows Insiders release. The commands to run:
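Roughly the following; the version is only an example, pick whatever release you need:

RELEASE=v0.15.1
curl -LO https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE}/operator-sdk-${RELEASE}-x86_64-linux-gnu
chmod +x operator-sdk-${RELEASE}-x86_64-linux-gnu
sudo mv operator-sdk-${RELEASE}-x86_64-linux-gnu /usr/local/bin/operator-sdk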
You should now be able to run operator-sdk in WSL 1.
Creating an operator
In WSL, you should have installed Go. I am using version 1.13.5. Although not required, I used my Go path on Windows to generate the operator and not the %GOPATH set in WSL. My working directory was:
To create the operator, I ran the following commands (one line):
operator-sdk new fun-operator --repo github.com/baeke.info/fun-operator
This creates a folder, fun-operator, under baeke.info and sets up the project:
Before continuing, cd into fun-operator and run go mod tidy. Now we can run the following command:
operator-sdk add api --api-version=fun.baeke.info/v1alpha1 --kind FunOp
This creates a new CRD (Custom Resource Definition) API called FunOp. The API version is fun.baeke.info/v1alpha1, which you choose yourself. With the above in place, you can later create custom resources of kind FunOp that the operator acts upon (an example follows in the testing section).
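The controller scaffolding itself is generated with a second command (the standard add controller command in the v0.x SDK):

operator-sdk add controller --api-version=fun.baeke.info/v1alpha1 --kind=FunOp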
The above will generate a file, funop_controller.go, that contains some boilerplate code that creates a busybox pod. The Reconcile function is responsible for doing this work:
As stated above, I will just use the boilerplate code and build the project:
operator-sdk build gbaeke/fun-operator
In WSL 1, you cannot run Docker so the above command will build the operator from the Go code but fail while building the container image. Can’t wait for WSL 2! The build creates the following artifact:
The supplied Dockerfile can be used to build the container image on Windows. Copy the Dockerfile from the build folder to the root of the operator project (in my case C:\Users\geert\go\src\github.com\baeke.info\fun-operator) and run docker build and push:
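For example, using the image name from the build command above:

docker build -t gbaeke/fun-operator .
docker push gbaeke/fun-operator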
The project folder structure contains a bunch of yaml in the deploy folder:
The service account, role and role binding make sure your code can create (or delete/update) resources in the cluster. The operator.yaml actually deploys the operator on your cluster. You just need to update the container spec with the name of your image (here gbaeke/fun-operator).
Before you deploy the operator, make sure you deploy the CRD manifest (here fun.baeke.info_funops_crd.yaml).
As always, just use kubectl apply -f with the above YAML files.
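Something along these lines, assuming the default file names generated by the SDK:

kubectl apply -f deploy/crds/fun.baeke.info_funops_crd.yaml
kubectl apply -f deploy/service_account.yaml
kubectl apply -f deploy/role.yaml
kubectl apply -f deploy/role_binding.yaml
kubectl apply -f deploy/operator.yaml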
Testing the operator
With the operator deployed, create a resource based on the CRD. For instance:
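Something like this; the name is arbitrary and the boilerplate controller only cares about the resource existing (it creates a busybox pod for it):

apiVersion: fun.baeke.info/v1alpha1
kind: FunOp
metadata:
  name: example-funop
spec: {}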
In a previous post, we built a pipeline to deploy AKS using Azure DevOps. Because it can take a while to deploy, it can be handy to start the deployment at any time without having to log on to Azure DevOps. There are many ways to achieve this, but one of the easiest is Power Automate.
Microsoft have made it easy to create such a flow because they support Azure DevOps out of the box. The flow looks like this:
The flow uses a manual trigger which allows you to start the flow from the iOS app using a button:
In the previous post, I deployed AKS, Nginx, External DNS, Helm Operator and Flux with a YAML pipeline in Azure DevOps. Flux got linked to a git repo that contains a bunch of yaml files that deploy applications to the cluster but also configures Azure Monitor. Flux essentially synchronizes your cluster with the configuration in the git repository.
In production, it is not a good idea to simply drop in some yaml and let Flux do its job. Similar to traditional software development, you want to run some tests before you deploy. For Kubernetes yaml files, kubeval is a tool that can run those tests.
I refactored the git repository to have all yaml files in a config folder. To check all yaml files in that folder, the following command can be used:
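Reconstructed from the options described below, that command looks like this:

kubeval -d config --strict --ignore-missing-schemas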
With -d you specify the folder (and all its subfolders) where kubeval should look for yaml files. The --strict option checks for properties in your yaml file that are not part of the official schema. If you know you need those, you can leave out --strict. With --ignore-missing-schemas, kubeval will ignore yaml files that use custom schemas not in the Kubernetes OpenAPI spec. In my case for instance, the yaml file that deploys a Helm chart (of kind HelmRelease) is such a file. You can also instruct kubeval to ignore specific “kinds” with --skip-kinds. Here’s the result of running the command:
Using a GitHub action
To automate the testing of your files, you can use any CI system like Azure DevOps, CircleCI, etc… In my case, I decided to use a GitHub action. See the getting started for more information about the basics of GitHub Actions. The action I created is easy (hey, it’s my first time using Actions 😊):
An action is defined in yaml 😉 and consists of jobs and steps, similarly to Azure DevOps and the likes. The action is run on Ubuntu (hosted by GitHub) and uses an action from the marketplace called Kubernetes toolset. You can easily search for actions in the editor:
The first step uses an action to check out your code. Indeed, you need to do that explicitly. Then we use the Kubernetes Toolset to give us access to all kinds of Kubernetes-related tools such as kubectl and kubeval. The toolset is just a container, which you’ll see getting pulled at runtime. After that, we simply run kubeval in the container, which has the working directory (containing your checked-out code) mounted.
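Not the exact workflow from the post (which uses the marketplace action), but a comparable sketch that checks out the code and runs kubeval directly on the runner; the download URL and asset name are assumptions, so check the kubeval releases page:

name: validate-config
on: [push, pull_request]

jobs:
  kubeval:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run kubeval
        run: |
          curl -sL https://github.com/instrumenta/kubeval/releases/latest/download/kubeval-linux-amd64.tar.gz | tar xz
          ./kubeval -d config --strict --ignore-missing-schemas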
In the repository settings, I added a branch protection rule that requires a pull request review before merging plus a status check that must pass (the action):
The pull request below shows a check that did not pass, a violation of the --strict setting in error.yaml:
There are many other tools and techniques that can be used to validate your configuration but this should get you started with some simple checks on yaml files.
As a last note, know that kubeval generates schemas from the Kubernetes OpenAPI specs. You can set the version of Kubernetes with the -v option.
A while ago, I blogged about an Azure YAML pipeline to deploy AKS together with Traefik. As a variation on that theme, this post talks about deploying AKS together with Nginx, External DNS, a Helm Operator and Flux CD. I blogged about Flux before if you want to know what it does.
Let’s break the pipeline down a little. In what follows, replace AzureMPN with a reference to your own subscription. The first two tasks, AKS deployment and IP address deployment are ARM templates that deploy these resources in Azure. Nothing too special there. Note that the AKS cluster is one with default networking, no Azure AD integration and without VMSS (so no multiple node pools either).
Note: I modified the pipeline to deploy a VMSS-based cluster with a standard load balancer, which is recommended instead of a cluster based on an availability set with a basic load balancer.
The third task takes the output of the IP address deployment and parses out the IP address using jq (last echo statement on one line):
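A sketch of such a script step; it assumes the ARM task exposes its outputs in a pipeline variable (here called armOutputs) and that the template has an output named ipaddress:

# parse the IP address from the ARM deployment outputs and expose it as a pipeline variable
IP=$(echo '$(armOutputs)' | jq -r '.ipaddress.value')
echo "##vso[task.setvariable variable=ipaddress]$IP"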
For External DNS to work, I found I had to set controller.publishService.enabled=true. As you can see, the Nginx service is configured to use the IP we created earlier. Azure will create a load balancer with a front end IP configuration that uses this address. This all happens automatically.
Note: controller.metrics.enabled enables a Prometheus scraping endpoint; that is not discussed further in this blog
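The Helm task boils down to something like this (assuming the stable/nginx-ingress chart; the IP comes from the pipeline variable set earlier):

helm upgrade -i nginx stable/nginx-ingress --namespace ingress \
  --set controller.service.loadBalancerIP=$(ipaddress) \
  --set controller.publishService.enabled=true \
  --set controller.metrics.enabled=true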
External DNS can automatically add DNS records for ingresses and services you add to Kubernetes. For instance, if I create an ingress for test.baeke.info, External DNS can create this record in the baeke.info zone and use the IP address of the Ingress Controller (nginx here). Installation is pretty straightforward but you need to provide credentials to your DNS provider. In my case, I use CloudFlare. Many others are available. Here is the task:
On CloudFlare, I created a token that has the required access rights to my zone (read, edit). I provide that token to the chart via the CFAPIToken variable defined as a secret on the pipeline. The valueFile looks like this:
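Along these lines; a sketch that assumes the stable/external-dns chart, with the token itself passed in separately from the pipeline secret (for example via --set cloudflare.apiToken=$(CFAPIToken)):

provider: cloudflare
logLevel: debug
interval: 1m
domainFilters:
  - baeke.info
cloudflare:
  proxied: false
txtOwnerId: external-dns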
In the beginning, it’s best to set the logLevel to debug in case things go wrong. With interval 1m, External DNS checks for ingresses and services every minute and syncs with your DNS zone. Note that External DNS only touches the records it created. It does so by creating TXT records that mark External DNS as the owner of each record.
With External DNS in place, you just need to create an ingress like below to have the A record real.baeke.info created:
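For example (the namespace, service name and networking API version are assumptions):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: realtime-ingress
  namespace: realtime
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: real.baeke.info
      http:
        paths:
          - backend:
              serviceName: realtime
              servicePort: 80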
The Helm operator task installs the latest version of the operator at the time of this writing (via image.repository and image.tag) and also configures it for Helm v3. With the operator installed, you can install a Helm chart by submitting a HelmRelease like below:
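A minimal HelmRelease sketch for the Flux Helm operator (chart, names and version are just examples):

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: redis
  namespace: redis
spec:
  releaseName: redis
  chart:
    repository: https://charts.bitnami.com/bitnami
    name: redis
    version: 10.5.7   # example version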
The Flux task uses a gitURL variable that should be set to a git repo that contains your cluster configuration, for instance gbaeke/demo-clu-flux. Flux will check the repo for changes every minute. Note that we are using a public repo here. Private repos and systems other than GitHub are supported.
Adds a simple app that uses a Go socket.io implementation to provide realtime updates based on Redis channel content; this app is published via nginx and real.baeke.info is created in DNS (by External DNS)
Adds a ConfigMap that is used to configure Azure Monitor to enable Prometheus endpoint scraping (to show this can be used for any object you need to add to Kubernetes)
Note that the ingress of the Go app has an annotation (in realtime.yaml, in the git repo) to issue a certificate via cert-manager. If you want to make that work, add an extra task to the pipeline that installs cert-manager:
You will also need to create another namespace, cert-manager, just like we created the fluxcd namespace.
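One simple way to do that (not necessarily how the pipeline task does it) is to create the namespace and apply the static manifest; the version is an example:

kubectl create namespace cert-manager
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.13.1/cert-manager.yaml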
In order to make the above work, you will need Issuers or ClusterIssuers. The repo used by Flux CD contains two ClusterIssuers, one for Let’s Encrypt staging and one for production. The ingress resource uses the production issuer due to the following annotation:
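That annotation is the standard cert-manager one; the issuer name is whatever you called the production ClusterIssuer:

annotations:
  cert-manager.io/cluster-issuer: letsencrypt-prod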
Here’s a quick overview of the steps you need to take to put Front Door in front of an Azure Web App. In this case, the web app runs a WordPress site.
Step 1: DNS
Suppose you deployed the Web App and its name is gebawptest.azurewebsites.net and you want to reach the site via wp.baeke.info. Traffic will flow like this:
user types wp.baeke.info ---CNAME to xyz.azurefd.net--> Front Door --- connects to gebawptest.azurewebsites.net using wp.baeke.info host header
It’s clear that later, in Front Door, you will have to specify the host header (wp.baeke.info in this case). More on that later…
If you have worked with Azure Web App before, you probably know you need to configure the host header sent by the browser as a custom domain on the web app. Something like this:
In this case, we do not want to resolve wp.baeke.info to the web app but to Front Door. To make the custom domain assignment work (because the web app will verify the custom name), add the following TXT record to DNS:
TXT awverify.wp gebawptest.azurewebsites.net
For example in CloudFlare:
With the above TXT record, I could easily add wp.baeke.info as a custom domain to the gebawptest.azurewebsites.net web app.
Note: wp.baeke.info is a CNAME to your Front Door domain (see below)
Step 2: Front Door
My Front Door designer looks like this:
When you create a Front Door, you need to give it a name. In my case that is gebafd.azurefd.net. With wp.baeke.info as a CNAME for gebafd.azurefd.net, you can easily add wp.baeke.info as an additional Frontend host.
The backend pool is the Azure Web App. It’s configured as follows:
You should connect to the web app using its original name but send wp.baeke.info as the host header. This allows Front Door to connect to the web app correctly.
The last part of the Front Door config is a simple rule that connects the frontend wp.baeke.info to the backend pool using HTTP only.
Step 3: WordPress config
With the default Azure WordPress templates, you do not need to modify anything because wp-config.php contains the following settings:
In a previous post I talked about k3sup, a tool to easily install k3s on any system available over SSH. If you don’t know what k3s is, it’s a lightweight version of Kubernetes. It also runs on ARMv7 and ARM64 processors. That means it’s also compatible with a Raspberry Pi.
If I am not mistaken, Civo is the first cloud provider that offers a managed k3s service. Just like the other Civo services it is very easy to use. At this point in time, the service is in beta and you need to be accepted to participate.
Deploying the cluster
The cluster can be deployed via the portal, CLI or the REST API. Portal deployment is very simple:
set a name
set the size of the nodes
set the number of nodes
After deployment, you will see the cluster as follows:
Kubernetes on Civo comes with a marketplace of Kubernetes apps to install during or after cluster deployment. By default, Traefik is selected but you can add other apps. I added Helm for instance:
Getting your Kubeconfig
You can use the portal to grab the Kubeconfig file:
Then, in your shell, set the KUBECONFIG environment variable to the path where you downloaded the file. Alternatively, you can use the Civo CLI to obtain the Kubeconfig file.
Deploying an application
Let’s install my image classifier app to the cluster and expose it via Traefik. Let’s look at the Traefik service in the cluster:
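k3s puts Traefik in the kube-system namespace, so:

kubectl get svc traefik -n kube-system -o wide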
If you look closely, you will see that the Traefik service is exposed on each node. Currently, there is no integration with Civo’s load balancers. You do get a DNS name that uses round robin over the IP addresses of the nodes. The DNS name is something like 232b548e-897f-41d3-86f6-1a2a38516a58.k8s.civo.com.
Let’s install and expose my image classifier with the following basic YAML:
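A sketch of that YAML; the image name is hypothetical and IPADDRESS is the placeholder discussed below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: classifier
spec:
  replicas: 1
  selector:
    matchLabels:
      app: classifier
  template:
    metadata:
      labels:
        app: classifier
    spec:
      containers:
        - name: classifier
          image: gbaeke/image-classifier   # hypothetical image name
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: classifier
spec:
  selector:
    app: classifier
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: classifier
spec:
  rules:
    - host: classifier.IPADDRESS.nip.io
      http:
        paths:
          - backend:
              serviceName: classifier
              servicePort: 80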
In the above YAML, replace IPADDRESS with one of the IP addresses of your nodes. With a little help of nip.io, that name will resolve to the IP address that you specify.
This was just a quick look at Civo’s Kubernetes service. It is easy to install and comes with an easy-to-use marketplace to quickly get started. They got this up and running in a relatively short time, and I am sure it will rapidly evolve into a great contender to the other managed Kubernetes services out there.
k3sup is a utility created by Alex Ellis to easily deploy k3s to any local or remote VM. In this post, I am giving the tool a try on a Civo cloud Ubuntu VM. You can of course pick any cloud provider you want or use a local system.
Deploying a VM on Civo Cloud
There’s not much to say here. Civo cloud is super simple to use and deploys VMs very fast. Just get an account and launch a new instance. Make sure you can access the VM over SSH. I deployed a simple Ubuntu 18.04 VM with 2 GB of RAM:
Note: make sure you enable SSH via private/public key pair; use ssh-keygen to create the key pair and upload the contents of id_rsa.pub to Civo (SSH Keys section)
After deployment, check that you can access the VM with ssh chosen-user@IP-of-VM
On my Windows box, I used the Ubuntu shell to install k3sup:
curl -sLS https://get.k3sup.dev | sh
sudo install k3sup /usr/local/bin/
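With k3sup on the path, deploying k3s to the Civo VM is a single command (IP and user are placeholders); k3sup drops a kubeconfig file in the current directory:

k3sup install --ip <IP-of-VM> --user <chosen-user> --ssh-key ~/.ssh/id_rsa
export KUBECONFIG=$(pwd)/kubeconfig
kubectl get nodes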
To install OpenFaas, just run k3sup app install openfaas. And off it goes….
To install other applications, just use YAML files or any other method you prefer. It’s still Kubernetes! 😊
This was just a quick post (or note to self 😊) about k3sup, which allows you to install k3s on any VM over SSH. It really is a great and simple-to-use tool, so it comes highly recommended. Note that Civo has a k3s service as well, which is currently in beta. That service makes it easy to provision k3s from the Civo portal, similar to how you deploy AKS or GKE!