I recently gave a talk at TechTrain, a monthly event in Mechelen (Belgium), hosted by Cronos. The talk is called “GitOps with Kubernetes: a better way to deploy” and is an introduction to GitOps with Weaveworks Flux as an example.
You can find a re-recording of the presentation on YouTube:
If you have followed my blog a little, you have seen a few posts about GitOps with Flux CD. This time, I am taking a look at Argo CD which, like Flux CD, is a GitOps tool to deploy applications from manifests in a git repository.
Don’t want to read this whole thing?
There are several differences between the two tools:
At first glance, Flux appears to use a single git repo for your cluster, whereas Argo immediately introduces the concept of apps. Each app can be connected to a different git repo. However, Flux can also use multiple git repositories in the same cluster; see https://github.com/fluxcd/multi-tenancy for more information.
Flux has the concept of workloads, which can be automated. This means that image repositories are scanned for updates. When an update is available (say from tag v1.0.0 to v1.0.1), Flux will update your application based on filters you specify. As far as I can see, Argo requires you to drive the update from your CI process, which you might actually prefer.
By default, Argo deploys an administrative UI (alongside a CLI) with a full view of your deployment and its dependencies.
Argo supports RBAC and integrates with external identity providers (e.g. Azure Active Directory).
The Argo CD admin interface is shown below:
Let’s take a look at how to deploy Argo CD and the app you see above. The app is deployed using a single YAML file; nothing fancy yet, such as Kustomize or jsonnet.
The getting started guide is pretty clear, so do have a look over there as well. To install, just run (with a deployed Kubernetes cluster and kubectl pointing at the cluster):
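kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

The manifest URL above is the one from the getting started guide; check the guide for the current version. You will also want the argocd CLI, which you can download from the project’s releases page.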
Great! You are all set now to deploy an application.
Deploying an application
We will deploy an application that has a couple of dependencies. Normally, you would install those dependencies with Argo CD as well but since I am using a cluster that has these dependencies installed via Azure DevOps, I will just list what you need (Helm commands):
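# Rough sketch in Helm v3 syntax; chart names are from the stable and
# jetstack repos of that era and all values are placeholders to adapt.

# nginx ingress controller with an Azure static public IP (the "first command")
helm install nginx stable/nginx-ingress --namespace ingress \
  --set controller.service.loadBalancerIP=YOUR_STATIC_IP

# cert-manager (install its CRDs first; see the cert-manager docs)
helm install cert-manager jetstack/cert-manager --namespace cert-manager

# externaldns configured for Azure DNS (provider-specific values omitted)
helm install externaldns stable/external-dns --set provider=azure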
To know more about these dependencies and use an Azure DevOps YAML pipeline to deploy them, see this post. If you want, you can skip the externaldns installation and create a DNS record yourself that resolves to the public IP address of Nginx Ingress. If you do not want to use an Azure static IP address, you can remove the loadBalancerIP parameter from the first command.
Two YAML files that create a cluster issuer and a certificate, based on custom resource definitions (CRDs) from cert-manager
realtime.yaml: Redis deployment, Redis service (ClusterIP), realtime web app deployment (based on this), realtime web app service (ClusterIP), ingress resource for https://real.baeke.info (record automatically created by externaldns)
It’s best that you fork my repo and modify realtime.yaml’s ingress resource with your own DNS name.
Create the Argo app
Now you can create the Argo app based on my forked repo. I used the following command with my original repo:
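argocd app create realtime \
  --repo https://github.com/gbaeke/realtime-argocd.git \
  --path manifests \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default

The repository URL above is illustrative; point --repo at your fork.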
The command above creates an app called realtime based on the specified repo. The app should use the manifests folder and apply (kubectl apply) all the manifests in that folder. The manifests are deployed to the cluster that Argo CD runs in. Note that you can run Argo CD in one cluster and deploy to totally different clusters.
The above command does not configure the repository to be synced automatically, although that is an option. To sync manually, use the following command:
argocd app sync realtime
The application should now be synced and viewable in the UI:
Let’s set up this app to automatically sync with the repo (default = every 3 minutes). This can be done from both the CLI and the UI. Let’s do it from the UI. Click on the app and then click App Details. You will find a Sync Policy in the app details where you can enable auto-sync.
You can now make changes to the git repo like changing the image tag for gbaeke/fluxapp (yes, I used this image with the Flux posts as well 😊 ) to 1.0.6 and wait for the sync to happen. Or sync manually from the CLI or the UI.
This was a quick tour of Argo CD. There is much more you can do but the above should get you started quickly. I must say I quite like the solution and am eager to see what the collaboration of Flux CD, Argo CD and Amazon comes up with in the future.
Flux has a feature called manifest generation that works together with Kustomize. Instead of just picking YAML files from a git repo and applying them, customisation is performed with the kustomize build command. The resulting YAML then gets applied to your cluster.
If you don’t know how Kustomize works (without Flux), take a look at the article I wrote earlier, or at the core docs.
You need to be aware of a few things before you get started. In order for Flux to use this method, you need to turn on manifest generation. With the Flux Helm chart, just pass the following parameter:
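--set manifestGeneration=true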
In my case, I have plain YAML files without customisation in a config folder. I want the files that use customisation in a different folder, say kustomize, like so:
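repo
├── config
│   └── (plain YAML files)
└── kustomize
    ├── base
    ├── dev
    └── prod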
To pass these folders to the Helm chart, use the following parameter:
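--set git.path="config\,kustomize"

As you will see below, the kustomize part of that path will need to be more specific.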
The kustomize folder contains the following files:
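kustomize
├── base
│   ├── kustomization.yaml
│   ├── realtime-deployment.yaml
│   ├── realtime-ingress.yaml
│   ├── realtime-service.yaml
│   ├── redis-deployment.yaml
│   └── redis-service.yaml
├── dev
│   ├── .flux.yaml
│   ├── flux-patch.yaml
│   └── kustomization.yaml
└── prod
    ├── .flux.yaml
    ├── flux-patch.yaml
    └── kustomization.yaml

(flux-patch.yaml is the patch file referenced from .flux.yaml; more on that below.)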
There is nothing special about the base folder here. It is as explained in my previous post. The dev and prod folders are similar so I will focus only on dev.
The dev folder contains a .flux.yaml file, which is required by Flux. In this simple example, it contains the following:
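version: 1
patchUpdated:
  generators:
    - command: kustomize build .
  patchFile: flux-patch.yaml

The generator runs kustomize build and Flux applies the resulting YAML; patchFile points at the file that holds the environment-specific patches. My flux-patch.yaml looks roughly like this (the workload, container and secret names are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: realtime
  namespace: realtime-dev
  annotations:
    fluxcd.io/automated: "true"
    fluxcd.io/tag.realtime: semver:~1
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: realtime-ingress
  namespace: realtime-dev
spec:
  tls:
    - hosts:
        - realdev.baeke.info
      secretName: realdev-baeke-info-tls
  rules:
    - host: realdev.baeke.info
      http:
        paths:
          - backend:
              serviceName: realtime
              servicePort: 80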
Above, you see the patches for the dev environment:
the workload should be automated by Flux, installing new images based on the semantic version filter ~1
the ingress should use host realdev.baeke.info with a different name for the secret as well (the secret will be created by cert-manager)
The prod folder contains a similar configuration. Perhaps naively, I thought that specifying the kustomize folder in git.path was sufficient for Flux to scan the folders and run customisation wherever a .flux.yaml file was found. Sadly, that is not the case. ☹️ With just the kustomize folder specified, Flux finds conflicts between the base, dev and prod folders because they contain similar files. That is expected behaviour for regular YAML files but, in my opinion, should not happen in this case. There is a bit of a clunky way to make this work though. Just specify the following as git.path:
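--set git.path="config\,kustomize/dev\,kustomize/prod"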
With the above parameter, Flux will find no conflicts and will happily apply the customisations.
As a side note, you should also specify the namespace in the patch file explicitly. It is not added automatically even though kustomization.yaml contains the namespace.
Let’s look at the cluster when Flux has applied the changes.
And here is the deployed “production app”:
The way customisations are handled could be improved. It’s unwieldy to specify every “customisation” folder in the git.path parameter. Just give me a --git-kustomize-path parameter and scan the paths in that parameter for .flux.yaml files. On the other hand, maybe I am missing something here, so remarks are welcome.
When you have to deploy an application to multiple environments like dev, test and production there are many solutions available to you. You can manually deploy the app (Nooooooo! 😉), use a CI/CD system like Azure DevOps and its release pipelines (with or without Helm) or maybe even a “GitOps” approach where deployments are driven by a tool such as Flux or Argo based on a git repository.
In the latter case, you probably want to use a configuration management tool like Kustomize for environment management. Instead of explaining what it does, let’s take a look at an example. Suppose I have an app that can be deployed with the following yaml files:
redis-deployment.yaml: simple deployment of Redis
redis-service.yaml: service to connect to Redis on port 6379 (Cluster IP)
realtime-deployment.yaml: application that uses the socket.io library to display real-time updates coming from a Redis channel
realtime-service.yaml: service to connect to the socket.io application on port 80 (Cluster IP)
realtime-ingress.yaml: ingress resource that defines the hostname and TLS certificate for the socket.io application (works with nginx ingress controller)
Let’s call this collection of files the base and put them all in a folder:
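base
├── realtime-deployment.yaml
├── realtime-ingress.yaml
├── realtime-service.yaml
├── redis-deployment.yaml
└── redis-service.yaml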
Now I would like to modify these files just a bit, to install them in a dev namespace called realtime-dev. In the ingress definition I want to change the name of the host to realdev.baeke.info instead of real.baeke.info for production. We can use Kustomize to reach that goal.
In the base folder, we can add a kustomization.yaml file like so:
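apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- realtime-deployment.yaml
- realtime-ingress.yaml
- realtime-service.yaml
- redis-deployment.yaml
- redis-service.yaml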
This lists all the resources we would like to deploy.
Now we can create a folder for our patches. The patches define the changes to the base. Create a folder called dev (next to base) and add the following files:
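kustomization.yaml, namespace.yaml and an ingress patch. The kustomization.yaml looks roughly like this (the strategic-merge patch file name is illustrative):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: realtime-dev
bases:
- ../base
resources:
- namespace.yaml
patchesStrategicMerge:
- realtime-ingress.yaml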
The namespace: realtime-dev ensures that our base resource definitions are updated with that namespace. In resources, we ensure that namespace gets created. The file namespace.yaml contains the following:
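apiVersion: v1
kind: Namespace
metadata:
  name: realtime-dev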
In a previous post, we installed Weaveworks Flux. Flux synchronizes the contents of a git repository with your Kubernetes cluster. Flux can easily be installed via a Helm chart. As an example, we installed Traefik by adding the following yaml to the synced repository:
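apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: traefik
  namespace: default
spec:
  releaseName: traefik
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com/
    name: traefik
    version: 1.78.0
  values:
    serviceType: LoadBalancer
    rbac:
      enabled: true

The chart version and values are illustrative; any stable/traefik chart version of that era behaves the same way.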
It does not matter where you put this file because Flux scans the complete repository. I added the file to a folder called traefik.
If you look more closely at the YAML file, you’ll notice its kind is HelmRelease. You need an operator that can handle this type of resource: the Flux Helm Operator. In the previous post, we installed the custom resource definition and the operator manually.
Adding a custom application
Now it’s time to add our own application. You do not need to use Helm packages or the Helm operator to install applications. Regular yaml will do just fine.
The application we will deploy needs a Redis backend. Let’s deploy that first. Add the following yaml file to your repository:
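apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:5
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: redis
  ports:
    - port: 6379

The image tag and resource names above are illustrative; any recent Redis image will do.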
After committing this file, wait a moment or run fluxctl sync. When you run kubectl get pods for the default namespace, you should see the Redis pod:
Now it’s time to add the application. I will use an image, based on the following code: https://github.com/gbaeke/realtime-go (httponly branch because master contains code to automatically request a certificate with Let’s Encrypt). I pushed the image to Docker Hub as gbaeke/fluxapp:1.0.0. Now let’s deploy the app with the following yaml:
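apiVersion: apps/v1
kind: Deployment
metadata:
  name: realtime
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: realtime
  template:
    metadata:
      labels:
        app: realtime
    spec:
      containers:
        - name: realtime
          image: gbaeke/fluxapp:1.0.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: realtime
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: realtime
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: realtime-ingress
  namespace: default
spec:
  rules:
    - host: realtime.IP.xip.io
      http:
        paths:
          - backend:
              serviceName: realtime
              servicePort: 80

The container port and resource names are illustrative; check the realtime-go repo for the actual port and any environment variables the app expects (such as the Redis host).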
In the above yaml, replace IP in the Ingress specification with the IP of the external load balancer used by your Ingress Controller. Once you add the yaml to the git repository and run fluxctl sync, the application should be deployed. You see the following page when you browse to http://realtime.IP.xip.io:
Great, v1.0.0 of the app is deployed using the gbaeke/fluxapp:1.0.0 image. But what if I have a new version of the image and the yaml specification does not change? Read on…
Upgrading the application
If you have been following along, you can now run the following command:
fluxctl list-workloads -a
This will list all workloads on the cluster, including the ones that were not installed by Flux. If you check the list, none of the workloads are automated. When a workload is automated, it can automatically upgrade the application when a new image appears. Let’s try to automate the fluxapp. To do so, you can either add annotations to your yaml or use fluxctl. Let’s use the yaml approach by adding the following to our deployment:
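metadata:
  annotations:
    fluxcd.io/automated: "true"
    fluxcd.io/tag.realtime: semver:~1.0

The container name in the tag annotation (realtime here) must match the container name in your deployment spec; the semver:~1.0 filter allows automatic updates to any 1.0.x tag.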
Note: Flux only works with immutable tags; do not use latest
After committing the file and running fluxctl sync, you can run fluxctl list-workloads -a again. The deployment should now be automated:
Now let’s see what happens when we add a new version of the image with tag 1.0.1. That image uses a different header color to show the difference. Flux monitors the repository for changes. When it detects a new version of the image that matches the semver filter, it will modify the deployment. Let’s check with fluxctl list-workloads -a:
And here’s the new color:
But wait… what about the git repo?
With the configuration of a deploy key, Flux has access to the git repository. When a deployment is automated and the image is changed, that change is also reflected in the git repo:
In the yaml, version 1.0.1 is now used:
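image: gbaeke/fluxapp:1.0.1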
What if I don’t like this release? With fluxctl, you can rollback to a previous version like so:
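fluxctl release --workload=default:deployment/realtime --update-image=gbaeke/fluxapp:1.0.0

The workload name is illustrative; use fluxctl list-workloads to find the exact name.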
Although this works, the deployment will be updated to 1.0.1 again since it is automated. To avoid that, first lock the deployment (or workload) and then force the release of the old image:
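fluxctl lock --workload=default:deployment/realtime
fluxctl release --workload=default:deployment/realtime --update-image=gbaeke/fluxapp:1.0.0 --force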
In your yaml, there will be an additional annotation: fluxcd.io/locked: 'true' and the image will be set to 1.0.0.
In this post, we looked at deploying and updating an application via Flux automation. You only need a couple of annotations to make this work. This was just a simple example. For an example with dev, staging and production branches and promotion from staging to production, be sure to look at https://github.com/fluxcd/helm-operator-get-started as well.
If you have ever deployed applications to Kubernetes or other platforms, you are probably used to the following approach:
developers check in code which triggers CI (continuous integration) and eventually results in deployable artifacts
a release process deploys the artifacts to one or more environments such as a development and a production environment
In the case of Kubernetes, the artifact is usually a combination of a container image and a Helm chart. The release process then authenticates to the Kubernetes cluster and deploys the artifacts. Although this approach works, I have always found this deployment process overly complicated with many release pipelines configured to trigger on specific conditions.
What if you could store your entire cluster configuration in a git repository as the single source of truth and use simple git operations (is there such a thing? 😁) to change your configuration? Obviously, you would need some extra tooling that synchronizes the configuration with the cluster, which is exactly what Weaveworks Flux is designed to do. Also check the Flux git repo.
In this post, we will run through a simple example to illustrate the functionality. We will do the following over two posts:
Create a git repo for our configuration
Install Flux and use the git repo as our configuration source
Install an Ingress Controller with a Helm chart
Install an application using standard YAML (including ingress definition)
Update the application automatically when a new version of the application image is available
Let’s get started!
Create a git repository
To keep things simple, make sure you have an account on GitHub and create a new repository. You can also clone my demo repository. To clone it, use the following command:
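git clone https://github.com/gbaeke/flux-demo.git

The URL above is a placeholder; substitute the actual URL of the demo repository, or that of your own repo.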
Note: if you clone my repo and use it in later steps, the resources I defined will get created automatically; if you want to follow the steps, use your own empty repo
Flux needs to be installed on Kubernetes, so make sure you have a cluster at your disposal. In this post, I use Azure Kubernetes Services (AKS). Make sure kubectl points to that cluster. If you have kubectl installed, obtain the credentials to the cluster with the Azure CLI and then run kubectl get nodes or kubectl cluster-info to make sure you are connected to the right cluster.
az aks get-credentials -n CLUSTER_NAME -g RESOURCE_GROUP
It is easy to install Flux with Helm and in this post, I will use Helm v3 which is currently in beta. You will need to install Helm v3 on your system. I installed it in Windows 10’s Ubuntu shell. Use the following command to download and unpack it:
curl -sSL "https://get.helm.sh/helm-v3.0.0-beta.3-linux-amd64.tar.gz" | tar xvz
This results in a folder linux-amd64 which contains the helm executable. Make the file executable with chmod +x and copy it to your path as helmv3. Next, run helmv3. You should see the help text:
The Kubernetes package manager
Common actions for Helm:
- helm search: search for charts
- helm fetch: download a chart to your local directory to view
- helm install: upload the chart to Kubernetes
- helm list: list releases of charts
Now you are ready to install Flux. First, add the Flux Helm repository to allow helmv3 to find the chart, then install the chart:
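kubectl create ns flux

helmv3 repo add fluxcd https://charts.fluxcd.io

helmv3 upgrade -i flux fluxcd/flux --wait \
  --namespace flux \
  --set registry.pollInterval=1m \
  --set git.pollInterval=1m \
  --set git.url=git@github.com:USER/REPO

Replace USER/REPO with your own repository; the two pollInterval values shown are merely illustrative.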
The above command upgrades Flux but installs it if it is missing (-i). The chart to install is fluxcd/flux. With --wait, we wait until the installation is finished. We will not go into the first two --set options for now. The last option defines the git repository Flux should use to sync the configuration to the cluster. Currently, Flux supports one repository per installation. Because we use a public repository, Flux can easily read its contents. At times, Flux needs to update the git repository. To support that, you can add a deploy key to the repository. First, install the fluxctl tool:
curl -sL https://fluxcd.io/install | sh
Now run the following commands to obtain the public key to use as deploy key:
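Assuming Flux runs in the flux namespace:

export FLUX_FORWARD_NAMESPACE=flux
fluxctl identity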
Copy and paste this key as a deploy key for your GitHub repo:
Phew… Flux should now be installed on your cluster. Time to install some applications to the cluster from the git repo.
Note: Flux also supports private repos; it just so happens I used a public one here
Install an Ingress Controller
Let’s try to install Traefik via its Helm chart. Since I am not using traditional CD with pipelines that run helm commands, we will need something else. Luckily, there’s a Flux Helm Operator that allows us to declaratively install Helm charts. The Helm Operator installs a Helm chart when it detects a custom resource of kind HelmRelease (API version helm.fluxcd.io/v1). Let’s first create the CRD for Helm v3:
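kubectl apply -f https://raw.githubusercontent.com/fluxcd/helm-operator/master/deploy/crds.yaml

The path above is from the fluxcd/helm-operator repo of that era and may have moved since; you can also download the file and commit it so Flux applies it for you. The operator itself is deployed from the manifests in the same repo. With the CRD in place, Traefik can be described with a HelmRelease along these lines (chart version and values illustrative):

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: traefik
  namespace: default
spec:
  releaseName: traefik
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com/
    name: traefik
    version: 1.78.0
  values:
    serviceType: LoadBalancer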
Just add the above YAML to the GitHub repository. I added it to the ingress folder:
If you wait a while, or run fluxctl sync, the repo gets synced and the resources created. When the helm.fluxcd.io/v1 object is created, the Helm Operator will install the chart in the default namespace. Traefik will be exposed via an Azure Load Balancer. You can check the release with the following command:
kubectl get helmreleases.helm.fluxcd.io
NAME RELEASE STATUS MESSAGE AGE
traefik traefik deployed helm install succeeded 15m
Also check that the Traefik pod is created in the default namespace (only 1 replica; the default):
kubectl get po
NAME READY STATUS RESTARTS AGE
traefik-86f4c5f9c9-gcxdb 1/1 Running 0 21m
Also check the public IP of Traefik:
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP
traefik LoadBalancer 10.0.8.59 220.127.116.11
We will later use that IP when we define the ingress for our web application.
In this post, you learned a tiny bit about GitOps with Weaveworks Flux. The concept is simple enough: store your cluster config in a git repo as the single source of truth and use git operations to initiate (or roll back) cluster operations. To start, we simply installed Traefik via the Flux Helm Operator. In a later post, we will add an application and look at image management. There’s much more you can do, so stay tuned!