In a previous post, I talked about k3sup, a tool that makes it easy to install k3s on any system reachable over SSH. If you don’t know what k3s is: it’s a lightweight Kubernetes distribution that also runs on ARMv7 and ARM64 processors, which makes it compatible with a Raspberry Pi.
If I am not mistaken, Civo is the first cloud provider to offer a managed k3s service. Just like the other Civo services, it is very easy to use. At the time of writing, the service is in beta and you need to be accepted to participate.
Deploying the cluster
The cluster can be deployed via the portal, CLI or the REST API. Portal deployment is very simple:
set a name
set the size of the nodes
set the number of nodes
After deployment, you will see the cluster as follows:
Kubernetes on Civo comes with a marketplace of Kubernetes apps to install during or after cluster deployment. By default, Traefik is selected, but you can add other apps. I added Helm, for instance:
Getting your Kubeconfig
You can use the portal to grab the Kubeconfig file:
Then, in your shell, set the KUBECONFIG environment variable to the path where you downloaded the file. Alternatively, you can use the Civo CLI to obtain the Kubeconfig file.
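For example (the file name is whatever you downloaded; the CLI command is a sketch, assuming a cluster named mycluster):

export KUBECONFIG=~/Downloads/civo-mycluster-kubeconfig
kubectl get nodes

# or let the Civo CLI fetch and save the config for you
civo kubernetes config mycluster --save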
Deploying an application
Let’s install my image classifier app to the cluster and expose it via Traefik. Let’s look at the Traefik service in the cluster:
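The namespace the marketplace installs Traefik into may vary, so the easiest way to find the service is to list all of them:

kubectl get svc --all-namespaces | grep traefik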
If you look closely, you will see that the Traefik service is exposed on each node. Currently, there is no integration with Civo’s load balancers. You do get a DNS name that uses round robin over the IP addresses of the nodes. The DNS name is something like 232b548e-897f-41d3-86f6-1a2a38516a58.k8s.civo.com.
Let’s install and expose my image classifier with the following basic YAML:
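The original YAML is not reproduced here; below is a minimal sketch. The image name, container port and host name are assumptions, so adjust them to your own app:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: classifier
spec:
  replicas: 1
  selector:
    matchLabels:
      app: classifier
  template:
    metadata:
      labels:
        app: classifier
    spec:
      containers:
      - name: classifier
        image: gbaeke/nasnet    # hypothetical image name
        ports:
        - containerPort: 9090   # assumption
---
apiVersion: v1
kind: Service
metadata:
  name: classifier
spec:
  selector:
    app: classifier
  ports:
  - port: 80
    targetPort: 9090
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: classifier
spec:
  rules:
  - host: classifier.IPADDRESS.nip.io
    http:
      paths:
      - backend:
          serviceName: classifier
          servicePort: 80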
In the above YAML, replace IPADDRESS with one of the IP addresses of your nodes. With a little help from nip.io, that name will resolve to the IP address you specify.
This was just a quick look at Civo’s Kubernetes service. It is easy to deploy and comes with an easy-to-use marketplace to quickly get started. Civo got this service up and running in a relatively short time, and I am sure it will rapidly evolve into a real contender among the other managed Kubernetes services out there.
k3sup is a utility created by Alex Ellis to easily deploy k3s to any local or remote VM. In this post, I am giving the tool a try on a Civo cloud Ubuntu VM. You can of course pick any cloud provider you want or use a local system.
Deploying a VM on Civo Cloud
There’s not much to say here. Civo Cloud is super simple to use and deploys VMs very fast. Just get an account and launch a new instance. Make sure you can access the VM over SSH. I deployed a simple Ubuntu 18.04 VM with 2 GB of RAM:
Note: make sure you enable SSH via private/public key pair; use ssh-keygen to create the key pair and upload the contents of id_rsa.pub to Civo (SSH Keys section)
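For example:

ssh-keygen -t rsa -b 4096
cat ~/.ssh/id_rsa.pub    # paste this into the Civo portal (SSH Keys section)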
After deployment, check that you can access the VM with ssh chosen-user@IP-of-VM
On my Windows box, I used the Ubuntu shell to install k3sup:
curl -sLS https://get.k3sup.dev | sh
sudo install k3sup /usr/local/bin/
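Installing k3s on the VM is then a single command; a sketch, assuming the VM’s public IP and the SSH user from above:

k3sup install --ip IP-of-VM --user chosen-user
# k3sup writes a kubeconfig file to the working directory
export KUBECONFIG=$(pwd)/kubeconfig
kubectl get nodes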
To install OpenFaaS, just run k3sup app install openfaas. And off it goes…
To install other applications, just use YAML files or any other method you prefer. It’s still Kubernetes! 😊
This was just a quick post (or note to self 😊) about k3sup, which allows you to install k3s on any VM over SSH. It really is a great, simple-to-use tool, so it comes highly recommended. Note that Civo has a managed k3s service as well, currently in beta. That service makes it easy to provision k3s from the Civo portal, similar to how you deploy AKS or GKE!
In a previous post, we installed Weaveworks Flux. Flux synchronizes the contents of a git repository with your Kubernetes cluster. Flux can easily be installed via a Helm chart. As an example, we installed Traefik by adding the following yaml to the synced repository:
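The YAML is not shown here; a minimal sketch of such a HelmRelease (the chart version and values are illustrative):

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: traefik
  namespace: default
spec:
  releaseName: traefik
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com/
    name: traefik
    version: 1.78.4    # illustrative version
  values:
    serviceType: LoadBalancer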
It does not matter where you put this file because Flux scans the complete repository. I added the file to a folder called traefik.
If you look more closely at the YAML file, you’ll notice its kind is HelmRelease. You need an operator that can handle this type of resource: the Flux Helm Operator. In the previous post, we installed the custom resource definition and the operator manually.
Adding a custom application
Now it’s time to add our own application. You do not need to use Helm packages or the Helm operator to install applications. Regular yaml will do just fine.
The application we will deploy needs a Redis backend. Let’s deploy that first. Add the following yaml file to your repository:
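The YAML is not reproduced here; a minimal sketch of a Redis deployment and service (image tag and namespace are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:5-alpine    # image tag is an assumption
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  selector:
    app: redis
  ports:
  - port: 6379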
After committing this file, wait a moment or run fluxctl sync. When you run kubectl get pods for the default namespace, you should see the Redis pod:
Now it’s time to add the application. I will use an image based on the following code: https://github.com/gbaeke/realtime-go (the httponly branch, because master contains code to automatically request a certificate with Let’s Encrypt). I pushed the image to Docker Hub as gbaeke/fluxapp:1.0.0. Now let’s deploy the app with the following YAML:
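The YAML is not reproduced here; a minimal sketch of the deployment, service and ingress (resource names and the container port are assumptions, and how the app locates Redis is app-specific and omitted; the Ingress host matches the next paragraph):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: realtime
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: realtime
  template:
    metadata:
      labels:
        app: realtime
    spec:
      containers:
      - name: realtime
        image: gbaeke/fluxapp:1.0.0
        ports:
        - containerPort: 8080    # assumption
---
apiVersion: v1
kind: Service
metadata:
  name: realtime
  namespace: default
spec:
  selector:
    app: realtime
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: realtime
  namespace: default
spec:
  rules:
  - host: realtime.IP.xip.io
    http:
      paths:
      - backend:
          serviceName: realtime
          servicePort: 80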
In the above YAML, replace IP in the Ingress specification with the IP of the external load balancer used by your Ingress Controller. Once you add the YAML to the git repository and run fluxctl sync, the application should be deployed. You will see the following page when you browse to http://realtime.IP.xip.io:
Great, v1.0.0 of the app is deployed using the gbaeke/fluxapp:1.0.0 image. But what if I have a new version of the image and the yaml specification does not change? Read on…
Upgrading the application
If you have been following along, you can now run the following command:
fluxctl list-workloads -a
This will list all workloads on the cluster, including the ones that were not installed by Flux. If you check the list, none of the workloads are automated. When a workload is automated, Flux can automatically upgrade it when a new image appears. Let’s try to automate the fluxapp. To do so, you can either add annotations to your YAML or use fluxctl. Let’s use the YAML approach by adding the following to our deployment:
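A sketch of the annotations (the container name realtime is an assumption; the semver filter matches the 1.0.x upgrade shown below):

metadata:
  annotations:
    fluxcd.io/automated: "true"
    fluxcd.io/tag.realtime: semver:~1.0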
Note: Flux only works with immutable tags; do not use latest
After committing the file and running fluxctl sync, you can run fluxctl list-workloads -a again. The deployment should now be automated:
Now let’s see what happens when we add a new version of the image with tag 1.0.1. That image uses a different header color to show the difference. Flux monitors the image repository for changes. When it detects a new version of the image that matches the semver filter, it will modify the deployment. Let’s check with fluxctl list-workloads -a:
And here’s the new color:
But wait… what about the git repo?
With the configuration of a deploy key, Flux has access to the git repository. When a deployment is automated and the image is changed, that change is also reflected in the git repo:
In the yaml, version 1.0.1 is now used:
What if I don’t like this release? With fluxctl, you can rollback to a previous version like so:
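A sketch, assuming the workload is the realtime deployment in the default namespace:

fluxctl release --workload=default:deployment/realtime --update-image=gbaeke/fluxapp:1.0.0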
Although this works, the deployment will be updated to 1.0.1 again since it is automated. To avoid that, first lock the deployment (or workload) and then force the release of the old image:
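Again a sketch, with the same assumed workload name:

fluxctl lock --workload=default:deployment/realtime
fluxctl release --workload=default:deployment/realtime --update-image=gbaeke/fluxapp:1.0.0 --force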
In your YAML, there will be an additional annotation, fluxcd.io/locked: 'true', and the image will be set to 1.0.0.
In this post, we looked at deploying and updating an application via Flux automation. You only need a couple of annotations to make this work. This was just a simple example. For an example with dev, staging and production branches and promotion from staging to production, be sure to look at https://github.com/fluxcd/helm-operator-get-started as well.
If you have ever deployed applications to Kubernetes or other platforms, you are probably used to the following approach:
developers check in code which triggers CI (continuous integration) and eventually results in deployable artifacts
a release process deploys the artifacts to one or more environments such as a development and a production environment
In the case of Kubernetes, the artifact is usually a combination of a container image and a Helm chart. The release process then authenticates to the Kubernetes cluster and deploys the artifacts. Although this approach works, I have always found it overly complicated, with many release pipelines configured to trigger on specific conditions.
What if you could store your entire cluster configuration in a git repository as the single source of truth and use simple git operations (is there such a thing? 😁) to change your configuration? Obviously, you would need some extra tooling that synchronizes the configuration with the cluster, which is exactly what Weaveworks Flux is designed to do. Also check the Flux git repo.
In this post, we will run through a simple example to illustrate the functionality. We will do the following over two posts:
Create a git repo for our configuration
Install Flux and use the git repo as our configuration source
Install an Ingress Controller with a Helm chart
Install an application using standard YAML (including ingress definition)
Update the application automatically when a new version of the application image is available
Let’s get started!
Create a git repository
To keep things simple, make sure you have an account on GitHub and create a new repository. You can also clone my demo repository. To clone it, use the following command:
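The repository URL is not reproduced here; substitute your own fork or the demo repository (placeholder below):

git clone https://github.com/ACCOUNT/REPO.git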
Note: if you clone my repo and use it in later steps, the resources I defined will get created automatically; if you want to follow the steps, use your own empty repo
Flux needs to be installed on Kubernetes, so make sure you have a cluster at your disposal. In this post, I use Azure Kubernetes Services (AKS). Make sure kubectl points to that cluster. If you have kubectl installed, obtain the credentials to the cluster with the Azure CLI and then run kubectl get nodes or kubectl cluster-info to make sure you are connected to the right cluster.
az aks get-credentials -n CLUSTER_NAME -g RESOURCE_GROUP
It is easy to install Flux with Helm, and in this post I will use Helm v3, which is currently in beta. You will need to install Helm v3 on your system. I installed it in Windows 10’s Ubuntu shell. Use the following command to download and unpack it:
curl -sSL "https://get.helm.sh/helm-v3.0.0-beta.3-linux-amd64.tar.gz" | tar xvz
This results in a folder linux-amd64 which contains the helm executable. Make the file executable with chmod +x and copy it to your path as helmv3.
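For example (assuming /usr/local/bin is on your PATH):

chmod +x linux-amd64/helm
sudo cp linux-amd64/helm /usr/local/bin/helmv3

Next, run helmv3. You should see the help text: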
The Kubernetes package manager
Common actions for Helm:
- helm search: search for charts
- helm fetch: download a chart to your local directory to view
- helm install: upload the chart to Kubernetes
- helm list: list releases of charts
Now you are ready to install Flux. First, add the Flux Helm repository to allow helmv3 to find the chart, then install Flux with helm upgrade:
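The commands are not shown here; a sketch, where the two --set poll intervals are assumptions on my part and the repository URL is a placeholder to replace with your own:

helmv3 repo add fluxcd https://charts.fluxcd.io

helmv3 upgrade -i flux fluxcd/flux --wait \
  --set registry.pollInterval=1m \
  --set git.pollInterval=1m \
  --set git.url=git@github.com:ACCOUNT/REPO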
The above command upgrades Flux but installs it if it is missing (-i). The chart to install is fluxcd/flux. With --wait, we wait until the installation is finished. We will not go into the first two --set options for now. The last option defines the git repository Flux should use to sync the configuration to the cluster. Currently, Flux supports one repository. Because we use a public repository, Flux can easily read its contents. At times, Flux needs to update the git repository. To support that, you can add a deploy key to the repository. First, install the fluxctl tool:
curl -sL https://fluxcd.io/install | sh
Now run the following commands to obtain the public key to use as deploy key:
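For example (FLUX_FORWARD_NAMESPACE should point to the namespace Flux runs in; default is an assumption here):

export FLUX_FORWARD_NAMESPACE=default
fluxctl identity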
Copy and paste this key as a deploy key for your GitHub repo:
Phew… Flux should now be installed on your cluster. Time to install some applications to the cluster from the git repo.
Note: Flux also supports private repos; it just so happens I used a public one here
Install an Ingress Controller
Let’s try to install Traefik via its Helm chart. Since I am not using traditional CD with pipelines that run helm commands, we will need something else. Luckily, there’s a Flux Helm Operator that allows us to declaratively install Helm charts. The Helm Operator installs a Helm chart when it detects a custom resource of kind HelmRelease (apiVersion helm.fluxcd.io/v1). Let’s first create the CRD for Helm v3 and a HelmRelease for Traefik:
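The YAML is not reproduced here; the CRD definition (and the operator’s own deployment) can be copied from the deploy folder of the fluxcd/helm-operator repository, and the HelmRelease for Traefik looks similar to this sketch (chart version and values are illustrative):

apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: traefik
  namespace: default
spec:
  releaseName: traefik
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com/
    name: traefik
    version: 1.78.4    # illustrative version
  values:
    serviceType: LoadBalancer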
Just add the above YAML to the GitHub repository. I added it to the ingress folder:
If you wait a while, or run fluxctl sync, the repo gets synced and the resources created. When the helm.fluxcd.io/v1 object is created, the Helm Operator will install the chart in the default namespace. Traefik will be exposed via an Azure Load Balancer. You can check the release with the following command:
kubectl get helmreleases.helm.fluxcd.io
NAME RELEASE STATUS MESSAGE AGE
traefik traefik deployed helm install succeeded 15m
Also check that the Traefik pod is created in the default namespace (only 1 replica; the default):
kubectl get po
NAME READY STATUS RESTARTS AGE
traefik-86f4c5f9c9-gcxdb 1/1 Running 0 21m
Also check the public IP of Traefik:
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP
traefik LoadBalancer 10.0.8.59 220.127.116.11
We will later use that IP when we define the ingress for our web application.
In this post, you learned a tiny bit about GitOps with Weaveworks Flux. The concept is simple enough: store your cluster config in a git repo as the single source of truth and use git operations to initiate (or roll back) cluster operations. To start, we simply installed Traefik via the Flux Helm Operator. In a later post, we will add an application and look at image management. There’s much more you can do, so stay tuned!
With that knowledge in your bag, it would seem that a CNAME record is the way to map inity.io to somedomain.netlify.com. Sadly, that is not the case, because a CNAME cannot coexist with other records for the same name, and the root or apex domain always has existing records, such as the NS records.
If your DNS provider supports ALIAS records, you are in luck. At a high level, an ALIAS record works like a CNAME record, although there are several lower-level differences we won’t go into here.
Since I use namecheap.com and they support ALIAS records, it was easy to map inity.io to somedomain.netlify.com:
The ALIAS record only supports a TTL of 1 or 5 minutes. The host is @, which represents the root domain. Notice I also point www.inity.io to the Netlify domain with a regular CNAME.
What does dig say?
Let’s look at what dig returns for both the ALIAS and CNAME record. Here’s the dig output for ALIAS (with some lines removed):
λ geba:~ dig inity.io
;; ANSWER SECTION:
inity.io. 300 IN A 18.104.22.168
The authoritative server does all the work here and returns the IP address directly to you. That does not happen for the CNAME:
λ geba:~ dig www.inity.io
;; ANSWER SECTION:
www.inity.io. 1799 IN CNAME optimistic-panini-9caddc.netlify.com.
optimistic-panini-9caddc.netlify.com. 20 IN A 22.214.171.124
Some more work needs to be done here since you get back the CNAME record which then needs to be resolved to the IP address.
What about Azure and Front Door?
If you work with Front Door and want to map the root or apex domain to a Front Door frontend such as my.azurefd.net, the same issue arises. The Microsoft docs contain a good article explaining the concepts: https://docs.microsoft.com/en-us/azure/frontdoor/front-door-how-to-onboard-apex-domain. From that document, you will learn that Azure DNS also supports “aliases” with an easy dropdown list to select your Front Door frontend host. If you want to use SSL for the frontend host, you will need to bring your own certificate because automatic certificates are not supported with APEX domains.
Note that you do not have to use Azure DNS. An ALIAS record at NameCheap or other providers would work equally well. CloudFlare also supports APEX domains via CNAME Flattening. Just don’t use GoDaddy. 😲
Traefik’s admin site is first exposed as a ClusterIP service on port 8080. Next, an object of kind IngressRoute is defined, which is new for Traefik 2.0. You don’t need to create standard Ingress objects and configure Traefik with custom annotations; this new approach is cleaner. Of course, substitute the host with a host that points to the public IP of the load balancer, or use the IP address with the xip.io domain. If your IP were 126.96.36.199, you could use something like admin.126.96.36.199.xip.io; that name automatically resolves to the IP embedded in it.
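The YAML is not shown here; a sketch of such an IngressRoute (the service name and entry point are assumptions):

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-admin
  namespace: default
spec:
  entryPoints:
  - web
  routes:
  - match: Host(`admin.126.96.36.199.xip.io`)
    kind: Rule
    services:
    - name: traefik-admin    # ClusterIP service mentioned above; name is an assumption
      port: 8080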
Let’s see if we can reach the admin interface:
Traefik 2.0 is now installed in a basic way and working properly. We exposed the admin interface but now it is time to expose the calculator API.
Exposing the calculator API
The API is deployed as 5 pods in the add namespace:
The API is exposed as a service of type ClusterIP with only an internal Kubernetes IP. To expose it via Traefik, we create the following object in the add namespace:
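The YAML is not reproduced here; a sketch of the IngressRoute (the entry point is an assumption; the calcheader middleware is explained below):

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: add
  namespace: add
spec:
  entryPoints:
  - web
  routes:
  - match: Host(`add.184.108.40.206.xip.io`)
    kind: Rule
    middlewares:
    - name: calcheader
    services:
    - name: add-svc
      port: 80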
I am using xip.io above. Change 184.108.40.206 to the public IP of Traefik’s Azure Load Balancer. The add-svc service, which exposes the calculator API on port 80, is now published via Traefik. We can easily call the service:
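For example (the host follows the IngressRoute above; the API path depends on the app and is hypothetical here):

curl http://add.184.108.40.206.xip.io/add/10/20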
Great! But what is that calcheader middleware? Middlewares modify the requests and responses passing through Traefik 2.0. There are all sorts of middlewares, as explained here. You can set headers, configure authentication, perform rate limiting and much, much more. In this case, we create the following middleware object in the add namespace:
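A sketch of the middleware; the l5d-dst-override header is the one Linkerd documents for this purpose, and the FQDN follows from the add-svc service in the add namespace:

apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: calcheader
  namespace: add
spec:
  headers:
    customRequestHeaders:
      l5d-dst-override: add-svc.add.svc.cluster.local:80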
This middleware adds a header to the request before Traefik forwards it to the backend. The header overrides the destination and sets it to the internal DNS name of the add-svc service that exposes the calculator API. This requirement is documented by Linkerd here.
Meshing the Traefik deployment
Because we want to mesh Traefik to get Linkerd metrics and more, we need to inject the Linkerd proxy in the Traefik pods. In my case, Traefik is deployed in the default namespace so the command below can be used:
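A sketch, assuming the deployment is simply called traefik:

kubectl get deploy traefik -o yaml | linkerd inject - | kubectl apply -f -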
Make sure you run the command on a system with the linkerd executable on its path and kubectl pointed at the cluster that has Linkerd installed.
Checking the traffic in the Linkerd dashboard
With some traffic generated, this is what you should see when you check the meshed deployment that runs the calculator API (deploy/add):
If you are wondering what these services are and do, check this post. In the above diagram, we can clearly see we are receiving traffic to the calculator API from Traefik. When I click on Traefik, I see the following:
From the above, we see Traefik receives traffic via the Azure Load Balancer and that it forwards traffic to the calculator service. The live calls are coming from the admin UI which refreshes regularly.
In Grafana, we can get more information about the Traefik deployment:
This was just a brief look at both Traefik 2 and “meshing” Traefik with Linkerd. There is much more to say and I have much more to explore. Hopefully, this can get you started!