In my previous post, I wrote about App Services with Private Link and used Azure Front Door to publish the web app. Azure Front Door Premium (in preview) can create a Private Endpoint and link it to your web app via Azure Private Link. When that happens, you need to approve the pending connection in Private Link Center.
The pending connection would be shown here, ready for approval
Although this is easy to do, you might want to automate this approval. Automation is possible via a REST API but it is easier via Azure CLI.
To do so, first list the private endpoint connections of your resource, in my case that is a web app:
az network private-endpoint-connection list --id /subscriptions/SUBID/resourceGroups/RGNAME/providers/Microsoft.Web/sites/APPSERVICENAME
The above command will return all private endpoint connections of the resource. For each connection, you get the following information:
{
  "id": "PE CONNECTION ID",
  "location": "East US",
  "name": "NAME",
  "properties": {
    "ipAddresses": [],
    "privateEndpoint": {
      "id": "PE ID",
      "resourceGroup": "RESOURCE GROUP NAME OF PE"
    },
    "privateLinkServiceConnectionState": {
      "actionsRequired": "None",
      "description": "Please approve this connection.",
      "status": "Pending"
    },
    "provisioningState": "Pending"
  },
  "resourceGroup": "RESOURCE GROUP NAME OF YOUR RESOURCE",
  "type": "YOUR RESOURCE TYPE"
}
To approve the above connection, use the following command:
az network private-endpoint-connection approve --id PE CONNECTION ID --description "Approved"
The --id in the approve command refers to the private endpoint connection ID, which looks like this for a web app:
/subscriptions/YOUR SUB ID/resourceGroups/YOUR RESOURCE GROUP/providers/Microsoft.Web/sites/YOUR APP SERVICE NAME/privateEndpointConnections/YOUR PRIVATE ENDPOINT CONNECTION NAME
After running the above command, the connection should show as approved:
Approved private endpoint connection
When you automate this in a pipeline, you can first list the private endpoint connections of your resource and filter on provisioningState="Pending" to find the ones you need to approve.
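A rough sketch of such a step, reusing the resource ID from above:
# list pending private endpoint connections of the web app and approve them
PENDING=$(az network private-endpoint-connection list \
  --id /subscriptions/SUBID/resourceGroups/RGNAME/providers/Microsoft.Web/sites/APPSERVICENAME \
  --query "[?properties.provisioningState=='Pending'].id" -o tsv)
for ID in $PENDING; do
  az network private-endpoint-connection approve --id $ID --description "Approved"
done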
If you want to jump straight to the video, here it is:
In the rest of this blog post, I provide some more background information on the different pieces of the solution.
Azure App Service
Azure App Service is a great way to host web applications and APIs on Azure. It’s PaaS (platform as a service), so you do not have to deal with the underlying Windows or Linux servers; they are managed by the platform. I often see AKS (Azure Kubernetes Service) implementations that host just a couple of web APIs and web apps. In most cases, that is overkill: you still have to deal with Kubernetes upgrades, node patching or image replacements, draining and rebooting the nodes, etc… And that is before we even get to controlling ingress and egress traffic. Even if you standardize on packaging your app in a container, Azure App Service will gladly accept the container and serve it for you.
By default, Azure App Service gives you a public IP address and FQDN (Fully Qualified Domain Name) to reach your app securely over the Internet. The default name ends with azurewebsites.net but you can easily add custom domains and certificates.
Things get a bit more complicated when you want a private IP address for your app, reachable from Azure virtual networks and on-premises networks. One solution is to use an App Service Environment. It provides a fully isolated and dedicated environment to run App Service apps such as web apps and APIs, Docker containers and Functions. You can create an internal ASE which results in an Internal Load Balancer in front of your apps that is configured in a subnet of your choice. There is no need to configure Private Endpoints to make use of Private Link. This is often called native virtual network integration.
At the network level, an App Service Environment v2 works as follows:
ASE networking (from Microsoft website)
Looking at the above diagram, an ILB ASE (but also an External ASE) also makes it easy to connect to back-end systems such as on-premises databases. The outbound connection to internal resources originates from an IP in the chosen integration subnet.
The downside to ASE is that its isolated instances (I1, I2, I3) are rather expensive. It also takes a long time to provision an ASE, but that is less of an issue. In reality though, I would like to see App Service Environments go away and be replaced by “regular” App Services with toggles that give you the options you require. In any case, native virtual network integration should not depend on dedicated or shared compute. One can only dream right? 😉
As an alternative to an ASE for a private app, consider a non-ASE App Service that, in production, uses Premium V2 or V3 instances. The question then becomes: “How do you get a private IP address?” That’s where Private Link comes in…
Azure Private Link with App Service
Azure Private Link provides connectivity to Azure services (such as App Service) via a Private Endpoint. The Private Endpoint creates a virtual network interface card (NIC) on a subnet of your choice. Connections to the NIC's IP address end up at the Private Link service the Private Endpoint is connected to. Below is an example with Azure SQL Database where one Private Endpoint is mapped, via Azure Private Link, to one database. The other databases are not reachable via the endpoint.
Private Endpoint connected to Azure SQL Database (PaaS) via Private Link (source: Microsoft website)
To create a regular App Service that is accessible via a private IP, we can do the same thing:
create a private endpoint in the subnet of your choice
connect the private endpoint to your App Service using Private Link
Both actions can be performed at the same time from the portal. In the Networking section of your App Service, click Configure your private endpoint connections. You will see the following screen:
Private Endpoint connection of App Service
Now click Add to create the Private Endpoint:
Creating the private endpoint
The above creates the private endpoint in the default subnet of the selected VNET. When the creation is finished, the private endpoint will be connected to App Service and automatically approved. There are scenarios, such as connecting private endpoints from other tenants, that require you to approve the connection first:
Automatically approved connection
When you click on the private endpoint, you will see the subnet and NIC that was created:
Private Endpoint
From the above, you can click the link to the network interface (NIC):
Network interface created by the private endpoint
Note that when you delete the Private Endpoint, the interface gets deleted as well.
Great! Now we have an IP address that we can use to reach the App Service. If you use the default name of the web app, in my case https://web-geba.azurewebsites.net, you will get:
Oops, no access on the public name (resolves to public IP)
Indeed, when you enable Private Link on App Service, you cannot access the website using its public IP. To solve this, you will need to do something at the DNS level. For the default domain, azurewebsites.net, it is recommended to use Azure Private DNS. During the creation of my Private Endpoint, I turned on that feature which resulted in:
Private DNS Zone for privatelink.azurewebsites.net
You might wonder why this is a private DNS zone for privatelink.azurewebsites.net? From the moment you enable private link on your web app, Microsoft modifies the response to the DNS query for the public name of your app. For example, if the app is web-geba.azurewebsites.net and you query DNS for that name, it will respond with a CNAME of web-geba.privatelink.azurewebsites.net. If that cannot be resolved, you will still get the public IP but that will result in a 403.
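A quick way to see this in action is a lookup from inside and outside the virtual network:
nslookup web-geba.azurewebsites.net
# inside the VNet: the CNAME web-geba.privatelink.azurewebsites.net resolves to the private endpoint IP (10.240.0.4 in my case)
# outside the VNet: the same CNAME resolves to the public IP and connections to the app return a 403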
In my case, as long as the DNS servers I use can resolve web-geba.privatelink.azurewebsites.net and I can connect to 10.240.0.4, I am good to go. Note however that the DNS story, including Private DNS and your own DNS servers, is a bit more complex than just checking a box! However, that is not the focus of this blogpost so moving on… 😉
One of the features of App Service Environments is the ability to connect to back-end systems in Azure VNETs or on-premises. That is the result of native VNET integration.
When you enable Private Link on a regular App Service, you do not get that. Private Link only enables private inbound connectivity but does nothing for outbound. You will need to configure something else to make outbound connections from the Web App to resources such as internal SQL Servers work.
In the network configuration of your App Service, there is another option for outbound connectivity to internal resources – VNet integration.
VNET Integration
In the Networking section of App Service, find the VNet integration section and click Click here to configure. From there, you can add a VNet to integrate with. You will need to select a subnet in that VNet for this integration to work:
Outbound connectivity for App Service to Azure VNets
There is quite a lot to know when it comes to VNet integration for App Service, so be sure to check the docs.
Private Link with Azure Front Door
Often, a web app is made private because you want to put a Web Application Firewall (WAF) in front of the app. Typically, that goal is achieved by putting Azure Application Gateway (AG) with WAF in front of an internal App Services Environment. As an alternative to AG, you can also use virtual appliances such as Barracuda WAF for Azure. This works because the App Services Environment is a first-class citizen of your Azure virtual network.
There are multiple ways to put a WAF in front of a (non-ASE) App Service. You can use Front Door with the App Service as the origin, as long as you restrict direct access to the origin. To that end, App Services support access restrictions.
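For example, with recent versions of the Azure CLI you should be able to allow only traffic from Front Door with a service-tag based rule; treat the exact parameters below as indicative:
# allow only Azure Front Door to reach the App Service (resource group and app name are placeholders)
az webapp config access-restriction add --resource-group RGNAME --name APPSERVICENAME \
  --rule-name FrontDoor --action Allow --priority 100 \
  --service-tag AzureFrontDoor.Backend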
With Azure Front Door Premium, in preview at the time of this writing (June 2021), you can use Private Link as well. In that case, Azure Front Door creates a private endpoint. You cannot control or see that private endpoint because it is managed by Front Door. Because the private endpoint is not in your tenant, you will need to approve the connection from the private endpoint to your App Service. You can do that in multiple ways. One way is Private Link Center Pending Connections:
Pending Connections
If you check the video at the top of this page, this step is shown there as well.
Conclusion
The combination of Azure networking with App Services Environments (ASE) and “regular” App Services (non-ASE) can be pretty confusing. You have native network integration for ASE, private access with private link and private endpoints for non-ASE, private DNS for private link domains, virtual network service endpoints, VNet outbound configuration for non-ASE etc… Most of the time, when I am asked for the easiest and most cost-effective option for a private web app in PaaS, I go for a regular non-ASE App Service and use Private Link to make the app accessible from the internal network.
While I was investigating Kyverno, I wanted to check my Kubernetes deployments for compliance with Kyverno policies. The Kyverno CLI can be used to do that with the following command:
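With policies in a policies folder and a deployment manifest to validate, that boils down to the same command used later in the workflow:
kyverno apply ./policies --resource=./deploy/deployment.yaml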
To do this easily from a GitHub workflow, I created an action called gbaeke/kyverno-cli. The action uses a Docker container. It can be used in a workflow as follows:
# run kyverno cli and use v1 instead of v1.0.0
- name: Validate policies
  uses: gbaeke/kyverno-action@v1
  with:
    command: |
      kyverno apply ./policies --resource=./deploy/deployment.yaml
You can find the full workflow here. In the next section, we will take a look at how you build such an action.
If you want a video instead, here it is:
GitHub Actions
A GitHub Action is used inside a GitHub workflow. An action can be built with Javascript or with Docker. To use an action in a workflow, you use uses: followed by a reference to the action, which is just a GitHub repository. In the above action, we used uses: gbaeke/kyverno-action@v1. The repository is gbaeke/kyverno-action and the version is v1. The version can refer to a release but also a branch. In this case v1 refers to a branch. In a later section, we will take a look at versioning with releases and branches.
Create a repository
An action consists of several files that live in a git repository. Go ahead and create such a repository on GitHub. I presume you know how to do that. We will add several files to it:
Dockerfile and all the files that are needed to build the Docker image
action.yml: to set the name of our action, its description, inputs and outputs and how it should run
Docker image
Remember that we want a Docker image that can run the Kyverno CLI. That means we have to include the CLI in the image that we build. In this case, we will build the CLI with Go as instructed on https://kyverno.io. Here is the Dockerfile (should be in the root of your git repo):
FROM golang:1.15
COPY src/ /
RUN git clone https://github.com/kyverno/kyverno.git
WORKDIR kyverno
RUN make cli
RUN mv ./cmd/cli/kubectl-kyverno/kyverno /usr/bin/kyverno
ENTRYPOINT ["/entrypoint.sh"]
We start from a golang image because we need the go tools to build the executable. The result of the build is the kyverno executable in /usr/bin. The Docker image uses a shell script as its entrypoint, entrypoint.sh. We copy that shell script from the src folder in our repository.
So go ahead and create the src folder and add a file called entrypoint.sh. Here is the script:
#!/usr/bin/env bash
set -e
set -o pipefail
echo ">>> Running command"
echo ""
bash -c "set -e; set -o pipefail; $1"
This is just a bash script. We use the set commands in the main script to ensure that, when an error occurs, the script exits with the exit code from the command or pipeline that failed. Because we want to run a command like kyverno apply, we need a way to execute that. That’s why we run bash again at the end with the same options and use $1 to represent the argument we will pass to our container. Our GitHub Action will need a way to require an input and pass that input as the argument to the Docker container.
Note: make sure the script is executable; use chmod +x entrypoint.sh
The action.yml
Action.yml defines our action and should be in the root of the git repo. Here is the action.yml for our Docker action:
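Roughly, it looks like this; the description text and the branding color are placeholders, the rest follows the structure explained below:
name: 'kyverno-action'
description: 'Run the Kyverno CLI against policies and resources'
branding:
  icon: 'command'
  color: 'blue'            # color is a guess
inputs:
  command:
    description: 'Command to run inside the container'
    required: true
runs:
  using: 'docker'
  image: 'Dockerfile'
  args:
    - ${{ inputs.command }}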
Above, we give the action a name and description. We also set an icon and color. The icon and color is used on the GitHub Marketplace:
command icon and color as defined in action.yml (note that this is the REAL action; in this post we call the action kyverno-action as an example)
As stated earlier, we need to pass arguments to the container when it starts. To achieve that, we define a required input to the action. The input is called command but you can use any name.
In the run: section, we specify that this action uses Docker. When you use image: Dockerfile, the workflow will build the Docker image for you with a random name and then run it for you. When it runs the container, it passes the command input as an argument via args:. Multiple arguments can be passed, but we only pass one.
Note: the use of a Dockerfile makes running the action quite slow because the image needs to be built every time the action runs. In a moment, we will see how to fix that.
Verify that the image works
On your machine that has Docker installed, build and run the container to verify that you can run the CLI. Run the commands below from the folder containing the Dockerfile:
docker build -t DOCKER_HUB_USER/kyverno-action:v1.0.0 .
docker run DOCKER_HUB_USER/kyverno-action:v1.0.0 "kyverno version"
Above, I presume you have an account on Docker Hub so that you can later push the image to it. Substitute DOCKER_HUB_USER with your Docker Hub username. You can of course use any registry you want.
The result of docker run should be similar to the result below:
Note: if you want to build a specific version of the Kyverno CLI, you will need to modify the Dockerfile; the instructions I used build the latest version and include release candidates
If docker run was successful, push the image to Docker Hub (or your registry):
docker push DOCKER_HUB_USER/kyverno-action:v1.0.0
Note: later, it will become clear why we push this container to a public registry
Publish to the marketplace
You are now ready to publish your action to the marketplace. One thing to be sure of is that the name of your action should be unique. Above, we used kyverno-action. When you run through the publishing steps, GitHub will check if the name is unique.
To see how to publish the action, check the following video:
video starts at the marketplace publishing step
Note that publishing to the marketplace is optional. Our action can still be used without it being published. Publishing just makes our action easier to discover.
Using the action
At this point, you can already use the action when you specify the exact release version. In the video, we created a release called v1.0.0 and optionally published it. The snippet below illustrates its use:
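Something along these lines, with the release tag pinned (same kyverno command as in the snippet at the top of this post):
- name: Validate policies
  uses: gbaeke/kyverno-action@v1.0.0
  with:
    command: |
      kyverno apply ./policies --resource=./deploy/deployment.yaml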
Running this action results in a docker build, followed by a docker run in the workflow:
The build step takes quite some time, which is somewhat annoying. Let’s fix that! In addition, we will let users use v1 instead of having to specify v1.0.0 or v1.0.1 etc…
Creating a v1 branch
By creating a branch called v1 and modifying action.yml to use a Docker image from a registry, we can make the action quicker and easier to use. Just create a branch in GitHub and call it v1. We’ll use the UI:
create the branch here; if it does not exist there will be a create option (here it exists already)
Make the v1 branch active and modify action.yml:
In action.yml, instead of image: ‘Dockerfile’, use the following:
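Roughly like this, pointing at the image we pushed to Docker Hub earlier (substitute your own Docker Hub user):
runs:
  using: 'docker'
  image: 'docker://DOCKER_HUB_USER/kyverno-action:v1.0.0'
  args:
    - ${{ inputs.command }}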
When you use the above statement, the image will be pulled instead of built from scratch. You can now use the action with @v1 at the end:
# run kyverno cli and use v1 instead of v1.0.0
- name: Validate policies
  uses: gbaeke/kyverno-action@v1
  with:
    command: |
      kyverno apply ./policies --resource=./deploy/deployment.yaml
In the workflow logs, you will see:
The action now pulls the image from Docker Hub and later runs it
Conclusion
We can conclude that building GitHub Actions with Docker is quick and fun. You can build your action any way you want, using the tools you like. Want to create a tool with Go, or Python or just Bash… just do it! If you do want to build a GitHub Action with JavaScript, then be sure to check out this article on devblogs.microsoft.com.
When you install the Azure Arc agent on any physical or virtual server, either Windows or Linux, the machine suddenly starts living in a cloud world:
it appears in the Azure Portal
you can apply resource tags
you can check for security and regulatory compliance with Azure Policy
you can enable Update management
and much, much more…
Check Microsoft’s documentation for more information about Azure Arc for Servers to find out more. Below is a screenshot of such an Azure Arc-enabled Windows Server 2019 machine running on-premises with Insights enabled (on my laptop 😀):
Azure Arc-enabled Windows Server 2019
A somewhat lesser-known feature of Azure Arc is that these servers also get a system-assigned managed identity (MSI). After you have installed the Azure Arc agent, which normally installs to Program Files\AzureConnectedMachineAgent, two environment variables are set:
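These are typically IMDS_ENDPOINT and IDENTITY_ENDPOINT, pointing at a port on localhost (40342 by default; the port may differ on your system):
IMDS_ENDPOINT=http://localhost:40342
IDENTITY_ENDPOINT=http://localhost:40342/metadata/identity/oauth2/token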
IMDS stands for Instance Metadata Service. On a regular Azure virtual machine, this service listens on the non-routable IP address of 169.254.169.254. On the virtual machine, you can make HTTP requests to that IP address without any issue. The traffic never leaves the virtual machine.
On an Azure Arc-enabled server, which can run anywhere, using the non-routable IP address is not feasible. Instead, the IMDS listens on a port on localhost as indicated by the environment variables.
The service can be used for all sorts of things. For example, I can make the following request (PowerShell):
Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -Uri http://localhost:40342/metadata/instance?api-version=2020-06-01 | ConvertTo-Json
The result will be a JSON structure with most of the fields empty. That is not surprising since this is not an Azure VM and most fields are Azure-related (vmSize, fault domain, update domain, …). But it does show that the IMDS works, just like on a regular Azure VM.
Although there are many other things you can do, one of its most useful features is providing you with an access token to access Azure Resource Manager, Key Vault, or other services.
There are many ways to obtain an access token. The documentation contains an example in PowerShell that uses the environment variables and Invoke-WebRequest to get a token for https://management.azure.com.
A common requirement is code that needs to retrieve secrets from Azure Key Vault. Now we know that we can acquire a token via the IMDS, let’s see how we can do this with the Azure SDK for Python, which has full support for the IMDS on Azure Arc-enabled machines. The code below does the trick:
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient
credentials = ManagedIdentityCredential()
secret_client = SecretClient(vault_url="https://gebakv.vault.azure.net", credential=credentials)
secret = secret_client.get_secret("notsecret")
print(secret.value)
Of course, you need Python installed with the following packages (use pip install):
azure-identity
azure-keyvault
Yes, the above code is all you need to use the managed identity of the Azure Arc-enabled server to authenticate to Key Vault and obtain the secret called notsecret. The functionality that makes the Python SDK work with Azure Arc can be seen here.
Of course, you need to make sure that the managed identity has the necessary access rights to Key Vault:
Managed Identity has Get permissions on Secrets
I have not looked at MSI Azure Arc support in the other SDKs but the Python SDK sure makes it easy!
In the previous post, I talked about akv2k8s. akv2k8s is a Kubernetes controller that synchronizes secrets and certificates from Key Vault. Besides synchronizing to a regular secret, it can also inject secrets into pods.
Instead of akv2k8s, you can also use the secrets store CSI driver with the Azure Key Vault provider. As a CSI driver, its main purpose is to mount secrets and certificates as storage volumes. Next to that, it can also create regular Kubernetes secrets that can be used with an ingress controller or mounted as environment variables. That might be required if the application was not designed to read the secret from the file system.
In the previous post, I used akv2k8s to grab a certificate from Key Vault, create a Kubernetes secret and use that secret with nginx ingress controller:
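In this post, we do the same with the Secrets Store CSI driver. First, install the driver together with the Azure Key Vault provider via Helm; at the time of writing, something along these lines should work (chart repo URL and release name may have changed since, so check the provider's docs):
helm repo add csi-secrets-store-provider-azure https://raw.githubusercontent.com/Azure/secrets-store-csi-driver-provider-azure/master/charts
helm install csi csi-secrets-store-provider-azure/csi-secrets-store-provider-azure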
This will install the components in the current Kubernetes namespace.
Easy no?
Syncing the certificate
Following the same example as with akv2k8s, we need to point at the certificate in Key Vault, set the right permissions, and bring the certificate down to Kubernetes.
You will first need to decide how to access Key Vault. You can use the managed identity of your AKS cluster or be more granular and use pod identity. If you have setup AKS with a managed identity, that is the simplest solution. You just need to grab the clientId of the managed identity like so:
az aks show -g <resource group> -n <aks cluster name> --query identityProfile.kubeletidentity.clientId -o tsv
Next, create a file with the content below and apply it to your cluster in a namespace of your choosing.
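The SecretProviderClass below is a sketch of such a file; the certificate object name, Key Vault name, tenant ID and client ID are placeholders you need to replace:
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: azure-gebakv
spec:
  provider: azure
  secretObjects:
    - secretName: nginx-cert
      type: kubernetes.io/tls
      data:
        - objectName: YOUR-CERTIFICATE     # must match the objectName in the objects array
          key: tls.key
        - objectName: YOUR-CERTIFICATE
          key: tls.crt
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"
    userAssignedIdentityID: "CLIENT-ID-OF-MANAGED-IDENTITY"
    keyvaultName: "YOUR-KEYVAULT-NAME"
    tenantId: "YOUR-TENANT-ID"
    objects: |
      array:
        - |
          objectName: YOUR-CERTIFICATE
          objectType: secret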
Compared to the akv2k8s controller, the above configuration is a bit more complex. In the parameters section, in the objects array, you specify the name of the certificate in Key Vault and its object type. Yes, you saw that correctly, the objectType actually has to be secret for this to work.
The other settings are self-explanatory: we use the managed identity, set its clientId and in keyvaultName we set the short name of our Key Vault.
The settings in the parameters section are actually sufficient to mount the secret/certificate in a pod. With the secretObjects section though, we can also ask for the creation of regular Kubernetes secrets. Here, we ask for a secret of type kubernetes.io/tls with name nginx-cert to be created. You need to explicitly set both the tls.key and the tls.crt value and correctly reference the objectName in the array.
The akv2k8s controller is simpler to use as you only need to point it to your certificate in Key Vault (and specify it’s a certificate, not a secret) and set a secret name. There is no need to set the different values in the secret.
Using the secret
The advantage of the secrets store CSI driver is that the secret is only mounted/created when an application requires it. That also means we have to instruct our application to mount the secret explicitly. You do that via a volume as the example below illustrates (part of a deployment):
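A minimal sketch of the relevant pieces of the pod template (container name and image are placeholders):
spec:
  containers:
    - name: app
      image: nginx        # hypothetical image
      volumeMounts:
        - name: secrets-store-inline
          mountPath: /mnt/secrets-store
          readOnly: true
  volumes:
    - name: secrets-store-inline
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: azure-gebakv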
in volumes: we create a volume called secrets-store-inline and use the csi driver to mount the secrets we specified in the SecretProviderClass we created earlier (azure-gebakv)
in volumeMounts: we mount the volume on /mnt/secrets-store
Because we used secretObjects in our SecretProviderClass, this mount is accompanied by the creation of a regular Kubernetes secret as well.
When you remove the deployment, the Kubernetes secret will be removed instead of lingering behind for all to see.
Of course, the pods in my deployment do not need the mounted volume. It was not immediately clear to me how to avoid the mount but still create the Kubernetes secret (not exactly the point of a CSI driver 😀). On the other hand, there is a way to have the secret created as part of ingress controller creation. That approach is more useful in this case because we want our ingress controller to use the certificate. More information can be found here. In short, it roughly works as follows:
instead of creating and mounting a volume in your application pod, a volume should be created and mounted on the ingress controller
to do so, you modify the deployment of your ingress controller (e.g. ingress-nginx) with extraVolumes: and extraVolumeMounts: sections; depending on the ingress controller you use, other settings might be required
Be aware that you need to enable auto rotation of secrets manually and that it is an alpha feature at this point (December 2020). The akv2k8s controller does that for you out of the box.
Conclusion
Both the akv2k8s controller and the Secrets Store CSI driver (for Azure) can be used to achieve the same objective: syncing secrets, keys and certificates from Key Vault to AKS. In my experience, the akv2k8s controller is easier to use. The big advantage of the Secrets Store CSI driver is that it is a broader solution (not just for AKS) and supports multiple secret stores. Next to Azure Key Vault, it also supports Hashicorp’s Vault for example. My recommendation: for Azure Key Vault and AKS, keep it simple and try akv2k8s first!
A while ago, I published a post about deploying AKS with Azure DevOps with extras like Nginx Ingress, cert-manager and several others. An Azure Resource Manager (ARM) template is used to deploy Azure Kubernetes Service (AKS). The extras are installed with Helm charts and Helm installer tasks. I mainly use it for demo purposes but I often refer to it in my daily work as well.
Although this works, there is another approach that combines an Azure DevOps pipeline with GitOps. From a high level point of view, that works as follows:
Deploy AKS with an Azure DevOps pipeline: declarative and idempotent thanks to the ARM template; the deployment is driven from an Azure DevOps pipeline but other solutions such as GitHub Actions will do as well (push)
Use a GitOps tool to deploy the GitOps agents on AKS and bootstrap the cluster by pointing the GitOps tool to a git repository (pull)
In this post, I will use Flux v2 as the GitOps tool of choice. Other tools, such as Argo CD, are capable of achieving the same goal. Note that there are ways to deploy Kubernetes using GitOps in combination with the Cluster API (CAPI). CAPI is quite a beast so let’s keep this post a bit more approachable. 😉
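In outline, the pipeline looks something like the sketch below; task inputs, template file names and the exact flux bootstrap flags are placeholders to adapt to your own setup:
trigger: none

pool:
  vmImage: ubuntu-latest

steps:
  - task: AzureResourceGroupDeployment@2
    inputs:
      azureSubscription: 'SERVICECONNECTION'
      action: 'Create Or Update Resource Group'
      resourceGroupName: 'RESOURCEGROUP'
      location: 'LOCATION'
      templateLocation: 'Linked artifact'
      csmFile: 'aksdeploy.json'                    # ARM template for AKS (placeholder name)
      csmParametersFile: 'deployparams.gitops.json'
      deploymentMode: 'Incremental'
  - task: AzureCLI@2
    inputs:
      azureSubscription: 'SERVICECONNECTION'
      scriptType: 'bash'
      scriptLocation: 'inlineScript'
      inlineScript: |
        az aks get-credentials -g RESOURCEGROUP -n CLUSTERNAME --admin
        # install the flux CLI and bootstrap the cluster from the k8s-bootstrap repo
        curl -s https://fluxcd.io/install.sh | sudo bash
        flux bootstrap github --owner=GITHUBUSER --repository=k8s-bootstrap \
          --branch=main --path=demo-cluster --personal
        # grant the kubelet managed identity access to Key Vault (adjust permissions as needed)
        OBJECTID=$(az aks show -g RESOURCEGROUP -n CLUSTERNAME \
          --query identityProfile.kubeletidentity.objectId -o tsv)
        az keyvault set-policy --name KEYVAULTNAME --object-id $OBJECTID --secret-permissions get
    env:
      GITHUB_TOKEN: $(GITHUB_TOKEN)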
The above pipeline contains several strings in UPPERCASE; replace them with your own values
GITHUB_TOKEN is a secret defined in the Azure DevOps pipeline and set as an environment variable in the last task; it is required for the flux bootstrap command to configure the GitHub repo (e.g. deploy key)
the AzureResourceGroupDeployment task deploys the AKS cluster based on parameters defined in deployparams.gitops.json; that file is in a private Azure DevOps git repo; I have also added them to the gbaeke/k8s-bootstrap repository for reference
The AKS deployment uses a managed identity versus a service principal with manually set client id and secret (recommended)
The flux bootstrap command deploys an Azure Key Vault to Kubernetes Secrets controller that requires access to Key Vault; the script in the last task retrieves the managed identity object id and uses az keyvault set-policy to grant get key permissions; if you delete and recreate the cluster many times, you will have several UNKNOWN access policies at the Key Vault level
The pipeline is of course short due to the fact that nginx-ingress, cert-manager, dapr, KEDA, etc… are all deployed via the gbaeke/k8s-bootstrap repo. The demo-cluster folder in that repo contains a source and four kustomizations:
source: reference to another git repo that contains the actual deployments
k8s-secrets-kustomize.yaml: deploys secrets via custom resources picked up by the akv2k8s controller; depends on akv2k8s
k8s-common-kustomize.yaml: deploys all components in the ./deploy folder of gbaeke/k8s-common (nginx-ingress, external-dns, cert-manager, KEDA, dapr, …)
Overall, the big picture looks like this:
Note that the kustomizations that point to ./akv2k8s and ./deploy actually deploy HelmReleases to the cluster. For instance in ./akv2k8s, you will find the following manifest:
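That manifest is, roughly, a HelmRepository plus a HelmRelease along these lines (intervals, namespaces and the akv2k8s chart repo URL are indicative):
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: akv2k8s
  namespace: flux-system
spec:
  interval: 10m
  url: https://charts.spvapi.no
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: akv2k8s
  namespace: flux-system
spec:
  interval: 5m
  targetNamespace: akv2k8s
  install:
    createNamespace: true
  chart:
    spec:
      chart: akv2k8s
      sourceRef:
        kind: HelmRepository
        name: akv2k8s
        namespace: flux-system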
It is perfectly valid to use a kustomization that deploys manifests that contain resources of kind HelmRelease and HelmRepository. In fact, you can even patch those via a kustomization.yaml file if you wish.
You might wonder why I deploy the akv2k8s controller first, and then deploy a secret with the following manifest (uppercase strings to be replaced):
apiVersion: spv.no/v1
kind: AzureKeyVaultSecret
metadata:
  name: secret-sync
  namespace: flux-system
spec:
  vault:
    name: KEYVAULTNAME # name of key vault
    object:
      name: SECRET # name of the akv object
      type: secret # akv object type
  output:
    secret:
      name: SECRET # kubernetes secret name
      dataKey: values.yaml # key to store object value in kubernetes secret
The external-dns chart I deploy in later steps requires configuration to be able to change DNS settings in Cloudflare. Obviously, I do not want to store the Cloudflare secret in the k8s-common git repo. One way to solve that is to store the secrets in Azure Key Vault and then grab those secrets and convert them to Kubernetes secrets. The external-dns HelmRelease can then reference the secret to override values.yaml of the chart. Indeed, that requires storing a file in Key Vault which is easy to do like so (replace uppercase strings):
az keyvault secret set --name SECRETNAME --vault-name VAULTNAME --file ./YOURFILE.YAML
You can call the secret what you want but the Kubernetes secret dataKey should be values.yaml for the HelmRelease to work properly.
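For reference, the relevant fragment of such a HelmRelease would look something like this (secret name as in the manifest above):
# fragment of the external-dns HelmRelease spec
spec:
  valuesFrom:
    - kind: Secret
      name: SECRET          # the Kubernetes secret created by akv2k8s above
      valuesKey: values.yaml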
There are other ways to work with secrets in GitOps. The Flux v2 documentation mentions SealedSecrets and SOPS and you are of course welcome to use that.
Take a look at the different repos I outlined above to see the actual details. I think it makes the deployment of a cluster and bootstrapping the cluster much easier compared to using a bunch of Helm install tasks and manifest deployments in the pipeline. What do you think?
If you have read my blog and watched my Youtube channel, you know I have worked with Flux in the past. Flux, by weaveworks, is a GitOps Kubernetes Operator that ensures that your cluster state matches the desired state described in a git repository. There are other solutions as well, such as Argo CD.
With Flux v2, GitOps on Kubernetes became a lot more powerful and easier to use. Flux v2 is built on a set of controllers and APIs called the GitOps Toolkit. The toolkit contains the following components:
Source controller: allows you to create sources such as a GitRepository or a HelmRepository; the source controller acts on several custom resource definitions (CRDs) as defined in the docs
Kustomize controller: runs continuous delivery pipelines defined with Kubernetes manifests (YAML) files; although you can use kustomize and define kustomization.yaml files, you do not have to; internally though, Flux v2 uses kustomize to deploy your manifests; the kustomize controller acts on Kustomization CRDs as defined here
Helm controller: deploy your workloads based on Helm charts but do so declaratively; there is no need to run helm commands; see the docs for more information
Notification controller: responds to incoming events (e.g. from a git repo) and sends outgoing events (e.g. to Teams or Slack); more info here
If you throw it all together, you get something like this:
To get started, you should of course look at the documentation over at https://toolkit.fluxcd.io. I also created a series of videos about Flux v2. The first one talks about Flux v2 in general and shows how to bootstrap a cluster.
Part 1 in the series about Flux v2
Although Flux v2 works with other source control systems than GitHub, for instance GitLab, I use GitHub in the above video. I also use kind, to make it easy to try out Flux v2 on your local machine. In subsequent videos, I use Azure Kubernetes Services (AKS).
In Flux v2, it is much easier to deploy Flux on your cluster with the flux bootstrap command. Flux v2 itself is basically installed and managed via GitOps principles by pushing all Flux v2 manifests to a git repository and running reconciliations to keep the components running as intended.
Kustomize
Flux v1 already supported kustomize but v2 takes it to another level. Whenever you want to deploy to Kubernetes with YAML manifests, you will create a kustomization, which is based on the Kustomization CRD. A kustomization is defined as below:
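A sketch of such a kustomization (the name is arbitrary; the other fields are explained below):
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: realtimeapp-dev
  namespace: flux-system
spec:
  interval: 1m
  path: ./deploy/overlays/dev
  prune: true
  sourceRef:
    kind: GitRepository
    name: realtimeapp-infra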
A kustomization requires a source. In this case, the source is a git repository called realtimeapp-infra that was already defined in advance. The source just points to a public git repository on Github: https://github.com/gbaeke/realtimeapp-infra.
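For reference, a GitRepository source along these lines does the job (the branch is an assumption):
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: realtimeapp-infra
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/gbaeke/realtimeapp-infra
  ref:
    branch: main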
The source contains a deploy folder, which contains a bases and an overlays folder. The kustomization points to the ./deploy/overlays/dev folder as set in path. That folder contains a kustomization.yaml file that deploys an application in a development namespace and uses the base from ./deploy/bases/realtimeapp as its source. If you are not sure what kustomize exactly does, I made a video that tries 😉 to explain it:
An introduction to kustomize
It is important to know that you do not need to use kustomize in your source files. If you point a Flux v2 kustomization to a path that just contains a bunch of YAML files, it will work equally well. You do not have to create a kustomization.yaml file in that folder that lists the resources (YAML files) that you want to deploy. Internally though, Flux v2 will use kustomize to deploy the manifests and uses the deployment order that kustomize uses: first namespaces, then services, then deployments, etc…
The interval in the kustomization (above set at 1 minute) means that your YAML files are applied at that interval, even if the source has not changed. This ensures that, if you modified resources on your cluster, the kustomization will reset the changes to the state as defined in the source. The source itself has its own interval. If you set a GitRepository source to 1 minute, the source is checked every 1 minute. If the source has changes, the kustomizations that depend on the source will be notified and proceed to deploy the changes.
A GitRepository source can refer to a specific branch, but can also refer to a semantic versioning tag if you use a semver range in the source. See checkout strategies for more information.
Deploying YAML manifests
If the above explanation of sources and kustomizations does not mean much to you, I created a video that illustrates these aspects more clearly:
In the above video, the source that points to https://github.com/gbaeke/realtimeapp-infra gets created first (see it at this mark). Next, I create two kustomizations, one for development and one for production. I use a kustomize base for the application plus two overlays, one for dev and one for production.
What to do when the app container images changes?
Flux v1 has a feature that tracks container images in a container registry and updates your cluster resources with a new image based on a filter you set. This requires read/write access to your git repository because Flux v1 set the images in your source files. Flux v2 does not have this feature yet (November 2020, see https://toolkit.fluxcd.io/roadmap).
In my example, I use a GitHub Action in the application source code repository to build and push the application image to Docker Hub. The GitHub action triggers a build job on two events:
push to main branch: build a container image with a short sha as the tag (e.g. gbaeke/flux-rt:sha-94561cb)
published release: build a container image with the release version as the tag (e.g. gbaeke/flux-rt:1.0.1)
When the build is caused by a push to main, the update-dev-image job runs. It modifies kustomization.yaml in the dev overlay with kustomize edit:
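A sketch of such a step; checking out the infra repo with push credentials is not shown, and the image name and tag format follow the description above:
- name: Update dev image tag
  run: |
    # assumes the realtimeapp-infra repo was checked out with push access (not shown)
    cd deploy/overlays/dev
    kustomize edit set image gbaeke/flux-rt=gbaeke/flux-rt:sha-${GITHUB_SHA::7}
    git config user.name github-actions
    git config user.email github-actions@github.com
    git commit -am "update dev image to sha-${GITHUB_SHA::7}"
    git push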
Similarly, when the build is caused by a release, the image is updated in the production overlay’s kustomization.yaml file.
Conclusion
If you are interested in GitOps as an alternative for continuous delivery to Kubernetes, do check out Flux v2 and see if it meets your needs. I personally like it a lot and believe that they are setting the standard for GitOps on Kubernetes. I have not covered Helm deployments, monitoring and alerting features yet. I will create additional videos and posts about those features in the near future. Stay tuned!
I recently gave a talk at TechTrain, a monthly event in Mechelen (Belgium), hosted by Cronos. The talk is called “GitOps with Kubernetes: a better way to deploy” and is an introduction to GitOps with Weaveworks Flux as an example.
You can find a re-recording of the presentation on Youtube:
If you have followed my blog a little, you have seen a few posts about GitOps with Flux CD. This time, I am taking a look at Argo CD which, like Flux CD, is a GitOps tool to deploy applications from manifests in a git repository.
Don’t want to read this whole thing?
Here’s the video version of this post
There are several differences between the two tools:
At first glance, Flux appears to use a single git repo for your cluster, whereas Argo immediately introduces the concept of apps. Each app can be connected to a different git repo. However, Flux can also use multiple git repositories in the same cluster. See https://github.com/fluxcd/multi-tenancy for more information.
Flux has the concept of workloads which can be automated. This means that image repositories are scanned for updates. When an update is available (say from tag v1.0.0 to v1.0.1), Flux will update your application based on filters you specify. As far as I can see, Argo requires you to drive the update from your CI process, which might be preferred.
By default, Argo deploys an administrative UI (next to a CLI) with a full view on your deployment and its dependencies
Argo supports RBAC and integrates with external identity providers (e.g. Azure Active Directory)
The Argo CD admin interface is shown below:
Argo CD admin interface… not too shabby
Let’s take a look at how to deploy Argo and deploy the app you see above. The app is deployed using a single yaml file. Nothing fancy yet such as kustomize or jsonnet.
Deployment
The getting started guide is pretty clear, so do have a look over there as well. To install, just run (with a deployed Kubernetes cluster and kubectl pointing at the cluster):
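Per the getting started guide, that is something like:
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml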
Next, install the CLI. On a Mac, that is simple (with Homebrew):
brew tap argoproj/tap
brew install argoproj/tap/argocd
You will need access to the API server, which is not exposed over the Internet by default. For testing, port forwarding is easiest. In a separate shell, run the following command:
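A simple port-forward to the argocd-server service does the trick:
kubectl port-forward svc/argocd-server -n argocd 8080:443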
You can now connect to https://localhost:8080 to get to the UI. You will need the admin password, which you can retrieve by running:
kubectl get pods -n argocd -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f 2
You can now login to the UI with the user admin and the displayed password. You should also login from the CLI and change the password with the following commands:
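Something along these lines (accept the self-signed certificate warning when logging in over the port-forward):
argocd login localhost:8080
argocd account update-password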
Great! You are all set now to deploy an application.
Deploying an application
We will deploy an application that has a couple of dependencies. Normally, you would install those dependencies with Argo CD as well but since I am using a cluster that has these dependencies installed via Azure DevOps, I will just list what you need (Helm commands):
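Roughly, the Helm commands look like this; chart repositories, namespaces and values are indicative, so adapt them to your environment:
# assumes the ingress-nginx, jetstack and bitnami chart repos were added with helm repo add
helm install nginx ingress-nginx/ingress-nginx --namespace ingress --create-namespace \
  --set controller.service.loadBalancerIP=YOUR_STATIC_IP
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace \
  --set installCRDs=true
helm install externaldns bitnami/external-dns --namespace externaldns --create-namespace \
  --set provider=cloudflare    # plus provider-specific credentials, not shown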
To know more about these dependencies and use an Azure DevOps YAML pipeline to deploy them, see this post. If you want, you can skip the externaldns installation and create a DNS record yourself that resolves to the public IP address of Nginx Ingress. If you do not want to use an Azure static IP address, you can remove the loadBalancerIP parameter from the first command.
The manifests we will deploy with Argo CD can be found in the following public git repository: https://github.com/gbaeke/argo-demo. The application is in three YAML files:
Two YAML files that create a certificate cluster issuer based on custom resource definitions (CRDs) from cert-manager
realtime.yaml: Redis deployment, Redis service (ClusterIP), realtime web app deployment (based on this), realtime web app service (ClusterIP), ingress resource for https://real.baeke.info (record automatically created by externaldns)
It’s best that you fork my repo and modify realtime.yaml’s ingress resource with your own DNS name.
Create the Argo app
Now you can create the Argo app based on my forked repo. I used the following command with my original repo:
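That command looks roughly like this (the destination namespace is an assumption):
argocd app create realtime \
  --repo https://github.com/gbaeke/argo-demo \
  --path manifests \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default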
The command above creates an app called realtime based on the specified repo. The app should use the manifests folder and apply (kubectl apply) all the manifests in that folder. The manifests are deployed to the cluster that Argo CD runs in. Note that you can run Argo CD in one cluster and deploy to totally different clusters.
The above command does not configure the repository to be synced automatically, although that is an option. To sync manually, use the following command:
argocd app sync realtime
The application should now be synced and viewable in the UI:
Not Secure because we use Let’s Encrypt staging for this app
Set up auto-sync
Let’s set up this app to automatically sync with the repo (default = every 3 minutes). This can be done from both the CLI and the UI. Let’s do it from the UI. Click on the app and then click App Details. You will find a Sync Policy in the app details where you can enable auto-sync.
Setting up auto-sync from the UI
You can now make changes to the git repo like changing the image tag for gbaeke/fluxapp (yes, I used this image with the Flux posts as well 😊 ) to 1.0.6 and wait for the sync to happen. Or sync manually from the CLI or the UI.
Conclusion
This was a quick tour of Argo CD. There is much more you can do but the above should get you started quickly. I must say I quite like the solution and am eager to see what the collaboration of Flux CD, Argo CD and Amazon comes up with in the future.
When you have to deploy an application to multiple environments like dev, test and production there are many solutions available to you. You can manually deploy the app (Nooooooo! 😉), use a CI/CD system like Azure DevOps and its release pipelines (with or without Helm) or maybe even a “GitOps” approach where deployments are driven by a tool such as Flux or Argo based on a git repository.
In the latter case, you probably want to use a configuration management tool like Kustomize for environment management. Instead of explaining what it does, let’s take a look at an example. Suppose I have an app that can be deployed with the following yaml files:
redis-deployment.yaml: simple deployment of Redis
redis-service.yaml: service to connect to Redis on port 6379 (Cluster IP)
realtime-deployment.yaml: application that uses the socket.io library to display real-time updates coming from a Redis channel
realtime-service.yaml: service to connect to the socket.io application on port 80 (Cluster IP)
realtime-ingress.yaml: ingress resource that defines the hostname and TLS certificate for the socket.io application (works with nginx ingress controller)
Let’s call this collection of files the base and put them all in a folder:
Base files for the application
Now I would like to modify these files just a bit, to install them in a dev namespace called realtime-dev. In the ingress definition, I want to change the host name to realdev.baeke.info instead of real.baeke.info, which is used in production. We can use Kustomize to reach that goal.
In the base folder, we can add a kustomization.yaml file like so:
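Based on the base files listed above, it looks like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - redis-deployment.yaml
  - redis-service.yaml
  - realtime-deployment.yaml
  - realtime-service.yaml
  - realtime-ingress.yaml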
This lists all the resources we would like to deploy.
Now we can create a folder for our patches. The patches define the changes to the base. Create a folder called dev (next to base). We will add the following files (one file blurred because it’s not relevant to this post):
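The most important file is the dev kustomization.yaml, which looks roughly like this (the ingress patch file name is an assumption):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: realtime-dev
resources:
  - namespace.yaml
bases:
  - ../base
patchesStrategicMerge:
  - realtime-ingress.yaml   # patch that sets the dev host name and the staging cluster issuer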
The namespace: realtime-dev ensures that our base resource definitions are updated with that namespace. In resources, we ensure that namespace gets created. The file namespace.yaml contains the following:
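It is just a namespace manifest:
apiVersion: v1
kind: Namespace
metadata:
  name: realtime-dev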
Note that we also use certmanager here to issue a certificate to use on the ingress. For dev environments, it is better to use the Let’s Encrypt staging issuer instead of the production issuer.
We are now ready to generate the manifests for the dev environment. From the parent folder of base and dev, run the following command:
kubectl kustomize dev
The above command generates the patched manifests like so:
Note that namespace realtime-dev is used everywhere and that the Ingress resource uses realdev.baeke.info. The original Ingress resource looked like below:
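For reference, that resource looks roughly like this; the cluster issuer and TLS secret names are assumptions:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: realtime-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod   # issuer name is an assumption
spec:
  tls:
    - hosts:
        - real.baeke.info
      secretName: real-baeke-info-tls                  # secret name is an assumption
  rules:
    - host: real.baeke.info
      http:
        paths:
          - path: /
            backend:
              serviceName: realtime
              servicePort: 80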
As you can see, Kustomize has updated the host in tls: and rules: and also modified the secret name (which will be created by certmanager).
You have probably seen that Kustomize is integrated with kubectl. It’s also available as a standalone executable.
To directly apply the patched manifests to your cluster, run kubectl apply -k dev. The result:
namespace/realtime-dev created
service/realtime created
service/redis created
deployment.apps/realtime created
deployment.apps/redis created
ingress.extensions/realtime-ingress created
In another post, we will look at using Kustomize with Flux. Stay tuned!