A while ago, I published a post about deploying AKS with Azure DevOps, including extras like Nginx Ingress, cert-manager and several others. An Azure Resource Manager (ARM) template is used to deploy Azure Kubernetes Service (AKS). The extras are installed with Helm charts and Helm installer tasks. I mainly use it for demo purposes, but I often refer to it in my daily work as well.
Although this works, there is another approach that combines an Azure DevOps pipeline with GitOps. From a high-level point of view, it works as follows:
- Deploy AKS with an Azure DevOps pipeline: declarative and idempotent thanks to the ARM template; the deployment is driven from an Azure DevOps pipeline but other solutions such as GitHub Actions will do as well (push)
- Use a GitOps tool to deploy the GitOps agents on AKS and bootstrap the cluster by pointing the GitOps tool to a git repository (pull)
In this post, I will use Flux v2 as the GitOps tool of choice. Other tools, such as Argo CD, are capable of achieving the same goal. Note that there are also ways to deploy Kubernetes itself using GitOps in combination with the Cluster API (CAPI). CAPI is quite a beast, so let’s keep this post a bit more approachable. 😉
Let’s start with the pipeline (YAML):
# AKS deployment pipeline
trigger: none

variables:
  CLUSTERNAME: 'CLUSTERNAME'
  RG: 'CLUSTER_RESOURCE_GROUP'
  GITHUB_REPO: 'k8s-bootstrap'
  GITHUB_USER: 'GITHUB_USER'
  KEY_VAULT: 'KEYVAULT_SHORTNAME'

stages:
- stage: DeployGitOpsCluster
  jobs:
  - job: 'Deployment'
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    # DEPLOY AKS
    - task: AzureResourceGroupDeployment@2
      inputs:
        azureSubscription: 'SUBSCRIPTION_REF'
        action: 'Create Or Update Resource Group'
        resourceGroupName: '$(RG)'
        location: 'YOUR LOCATION'
        templateLocation: 'Linked artifact'
        csmFile: 'aks/deploy.json'
        csmParametersFile: 'aks/deployparams.gitops.json'
        overrideParameters: '-clusterName $(CLUSTERNAME)'
        deploymentMode: 'Incremental'
        deploymentName: 'aks-gitops-deploy'
    # INSTALL KUBECTL
    - task: KubectlInstaller@0
      name: InstallKubectl
      inputs:
        kubectlVersion: '1.18.8'
    # GET CREDS TO K8S CLUSTER WITH ADMIN AND INSTALL FLUX V2
    - task: AzureCLI@1
      name: RunAzCLIScripts
      inputs:
        azureSubscription: 'SUBSCRIPTION_REF'
        scriptLocation: 'inlineScript'
        inlineScript: |
          export GITHUB_TOKEN=$(GITHUB_TOKEN)
          az aks get-credentials -g $(RG) -n $(CLUSTERNAME) --admin
          msi="$(az aks show -n $(CLUSTERNAME) -g $(RG) | jq .identityProfile.kubeletidentity.objectId -r)"
          az keyvault set-policy --name $(KEY_VAULT) --object-id $msi --secret-permissions get
          curl -s https://toolkit.fluxcd.io/install.sh | sudo bash
          flux bootstrap github --owner=$(GITHUB_USER) --repository=$(GITHUB_REPO) --branch=main --path=demo-cluster --personal
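When the pipeline has run, you can quickly verify the result from any machine with access to the cluster. A minimal check, assuming the Azure CLI, kubectl and the flux CLI are installed locally (replace the uppercase values):

# get admin credentials for the new cluster
az aks get-credentials -g CLUSTER_RESOURCE_GROUP -n CLUSTERNAME --admin
# verify that the Flux v2 controllers and CRDs are healthy
flux check
# list the kustomizations reconciled from the bootstrap repo
flux get kustomizations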
A couple of things to note about the pipeline:
- The above pipeline contains several strings in UPPERCASE; replace them with your own values
- GITHUB_TOKEN is a secret defined in the Azure DevOps pipeline and set as an environment variable in the last task; the flux bootstrap command needs it to configure the GitHub repo (e.g. to add a deploy key)
- The AzureResourceGroupDeployment task deploys the AKS cluster based on parameters defined in deployparams.gitops.json; that file lives in a private Azure DevOps git repo, but I have also added it to the gbaeke/k8s-bootstrap repository for reference
- The AKS deployment uses a managed identity instead of a service principal with a manually set client id and secret (recommended)
- The flux bootstrap command results in the deployment of the Azure Key Vault to Kubernetes Secrets controller (akv2k8s), which requires access to Key Vault; the script in the last task retrieves the managed identity's object id and uses az keyvault set-policy to grant get permissions on secrets; if you delete and recreate the cluster many times, you will end up with several UNKNOWN access policies at the Key Vault level (a cleanup sketch follows this list)
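To clean up those stale access policies, you can list the object ids on the vault and delete the ones that no longer match an existing identity. A minimal sketch with the Azure CLI and jq; the object id below is a placeholder:

# list the object ids of all access policies on the vault
az keyvault show --name KEYVAULT_SHORTNAME | jq -r '.properties.accessPolicies[].objectId'
# remove the access policy of an identity that no longer exists
az keyvault delete-policy --name KEYVAULT_SHORTNAME --object-id STALE_OBJECT_ID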
The pipeline is of course short because nginx-ingress, cert-manager, dapr, KEDA, etc. are all deployed via the gbaeke/k8s-bootstrap repo. The demo-cluster folder in that repo contains a source and three kustomizations (a sketch of what these resources look like follows the list):
- source: reference to another git repo that contains the actual deployments
- k8s-akv2k8s-kustomize.yaml: deploys the Azure Key Vault to Kubernetes Secrets controller (akv2k8s)
- k8s-secrets-kustomize.yaml: deploys secrets via custom resources picked up by the akv2k8s controller; depends on akv2k8s
- k8s-common-kustomize.yaml: deploys all components in the ./deploy folder of gbaeke/k8s-common (nginx-ingress, external-dns, cert-manager, KEDA, dapr, …)
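To give you an idea, here is a sketch of the source and one of the kustomizations. The names, paths and intervals are illustrative, not copied verbatim from the repo:

---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: k8s-common
  namespace: flux-system
spec:
  interval: 1m0s
  ref:
    branch: main
  url: https://github.com/gbaeke/k8s-common
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: k8s-common
  namespace: flux-system
spec:
  interval: 10m0s
  path: ./deploy # folder in k8s-common that holds the manifests
  prune: true # remove cluster resources when they disappear from git
  sourceRef:
    kind: GitRepository
    name: k8s-common
  dependsOn: # assumption: common components wait for the synced secrets
    - name: k8s-secrets

The dependsOn field implements the ordering described above: a kustomization only reconciles after its dependencies are ready.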
Overall, the big picture looks like this:

[Diagram: the Azure DevOps pipeline pushes the AKS deployment (ARM template); Flux then pulls manifests from the git repos to bootstrap the cluster]
Note that the kustomizations that point to ./akv2k8s and ./deploy actually deploy HelmReleases to the cluster. For instance, in ./akv2k8s, you will find the following manifest:
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: akv2k8s
  namespace: flux-system
spec:
  chart:
    spec:
      chart: akv2k8s
      sourceRef:
        kind: HelmRepository
        name: akv2k8s-repo
  interval: 5m0s
  releaseName: akv2k8s
  targetNamespace: akv2k8s
This manifest tells Flux to deploy the akv2k8s Helm chart from the HelmRepository source akv2k8s-repo, which is defined as follows:
---
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: akv2k8s-repo
  namespace: flux-system
spec:
  interval: 1m0s
  url: http://charts.spvapi.no/
It is perfectly valid to use a kustomization that deploys manifests that contain resources of kind HelmRelease and HelmRepository. In fact, you can even patch those via a kustomization.yaml file if you wish.
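For example, a kustomization.yaml next to the HelmRelease manifest could patch the release. A sketch; the file name and the patched interval are made up:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - helmrelease.yaml # hypothetical file containing the HelmRelease above
patches:
  - target:
      kind: HelmRelease
      name: akv2k8s
    patch: |-
      apiVersion: helm.toolkit.fluxcd.io/v2beta1
      kind: HelmRelease
      metadata:
        name: akv2k8s
        namespace: flux-system
      spec:
        interval: 10m0s # override the reconciliation interval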
You might wonder why I deploy the akv2k8s controller first, and then deploy a secret with the following manifest (uppercase strings to be replaced):
apiVersion: spv.no/v1
kind: AzureKeyVaultSecret
metadata:
  name: secret-sync
  namespace: flux-system
spec:
  vault:
    name: KEYVAULTNAME # name of key vault
    object:
      name: SECRET # name of the akv object
      type: secret # akv object type
  output:
    secret:
      name: SECRET # kubernetes secret name
      dataKey: values.yaml # key to store object value in kubernetes secret
The external-dns chart I deploy in later steps requires configuration to be able to change DNS settings in Cloudflare. Obviously, I do not want to store the Cloudflare secret in the k8s-common git repo. One way to solve that is to store the secrets in Azure Key Vault, grab them from there and convert them to Kubernetes secrets. The external-dns HelmRelease can then reference the secret to override the chart's values.yaml. That does require storing a file in Key Vault, which is easy to do like so (replace the uppercase strings):
az keyvault secret set --name SECRETNAME --vault-name VAULTNAME --file ./YOURFILE.YAML
You can call the Key Vault secret whatever you want, but the Kubernetes secret's dataKey should be values.yaml for the HelmRelease to work properly.
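The reason is that valuesFrom in a HelmRelease reads the values.yaml key of the referenced secret by default. Here is a sketch of what that could look like for external-dns; the chart repository name is an assumption:

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: external-dns
  namespace: flux-system
spec:
  interval: 5m0s
  chart:
    spec:
      chart: external-dns
      sourceRef:
        kind: HelmRepository
        name: external-dns-repo # assumed HelmRepository source
  valuesFrom:
    - kind: Secret
      name: SECRET # the kubernetes secret created by akv2k8s
      valuesKey: values.yaml # the default key; listed here for clarity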
There are other ways to work with secrets in GitOps. The Flux v2 documentation mentions SealedSecrets and SOPS, and you are of course welcome to use those.
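With SOPS, for example, encrypted manifests are committed to git and the kustomize-controller decrypts them during reconciliation. A minimal sketch of a Flux Kustomization with decryption enabled, assuming the private key lives in a secret called sops-gpg; all names are illustrative:

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: k8s-secrets
  namespace: flux-system
spec:
  interval: 10m0s
  path: ./secrets
  prune: true
  sourceRef:
    kind: GitRepository
    name: k8s-bootstrap
  decryption:
    provider: sops # decrypt SOPS-encrypted manifests at reconcile time
    secretRef:
      name: sops-gpg # secret that holds the private decryption key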
Take a look at the different repos I outlined above to see the actual details. I think this approach makes deploying and bootstrapping a cluster much easier compared to using a bunch of Helm install tasks and manifest deployments in the pipeline. What do you think?