I recently uploaded a video to my YouTube channel about this topic:
In this post, I will provide some more information about the pipelines. Again, many thanks to this post on which the solution is based.
The YAML pipelines can be found in my go-template repository. The application is basically a starter template to create a Go web app or API with full configuration, zap logging, OpenAPI spec and more. The Azure DevOps pipelines are in the azdo folder.
The big picture

The pipelines are designed to deploy to a qa environment and subsequently to production after an approval is given. The ci pipeline builds a container image and a Helm chart and stores both in Azure Container Registry (ACR). When that is finished, a pipeline artifact is stored that contains the image tag and chart version in a JSON file.
The cd pipeline triggers on the ci pipeline artifact and deploys to qa and production. It waits for approval before deployment to production. It uses environments to achieve that.
CI pipeline
In the “ci” pipeline, the following steps are taken:
- Retrieve the git commit SHA with $(build.SourceVersion) and store it in a variable called imageTag. To version the images, we simply use the git commit SHA, which is a valid approach. IMHO, you do not need semantic versioning tags for pipelines that deploy often.
- Build the container image. Note that the Dockerfile is a two-stage build and that go test runs in the first stage. Unit tests are not run outside the image build, but you could of course do that as well to fail faster when there is an issue.
- Scan the image for vulnerabilities with Snyk. This step is just for reference because Snyk will not find issues with the image as it is based on the scratch image.
- Push the container image to Azure Container Registry (ACR). Pipeline variables $(registryLogin) and $(registryPassword) are used with docker login instead of the Azure DevOps Docker task (see the sketch after this list).
- Run helm lint to check the chart in /charts/go-template
- Run helm package to package the chart (this is not required before pushing the chart to ACR; it is just an example)
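Stripped of the Snyk scan and the surrounding task plumbing, the build-and-push part of ci-steps.yaml comes down to commands like these (a sketch, using the pipeline variables defined in ci-vars.yaml; the exact layout of the steps in the repo may differ):
# build the image; the two-stage Dockerfile runs go test in its first stage
docker build -t $(registryServerName)/$(imageName):$(imageTag) .

# log in and push with the pipeline secrets instead of the Docker task
docker login $(registryServerName) -u $(registryLogin) -p $(registryPassword)
docker push $(registryServerName)/$(imageName):$(imageTag)

# lint and (optionally) package the chart
helm lint charts/go-template
helm package charts/go-template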
When the above steps have finished, we are ready to push the chart to ACR. It is important to realize that storing charts in OCI-compliant registries is an experimental feature of Helm. You need to turn on this feature with:
export HELM_EXPERIMENTAL_OCI=1
After turning on this support, we can login to ACR and push the chart. These are the steps:
- Use helm registry login and use the same login and password as with docker login
- Save the chart from the checked-out sources (/charts/go-template) locally with helm chart save. This is similar to building and storing a container image locally, as you also use the full name of the chart, for example myacr.azurecr.io/helm/go-template:0.0.1. In our pipeline, the command below is used:
chartVersion=`helm chart save charts/go-template $(registryServerName)/helm/$(projectName) | grep version | awk -F ': ' '{print $2}'`
- Above, we run the helm chart save command, but we also want to retrieve the version of the chart. That version comes from /charts/go-template/Chart.yaml and is echoed by helm chart save as a version: line. With grep and awk, we grab the version and store it in the chartVersion variable. This is a “shell variable”, not a pipeline variable.
- With the chart saved locally, we can now push the chart to ACR with:
helm chart push $(registryServerName)/helm/$(projectName):$chartVersion
- Now we just need to save the chart version and the container image tag as a pipeline artifact. We can save these two values to a JSON file (an example of the result follows this list) with:
echo $(jq -n --arg chartVersion "$chartVersion" --arg imgVersion "$(imageTag)" '{chartVersion: $chartVersion, imgVersion: $imgVersion}') > $(build.artifactStagingDirectory)/variables.json
- As a last step, we publish the pipeline artifact
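For reference, the published variables.json then looks something like this (the chart version is just an example; imgVersion is the full git commit SHA):
{
  "chartVersion": "0.0.1",
  "imgVersion": "<git commit SHA>"
}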
Do you have to do it this way? Of course not; there are many alternatives. For instance, because OCI support is experimental in Helm and storing charts in ACR is in preview, you might want to install your chart directly from your source files. In that case, you can just build the container image and push it to ACR. The deployment pipeline can then check out the sources and use /charts/go-template as the source for the helm install or helm upgrade command. The deployment pipeline could be triggered on the image push event.
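In that alternative setup, the deployment job checks out the repository and installs straight from the chart folder; a minimal sketch, assuming the same release name and value overrides used later in this post:
# install or upgrade directly from the checked-out chart sources
helm upgrade \
  --namespace $(k8sNamespace) \
  --create-namespace \
  --install \
  --wait \
  --set image.repository=$(registryServerName)/$(projectName) \
  --set image.tag=$(imageTag) \
  $(projectName) \
  ./charts/go-template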
Note that the pipeline uses templates for both the variables and the steps. The entire pipeline is the three files below:
- azdo/ci.yaml
- azdo/common/ci-vars.yaml
- azdo/common/ci-steps.yaml
The ci-vars template defines and accepts a parameter called projectName which is go-template in my case. To call the template and set the parameter:
variables:
- template: ./common/ci-vars.yaml
  parameters:
    projectName: go-template
To use the parameter in ci-vars.yaml:
parameters:
  projectName: ''

variables:
  helmVersion: 3.4.1
  registryServerName: '$(registryName).azurecr.io'
  projectName: ${{ parameters.projectName }}
  imageName: ${{ parameters.projectName }}
CD pipeline
Now that we have both the chart and the container image in ACR, we can start our deployment. The screenshot below shows the repositories in ACR:

The deployment pipeline is defined in cd.yaml and uses cd-vars.yaml and cd-steps.yaml as templates. It pays off to use a template here because we execute the same steps in each environment.
The deployment pipeline triggers on the pipeline artifact from ci, by using resources as below:
resources:
  pipelines:
    - pipeline: ci
      source: ci
      trigger:
        enabled: true
        branches:
          include:
            - main
When the pipeline is triggered, the stages can be started, beginning with the qa stage:
- stage: qa
  displayName: qa
  jobs:
    - deployment: qa
      displayName: 'deploy helm chart on AKS qa'
      pool:
        vmImage: ubuntu-latest
      variables:
        k8sNamespace: $(projectName)-qa
        replicas: 1
      environment: qa-$(projectName)
      strategy:
        runOnce:
          deploy:
            steps:
              - template: ./common/cd-steps.yaml
This pipeline deploys both qa and production to the same cluster but uses different namespaces. The namespace is defined in the deployment job’s variables, next to a replicas variable. Note that we are using an environment here. We’ll come back to that.
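The production stage is not shown above, but it is almost identical; a sketch, assuming a prod environment name, its own namespace and a higher replica count:
- stage: prod
  displayName: prod
  dependsOn: qa
  jobs:
    - deployment: prod
      displayName: 'deploy helm chart on AKS prod'
      pool:
        vmImage: ubuntu-latest
      variables:
        k8sNamespace: $(projectName)-prod
        replicas: 2
      environment: prod-$(projectName)
      strategy:
        runOnce:
          deploy:
            steps:
              - template: ./common/cd-steps.yaml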
The actual magic (well, sort of…) happens in cd-steps.yaml:
- Do not checkout the source files; we do not need them
- Install helm with the HelmInstaller task
- Download the pipeline artifact published by the ci pipeline (a sketch of these steps is shown below)
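That first part of cd-steps.yaml corresponds roughly to the following steps (a sketch; the HelmInstaller task version and the helmVersion variable are assumptions):
steps:
  - checkout: none

  - task: HelmInstaller@1
    inputs:
      helmVersionToInstall: $(helmVersion)

  # download the artifact published by the ci pipeline resource
  - download: ci
    artifact: build-artifact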
After the download of the pipeline artifact, there is one final bash script that logs on to Kubernetes and deploys the chart:
- Use az login to login with Azure CLI. You can also use an AzureCLI task with a service connection to authenticate. I often just use bash but that is personal preference.
- az login uses a service principal; the ID and secret of the service principal are stored as pipeline secrets
- In my case, the service principal is a member of the group that is configured as the admin group for managed AAD integration with AKS. As such, the account has full access to the AKS cluster, which also means I can obtain a kube config with --admin in az aks get-credentials without any issue
- If you want to use a custom RBAC role for the service principal and an account that cannot use --admin, you will need kubelogin to obtain the AAD tokens and modify your kube config; see the comments in the bash script for more information. A sketch of the admin-based login is shown below.
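A minimal sketch of that admin-based login, assuming the service principal ID, secret, tenant, cluster and resource group are available as pipeline variables/secrets (the variable names below are illustrative):
# sign in with the service principal stored in pipeline secrets
az login --service-principal -u $(spId) -p $(spSecret) --tenant $(tenantId)

# grab an admin kube config for the AKS cluster
az aks get-credentials -n $(clusterName) -g $(resourceGroup) --admin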
Phew, with the login out of the way, we can grab the Helm chart and install it:
- Use export HELM_EXPERIMENTAL_OCI=1 to turn on the experimental support
- Login to ACR with helm registry login
- Grab the chart version and image version from the pipeline artifact:
chartVersion=$(jq .chartVersion $(pipeline.workspace)/ci/build-artifact/variables.json -r)
imgVersion=$(jq .imgVersion $(pipeline.workspace)/ci/build-artifact/variables.json -r)
- Pull the chart with:
helm chart pull $(registryServerName)/helm/$(projectName):$chartVersion
- Export and install the chart:
# export the chart to ./$(projectName)
helm chart export $(registryServerName)/helm/$(projectName):$chartVersion

# helm upgrade with fallback to install
helm upgrade \
  --namespace $(k8sNamespace) \
  --create-namespace \
  --install \
  --wait \
  --set image.repository=$(registryServerName)/$(projectName) \
  --set image.tag=$imgVersion \
  --set replicaCount=$(replicas) \
  $(projectName) \
  ./$(projectName)
Of course, to install the chart we use helm upgrade, falling back to an installation if this is the first time we run the command (--install). Note that we have to set some parameters at install time (the chart’s default values are sketched after this list), such as:
- image.repository: in the values.yaml file, the image refers to ghcr.io; we need to change this to myacr.azurecr.io/go-template
- image.tag: set this to the git commit SHA we grabbed from variables.json
- replicaCount: set this to the stage variable replicas
- namespace: set this to the stage variable k8sNamespace and use --create-namespace to create it if it does not exist; in many environments this will not work, because namespaces are created by other teams with network policies, budgets, RBAC, etc.
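For context, the relevant defaults in the chart’s values.yaml look roughly like this (a sketch; the repository owner is a placeholder and the actual file contains more settings):
replicaCount: 1

image:
  # the default points at GitHub Container Registry and is overridden at deploy time
  repository: ghcr.io/<owner>/go-template
  pullPolicy: IfNotPresent
  tag: ""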
Environments
As discussed earlier, the stages use environments. This shows up in Azure DevOps as follows:

You can track the deployments per environment:

And of course, you can set approvals and checks on an environment:

When you deploy, you will need to approve manually to deploy to production. You can do that from the screen that shows the stages of the pipeline run:

Note that you do not have to create environments before you use them in a pipeline. They will be dynamically created by the pipeline. Usually though, they are created in advance with the appropriate settings such as approvals and checks.
You can also add resources to the environment such as your Kubernetes cluster. This gives you a view on Kubernetes, directly from Azure DevOps. However, if you deploy a private cluster, as many enterprises do, that will not work. Azure DevOps needs line of sight to the API server to show the resources properly.
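If you do add the cluster and namespace as a resource, the deployment job can target it explicitly; a sketch, assuming an environment called qa-go-template with a Kubernetes resource named after the namespace:
environment:
  name: qa-go-template
  resourceType: Kubernetes
  resourceName: go-template-qa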
Summary
What can I say? 😀 I hope that this post, the video and the sample project and pipelines can get you started with deployments to Kubernetes using Helm. If you have questions, feel free to drop them in the comments.