A while ago, the Azure DevOps blog posted an update about multi-stage YAML pipelines. The concept is straightforward: define both your build (CI) and release (CD) pipelines in a YAML file and stick that file in your source code repository.
In this post, we will look at a simple build and release pipeline that builds a container, pushes it to ACR and deploys it to Kubernetes, with the deployment linked to an environment. Something like this:

Note: I used a simple Go app, a Dockerfile and a Kubernetes manifest as source files; check them out here.
Note: there is also a video version 😉
Note: if you start from a repository without manifests and an azure-pipelines.yaml, the pipeline build wizard will propose Deploy to Azure Kubernetes Service. The wizard asks you a few questions, but in the end you will have a configured environment, the necessary service connections to AKS and ACR, and even a service.yaml and deployment.yaml with the bare minimum to deploy your container!
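To give you an idea, the generated service.yaml is a bare-bones Kubernetes Service along the lines of the sketch below; the exact contents depend on your answers in the wizard, so the name and ports here are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: gosample          # assumed name; the wizard derives it from your project
spec:
  type: LoadBalancer      # exposes the deployment through an Azure load balancer
  ports:
  - port: 80              # external port (assumption)
    targetPort: 8080      # container port (assumption)
  selector:
    app: gosample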
“Show me the YAML!!!”
The file azure-pipelines.yaml contains the two stages. Check out the first stage (plus trigger and variables) below:
trigger:
- master

variables:
  imageName: 'gosample'
  registry: 'REGNAME.azurecr.io'

stages:
- stage: build
  jobs:
  - job: 'BuildAndPush'
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: 'ACR'
        repository: '$(imageName)'
        command: 'buildAndPush'
        Dockerfile: '**/Dockerfile'
    - task: PublishPipelineArtifact@0
      inputs:
        artifactName: 'manifests'
        targetPath: 'manifests'
The pipeline runs on a commit to the master branch. The variables imageName and registry are referenced later using $(imageName) and $(registry). Replace REGNAME with the name of your Azure Container Registry.
It’s a multi-stage pipeline, so we start with stages: and then define the first stage, build. That stage has one job that consists of two steps:
- Docker task (v2): builds a Docker image based on the Dockerfile in the source code repository and pushes it to the container registry; ACR here is not the registry itself but a reference to a service connection defined in the project settings
- PublishPipelineArtifact: the source code repository contains Kubernetes deployment manifests in YAML format in the manifests folder; the contents of that folder are published as a pipeline artifact, to be picked up in a later stage (see the sketch after this list)
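A minimal deploy.yaml in that manifests folder could look like the sketch below. This is an assumption of what such a manifest contains, not the exact file from the repository; note the image reference without a tag, which matters in the deploy stage:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gosample
  namespace: gosample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gosample
  template:
    metadata:
      labels:
        app: gosample
    spec:
      containers:
      - name: gosample
        image: REGNAME.azurecr.io/gosample   # no tag; the deploy stage appends the build id
        ports:
        - containerPort: 8080                # assumed port for the Go app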
Now let’s look at the deployment stage:
- stage: deploy
  jobs:
  - deployment: 'DeployToK8S'
    pool:
      vmImage: 'ubuntu-latest'
    environment: dev
    strategy:
      runOnce:
        deploy:
          steps:
          - task: DownloadPipelineArtifact@1
            inputs:
              buildType: 'current'
              artifactName: 'manifests'
              targetPath: '$(System.ArtifactsDirectory)/manifests'
          - task: KubernetesManifest@0
            inputs:
              action: 'deploy'
              kubernetesServiceConnection: 'dev-kub-gosample-1558821689026'
              namespace: 'gosample'
              manifests: '$(System.ArtifactsDirectory)/manifests/deploy.yaml'
              containers: '$(registry)/$(imageName):$(Build.BuildId)'
The second stage uses a deployment job (quite new; see this). In a deployment job, you can specify an environment to link to. In the above job, the environment is called dev. In Azure DevOps, the environment is shown as below:

The environment functionality has Kubernetes integration which is pretty neat. You can drill down to the deployed objects such as deployments and services:

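By the way, a deployment job can also target a specific Kubernetes resource inside the environment, which scopes the job to that resource's namespace. A sketch, assuming a resource called gosample was added to the dev environment:

- deployment: 'DeployToK8S'
  environment: 'dev.gosample'   # environmentName.resourceName; gosample is a hypothetical resource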
The deployment has two tasks:
- DownloadPipelineArtifact: downloads the artifact published in the first stage to $(System.ArtifactsDirectory)/manifests
- KubernetesManifest: this task can deploy Kubernetes manifests; it uses an AKS service connection that was created together with the environment; behind the scenes, a service account was created in a specific namespace, with access rights to that namespace only; the manifests property points at the Kubernetes YAML files to deploy, and the containers property tells the task which fully qualified image to substitute for matching image names in those files, with the build id as the tag (see the illustration below)
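As an illustration of the substitution, assuming the deploy.yaml sketched earlier and a hypothetical build id of 1234, the task rewrites the image reference like this:

# in manifests/deploy.yaml, before substitution
image: REGNAME.azurecr.io/gosample
# after substitution by the KubernetesManifest task (1234 is a made-up build id)
image: REGNAME.azurecr.io/gosample:1234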
Note that the release stage will actually download the pipeline artifact automatically. The explicit DownloadPipelineArtifact task gives additional control over the download location.
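If you prefer to rely on the automatic download, you can simply drop the explicit task; conversely, you can opt out of the automatic behavior with a download step. A sketch of the latter:

strategy:
  runOnce:
    deploy:
      steps:
      - download: none   # skip the automatic pipeline artifact download for this deployment job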
The KubernetesManifest task is relatively new at the time of this writing (end of May 2019). Its image substitution functionality could be enough in many cases, without having to resort to Helm or manual text substitution tasks. There is more to this task than what I have described here. Check out the docs for more info.
Conclusion
If you are just starting out building CI/CD pipelines in YAML, you will probably have a hard time getting used to the schema. I know I had! 😡 In the end though, doing it this way, with the pipeline stored in source control, will pay off in the long run. After some time, you will have built up a useful library of these pipelines to quickly get up and running in new projects. Recommended!!! 😉🚀🚀🚀