
In an earlier post, I wrote about the use of AKS Pod Identity (Preview) in combination with the Azure SDK for Python. Although that works fine, there are some issues with that solution:
- the container image is around 1GB, which is quite large (it is based on tiangolo/uvicorn-gunicorn-fastapi:python3.7)
- as expected, the image contains many vulnerabilities, as shown in the screenshot below (from Snyk)

In order to reduce the size of the image and reduce/remove the vulnerabilities, I decided to rewrite the solution in Go. Just like the Python app (with FastAPI), we will expose an HTTP endpoint that displays all resource groups in a subscription. We will use a specific pod identity that has the Contributor role at the subscription level.
If you are more into videos, there is also a video version of this post.
The code
The code is on GitHub @ https://github.com/gbaeke/go-msi in main.go. The code is kept as simple as possible. It uses the following packages:
github.com/Azure/azure-sdk-for-go/profiles/latest/resources/mgmt/resources
github.com/Azure/go-autorest/autorest/azure/auth
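For reference, the import block in main.go would then look roughly like this (a sketch; the exact list depends on what else the file uses):
import (
    "context"
    "encoding/json"
    "log"
    "net/http"
    "os"

    "github.com/Azure/azure-sdk-for-go/profiles/latest/resources/mgmt/resources"
    "github.com/Azure/go-autorest/autorest/azure/auth"
)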
The resources package is used to create a GroupsClient to work with resource groups (check the samples):
groupsClient := resources.NewGroupsClient(subID)
subID contains the subscription ID, which is retrieved via the SUBSCRIPTION_ID environment variable. The container requires that environment variable to be set.
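For illustration, retrieving subID could look like this (a sketch; main.go may handle a missing variable differently):
subID := os.Getenv("SUBSCRIPTION_ID")
if subID == "" {
    // fail fast when the required environment variable is missing
    log.Fatal("SUBSCRIPTION_ID environment variable is not set")
}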
To authenticate to Azure and obtain proper authorization, the auth package is used with the NewAuthorizerFromEnvironment() method. That method supports several authentication mechanisms, one of which is managed identities. When we run this code on AKS, the pods can use a pod identity as explained in my previous post, if the pod identity add-on is installed and configured. To obtain the authorization:
authorizer, err := auth.NewAuthorizerFromEnvironment()
authorizer is then passed to groupsClient via:
groupsClient.Authorizer = authorizer
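With basic error handling added, the two steps above could look like this sketch (the exact handling in main.go may differ):
// when no service principal variables are set, NewAuthorizerFromEnvironment
// falls back to the managed identity (MSI) endpoint
authorizer, err := auth.NewAuthorizerFromEnvironment()
if err != nil {
    log.Fatalf("Could not get authorizer: %v", err)
}
groupsClient.Authorizer = authorizer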
Now we can use groupsClient to iterate through the resource groups:
ctx := context.Background()
log.Println("Getting groups list...")
groups, err := groupsClient.ListComplete(ctx, "", nil)
if err != nil {
log.Println("Error getting groups", err)
}
log.Println("Enumerating groups...")
for groups.NotDone() {
groupList = append(groupList, *groups.Value().Name)
log.Println(*groups.Value().Name)
err := groups.NextWithContext(ctx)
if err != nil {
log.Println("error getting next group")
}
}
Note that the groups are printed and added to the groupList slice. We can now serve the groupz endpoint that lists the groups (yes, the groups are only read at startup 😀):
log.Println("Serving on 8080...")
http.HandleFunc("/groupz", groupz)
http.ListenAndServe(":8080", nil)
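The groupz handler itself is not shown here; a minimal sketch that simply returns the slice as JSON could look like this (the actual handler in main.go may differ):
// groupz returns the resource group names that were gathered at startup
func groupz(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/json")
    if err := json.NewEncoder(w).Encode(groupList); err != nil {
        http.Error(w, "could not encode groups", http.StatusInternalServerError)
    }
}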
The result of the call to /groupz is shown below:

Running the code in a container
We can now build a single statically linked executable with go build and package it in a scratch container. If you want to know if your executable is statically linked, run file on it (e.g. file myapp). The result should be like:
myapp: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped
Here is the multi-stage Dockerfile:
# argument for Go version
ARG GO_VERSION=1.14.5
# STAGE 1: building the executable
FROM golang:${GO_VERSION}-alpine AS build
# git required for go mod
RUN apk add --no-cache git
# certs
RUN apk --no-cache add ca-certificates
# Working directory will be created if it does not exist
WORKDIR /src
# We use go modules; copy go.mod and go.sum
COPY ./go.mod ./go.sum ./
RUN go mod download
# Import code
COPY ./ ./
# Build the statically linked executable
RUN CGO_ENABLED=0 go build \
    -installsuffix 'static' \
    -o /app .
# STAGE 2: build the container to run
FROM scratch AS final
# copy compiled app
COPY --from=build /app /app
# copy ca certs
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# run binary
ENTRYPOINT ["/app"]
In the above Dockerfile, it is important to add the ca certificates to the build container and later copy them to the scratch container. The code will need to connect to https://management.azure.com and requires valid root CA certificates to do so.
When you build the container with the Dockerfile, it will result in a Docker image of about 8.7MB. Snyk will not report any known vulnerabilities. Great success!
Note: the container will run as root though; bad! 😀 Nico Meisenzahl has a great post on containerizing .NET Core apps, which also shows how to configure the image to not run as root.
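As a sketch of that idea, the final stage could switch to a numeric non-root user (uid 1000 is an arbitrary choice; it has to be numeric because scratch has no user database):
# STAGE 2: build the container to run, now as non-root
FROM scratch AS final
COPY --from=build /app /app
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# numeric uid:gid, since scratch contains no /etc/passwd
USER 1000:1000
ENTRYPOINT ["/app"]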
Let’s add some YAML
The GitHub repo contains a workflow that builds and pushes a container to GitHub container registry. The most recent version at the time of this writing is 0.1.1. The YAML file to deploy this container as part of a deployment is below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mymsi-deployment
  namespace: mymsi
  labels:
    app: mymsi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mymsi
  template:
    metadata:
      labels:
        app: mymsi
        aadpodidbinding: mymsi
    spec:
      containers:
      - name: mymsi
        image: ghcr.io/gbaeke/go-msi:0.1.1
        env:
        - name: SUBSCRIPTION_ID
          value: SUBSCRIPTION ID
        - name: AZURE_CLIENT_ID
          value: APP ID OF YOUR MANAGED IDENTITY
        - name: AZURE_AD_RESOURCE
          value: "https://management.azure.com"
        ports:
        - containerPort: 8080
It’s possible to retrieve the subscription ID at runtime (as in the Python code) but I chose to just supply it via an environment variable.
For the above manifest to work, you need to have done the following (see earlier post):
- install AKS with the pod identity add-on
- create a managed identity that has the necessary Azure roles (in this case, enumerate resource groups)
- create a pod identity that references the managed identity
In this case, the created pod identity is mymsi. The aadpodidbinding label does the trick to match the identity with the pods in this deployment.
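Under the hood, the add-on represents the pod identity with AzureIdentity and AzureIdentityBinding resources, roughly like the sketch below (resourceID and clientID are placeholders for your managed identity):
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: mymsi
  namespace: mymsi
spec:
  type: 0                 # 0 = user-assigned managed identity
  resourceID: <resource ID of the managed identity>
  clientID: <client ID of the managed identity>
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: mymsi-binding
  namespace: mymsi
spec:
  azureIdentity: mymsi
  selector: mymsi         # matches the aadpodidbinding label on the pods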
Note that, although you can specify the AZURE_CLIENT_ID as shown above, this is not really required. The managed identity linked to the mymsi pod identity will be automatically matched. In any case, the logs of the nmi pod will reflect this.
In the YAML, AZURE_AD_RESOURCE is also specified. In this case, this is not required either because the default is https://management.azure.com. We need that resource to enumerate resource groups.
Conclusion
In this post, we looked at using the Azure SDK for Go together with a managed identity on AKS, via the AAD pod identity add-on. Similar to the Azure SDK for Python, the Azure SDK for Go supports managed identities natively. The differences with the Python solution are the much smaller image and the improved security. Of course, that advantage stems from the use of a language like Go in combination with the scratch image.