A while ago, I gave linkerd a spin. Due to vacations and a busy schedule, I was not able to write about my experience until now. I will briefly discuss how to set up linkerd and then deploy a sample service to illustrate what it can do out of the box. Let’s go!
Wait! What is linkerd?
linkerd is essentially a network proxy for your Kubernetes pods, designed to be deployed as a service mesh. When the pods you care about have been meshed with linkerd, you automatically get metrics such as latency and requests per second, a web portal to check those metrics, live inspection of traffic and much more. Below is an example of a Kubernetes namespace that has been meshed:

Installation
I can be very brief about this: installation is about as simple as it gets. Simply navigate to https://linkerd.io/2/getting-started to get started. Here are the simplified steps:
- Download the linkerd executable as described in the Getting Started guide; I used WSL for this
- Create a Kubernetes cluster with AKS (or another provider); for AKS, use the Azure CLI to get your credentials (az aks get-credentials); make sure the Azure CLI is installed in WSL and that you connected to your Azure subscription with az login
- Make sure you can connect to your cluster with kubectl
- Run linkerd check --pre to check whether the prerequisites are fulfilled
- Install linkerd with linkerd install | kubectl apply -f -
- Check the installation with linkerd check (the full command sequence is repeated below)
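For convenience, here is the whole sequence in one place. Treat it as a sketch: the resource group and cluster names are placeholders you should replace with your own values.

az login
az aks get-credentials -g <resource-group> -n <cluster-name>
kubectl get nodes
linkerd check --pre
linkerd install | kubectl apply -f -
linkerd check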
The last step will nicely show its progress and end when the installation is complete:

Exploring linkerd with the dashboard
linkerd automatically installs a dashboard. The dashboard is exposed as a Kubernetes service called linkerd-web. The service is of type ClusterIP. Although you could expose the service using an ingress, you can easily tunnel to the service with the following linkerd command (first line is the command; other lines are the output):
linkerd dashboard
Linkerd dashboard available at:
http://127.0.0.1:50750
Grafana dashboard available at:
http://127.0.0.1:50750/grafana
Opening Linkerd dashboard in the default browser
Failed to open Linkerd dashboard automatically
Visit http://127.0.0.1:50750 in your browser to view the dashboard
From WSL, the dashboard cannot be opened automatically, but you can browse to it manually. Note that linkerd also installs Prometheus and Grafana.
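If you prefer not to keep the linkerd dashboard command running, a plain kubectl port-forward to the linkerd-web service should also work. The sketch below assumes the default linkerd namespace and the service’s default port 8084; after running it, browse to http://127.0.0.1:8084.

kubectl port-forward -n linkerd svc/linkerd-web 8084:8084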
Out of the box, the linkerd deployment is meshed:

Adding linkerd to your own service
In this section, we will deploy a simple service that can add numbers and add linkerd to it. Although there are many ways to do this, I chose to create a separate namespace and enable auto-injection via an annotation. Here’s the yaml to create the namespace (add-ns.yaml):
apiVersion: v1
kind: Namespace
metadata:
  name: add
  annotations:
    linkerd.io/inject: enabled
Just run kubectl create -f add-ns.yaml to create the namespace. The annotation ensures that every pod added to the namespace gets the linkerd proxy injected. All traffic to and from such a pod then passes through the proxy.
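If you prefer not to annotate the whole namespace, the linkerd CLI can also inject the proxy into specific manifests by piping them through linkerd inject. A quick sketch (your-deployment.yaml is a placeholder for whatever manifest you want to mesh):

cat your-deployment.yaml | linkerd inject - | kubectl apply -f -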
Now, let’s install the add service and deployment:
apiVersion: v1
kind: Service
metadata:
  name: add-svc
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 8000
  - port: 8080
    name: grpc
    protocol: TCP
    targetPort: 8080
  selector:
    app: add
    version: v1
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: add
spec:
  replicas: 2
  selector:
    matchLabels:
      app: add
  template:
    metadata:
      labels:
        app: add
        version: v1
    spec:
      containers:
      - name: add
        image: gbaeke/adder
The deployment creates two pods with the gbaeke/adder image. Save the above to a file (add.yaml) and deploy it with the following command:
kubectl create -f add.yaml -n add
Because the deployment uses the add namespace, the linkerd proxy will be added to each pod automatically. When you list the pods in the deployment, you see:

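The listing itself is just a regular kubectl call; because each pod now contains both the application container and the linkerd proxy, the READY column should show 2/2:

kubectl get po -n add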
To see more details about one of these pods, I can use the following command:
k get po add-5b48fcc894-2dc97 -o yaml -n add
You will clearly see the two containers in the output:

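If you only want to confirm that the proxy was injected, you do not need the full YAML; printing the container names is enough. A small sketch (the pod name is the one from my cluster, yours will differ); you should see linkerd-proxy listed next to the add container:

kubectl get po add-5b48fcc894-2dc97 -n add -o jsonpath='{.spec.containers[*].name}'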
Generating some traffic
Let’s deploy a client that continuously uses the calculator service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: add-cli
spec:
  replicas: 1
  selector:
    matchLabels:
      app: add-cli
  template:
    metadata:
      labels:
        app: add-cli
    spec:
      containers:
      - name: add-cli
        image: gbaeke/adder-cli
        env:
        - name: SERVER
          value: "add-svc"
Save the above to add-cli.yaml and deploy it with the following command:
kubectl create -f add-cli.yaml -n add
The deployment uses another image called gbaeke/adder-cli that continuously makes requests to the server specified in the SERVER environment variable.
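Besides the portal, the linkerd CLI can show the same golden metrics from the command line. A quick sketch that lists success rate, requests per second and latency percentiles for the deployments in the add namespace:

linkerd stat deploy -n add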
Checking the deployment in the linkerd portal
When you now open the add namespace in the linkerd portal, you should see something similar to the screenshot below (note: I deployed 5 servers and 5 clients):

The linkerd proxy in each pod sees all traffic. From that traffic, linkerd can infer that the add-cli deployment talks to the add deployment. The add deployment receives about 150 requests per second. The 99th percentile latency is relatively high because the cluster nodes are very small, I deployed extra instances, and the client is relatively inefficient.
When I click the deployment called add, the following screen is shown:

The page clearly shows where traffic is coming from, plus relevant metrics such as RPS and P99 latency. You also get a live view of the calls. Note that the client uses gRPC, which is carried over HTTP/2 as POST requests. When you scroll down on this page, you get more information about the caller and a view of the individual pods:

To see live calls in more detail, you can click the Tap icon:

For each call, details can be requested:

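The same live view is available from the CLI. A sketch that taps all requests flowing to the add deployment:

linkerd tap deploy/add -n add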
Conclusion
This was just a brief look at linkerd. It is trivially easy to install and, with auto-injection, very simple to add to your own services. I highly recommend giving it a spin to see where it can add value to your projects!