Quick Tip: deploying multiple Traefik ingresses

For a customer that is developing a microservices application, the proposed architecture contains two Kubernetes ingresses:

  • internal ingress: exposed via an Azure internal load balancer, deployed in a separate subnet in the customer’s VNET; no need for SSL
  • external ingress: exposed via an external load balancer; SSL via Let’s Encrypt

The internal ingress exposes API endpoints via Azure API Management and its ability to connect to internal subnets. The external ingress exposes web applications via Azure Front Door.

The Ingress Controller of choice is Traefik. We use the Helm chart to deploy Traefik in the cluster. The example below uses Azure Kubernetes Service so I will refer to Azure objects such as VNETs, subnets, etc… Let’s get started!

Internal Ingress

In values.yaml, use ingressClass to set a custom class. For example:

kubernetes:
  ingressClass: traefik-int

When you do not set this value, the default ingressClass is traefik. When you define the ingress object, you refer to this class in your manifest via the annotation below:

annotations:
  kubernetes.io/ingress.class: traefik-int
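
For example, a minimal ingress manifest that references the internal class could look like this (the service name my-internal-api and the host are just illustrations):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-internal-api
  annotations:
    kubernetes.io/ingress.class: traefik-int
spec:
  rules:
    - host: api.private.example.com
      http:
        paths:
        - path: /
          backend:
            serviceName: my-internal-api
            servicePort: 80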

When we deploy the internal ingress, we need to tell Traefik to create an internal load balancer. Optionally, you can specify a subnet to deploy to. You can add these options under the service section in values.yaml:

service:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "traefik" 

The above settings make sure that the annotations are set on the service that the Helm chart creates to expose Traefik to the “outside” world. The annotations are not Traefik specific; they are interpreted by the Azure cloud integration in Kubernetes.

Above, we want Kubernetes to deploy the Azure internal load balancer to a subnet called traefik. That subnet needs to exist in the VNET that contains the Kubernetes subnet. Make sure that the AKS service principal has the necessary access rights to deploy the load balancer in the subnet. If it takes a long time to deploy the load balancer, use kubectl get events in the namespace where you deploy Traefik (typically kube-system).
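
For example:

# look for load balancer or permission related errors
kubectl get events -n kube-system --sort-by=.metadata.creationTimestamp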

If you want to provide a static IP address to the internal load balancer, you can do so via the loadBalancerIP setting near the top of values.yaml. You can use any free address in the subnet where you deploy the load balancer.

loadBalancerIP: 172.20.3.10 

All done! You can now deploy the internal ingress with:

helm install . --name traefik-int --namespace kube-system

Note that we install the Helm chart from our local file system and that we are in the folder that contains the chart and values.yaml. Hence the dot (.) in the command.
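
To check that the internal load balancer received its front-end IP, watch the Traefik service. With release name traefik-int, the service created by the chart is typically also called traefik-int, but the exact name depends on the chart's naming:

kubectl get svc traefik-int -n kube-system -w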

TIP: if you want to use a private DNS zone to resolve the internal services, see the private DNS section in “Azure API Management and Azure Kubernetes Service” below. Private DNS zones are still in preview.

External ingress

The external ingress is simple now. Just set ingressClass to traefik-ext (or leave it at the default of traefik, although an explicit class is clearer) and remove the internal load balancer annotations. If you want a static public IP address, create that address first and specify it in values.yaml. In an Azure context, you would create a public IP object in the resource group that contains your Kubernetes nodes.
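
A quick sketch of those steps (the resource group, IP name and resulting address are illustrative; the MC_* group is the node resource group that AKS creates):

# create a static public IP in the node resource group (hypothetical names)
az network public-ip create -g MC_rg-ingress_myaks_westeurope -n traefik-ext-ip --allocation-method Static

Then, in values.yaml for the external release:

kubernetes:
  ingressClass: traefik-ext
loadBalancerIP: <the public IP address created above>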

Conclusion

If you need multiple ingresses of the same type or brand, use distinct values for ingressClass and reference the class in your ingress manifest file. Naturally, when you use two different solutions, say Kong for APIs and Traefik for web sites, you do not need to do that since they use different ingressClass values by default (kong and traefik). Hope this quick tip was useful!

Azure API Management and Azure Kubernetes Service

You have decided to host your APIs in Kubernetes in combination with an API management solution? You are surely not the only one! In an Azure context, one way of doing this is combining Azure API Management and Azure Kubernetes Service (AKS). This post describes one of the ways to get this done. We will use the following services:

  • Virtual Network: AKS will use advanced networking and Azure CNI
  • Private DNS: to host a private DNS zone (private.baeke.info); note that private DNS is in public preview
  • AKS: deployed in a subnet of the virtual network
  • Traefik: Ingress Controller deployed on AKS, configured to use an internal load balancer in a dedicated subnet of the virtual network
  • Azure API Management: with virtual network integration, which requires the Developer or Premium SKU; note that Premium comes at a hefty price though

Let’s take it step by step but note that this post does not contain all the detailed steps. I might do a video later with more details. Check the YouTube channel for more information.

We will setup something like this:

Consumer --> Azure API Management public IP --> ILB (in private VNET) --> Traefik (in Kubernetes) --> API (in Kubernetes - ClusterIP service in front of a deployment) 

Virtual Network

Create a virtual network in a resource group. We will add a private DNS zone to this network. You should not add resources such as virtual machines to this virtual network before you add the private DNS zone.

I will call my network privdns and add a few subnets (besides default):

  • aks: used by AKS
  • traefik: for the internal load balancer (ILB) and the front-end IP addresses
  • apim: to give API management access to the virtual network
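
A sketch of this layout with the Azure CLI (address ranges are illustrative and parameter names can differ slightly between CLI versions):

# virtual network with a default subnet
az network vnet create -g rg-ingress -n privdns --address-prefixes 172.20.0.0/16 \
  --subnet-name default --subnet-prefixes 172.20.0.0/24

# extra subnets for AKS, the internal load balancer and API Management
az network vnet subnet create -g rg-ingress --vnet-name privdns -n aks --address-prefixes 172.20.1.0/24
az network vnet subnet create -g rg-ingress --vnet-name privdns -n traefik --address-prefixes 172.20.3.0/24
az network vnet subnet create -g rg-ingress --vnet-name privdns -n apim --address-prefixes 172.20.4.0/24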

Private DNS

Add a private DNS zone to the virtual network with Azure CLI:

az network dns zone create -g rg-ingress -n private.baeke.info --zone-type Private --resolution-vnets privdns 

You can now add records to this private DNS zone:

az network dns record-set a add-record \
   -g rg-ingress \
   -z private.baeke.info \
   -n test \
   -a 1.1.1.1

To test name resolution, deploy a small Linux virtual machine and ping test.private.baeke.info:

Testing the private DNS zone
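
From that VM, commands like these should confirm that the record resolves (assuming the VM runs in the VNET linked to the zone):

nslookup test.private.baeke.info
ping test.private.baeke.info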

Update for June 27th, 2019: the above commands use the old API; please see https://docs.microsoft.com/en-us/azure/dns/private-dns-getstarted-cli for the new syntax to create a zone and to link it to an existing VNET; these zones should be viewable in the portal via Private DNS Zones:

Private DNS zones in the portal

Azure Kubernetes Service

Deploy AKS and use advanced networking. Use the aks subnet when asked. Each node you deploy will get 30 IP addresses in the subnet:

First IP addresses of one of the nodes
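
For reference, a deployment sketch with the Azure CLI (cluster name, node count and the subnet resource ID are illustrative):

az aks create -g rg-ingress -n myaks \
  --network-plugin azure \
  --vnet-subnet-id "/subscriptions/<sub>/resourceGroups/rg-ingress/providers/Microsoft.Network/virtualNetworks/privdns/subnets/aks" \
  --node-count 2 \
  --generate-ssh-keys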

Traefik

To expose the APIs over an internal IP we will use ingress objects, which require an Ingress Controller. Traefik is just one of the choices available. Any Ingress Controller will work.

Instead of using ingresses, you could also expose your APIs via services of type LoadBalancer and use an internal load balancer. The latter approach requires one IP per API, whereas the ingress approach only requires one IP in total. That IP resolves to Traefik, which uses the host header to route to the APIs.

We will install Traefik with Helm. Check my previous post for more info about Traefik and Helm. In this case, I will download and untar the Helm chart and modify values.yaml. To download and untar the Helm chart use the following command:

helm fetch stable/traefik --untar

You will now have a traefik folder, which contains values.yaml. Modify values.yaml as follows:

Changes to values.yaml
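
The screenshot shows the relevant changes; they boil down to the same kind of service annotations used in the previous post, roughly:

service:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "traefik"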

This will instruct Helm to add the above annotations to the Traefik service object. They instruct the Azure cloud integration components to use an internal load balancer and to create it in the traefik subnet. Make sure that your AKS service principal has the necessary RBAC permissions on the virtual network to perform this operation.
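
A sketch of such a role assignment (the service principal ID and scope are placeholders; Network Contributor is one built-in role that grants the required permissions):

az role assignment create \
  --assignee <appId of the AKS service principal> \
  --role "Network Contributor" \
  --scope "/subscriptions/<sub>/resourceGroups/rg-ingress/providers/Microsoft.Network/virtualNetworks/privdns"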

Now you can install Traefik on AKS. Make sure you are in the traefik folder where the Helm chart was untarred:

helm install . --name traefik --set serviceType=LoadBalancer,rbac.enabled=true,dashboard.enabled=true --namespace kube-system

When the installation is finished, there should be an internal load balancer in the resource group that is behind your AKS cluster:

ILB deployed

Running kubectl get svc -n kube-system should return something like:

EXTERNAL-IP is the front-end IP on the ILB for the traefik service

We can now reach Traefik on the virtual network and create an A record that resolves to this IP. The name func.private.baeke.info, which I will use later, resolves to the above IP.
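
With the same (pre-update) CLI syntax shown earlier, that record can be created like this (substitute the front-end IP of your own ILB):

az network dns record-set a add-record \
   -g rg-ingress \
   -z private.baeke.info \
   -n func \
   -a <ILB front-end IP>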

Azure API Management

Deploy API Management from the portal. API Management will need access to the virtual network which means we need a version (SKU) that has virtual network support. This is needed simply because the APIs are not exposed on the public Internet.

For testing, use the Developer SKU. In production, you should use the Premium SKU although it is very expensive. Microsoft should really make the virtual network integration part of every SKU since it is such a common scenario! Come on Microsoft, you know it’s the right thing to do! 😉

API Management virtual network integration

Above, API Management is configured to use the apim subnet of the virtual network. It will also be able to resolve private DNS names via this integration. Note that configuring the network integration takes quite some time.

Deploy a service and ingress

I deployed the following sample API with a simple deployment and service. Save this as func.yaml and run kubectl apply -f func.yaml. You will end up with two pods running a super simple and stupid API plus a service object of type ClusterIP, which is only reachable inside Kubernetes:

apiVersion: v1
kind: Service
metadata:
  name: func
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: func
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: func
spec:
  replicas: 2
  selector:
    matchLabels:
      app: func
  template:
    metadata:
      labels:
        app: func
    spec:
      containers:
      - name: func
        image: gbaeke/ingfunc
        ports:
        - containerPort: 80
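
If you want to verify the service before adding the ingress, a quick port-forward works (illustrative):

kubectl port-forward svc/func 8080:80
# in a second terminal
curl http://localhost:8080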

Next, deploy an ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: func
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: func.private.baeke.info
      http:
        paths:
        - path: /
          backend:
            serviceName: func
            servicePort: 80

Notice I used func.private.baeke.info! Naturally, that name should resolve to the IP address on the ILB that routes to Traefik.

Testing the API from API Management

In API Management, I created an API that uses func.private.baeke.info as the backend. Yes, I know, the API name is bad. It’s just a sample ok? 😎

API with backend func.private.baeke.info

Let’s test the GET operation I created:

Great success! API management can reach the Kubernetes-hosted API via Traefik

Conclusion

In this post, we looked at one way to expose Kubernetes-hosted APIs to the outside world via Azure API Management. The traffic flow is as follows:

Consumer --> Azure API Management public IP --> ILB (in private VNET) --> Traefik (in Kubernetes) --> API (in Kubernetes - ClusterIP service in front of a deployment)

Because we have to use host names in ingress definitions, we added a private DNS zone to the virtual network. We can create multiple A records, one for each API, and provide access to these APIs with ingress objects.

As stated above, you can also expose each API via an internal load balancer. In that case, you do not need an Ingress Controller such as Traefik. Alternatively, you could also replace Azure API Management with a solution such as Kong. I have used Kong in the past and it is quite good! The choice for one or the other will depend on several factors such as cost, features, ease of use, support, etc…

Quick overview of Traefik Ingress Controller Installation

This post is mainly a note to self 📝📝📝 that describes a quick way to deploy a Kubernetes Ingress Controller with Traefik.

There is also a video version:

We will install Traefik with Helm and I assume the cluster has RBAC enabled. If you deploy clusters with AKS, that is the default, although you can turn it off. With RBAC enabled, you need to install the server-side component of Helm, tiller, using the following commands:

kubectl apply -f tiller-rbac.yaml
helm init --service-account tiller

The file tiller-rbac.yaml should contain the following:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system 

Note that you create an account that has cluster-wide admin privileges. That’s guaranteed to work but might not be what you want.

Next, install the Traefik Ingress Controller with the following Helm one-liner:

helm install stable/traefik --name traefik --set serviceType=LoadBalancer,rbac.enabled=true,ssl.enabled=true,ssl.enforced=true,acme.enabled=true,acme.email=email@domain.com,onHostRule=true,acme.challengeType=tls-alpn-01,acme.staging=false,dashboard.enabled=true --namespace kube-system 

The above command uses Helm to install the stable/traefik chart. Note that the chart is maintained by the community and not by the folks at Traefik. Traefik itself is exposed via a service of type LoadBalancer, which results in a public IP address. Use kubectl get svc traefik -n kube-system to check. There are ways to make sure the service uses a static IP but that is not discussed in this post. Check out this doc for AKS. The other settings do the following:

  • ssl.enabled: yes, SSL 😉
  • ssl.enforced: redirect to https when user uses http
  • acme.enabled: enable Let’s Encrypt
  • acme.email: set the e-mail address to use with Let’s Encrypt; you will get certificate expiry mails on that address
  • onHostRule: issue certificates based on the host setting in the ingress definition
  • acme.challengeType: method used by Let’s Encrypt to issue the certificate; use this one for regular certs; use DNS verification for wildcard certs
  • acme.staging: set to false to issue fully trusted certs; beware of rate limiting
  • dashboard.enabled: enable the Traefik dashboard; you can expose the service via an ingress object as well

Note: to pin a specific version of Traefik, use the imageTag parameter as part of --set; for instance imageTag=1.7.12

When the installation is finished, run the following commands:

# check installation
helm ls

# check traefik service
kubectl get svc traefik --namespace kube-system -w

The first command should show that Traefik is installed. The second command returns the traefik service, which we configured with serviceType LoadBalancer. The external IP of the service will be pending for a while. When you have an address and you browse it, you should get a 404. Result from curl -v below:

* Rebuilt URL to: http://IP/
*   Trying 137.117.140.116…
* Connected to 137.117.140.116 (IP) port 80 (#0)
> GET / HTTP/1.1
> Host: IP
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Content-Type: text/plain; charset=utf-8
< Vary: Accept-Encoding
< X-Content-Type-Options: nosniff
< Date: Fri, 24 May 2019 17:00:29 GMT
< Content-Length: 19
<
404 page not found

Next, install nginx just to have a simple website to securely publish. Yes I know, kubectl run… 🤷

kubectl run nginx --image nginx --expose --port 80

The above command installs nginx but also creates an nginx service of type ClusterIP. We can expose that service via an ingress definition:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: your.domain.com
      http:
        paths:
        - path: /
          backend:
            serviceName: nginx
            servicePort: 80

Replace your.domain.com with a host that resolves to the external IP address of the Traefik service. The annotation is not technically required if Traefik is the only Ingress Controller in your cluster. I prefer being explicit though. Save the above contents to a file and then run:

kubectl apply -f yourfile.yaml

Now browse to whatever you used as domain. The result should be:

Yes… nginx exposed via Traefik and a Let’s Encrypt certificate

To expose the Traefik dashboard, use the yaml below. Note that we explicitly installed the dashboard by setting dashboard.enabled to true.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefikdb
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: yourother.domain.com
      http:
        paths:
        - path: /
          backend:
            serviceName: traefik-dashboard
            servicePort: 80

Put the above contents in a file and create the ingress object in the same namespace as the traefik-dashboard service. Use kubectl apply -f yourfile.yaml -n kube-system. You should then be able to access the dashboard with the host name you provided:

Traefik dashboard

Note: if you do not want to mess with DNS records that map to the IP address of the Ingress Controller, just use a xip.io address. In the ingress object’s host setting, use something like web.w.x.y.z.xip.io where web is just something you choose and w.x.y.z is the IP address of the Ingress Controller. Traefik will also request a certificate for such a name. For more information, check xip.io. Simple for testing purposes!

Hope it helps!