The basics of meshing Traefik 2.0 with Linkerd

A while ago, I blogged about Linkerd 2.x. In that post, I used a simple calculator API, reachable via an Azure Load Balancer. When you look at that traffic in Linkerd, you see the following:

Incoming load balancer traffic to a meshed deployment (in this case Traefik 2.0)

In the view above, you cannot tell that this is Azure Load Balancer traffic. The traffic reaches the meshed service via the Azure CNI pods.

In this post, we will install Traefik 2.0, mesh the Traefik deployment and make the calculator service reachable via Traefik and the new IngressRoute. Let’s get started!

Install Traefik 2.0

We will install Traefik 2.0 with http support only. There’s an excellent blog that covers the installation over here. In short, you do the following:

  • deploy prerequisites such as custom resource definitions (CRDs), ClusterRole, ClusterRoleBinding, ServiceAccount
  • deploy Traefik 2.0: it’s just a Kubernetes deployment
  • deploy a service to expose the Traefik HTTP endpoint via a Load Balancer; I used an Azure Load Balancer automatically deployed via Azure Kubernetes Service (AKS)
  • deploy a service to expose the Traefik admin endpoint via an IngressRoute

Here are the prerequisites for easy copy and pasting:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutes.traefik.containo.us

spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRoute
    plural: ingressroutes
    singular: ingressroute
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutetcps.traefik.containo.us

spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRouteTCP
    plural: ingressroutetcps
    singular: ingressroutetcp
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: middlewares.traefik.containo.us

spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: Middleware
    plural: middlewares
    singular: middleware
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tlsoptions.traefik.containo.us

spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TLSOption
    plural: tlsoptions
    singular: tlsoption
  scope: Namespaced

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller

rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - traefik.containo.us
    resources:
      - middlewares
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - traefik.containo.us
    resources:
      - ingressroutes
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - traefik.containo.us
    resources:
      - ingressroutetcps
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - traefik.containo.us
    resources:
      - tlsoptions
    verbs:
      - get
      - list
      - watch

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller

roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: default

---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: traefik-ingress-controller

Save this to a file and then use kubectl apply -f filename.yaml. Here’s the deployment:

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  namespace: default
  name: traefik
  labels:
    app: traefik

spec:
  replicas: 2
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      containers:
        - name: traefik
          image: traefik:v2.0
          args:
            - --api
            - --accesslog
            - --entrypoints.web.Address=:8000
            - --entrypoints.web.forwardedheaders.insecure=true
            - --providers.kubernetescrd
            - --ping
            - --accesslog=true
            - --log=true
          ports:
            - name: web
              containerPort: 8000
            - name: admin
              containerPort: 8080

Here’s the service to expose Traefik’s web endpoint. This is different from the post I referred to because that post used DigitalOcean. I am using Azure here.

apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      name: web
      port: 80
      targetPort: 8000
  selector:
    app: traefik

The above service definition will give you a public IP. Traffic destined to port 80 on that IP goes to the Traefik pods on port 8000.
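To retrieve that public IP once Azure has provisioned the load balancer, a command such as the one below can be used (a minimal example; the service name matches the definition above):

kubectl get svc traefik -o jsonpath='{.status.loadBalancer.ingress[0].ip}'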

Now we can expose the Traefik admin interface via Traefik itself. Note that I am not using any security here. Check the original post for basic auth config via middleware.

apiVersion: v1
kind: Service
metadata:
  name: traefik-admin
spec:
  type: ClusterIP
  ports:
    - protocol: TCP
      name: admin
      port: 8080
  selector:
    app: traefik
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-admin
spec:
  entryPoints:
    - web
  routes:
  - match: Host(`somehost.somedomain.com`) && PathPrefix(`/`)
    kind: Rule
    priority: 1
    services:
    - name: traefik-admin
      port: 8080

Traefik’s admin site is first exposed as a ClusterIP service on port 8080. Next, an object of kind IngressRoute is defined, which is new for Traefik 2.0. With IngressRoute, you no longer need to create standard Ingress objects and configure Traefik with custom annotations, which makes the configuration cleaner. Of course, substitute the host with a host name that points to the public IP of the load balancer, or use the IP address with the xip.io domain. If your IP were 1.1.1.1, you could use something like admin.1.1.1.1.xip.io; that name automatically resolves to the IP embedded in it.
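As a quick sanity check before creating any DNS records, you can verify that such a name resolves (illustrative only; substitute your own load balancer IP):

# xip.io resolves any name that embeds an IP address to that IP
nslookup admin.1.1.1.1.xip.io
# expected answer: 1.1.1.1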

Let’s see if we can reach the admin interface:

The new Traefik 2 admin UI

Traefik 2.0 is now installed in a basic way and working properly. We exposed the admin interface but now it is time to expose the calculator API.

Exposing the calculator API

The API is deployed as 5 pods in the add namespace:

Calculator API exposed

The API is exposed as a service of type ClusterIP with only an internal Kubernetes IP. To expose it via Traefik, we create the following object in the add namespace:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: calc-svc
  namespace: add  
spec:
  entryPoints:
    - web
  routes:
  - match: Host(`calc.1.1.1.1.xip.io`) && PathPrefix(`/`)
    kind: Rule
    priority: 1
    middlewares:
      - name: calcheader
    services:
    - name: add-svc
      port: 80

I am using xip.io above; change 1.1.1.1 to the public IP of Traefik’s Azure Load Balancer. The add-svc service, which exposes the calculator API on port 80, is now reachable via Traefik. We can easily call the service via:

curl http://calc.1.1.1.1.xip.io/add/10/10

20

Great! But what is that calcheader middleware? Middlewares modify the requests and responses to and from Traefik 2.0. There are all sorts of middlewares as explained here. You can set headers, configure authentication, perform rate limiting and much much more. In this case we create the following middleware object in the add namespace:

apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: calcheader
  namespace: add
spec:
  headers:
    customRequestHeaders:
      l5d-dst-override: "add-svc.add.svc.cluster.local:80"

This middleware adds a header to the request before Traefik forwards it to the backend. The header overrides the destination and sets it to the internal DNS name of the add-svc service that exposes the calculator API. This requirement is documented by Linkerd here.

Meshing the Traefik deployment

Because we want to mesh Traefik to get Linkerd metrics and more, we need to inject the Linkerd proxy in the Traefik pods. In my case, Traefik is deployed in the default namespace so the command below can be used:

kubectl get deploy -o yaml | linkerd inject - | kubectl apply -f - 

Make sure you run the command on a system with the linkerd executable in your path and kubectl pointing to the cluster that has Linkerd installed.
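To verify that the proxy was actually injected, you can list the containers in the Traefik pod template; a linkerd-proxy container should show up next to the traefik container (a minimal check, assuming the default namespace used in this post):

kubectl get deploy traefik -o jsonpath='{.spec.template.spec.containers[*].name}'
# expected output: traefik linkerd-proxy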

Checking the traffic in the Linkerd dashboard

With some traffic generated, this is what you should see when you check the meshed deployment that runs the calculator API (deploy/add):

Both the traffic generator (add-cli) and Traefik are meshed which results in a more detailed view of the traffic

If you are wondering what these services are and do, check this post. In the above diagram, we can clearly see we are receiving traffic to the calculator API from Traefik. When I click on Traefik, I see the following:

A view on the meshed Traefik deployment

From the above, we see Traefik receives traffic via the Azure Load Balancer and that it forwards traffic to the calculator service. The live calls are coming from the admin UI which refreshes regularly.

In Grafana, we can get more information about the Traefik deployment:

Linkerd metrics for Traefik in the Grafana dashboard that comes with Linkerd
More metrics

Conclusion

This was just a brief look at both Traefik 2 and “meshing” Traefik with Linkerd. There is much more to say and I have much more to explore. Hopefully, this can get you started!

Using the OAuth Client Credentials Flow

I often get questions about protecting applications like APIs using OAuth. I guess you know the drill:

  • you have to obtain a token (typically a JWT or JSON Web Token)
  • the client submits the token to your backend (via an Authorization HTTP header)
  • the token needs to be verified (do you trust it?)
  • you need to grab some fields from the token to use in your application (claims).

When the client is a daemon or some server side process, you can use the client credentials grant flow to obtain the token from Azure AD. The flow works as follows:

OAuth Client Credentials Flow (image from Microsoft docs)

The client contacts the Azure AD token endpoint to obtain a token. The client request contains a client ID and client secret to properly authenticate to Azure AD as a known application. The token endpoint returns the token. In this post, I only focus on the access token which is used to access the resource web API. The client uses the access token in the Authorization header of requests to the API.

Let’s see how this works. Oh, and by the way, this flow should be done with Azure AD. Azure AD B2C does not support this type of flow (yet).

Create a client application in Azure AD

In Azure AD, create a new App Registration. This can be a standard app registration for Web APIs. You do not need a redirect URL or configure public clients or implicit grants.

Standard run of the mill app registration

In Certificates & secrets, create a client secret and write it down. It will not be shown anymore when you later come back to this page:

Yes, I set it to Never Expire!

From the Overview page, note the application ID (also client ID). You will need that later to request a token.

Why do we even create this application? It represents the client application that will call your APIs. With this application, you control the secret that the client application uses but also the access rights to the APIs as we will see later. The client application will request a token, specifying the client ID and the client secret. Let’s now create another application that represents the backend API.

Create an API application in Azure AD

This is another App Registration, just like the app registration for the client. In this case, it represents the API. Its settings are a bit different though. There is no need to specify redirect URIs or other settings in the Authentication setting. There is also no need for a client secret. We do want to use the Expose an API page though:

Expose API page

Make sure you get the application ID URI. In the example above, it is api://06b2a484-141c-42d3-9d73-32bec5910b06 but you can change that to something more descriptive.

When you use the client credentials grant, you do not use user scopes. As such, the Scopes defined by this API list is empty. Instead, you want to use application roles which are defined in the manifest:

Application role in the manifest

There is one role here called invokeRole. You need to generate a GUID manually and use that as the id. Make sure allowedMemberTypes contains Application.
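For reference, an application role entry in the manifest looks roughly like the snippet below (a sketch: the GUID and the description text are placeholders you provide yourself):

"appRoles": [
  {
    "allowedMemberTypes": [ "Application" ],
    "description": "Allows the client to invoke the API",
    "displayName": "invokeRole",
    "id": "00000000-0000-0000-0000-000000000000",
    "isEnabled": true,
    "value": "invokeRole"
  }
]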

Great! But now we need to grant the client the right to obtain a token for one or more of the roles. You do that in the client application, in API Permissions:

Client application is granted access to the invokeRole application role of the API application

To grant the permission, just click Add a permission, select My APIs, click your API and select the role:

Selecting the role

Delegated permissions is greyed out because there are no user scopes. Application permissions is active because we defined an application role on the API application.

Obtaining a token

The server-side application only needs to do one call to the token endpoint to obtain the access token. Here is an example call with curl:

curl -d "grant_type=client_credentials&client_id=f1f695cb-2d00-4c0f-84a5-437282f3f3fd&client_secret=SECRET&audience=api%3A%2F%2F06b2a484-141c-42d3-9d73-32bec5910b06&scope=api%3A%2F%2F06b2a484-141c-42d3-9d73-32bec5910b06%2F.default" -X POST "https://login.microsoftonline.com/019486dd-8ffb-45a9-9232-4132babb1324/oauth2/v2.0/token"

Ouch, lots of gibberish here. Let’s break it down:

  • the POST sends URL encoded form data in the body; curl’s -d takes care of the body and content type, but you need to perform the URL encoding of the values yourself (an alternative that lets curl do the encoding is shown a bit further down)
  • grant_type: client_credentials to indicate you want to use this flow
  • client_id: the application ID of the client app registration in Azure AD
  • client_secret: URL encoded secret that you generated when you created the client app registration
  • audience: the resource you want an access token for; it is the URL encoding of api://06b2a484-141c-42d3-9d73-32bec5910b06 as set in Expose an API
  • scope: this one is a bit special; for the v2 endpoint that we use here it needs to be api://06b2a484-141c-42d3-9d73-32bec5910b06/.default (but URL encoded); the scope (or roles) that the client application has access to will be included in the token

The POST goes to the Azure AD v2.0 token endpoint. There is also a v1 endpoint which would require other fields. See the Microsoft docs for more info. Note that I also updated the application manifests to issue v2 tokens via the accessTokenAcceptedVersion field (set to 2).
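To avoid doing the URL encoding by hand, the same request can be written with curl’s --data-urlencode option (a sketch using the IDs from this post; replace SECRET and the tenant ID with your own values):

curl -X POST "https://login.microsoftonline.com/019486dd-8ffb-45a9-9232-4132babb1324/oauth2/v2.0/token" \
  --data-urlencode "grant_type=client_credentials" \
  --data-urlencode "client_id=f1f695cb-2d00-4c0f-84a5-437282f3f3fd" \
  --data-urlencode "client_secret=SECRET" \
  --data-urlencode "audience=api://06b2a484-141c-42d3-9d73-32bec5910b06" \
  --data-urlencode "scope=api://06b2a484-141c-42d3-9d73-32bec5910b06/.default"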

The call returns only an access token (there is no refresh token in the client credentials flow). Something like below with the token shortened:

{"token_type":"Bearer","expires_in":3600,"ext_expires_in":3600,"access_token":"eyJ0e..."}

The access_token can be decoded on https://jwt.ms:

Decoded token

Note that the invokeRole is present because the client application was granted access to that role. We also know the application ID that represents the API, which is in the aud field. The azp field contains the application ID of the client application.

Great, we can now use this token to call our API. The raw HTTP request would be in this form.

GET https://somehost/calc/v1/add/1/1 HTTP/1.1 
Host: somehost 
Authorization: Bearer eyJ0e...
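Putting the two steps together, a small shell sketch (assuming the jq tool is available and using placeholder values for the IDs and secret) could look like this:

TOKEN=$(curl -s -X POST "https://login.microsoftonline.com/TENANT_ID/oauth2/v2.0/token" \
  --data-urlencode "grant_type=client_credentials" \
  --data-urlencode "client_id=CLIENT_ID" \
  --data-urlencode "client_secret=CLIENT_SECRET" \
  --data-urlencode "scope=api://API_APP_ID/.default" | jq -r .access_token)

curl -H "Authorization: Bearer $TOKEN" https://somehost/calc/v1/add/1/1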

Of course, your application needs to verify the token somehow. This can be done in your application or in an intermediate layer such as API Management. We will take a look at how to do this with API Management in a later post.

Conclusion

Authentication, authorization and, on a broader scale, identity can be very challenging. Technically though, a flow such as the client credentials flow is fairly simple to implement once you have done it a few times. Hopefully, if you are/were struggling with this type of flow, this post has given you some pointers!

Inspecting Web Application Firewall logs

In some of my previous posts, I talked about Azure Front Door and Web Application Firewall policies to protect a workload like one or more APIs running on Kubernetes or App Service. Although I enabled the Web Application Firewall policies, I did not show what happens when the rules are triggered. Let’s take a look at that! 🕶

Before we get started though, take the following diagram into account:

Azure web application firewall
From: https://docs.microsoft.com/en-us/azure/frontdoor/waf-overview

WAF for Front Door is a global solution. You create a WAF policy in the portal or via other means and attach it to a Front Door frontend. Rules are evaluated and acted upon at the edge versus on your application server.

Azure WAF supports custom rules and Azure-managed rule sets (based on OWASP). The custom rules are interesting because they allow you to restrict IP addresses, configure geographic based access control and more.

There’s an additional rule type called bot protection rule as well. At the time of this writing (beginning June 2019) this feature is in public preview. It uses the Microsoft Intelligent Security Graph to do its magic, similarly to Azure Firewall when you enable Threat Intelligence.

WAF Logs

Let’s first use a tool that can scan an endpoint for vulnerabilities to trigger the WAF rules. One such tool is OWASP ZAP, which you need to install on your workstation.

OWASP ZAP tool

Before we check the logs, note we have set the policy to Detection:

WAF policy set to Detection; start with detection to learn what the rules might block in your app

Now let’s take a look at the logs. Use the following query in Log Analytics and modify it for your own host (host_s field):

AzureDiagnostics
| where ResourceType == "FRONTDOORS" and Category == "FrontdoorWebApplicationFirewallLog"
| where action_s == "Block" 
| where host_s == "api.baeke.info"  

The result:

Blocked requests (if the policy were set at Prevention at the global level)

Let’s look at a SQL Injection block:

Typical SQL Injection

The decoded requestUri_s is https://api.baeke.info:443/users?apikey=theapikey' AND '1'='1' --. Typical! It was caught at the edge (and would be blocked outright with the policy set to Prevention). This request went via the BRU location.

Like with any Log Analytics query, you can place alerts on log occurrences. You will need to be in the Log Analytics workspace, and not in the Logs section of Azure Front Door:
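A query like the one below (a sketch; field names such as ruleName_s follow the AzureDiagnostics schema and may differ slightly in your workspace) could serve as the basis for such an alert, for example on the number of matched requests per rule:

AzureDiagnostics
| where ResourceType == "FRONTDOORS" and Category == "FrontdoorWebApplicationFirewallLog"
| where action_s == "Block"
| summarize count() by ruleName_s, bin(TimeGenerated, 5m)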

Conclusion

Azure Web Application Firewall policies for Azure Front Door integrate with Azure Monitor and Log Analytics, like most other Azure services. With some KQL, the query language for Log Analytics, it is straightforward to request the logs and set alerts on them.

Azure Front Door and multi-region deployments

In the previous post, we looked at publishing and securing an API with Azure Front Door and Azure Web Application Firewall. The API ran on Kubernetes, exposed by Kong and Kong Ingress Controller. Kong was configured to require an API key to call the /users API, allowing us to identify the consumer of the API. The traffic flow was as follows:

Consumer -- HTTPS --> Azure Front Door with WAF policy -- HTTPS --> Kong (exposed with Azure Load Balancer) -- HTTP --> API Kubernetes service --> API pods 

Although Kubernetes makes the API(s) highly available, you might want to take extra precautions such as deploying the API in multiple regions. In this post, we will take a look at doing so. That means we will deploy the API in both West and North Europe, in two distinct Kubernetes clusters:

  • we-clu: Kubernetes cluster in West Europe
  • ne-clu: Kubernetes cluster in North Europe

The flow is very similar of course:

Consumer -- HTTPS --> Azure Front Door with WAF policy -- HTTPS --> Kong (exposed with Azure Load Balancer; region to connect to depends on Front Door configuration and health probes) -- HTTP --> API Kubernetes service --> API pods  

Let’s take a look at the configuration! By the way, the supporting files to deploy the Kubernetes objects are here: https://github.com/gbaeke/api-kong/tree/master. To deploy Kong, check out this post.

Kubernetes

We deploy a Kubernetes cluster in each region, install Helm, deploy Kong, deploy our API and configure ingresses and related Kong custom resource definitions (CRDs). The result is an external IP address in each region that leads to the Kong proxy. Search for “kong” on this blog to find posts with more details about this deployment.

Note that the API deployment specifies an environment variable that will contain the string WE or NE. This environment variable will be displayed in the output when we call the API. Here is the API deployment for West Europe:

apiVersion: v1
kind: Service
metadata:
  name: func
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: func
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: func
spec:
  replicas: 2
  selector:
    matchLabels:
      app: func
  template:
    metadata:
      labels:
        app: func
    spec:
      containers:
      - name: func
        image: gbaeke/ingfunc
        env:
        - name: REGION
          value: "WE"
        ports:
        - containerPort: 80 

When we call the API and Azure Front Door uses the backend in West Europe, the result will be:

WE included in the response from the West Europe cluster

Origin APIs

The origin APIs need to be exposed on the public Internet using a DNS name. Azure Front Door requires a public backend to connect to. Naturally, the backend can be configured to only accept incoming requests from Front Door. In our case, the APIs are available on the public IP of the Kong proxy. The following names were used:

  • api-o-we.baeke.info: Kong proxy in West Europe
  • api-o-ne.baeke.info: Kong proxy in North Europe

Both endpoints are configured to accept TLS connections only, and use a Let’s Encrypt wildcard certificate for *.baeke.info.

Front Door Configuration

The Front Door designer looks the same as in the previous post:

Front Door Designer

However, the backend pool api-o now has two backends:

Two backend hosts, both enabled with same priority and weight

To determine the health of the backend, Front Door needs to be configured with a health probe that returns status code 200. If we were to specify the probe below, the health probe would fail:

Errrrrrr, this won’t work

The health probe would hit Kong’s proxy and return a 404 (Not Found). We did not create a route for /, only for /users. With Azure Front Door, when all health probes fail, all backends are considered healthy. Yes, you read that right.

Although we could create a route called /health that returns a 200, we will use the following probe just to make it work:

Fixing the health probe (quick and dirty fix): just call the /users API

If you are exposing multiple APIs on each cluster, the health probe above would not make sense. Also note that the purpose of the health probe is to determine if the cluster is up or not. It will not fix one API behaving badly or being removed accidentally!

You can check the health probes from the portal:

Yep! Health probes in West and North Europe are at 100%

Connection test

When I connect from my home laptop in Belgium, I get the following response:

Connection to West Europe cluster

When I connect from my second home in Dublin 🤷‍♀️ I get:

Connection from a VM in North Europe (I was kidding about the second home)

If you enable logging to Log Analytics, you can check this in the FrontDoorAccessLog:

Connection from home via Brussels (West Europe would show DB in Tenant_x)
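The query behind this view is similar to the WAF query shown earlier; a sketch (field names such as pop_s and httpStatusCode_s are assumptions about the FrontdoorAccessLog schema and may differ in your workspace) could be:

AzureDiagnostics
| where ResourceType == "FRONTDOORS" and Category == "FrontdoorAccessLog"
| where requestUri_s contains "/users"
// pop_s shows the Front Door point of presence that served the request
| project TimeGenerated, clientIp_s, pop_s, httpStatusCode_s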

When I remove the /users API in West Europe (kubectl delete deploy func), my home laptop will connect to North Europe as expected:

I didn’t fake this! It’s 100% real, my laptop now connects to the North Europe cluster via Front Door as expected

Note that the calls will not fail from the moment you delete the /users API (the health probe here). That depends on the following setting (backends in Front Door designer):

When should the backend be considered healthy or unhealthy; decrease the sample size and/or the required successful samples to make failover happen faster

The backend health percentage graph indicates the probe failure as well:

We’ve lost West Europe folks!

Conclusion

When you are going for a multi-region deployment of services, Azure Front Door is one of the options. Of course, there is much more to a multi-region deployment than the “front-end stuff” described in this post. What do you do with databases for instance? Can you use active-active write regions (e.g. Cosmos DB) or does your database only support active/passive with read replicas?

As in other load balancing and fail over solutions, proper health probes are crucial in the design. Think about what a good health probe can be and what it means when it is not available. One option is to just write a health probe exposed via an endpoint such as /health that merely returns a 200 status code. But your health probe could also be designed to connect to backend systems such as databases or queues to determine the health of the system.

Hopefully, this post gives you some ideas to start! Follow me on Twitter for updates.

Publishing and securing your API with Kong and Azure Front Door

In the post, Securing your API with Kong and CloudFlare, I exposed a dummy API on Kubernetes with Kong and published it securely with CloudFlare. The breadth of features and its ease of use made CloudFlare a joy to work with. It didn’t take long before I got the question: “can’t you do that with Azure only?”. The answer is obvious: “Of course you can!”

In this post, the traffic flow is as follows:

Consumer -- HTTPS --> Azure Front Door with WAF policy -- HTTPS --> Kong (exposed with Azure Load Balancer) -- HTTP --> API Kubernetes service --> API pods

Similarly to CloudFlare, Azure Front Door provides a fully trusted certificate for consumers of the API. In contrast to CloudFlare, Azure Front Door does not provide origin certificates which are trusted by Front Door. That’s easy to solve though by using a fully trusted Let’s Encrypt certificate which is stored as a Kubernetes secret and used in the Kubernetes Ingress definition. For this post, I requested a wildcard certificate for *.baeke.info via https://www.sslforfree.com/

Let’s take it step-by-step, starting at the API and Kong level.

APIs and Kong

Just like in the previous posts, we have a Kubernetes service called func and back-end pods that host the API implemented via Azure Functions in a container. Below you see the API pods in the default namespace. For convenience, Kong is also deployed in that namespace (not recommended in production):

A view on the API pods and Kong via k9s

The ingress definition is shown below:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: func
  namespace: default
  annotations:
    kubernetes.io/ingress.class: kong
    plugins.konghq.com: http-auth
spec:
  tls:
  - hosts:
    - api-o.baeke.info
    secretName: wildcard-baeke.info.tls
  rules:
    - host: api-o.baeke.info
      http:
        paths:
        - path: /users
          backend:
            serviceName: func
            servicePort: 80 

Kong will pick up the above definition and configure itself accordingly.

The API is exposed publicly via https://api-o.baeke.info where the o stands for origin. The secret wildcard-baeke.info.tls refers to a secret which contains the wildcard certificate for *.baeke.info:

apiVersion: v1
kind: Secret
metadata:
  name: wildcard-baeke.info.tls
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: certificate
  tls.key: key

Naturally, certificate and key should be replaced with the base64-encoded strings of the certificate and key you have obtained (in this case from https://www.sslforfree.com).
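Instead of hand-crafting the base64 strings, you can also let kubectl create the secret directly from the PEM files (a minimal sketch; the file names are placeholders for the certificate and key you downloaded):

kubectl create secret tls wildcard-baeke.info.tls \
  --cert=wildcard-baeke-info.crt \
  --key=wildcard-baeke-info.key \
  --namespace default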

At the DNS level, api-o.baeke.info should refer to the external IP address of the exposed Kong Ingress Controller (proxy):

The service kong-kong-proxy is exposed via a public IP address (service of type LoadBalancer)

For the rest, the Kong configuration is not very different from the configuration in Securing your API with Kong and CloudFlare. I did remove the whitelisting configuration, which needs to be updated for Azure Front Door.

Great, we now have our API listening on https://api-o.baeke.info but it is not exposed via Azure Front Door and it does not have a WAF policy. Let’s change that.

Web Application Firewall (WAF) Policy

You can create a WAF policy from the portal:

WAF Policy

The above policy is set to detection only. No custom rules have been defined, but a managed rule set is activated:

Managed rule set for OWASP

The WAF policy was saved as baekeapiwaf. It will be attached to an Azure Front Door frontend. When a policy is attached to a frontend, it will be shown in the policy:

Associated frontends (Front Door front-ends)

Azure Front Door

We will now add Azure Front Door to obtain the following flow:

Consumer ---> https://api.baeke.info (Front Door + WAF) --> https://api-o.baeke.info

The final configuration in Front Door Designer looks like this:

Front Door Designer

When a request comes in for api.baeke.info, the response from api-o.baeke.info is served. Caching was not enabled. The frontend and backend are tied together via the routing rule.

The first thing you need to do is to add the azurefd.net frontend which is baeke-api.azurefd.net in the above config. There’s not much to say about that. Just click the blue plus next to Frontend hosts and follow the prompts. I did not attach a WAF policy to that frontend because it will not forward requests to the backend. We will use a custom domain for that.

Next, click the blue plus again to add the custom domain (here api.baeke.info). In your DNS zone, create a CNAME record that maps api.yourdomain.com to the azurefd.net name:

Mapping of custom domain to azurefd.net domain in CloudFlare DNS

I attached the WAF policy baekeapiwaf to the front-end domain:

WAF policy with OWASP rules to protect the API

Next, I added a certificate. When you select Front Door managed, you will get a DigiCert-managed certificate. If the CNAME mapping is not complete, you will get an e-mail from DigiCert to approve certificate issuance. Make sure you check your e-mails if it takes long to issue the certificate. It will take a long time either way so be patient! 💤💤💤

Now that we have the frontend, specify the backend that Front Door needs to connect to:

Backend pool

The backend pool uses the API exposed at api-o.baeke.info as defined earlier. With only one backend, priority and weight are of no importance. It should be clear that you can add multiple backends, potentially in different regions, and load balance between them.

You will also need a health probe to check for healthy and unhealthy backends:

Health probes of the backend

Note that the above health check does NOT return a 200 OK status code. That is the only status code that would result in a healthy endpoint. With the above config, Kong will respond with a “no Route matched” 404 Not Found error instead. That does not mean that Front Door will not route to this endpoint though! When all endpoints are in a failed state, Front Door considers them healthy anyway 😲😲😲 and routes traffic using round-robin. See the documentation for more info.

Now that we have the frontend and the backend, let’s tie the two together with a rule:

First part of routing rule

In the first part of the rule, we specify that we listen for requests to api.baeke.info (and not the azurefd.net domain) and that we only accept https. The pattern /* basically forwards everything to the backend.

In the route details, we specify the backend to route to:

Backend to route to

Clearly, we want to route to the api-o backend we defined earlier. We only connect to the backend via HTTPS. It only accepts HTTPS anyway, as defined at the Kong level via a KongIngress resource.

Note that it is possible to create a HTTP to HTTPS redirect rule. See the post Azure Front Door Revisited for more information. Without the rule, you will get the following warning:

Please disregard this warning 😎

Test, test, test

Let’s call the API via the http tool:

Clearly, Azure Front Door has served this request as indicated by the X-Azure-Ref header. Let’s try http:

Azure Front Door throws the above error because the routing rule only accepts https on api.baeke.info!

Whitelisting Azure Front Door

To restrict calls to the backend to Azure Front Door, I used the following KongPlugin definition:

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: whitelist-fd
  namespace: default
config:
  whitelist: 
  - 147.243.0.0/16
plugin: ip-restriction 

The IP range is documented here. Note that the IP range can and probably will change in the future.

In the ingress definition, I added the plugin via the annotations:

annotations:
  kubernetes.io/ingress.class: kong
  plugins.konghq.com: http-auth, whitelist-fd 

Calling the backend API directly will now fail:

That’s a no no! Please use the Front Door!
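A direct call to the origin is now rejected by Kong’s ip-restriction plugin, while the same call through Front Door keeps working. A quick check (sketch, with a placeholder API key):

# rejected: the source IP is not in the Front Door range
curl -i "https://api-o.baeke.info/users?apikey=KEY"

# allowed: the request reaches Kong from Front Door's IP range
curl -i "https://api.baeke.info/users?apikey=KEY"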

Conclusion

Publishing APIs (or any web app), whether they are running on Kubernetes or other systems, is easy to do with the combination of Azure Front Door and Web Application Firewall policies. Do take pricing into account though. It’s a mixture of relatively low fixed prices with variable pricing per GB and requests processed. In general, CloudFlare has the upper hand here, from both a pricing and features perspective. On the other hand, Front Door has advantages when it comes to automating its deployment together with other Azure resources. As always: plan, plan, plan and choose wisely! 🦉

Securing your API with Kong and CloudFlare

In the previous post, we looked at API Management with Kong and the Kong Ingress Controller. We did not care about security and exposed a sample toy API over a public HTTP endpoint that also required an API key. All in the clear, no firewall, no WAF, nothing… 👎👎👎

In this post, we will expose the API over TLS and configure Kong to use a CloudFlare origin certificate. An origin certificate is issued and trusted by CloudFlare to connect to the origin, which in our case is an API hosted on Kubernetes.

The API consumer will not connect directly to the Kubernetes-hosted API exposed via Kong. Instead, the consumer connects to CloudFlare over TLS and uses a certificate issued by CloudFlare that is fully trusted by browsers and other clients.

The traffic flow is as follows:

Consumer --> CloudFlare (TLS with fully trusted cert, WAF, ...) --> Kong Ingress (TLS with origin cert) --> API (HTTP)

Configuring Kong

Refer to the previous post for installation instructions. The YAML files to configure the Ingress, KongIngress, Consumer, etc… are almost the same. The Ingress resource has the following changes:

  • We use a new hostname api.baeke.info
  • We configure TLS for api.baeke.info by referring to a secret called baeke.info.tls which contains the CloudFlare origin certificate.
  • We use an additional Kong plugin which provides whitelisting of CloudFlare addresses; only CloudFlare is allowed to connect to the Ingress

Here is the full definition:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: func
  namespace: default
  annotations:
    kubernetes.io/ingress.class: kong
    plugins.konghq.com: http-auth, whitelist
spec:
  tls:
  - hosts:
    - api.baeke.info
    secretName: baeke.info.tls # cloudflare origin cert
  rules:
    - host: api.baeke.info
      http:
        paths:
        - path: /users
          backend:
            serviceName: func
            servicePort: 80

Here is the plugin definition for whitelisting with the current (June 15th, 2019) list of IP ranges used by CloudFlare. Note that you have to supply the addresses and ranges as an array. The documentation shows a comma-separated list! 🤷‍♂️

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: whitelist
  namespace: default
config:
  whitelist: 
  - 173.245.48.0/20
  - 103.21.244.0/22
  - 103.22.200.0/22
  - 103.31.4.0/22
  - 141.101.64.0/18
  - 108.162.192.0/18
  - 190.93.240.0/20
  - 188.114.96.0/20
  - 197.234.240.0/22
  - 198.41.128.0/17
  - 162.158.0.0/15
  - 104.16.0.0/12
  - 172.64.0.0/13
  - 131.0.72.0/22
plugin: ip-restriction 

I also made a change to the KongIngress resource, to only allow https to the back-end service. Only the route section is shown below:

route:
 methods:
 - GET
 regex_priority: 0
 strip_path: true
 preserve_host: true
 protocols:
 - https 

In the previous post, the protocols array contained the http value.

Note: for whitelisting to work, the Kong proxy service needs externalTrafficPolicy set to Local. Use kubectl edit svc kong-kong-proxy to modify that setting. You can set this value at deployment time as well. Whether this is required depends on your environment; I used AKS, where this produces the desired outcome.
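If you prefer not to edit the service interactively, a one-liner such as the following (an equivalent sketch) sets the same field:

kubectl patch svc kong-kong-proxy -p '{"spec":{"externalTrafficPolicy":"Local"}}'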

CloudFlare

Get the external IP of the kong-kong-proxy service and create a DNS entry for it. I created a A record for api.baeke.info:

Make sure the orange cloud is active. In this case, this means that requests for api.baeke.info are proxied by CloudFlare. That allows us to cache, enable WAF (web application firewall), rate limiting and more!

In the Firewall section, WAF is turned on. Note that this is a paying feature!

WAF to protect your API

In Crypto, Universal SSL is turned on and set to Full (strict).

Full (strict) means that CloudFlare connects to your origin over HTTPS and that it expects a valid certificate, which is checked. An origin certificate, issued by CloudFlare but not trusted by your operating system, is also valid. As stated above, I use such an origin certificate at the Ingress level.

The origin certificate can be issued and/or downloaded from the Crypto section:

Origin certs

I created an origin certificate for *.baeke.info and baeke.info and downloaded the certificate and private key in PEM format. I then encoded the contents of the certificate and key in base64 format and used them in a secret:

apiVersion: v1
kind: Secret
metadata:
  name: baeke.info.tls
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: base64-encoded-cert
  tls.key: base64-encoded-key

As you have seen in the Ingress definition, it referred to this secret via its name, baeke.info.tls.

When a consumer connects to the API, the fully trusted certificate issued by CloudFlare is used:

Universal SSL cert from CloudFlare

We also make sure consumers of the API need to use TLS:

Force HTTPS at the CloudFlare level

With the above configuration, consumers need to securely connect to https://api.baeke.info at CloudFlare. CloudFlare connects securely to the origin, which is the external IP of the ingress. Only CloudFlare is allowed to connect to that external IP because of the whitelisting configuration.

Testing the API

Let’s try the API with the http tool:

Connecting to the API

All sorts of headers are added by CloudFlare which makes it clear that CloudFlare is proxying the requests. When we don’t add a key or specify a wrong one:

Kong is still doing its work

The key is now securely sent from consumer to CloudFlare to origin. Phew! 😎

Conclusion

In this post, we hosted an API on Kubernetes, exposed it with Kong and secured it with CloudFlare. This example can easily be extended with multiple Kong proxies for high availability and multiple APIs (/users, /orders, /products, …) that are all protected by CloudFlare with end-to-end encryption and WAF. CloudFlare lends an extra helping hand by automatically generating both the “front-end” and origin certificates.

In a follow-up post, we will look at an alternative approach via Azure Front Door Service. Stay tuned!

API Management with Kong Ingress Controller on Kubernetes

In previous posts, I wrote about Azure API Management in combination with APIs hosted on Kubernetes:

  • API Management with private APIs: requires API Management with virtual network integration because the APIs are reachable via an internal ingress on the Azure virtual network; use the premium tier 💰💰💰
  • API Management with public APIs: does not require virtual network integration but APIs need to restrict access to the public IP address of the API Management instance; you can use the other less expensive tiers 🎉🎉🎉

Instead of using API Management, there are many other solutions. One of those solutions is Kong 🐵. In this post, we will take a look at Kong Ingress Controller, which can be configured via Kubernetes API objects such as ingresses and custom resource definitions defined by Kong. We will do the following:

  • Install Kong via Helm
  • Create an Ingress resource to access a dummy (and dumb 😊) user management API via http://hostname/users. The back-end API uses http://hostname/api/getusers so we will need to translate the path
  • Create a KongIngress custom resource to configure the back-end (like only allowing GET and setting the target path to /api/getusers)
  • Use a rate limiting plugin and associate it with the Ingress
  • Require key authentication on the Ingress, which also requires a KongConsumer and a KongCredential resource

For a video version, head over to Youtube. I recommend 1,5x speed! 💤💤💤

Installation

The installation can be performed with Helm. The extra LoadBalancer parameters expose the proxy and admin API via a public IP address. I used Azure Kubernetes Service (AKS).

helm install stable/kong --name kong --set ingressController.enabled=true   --set admin.type=LoadBalancer --set proxy.type=LoadBalancer

The above command installs Kong in the default namespace. List the services in that namespace with kubectl get svc and note the external IP of the kong-kong-proxy service. I associated that IP with a wildcard DNS entry like *.kong.yourdomain.com. That allows me to create an ingress for http://user.kong.yourdomain.com.

Note that you should not make the admin API publicly available via a load balancer. Just remove --set admin.type=LoadBalancer to revert to the default NodePort or set admin.type=ClusterIP.

The Helm chart will automatically install a PostgreSQL instance via a StatefulSet. The instance will have an 8GB disk attached. Use kubectl get pv to check that. You can use an external PostgreSQL instance or Cassandra (even Cosmos DB with the Cassandra API). I would highly recommend using external state. There is also an option to run without a database, but I did not try that.

Install the dummy user service

Use the deployment from the previous post, which deploys two pods with a container based on gbaeke/ingfunc. It contains the dummy API which is actually an Azure Function container running the Kestrel web server.

Create the Ingress object

The Ingress definition below, allows us to connect to the back-end user service using http://user.kong.baeke.info/users:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: func
  namespace: default
  annotations:
    kubernetes.io/ingress.class: kong
    plugins.konghq.com: http-ratelimit, http-auth
spec:
  rules:
    - host: user.kong.baeke.info
      http:
        paths:
        - path: /users
          backend:
            serviceName: func
            servicePort: 80 

The ingress.class annotation ensures that Kong picks up this Ingress definition because I also had Traefik installed, which is another Ingress Controller. The plugins.konghq.com annotation refers to two plugins:

  • rate limiting: we will define this later to limit requests to 1 request/second
  • key auth: we will define this later to require the consumer to specify a previously defined API key

Go ahead and save the above file and apply it with kubectl apply -f filename.yaml. In subsequent steps, do the same for the other YAML definitions. All resources will be deployed in the default namespace.

Kong-specific ingress properties

The KongIngress custom resource definition can be used to specify additional Kong-specific properties on the Ingress:

apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: func
proxy:
  protocol: http
  path: "/api/getusers"
  connect_timeout: 10000
  retries: 10
  read_timeout: 10000
  write_timeout: 10000
route:
  methods:
  - GET
  regex_priority: 0
  strip_path: true
  preserve_host: true
  protocols:
  - http 

The name of the KongIngress resource is func, which is the same name as the Ingress. This associates the KongIngress resource with the Ingress resource automatically. Note that we restricted the methods to GET and that we specify the path to the back-end API as /api/getusers. You also need strip_path set to true to make this work (strips the original path from the request).
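In other words, a GET for /users on the ingress host is proxied to /api/getusers on the func service. For example (a sketch; the API key is added in a later step and is a placeholder here):

# client-side request
http http://user.kong.baeke.info/users?apikey=KEY

# what the back-end effectively receives from Kong
# GET /api/getusers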

Rate limiting

To configure rate limiting, a typical capability of an API management solution, use the definition below:

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: http-ratelimit
  namespace: default
config:
  second: 1
plugin: rate-limiting 

This is a custom resource definition of kind (type) KongPlugin. Via the plugin property we specify the rate-limiting plugin and set it to one request per second. Note that we call this resource http-ratelimit and that we use this name in the annotation of the Ingress specification. That associates the plugin with that specific Ingress resource.

Require an API key

To require an API key, first create a consumer with a KongConsumer object:

apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: top
username: topuser 

Next, create a credential and associate it with the consumer:

apiVersion: configuration.konghq.com/v1
kind: KongCredential
metadata:
  name: topcred
consumerRef: top
type: key-auth
config:
  key: yourverysecretkeyhere

We need a consumer and a key because the next step requires a key when we call the API. To enforce that requirement, define a key-auth plugin:

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: http-auth
  namespace: default
plugin: key-auth 

The above plugin is associated with the Ingress using its name (http-auth) in the Ingress annotations.

Testing the API

Let’s try to call the API without a key:

Cannot call the API without the key

Let’s send a key with the request via a parameter (via a header is also possible):

API can be called with a key

Note I used the httpie tool (apt install httpie) for nicer formatting!

If you want to try the rate limiting features, use this on the bash prompt:

while true; do http http://user.kong.baeke.info/users?apikey=KEY; done 

Once in a while, you should see:

Oops, rate limit exceeded

If you want to check the configuration, navigate to https://exposed-admin-IP:8444:

Kong admin API

A bit further down the output of the admin API, the enabled plug-ins should be listed:

Enabled plugins

Conclusion

In this post, we looked at the basics of Kong Ingress Controller and a few of its options to translate the path, limit the rate of requests and require key authentication. We did not touch on other stuff like SSL, the Enterprise version and many of the other plugins. Hopefully though, this is just enough to get you started with the open source version on Kubernetes. Take a look at the Kong documentation for more in-depth information!

Azure API Management with public APIs on Kubernetes

In my previous blog post, I looked at Azure API Management in combination with private APIs hosted on Kubernetes. The APIs were exposed via Traefik and an internal load balancer. To make that scenario work, the Azure API Management premium SKU is required, which is quite costly.

This post describes another approach where the APIs are exposed on the public Internet via an Ingress Controller that requires HTTPS in addition to restricting the API caller to the IP address of the Azure API Management instance. Something like this:

Internet client -> Azure API Management --> Ingress Controller (with IP whitelisting per ingress) --> API service (Kubernetes) --> API pods (Kubernetes, part of a Deployment)

Let’s see how this works, shall we?

API Management

Deploy Azure API management from the portal. In this case, you can use the other SKUs such as Basic and Standard. Note the IP address of the Azure API Management instance on the Overview page:

IP address of API Management

Ingress Controller

As usual, let’s use Traefik. When you have Helm installed, use the following command:

helm install stable/traefik --name traefik --set serviceType=LoadBalancer,rbac.enabled=true,ssl.enabled=true,ssl.enforced=true,acme.enabled=true,acme.email=name@domain.com,onHostRule=true,acme.challengeType=tls-alpn-01,acme.staging=false,dashboard.enabled=true,externalTrafficPolicy=Local --namespace kube-system

Note the use of externalTrafficPolicy=Local. This lets Traefik know the IP address of the actual caller, which is required because we want to restrict access to the IP address of API Management.

Ingress object

When your API is deployed via a deployment and a service of type ClusterIP, use the following ingress definition:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: func
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/whitelist-source-range: "YOURIP/32"
spec:
  tls:
  - hosts:
    - api.domain.com
  rules:
    - host: api.domain.com
      http:
        paths:
        - path: /
          backend:
            serviceName: func
            servicePort: 80

The above ingress object, exposes the internal service func via Traefik. The whitelist-source-range annotation is used to limit access to this resource to the IP address of Azure API Management. Replace YOURIP with that IP address. Obviously, replace the host api.domain.com with a host that resolves to the external IP of the load balancer that provides access to Traefik. The Let’s Encrypt configuration automatically provisions a valid certificate to the service.

When I navigate to the API on my local computer, the following happens:

No access to the API if the request does not come from API management

When I test the API from API Management (after setting the back-end correctly):

API management can call the back-end API

Conclusion

What do you do when you do not want to spend money on the premium SKU? The answer is clear: use the lower SKUs if possible and restrict access to the back-end APIs with other means such as IP whitelisting. Other possibilities include using some form of authentication such as basic authentication etc…

Azure API Management and Azure Kubernetes Service

You have decided to host your APIs in Kubernetes in combination with an API management solution? You are surely not the only one! In an Azure context, one way of doing this is combining Azure API Management and Azure Kubernetes Service (AKS). This post describes one of the ways to get this done. We will use the following services:

  • Virtual Network: AKS will use advanced networking and Azure CNI
  • Private DNS: to host a private DNS zone (private.baeke.info); note that private DNS is in public preview
  • AKS: deployed in a subnet of the virtual network
  • Traefik: Ingress Controller deployed on AKS, configured to use an internal load balancer in a dedicated subnet of the virtual network
  • Azure API Management: with virtual network integration which requires Developer or Premium; note that Premium comes at a hefty price though

Let’s take it step by step but note that this post does not contain all the detailed steps. I might do a video later with more details. Check the YouTube channel for more information.

We will setup something like this:

Consumer --> Azure API Management public IP --> ILB (in private VNET) --> Traefik (in Kubernetes) --> API (in Kubernetes - ClusterIP service in front of a deployment) 

Virtual Network

Create a virtual network in a resource group. We will add a private DNS zone to this network. You should not add resources such as virtual machines to this virtual network before you add the private DNS zone.

I will call my network privdns and add a few subnets (besides default):

  • aks: used by AKS
  • traefik: for the internal load balancer (ILB) and the front-end IP addresses
  • apim: to give API management access to the virtual network

Private DNS

Add a private DNS zone to the virtual network with Azure CLI:

az network dns zone create -g rg-ingress -n private.baeke.info --zone-type Private --resolution-vnets privdns 

You can now add records to this private DNS zone:

az network dns record-set a add-record \
   -g rg-ingress \
   -z private.baeke.info \
   -n test \
   -a 1.1.1.1

To test name resolution, deploy a small Linux virtual machine and ping test.private.baeke.info:

Testing the private DNS zone

Update for June 27th, 2019: the above commands use the old API; please see https://docs.microsoft.com/en-us/azure/dns/private-dns-getstarted-cli for the new syntax to create a zone and to link it to an existing VNET; these zones should be viewable in the portal via Private DNS Zones:

Private DNS zones in the portal

Azure Kubernetes Service

Deploy AKS and use advanced networking. Use the aks subnet when asked. Each node you deploy will get 30 IP addresses in the subnet:

First IP addresses of one of the nodes

Traefik

To expose the APIs over an internal IP we will use ingress objects, which require an Ingress Controller. Traefik is just one of the choices available. Any Ingress Controller will work.

Instead of using ingresses, you could also expose your APIs via services of type LoadBalancer and use an internal load balancer. The latter approach would require one IP per API, whereas the ingress approach only requires one IP in total. That IP resolves to Traefik, which uses the host header to route to the APIs.

We will install Traefik with Helm. Check my previous post for more info about Traefik and Helm. In this case, I will download and untar the Helm chart and modify values.yaml. To download and untar the Helm chart use the following command:

helm fetch stable/traefik --untar

You will now have a traefik folder, which contains values.yaml. Modify values.yaml as follows:

Changes to values.yaml

This will instruct Helm to add the above annotations to the Traefik service object. It instructs the Azure cloud integration components to use an internal load balancer. In addition, the load balancer should be created in the traefik subnet. Make sure that your AKS service principal has sufficient RBAC rights on the virtual network to perform this operation.
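The values.yaml change boils down to adding annotations under the chart’s service section, roughly like the sketch below (illustrative only, since chart versions differ; the subnet name matches the traefik subnet created earlier):

service:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "traefik"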

Now you can install Traefik on AKS. Make sure you are in the traefik folder where the Helm chart was untarred:

helm install . --name traefik --set serviceType=LoadBalancer,rbac.enabled=true,dashboard.enabled=true --namespace kube-system

When the installation is finished, there should be an internal load balancer in the resource group that is behind your AKS cluster:

ILB deployed

The result of kubectl get svc -n kube-system should result in something like:

EXTERNAL-IP is the front-end IP on the ILB for the traefik service

We can now reach Traefik on the virtual network and create an A record that resolves to this IP. The name func.private.baeke.info, which I will use later, resolves to the above IP.
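Creating that record works just like the test record from earlier (a sketch; the address is a placeholder, use the EXTERNAL-IP shown by kubectl get svc -n kube-system):

az network dns record-set a add-record \
   -g rg-ingress \
   -z private.baeke.info \
   -n func \
   -a 10.240.0.200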

Azure API Management

Deploy API Management from the portal. API Management will need access to the virtual network which means we need a version (SKU) that has virtual network support. This is needed simply because the APIs are not exposed on the public Internet.

For testing, use the Developer SKU. In production, you should use the Premium SKU although it is very expensive. Microsoft should really make the virtual network integration part of every SKU since it is such a common scenario! Come on Microsoft, you know it’s the right thing to do! 😉

API Management virtual network integration

Above, API Management is configured to use the apim subnet of the virtual network. It will also be able to resolve private DNS names via this integration. Note that configuring the network integration takes quite some time.

Deploy a service and ingress

I deployed the following sample API with a simple deployment and service. Save this as func.yaml and run kubectl apply -f func.yaml. You will end up with two pods running a super simple and stupid API plus a service object of type ClusterIP, which is only reachable inside Kubernetes:

apiVersion: v1
kind: Service
metadata:
  name: func
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: func
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: func
spec:
  replicas: 2
  selector:
    matchLabels:
      app: func
  template:
    metadata:
      labels:
        app: func
    spec:
      containers:
      - name: func
        image: gbaeke/ingfunc
        ports:
        - containerPort: 80

Next, deploy an ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: func
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: func.private.baeke.info
      http:
        paths:
        - path: /
          backend:
            serviceName: func
            servicePort: 80

Notice I used func.private.baeke.info! Naturally, that name should resolve to the IP address on the ILB that routes to Traefik.

Testing the API from API Management

In API Management, I created an API that uses func.private.baeke.info as the backend. Yes, I know, the API name is bad. It’s just a sample ok? 😎

API with backend func.private.baeke.info

Let’s test the GET operation I created:

Great success! API management can reach the Kubernetes-hosted API via Traefik

Conclusion

In this post, we looked at one way to expose Kubernetes-hosted APIs to the outside world via Azure API Management. The traffic flow is as follows:

Consumer --> Azure API Management public IP --> ILB (in private VNET) --> Traefik (in Kubernetes) --> API (in Kubernetes - ClusterIP service in front of a deployment)

Because we have to use host names in ingress definitions, we added a private DNS zone to the virtual network. We can create multiple A records, one for each API, and provide access to these APIs with ingress objects.

As stated above, you can also expose each API via an internal load balancer. In that case, you do not need an Ingress Controller such as Traefik. Alternatively, you could also replace Azure API Management with a solution such as Kong. I have used Kong in the past and it is quite good! The choice for one or the other will depend on several factors such as cost, features, ease of use, support, etc…

Microsoft Face API with a local container

A few days ago, I obtained access to the Face container. It provides access to the Face API via a container you can run where you want: on your pc, at the network edge or in your datacenter. You should allocate 6 GB of RAM and 2 cores for the container to run well. Note that you still need to create a Face API resource in the Azure Portal. The container needs to be associated with the Azure Face API via the endpoint and access key:

Face API with a West Europe (Amsterdam) endpoint

I used the Standard tier, which charges 0.84 euros per 1000 calls. As noted, the container will not function without associating it with an Azure Face API resource.

When you gain access to the container registry, you can pull the container:

docker pull containerpreview.azurecr.io/microsoft/cognitive-services-face:latest

After that, you can run the container as follows (for API billing endpoint in West Europe):

docker run --rm -it -p 5000:5000 --memory 6g --cpus 2 containerpreview.azurecr.io/microsoft/cognitive-services-face Eula=accept Billing=https://westeurope.api.cognitive.microsoft.com/face/v1.0 ApiKey=YOUR_API_KEY

The container will start. You will see the output (-it):

Running Face API container

And here’s the spec:

API spec Face API v1

Before showing how to use the detection feature, note that the container needs Internet access for billing purposes. You will not be able to run the container in fully offline scenarios.

Over at https://github.com/gbaeke/msface-go, you can find a simple example in Go that uses the container. The Face API can take a byte stream of an image or a URL to an image. The example takes the first approach and loads an image from disk as specified by the -image parameter. The resulting io.Reader is passed to the getFace function which does the actual call to the API (uri = http://localhost:5000/face/v1.0/detect):

// m is the io.Reader with the image bytes; params lists the face attributes to return
request, err := http.NewRequest("POST", uri+"?returnFaceAttributes="+params, m)
if err != nil {
    return "", err
}
request.Header.Add("Content-Type", "application/octet-stream")

// Send the request to the local container (client is a standard *http.Client)
resp, err := client.Do(request)
if err != nil {
    return "", err
}

The response contains a Body attribute and that attribute is unmarshalled to a variable of type interface. That one is marshalled with indentation to a byte slice (b) which is returned by the function as a string:

// respBody holds the raw JSON body read from the response
var response interface{}
err = json.Unmarshal(respBody, &response)
if err != nil {
    return "", err
}
// re-marshal with indentation for readable output
b, err := json.MarshalIndent(response, "", "\t")

Now you can use a picture like the one below:

Is he smiling?

Here are some parts of the output, produced by the command:
detectface -image smiling.jpg

Emotion is clearly happiness with additional features such as age, gender, hair color, etc…

[
    {
        "faceAttributes": {
            "accessories": [],
            "age": 33,
            "blur": {
                "blurLevel": "high",
                "value": 1
            },
            "emotion": {
                "anger": 0,
                "contempt": 0,
                "disgust": 0,
                "fear": 0,
                "happiness": 1,
                "neutral": 0,
                "sadness": 0,
                "surprise": 0
            },
            "exposure": {
                "exposureLevel": "goodExposure",
                "value": 0.71
            },
            "facialHair": {
                "beard": 0.6,
                "moustache": 0.6,
                "sideburns": 0.6
            },
            "gender": "male",
            "glasses": "NoGlasses",
            "hair": {
                "bald": 0.26,
                "hairColor": [
                    {
                        "color": "black",
                        "confidence": 1
                    }
                ]
            }
        },
        "faceId": "b6d924c1-13ef-4d19-8bc9-34b0bb21f0ce",
        "faceRectangle": {
            "height": 1183,
            "left": 944,
            "top": 167,
            "width": 1183
        }
    }
]

That’s it! Give the Face API container a go with the tool. You can get it here: https://github.com/gbaeke/msface-go/releases/tag/v0.0.1 (Windows)
