Securing your API with Kong and CloudFlare

In the previous post, we looked at API Management with Kong and the Kong Ingress Controller. We did not care about security: we exposed a sample toy API over a public HTTP endpoint, protected by nothing more than an API key. All in the clear, no firewall, no WAF, nothing… 👎👎👎

In this post, we will expose the API over TLS and configure Kong to use a CloudFlare origin certificate. An origin certificate is issued by CloudFlare and trusted only by CloudFlare; it is used to secure the connection to the origin, which in our case is an API hosted on Kubernetes.

The API consumer will not connect directly to the Kubernetes-hosted API exposed via Kong. Instead, the consumer connects to CloudFlare over TLS, and CloudFlare presents a certificate that is fully trusted by browsers and other clients.

The traffic flow is as follows:

Consumer --> CloudFlare (TLS with fully trusted cert, WAF, ...) --> Kong Ingress (TLS with origin cert) --> API (HTTP)

Configuring Kong

Refer to the previous post for installation instructions. The YAML files to configure the Ingress, KongIngress, Consumer, etc… are almost the same. The Ingress resource has the following changes:

  • We use a new hostname api.baeke.info
  • We configure TLS for api.baeke.info by referring to a secret called baeke.info.tls which contains the CloudFlare origin certificate.
  • We use an additional Kong plugin which provides whitelisting of CloudFlare addresses; only CloudFlare is allowed to connect to the Ingress

Here is the full definition:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: func
  namespace: default
  annotations:
    kubernetes.io/ingress.class: kong
    plugins.konghq.com: http-auth, whitelist
spec:
  tls:
  - hosts:
    - api.baeke.info
    secretName: baeke.info.tls # cloudflare origin cert
  rules:
    - host: api.baeke.info
      http:
        paths:
        - path: /users
          backend:
            serviceName: func
            servicePort: 80

Here is the plugin definition for whitelisting with the current (June 15th, 2019) list of IP ranges used by CloudFlare. Note that you have to supply the addresses and ranges as an array. The documentation shows a comma-separated list! 🤷‍♂️

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: whitelist
  namespace: default
config:
  whitelist: 
  - 173.245.48.0/20
  - 103.21.244.0/22
  - 103.22.200.0/22
  - 103.31.4.0/22
  - 141.101.64.0/18
  - 108.162.192.0/18
  - 190.93.240.0/20
  - 188.114.96.0/20
  - 197.234.240.0/22
  - 198.41.128.0/17
  - 162.158.0.0/15
  - 104.16.0.0/12
  - 172.64.0.0/13
  - 131.0.72.0/22
plugin: ip-restriction 

I also made a change to the KongIngress resource so that the route only accepts https. Only the route section is shown below:

route:
 methods:
 - GET
 regex_priority: 0
 strip_path: true
 preserve_host: true
 protocols:
 - https 

In the previous post, the protocols array contained the http value.

Note: for whitelisting to work, the Kong proxy service needs externalTrafficPolicy set to Local, which preserves the client source IP. Use kubectl edit svc kong-kong-proxy to modify that setting. You can also set this value at deployment time. This might or might not work for you; I used AKS, where it produces the desired outcome.
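If you prefer not to edit the service interactively, a one-liner patch does the same thing. A minimal sketch, assuming the proxy service kept the default name kong-kong-proxy from the Helm release:

# switch the proxy service to Local so client source IPs are preserved
kubectl patch svc kong-kong-proxy -n default \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'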

CloudFlare

Get the external IP of the kong-kong-proxy service and create a DNS entry for it. I created an A record for api.baeke.info:

Make sure the orange cloud is active. In this case, that means requests for api.baeke.info are proxied by CloudFlare, which allows us to enable caching, WAF (web application firewall), rate limiting and more!

In the Firewall section, WAF is turned on. Note that this is a paid feature!

[Screenshot: WAF to protect your API]

In Crypto, Universal SSL is turned on and set to Full (strict).

Full (strict) means that CloudFlare connects to your origin over HTTPS and expects a valid certificate, which is checked. An origin certificate, issued by CloudFlare but not trusted by your operating system, is also considered valid. As stated above, I use such an origin certificate at the Ingress level.

The origin certificate can be issued and/or downloaded from the Crypto section:

[Screenshot: origin certs]

I created an origin certificate for *.baeke.info and baeke.info and downloaded the certificate and private key in PEM format. I then encoded the contents of the certificate and key in base64 format and used them in a secret:

apiVersion: v1
kind: Secret
metadata:
  name: baeke.info.tls
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: base64-encoded-cert
  tls.key: base64-encoded-key

As you saw in the Ingress definition, the tls section refers to this secret by its name, baeke.info.tls.
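Instead of base64-encoding the PEM files by hand, you can let kubectl do the work. A minimal sketch, assuming you saved the origin certificate and private key as origin.pem and origin-key.pem (hypothetical file names):

# creates a secret of type kubernetes.io/tls and handles the base64 encoding
kubectl create secret tls baeke.info.tls \
  --cert=origin.pem \
  --key=origin-key.pem \
  --namespace default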

When a consumer connects to the API, the fully trusted certificate issued by CloudFlare is used:

[Screenshot: Universal SSL cert from CloudFlare]

We also make sure consumers of the API need to use TLS:

[Screenshot: force HTTPS at the CloudFlare level]

With the above configuration, consumers connect securely to https://api.baeke.info at CloudFlare. CloudFlare, in turn, connects securely to the origin, which is the external IP of the Ingress. Only CloudFlare is allowed to connect to that external IP because of the whitelisting configuration.
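You can verify the whitelisting by connecting to the ingress IP directly, bypassing CloudFlare. A sketch with httpie, where <ingress-ip> stands for the external IP of kong-kong-proxy; the ip-restriction plugin should reject the call:

# direct call to the origin; expect an HTTP 403 because your IP
# is not in the CloudFlare ranges
http --verify=no "https://<ingress-ip>/users" Host:api.baeke.info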

Testing the API

Let’s try the API with the http tool (httpie):

[Screenshot: connecting to the API]
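The call looks roughly like this; the apikey query parameter carries the key configured in the KongCredential from the previous post:

http "https://api.baeke.info/users?apikey=yourverysecretkeyhere"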

All sorts of headers are added by CloudFlare which makes it clear that CloudFlare is proxying the requests. When we don’t add a key or specify a wrong one:

[Screenshot: Kong is still doing its work]

The key is now securely sent from consumer to CloudFlare to origin. Phew! 😎

Conclusion

In this post, we hosted an API on Kubernetes, exposed it with Kong and secured it with CloudFlare. This example can easily be extended with multiple Kong proxies for high availability and multiple APIs (/users, /orders, /products, …) that are all protected by CloudFlare with end-to-end encryption and WAF. CloudFlare lends an extra helping hand by automatically generating both the “front-end” and origin certificates.

In a follow-up post, we will look at an alternative approach via Azure Front Door Service. Stay tuned!

API Management with Kong Ingress Controller on Kubernetes

In previous posts, I wrote about Azure API Management in combination with APIs hosted on Kubernetes:

  • API Management with private APIs: requires API Management with virtual network integration because the APIs are reachable via an internal ingress on the Azure virtual network; use the premium tier 💰💰💰
  • API Management with public APIs: does not require virtual network integration but APIs need to restrict access to the public IP address of the API Management instance; you can use the other less expensive tiers 🎉🎉🎉

Instead of Azure API Management, you can use many other solutions. One of those solutions is Kong 🐵. In this post, we will take a look at Kong Ingress Controller, which can be configured via Kubernetes API objects such as ingresses and custom resource definitions defined by Kong. We will do the following:

  • Install Kong via Helm
  • Create an Ingress resource to access a dummy (and dumb 😊) user management API via http://hostname/users. The back-end API uses http://hostname/api/getusers so we will need to translate the path
  • Create a KongIngress custom resource to configure the back-end (like only allowing GET and setting the target path to /api/getusers)
  • Use a rate limiting plugin and associate it with the Ingress
  • Require key authentication on the Ingress, which also requires a KongConsumer and a KongCredential resource

For a video version, head over to Youtube. I recommend 1.5x speed! 💤💤💤

Installation

The installation can be performed with Helm. The extra LoadBalancer parameters expose the proxy and admin API via a public IP address. I used Azure Kubernetes Service (AKS).

helm install stable/kong --name kong \
  --set ingressController.enabled=true \
  --set admin.type=LoadBalancer \
  --set proxy.type=LoadBalancer

The above command installs Kong in the default namespace. List the services in that namespace with kubectl get svc and note the external IP of the kong-kong-proxy service. I associated that IP with a wildcard DNS entry like *.kong.yourdomain.com. That allows me to create an ingress for http://user.kong.yourdomain.com.

Note that you should not make the admin API publicly available via a load balancer. Just remove --set admin.type=LoadBalancer to revert to the default NodePort, or set admin.type=ClusterIP.

The Helm chart automatically installs a PostgreSQL instance via a StatefulSet. The instance has an 8GB disk attached; use kubectl get pv to check that. You can also use an external PostgreSQL instance or Cassandra (even Cosmos DB with the Cassandra API). I would highly recommend using external state. There is also an option to run without a database (DB-less mode), but I did not try that.
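To point Kong at an external PostgreSQL instance, the chart lets you disable the bundled database and pass connection settings as environment variables. A rough sketch; the parameter names below are assumptions that may differ between chart versions, so check the chart's values.yaml:

helm install stable/kong --name kong \
  --set ingressController.enabled=true \
  --set proxy.type=LoadBalancer \
  --set postgresql.enabled=false \
  --set env.pg_host=mypg.postgres.database.azure.com \
  --set env.pg_user=kong \
  --set env.pg_password=yourpassword \
  --set env.pg_database=kong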

Install the dummy user service

Use the deployment from the previous post, which deploys two pods with a container based on gbaeke/ingfunc. It contains the dummy API which is actually an Azure Function container running the Kestrel web server.

Create the Ingress object

The Ingress definition below, allows us to connect to the back-end user service using http://user.kong.baeke.info/users:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: func
  namespace: default
  annotations:
    kubernetes.io/ingress.class: kong
    plugins.konghq.com: http-ratelimit, http-auth
spec:
  rules:
    - host: user.kong.baeke.info
      http:
        paths:
        - path: /users
          backend:
            serviceName: func
            servicePort: 80 

The ingress.class annotation ensures that Kong picks up this Ingress definition; this matters because I also had Traefik, another Ingress Controller, installed. The plugins.konghq.com annotation refers to two plugins:

  • rate limiting: we will define this later to limit requests to 1 request/second
  • key auth: we will define this later to require the consumer to specify a previously defined API key

Go ahead and save the above file and apply it with kubectl apply -f filename.yaml. In subsequent steps, do the same for the other YAML definitions. All resources will be deployed in the default namespace.

Kong-specific ingress properties

The KongIngress custom resource definition can be used to specify additional Kong-specific properties on the Ingress:

apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: func
proxy:
  protocol: http
  path: "/api/getusers"
  connect_timeout: 10000
  retries: 10
  read_timeout: 10000
  write_timeout: 10000
route:
  methods:
  - GET
  regex_priority: 0
  strip_path: true
  preserve_host: true
  protocols:
  - http 

The name of the KongIngress resource is func, which is the same name as the Ingress. This associates the KongIngress resource with the Ingress resource automatically. Note that we restricted the methods to GET and that we specify the path to the back-end API as /api/getusers. You also need strip_path set to true to make this work; it strips the original path (/users) from the request before the proxy path is applied.
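To make the translation concrete, here is what happens to a request (a sketch, assuming the func service lives in the default namespace):

# what the consumer sends
GET http://user.kong.baeke.info/users?apikey=KEY
# what Kong forwards to the func service: strip_path removes /users
# and the proxy path /api/getusers is used instead
GET http://func.default.svc.cluster.local/api/getusers?apikey=KEY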

Rate limiting

To configure rate limiting, a typical capability of an API management solution, use the definition below:

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: http-ratelimit
  namespace: default
config:
  second: 1
plugin: rate-limiting 

This is a custom resource of kind (type) KongPlugin. Via the plugin property, we select the rate-limiting plugin and configure it to allow one request per second. Note that we named this resource http-ratelimit and that we use this name in the annotation of the Ingress specification. That associates the plugin with that specific Ingress resource.

Require an API key

To require an API key, first create a consumer with a KongConsumer object:

apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: top
username: topuser 

Next, create a credential and associate it with the consumer:

apiVersion: configuration.konghq.com/v1
kind: KongCredential
metadata:
  name: topcred
consumerRef: top
type: key-auth
config:
  key: yourverysecretkeyhere

We need a consumer and a key because the next step will require a key when we call the API. To enforce that, define a key-auth plugin:

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: http-auth
  namespace: default
plugin: key-auth 

The above plugin is associated with the Ingress using its name (http-auth) in the Ingress annotations.

Testing the API

Let’s try to call the API without a key:

[Screenshot: cannot call the API without the key]

Let’s send a key with the request via a query parameter (a header also works):

[Screenshot: API can be called with a key]
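Both forms look like this with httpie; apikey is the default key name of the key-auth plugin:

# key as a query parameter
http "http://user.kong.baeke.info/users?apikey=yourverysecretkeyhere"
# key as a header
http http://user.kong.baeke.info/users apikey:yourverysecretkeyhere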

Note I used the httpie tool (apt install httpie) for nicer formatting!

If you want to try the rate limiting features, use this on the bash prompt:

while true; do http http://user.kong.baeke.info/users?apikey=KEY; done 

Once in a while, you should see:

[Screenshot: oops, rate limit exceeded]
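The blocked response should look something like the following; the exact headers depend on your Kong version:

HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit-Second: 1
X-RateLimit-Remaining-Second: 0

{
    "message": "API rate limit exceeded"
}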

If you want to check the configuration, navigate to https://exposed-admin-IP:8444:

[Screenshot: Kong admin API]

A bit further down the output of the admin API, the enabled plug-ins should be listed:

[Screenshot: enabled plugins]
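You can also query the admin API from the command line. A sketch with curl; -k skips certificate verification on the self-signed admin endpoint:

# list the configured plugins
curl -k https://exposed-admin-IP:8444/plugins
# the root endpoint returns node information, including available plugins
curl -k https://exposed-admin-IP:8444/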

Conclusion

In this post, we looked at the basics of Kong Ingress Controller and a few of its options to translate the path, rate-limit requests and require key authentication. We did not touch on other features such as SSL, the Enterprise version and many of the other plugins. Hopefully though, this is just enough to get you started with the open source version on Kubernetes. Take a look at the Kong documentation for more in-depth information!

Azure API Management and Azure Kubernetes Service

You have decided to host your APIs in Kubernetes in combination with an API management solution? You are surely not the only one! In an Azure context, one way of doing this is combining Azure API Management and Azure Kubernetes Service (AKS). This post describes one of the ways to get this done. We will use the following services:

  • Virtual Network: AKS will use advanced networking and Azure CNI
  • Private DNS: to host a private DNS zone (private.baeke.info); note that private DNS is in public preview
  • AKS: deployed in a subnet of the virtual network
  • Traefik: Ingress Controller deployed on AKS, configured to use an internal load balancer in a dedicated subnet of the virtual network
  • Azure API Management: with virtual network integration which requires Developer or Premium; note that Premium comes at a hefty price though

Let’s take it step by step but note that this post does not contain all the detailed steps. I might do a video later with more details. Check the YouTube channel for more information.

We will setup something like this:

Consumer --> Azure API Management public IP --> ILB (in private VNET) --> Traefik (in Kubernetes) --> API (in Kubernetes - ClusterIP service in front of a deployment) 

Virtual Network

Create a virtual network in a resource group. We will add a private DNS zone to this network. You should not add resources such as virtual machines to this virtual network before you add the private DNS zone.

I will call my network privdns and add a few subnets (besides default):

  • aks: used by AKS
  • traefik: for the internal load balancer (ILB) and the front-end IP addresses
  • apim: to give API management access to the virtual network

Private DNS

Add a private DNS zone to the virtual network with Azure CLI:

az network dns zone create -g rg-ingress -n private.baeke.info --zone-type Private --resolution-vnets privdns 

You can now add records to this private DNS zone:

az network dns record-set a add-record \
   -g rg-ingress \
   -z private.baeke.info \
   -n test \
   -a 1.1.1.1

To test name resolution, deploy a small Linux virtual machine and ping test.private.baeke.info:

[Screenshot: testing the private DNS zone]

Update for June 27th, 2019: the above commands use the old API; see https://docs.microsoft.com/en-us/azure/dns/private-dns-getstarted-cli for the new syntax to create a zone and to link it to an existing VNET. These zones are viewable in the portal via Private DNS Zones:

[Screenshot: private DNS zones in the portal]
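With the new API, creating the zone and linking it to the VNET looks roughly like this; the link name privdns-link is made up, and the linked docs remain the authoritative reference:

az network private-dns zone create -g rg-ingress -n private.baeke.info
az network private-dns link vnet create -g rg-ingress \
  -z private.baeke.info -n privdns-link \
  --virtual-network privdns --registration-enabled false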

Azure Kubernetes Service

Deploy AKS and use advanced networking. Use the aks subnet when asked. Each node you deploy will get 30 IP addresses in the subnet:

[Screenshot: first IP addresses of one of the nodes]

Traefik

To expose the APIs over an internal IP, we will use ingress objects, which require an Ingress Controller. Traefik is just one of the choices available; any Ingress Controller will work.

Instead of using ingresses, you could also expose your APIs via services of type LoadBalancer backed by an internal load balancer. That approach requires one IP per API, whereas the ingress approach only requires one IP in total. That IP resolves to Traefik, which uses the host header to route to the APIs.

We will install Traefik with Helm. Check my previous post for more info about Traefik and Helm. In this case, I will download and untar the Helm chart and modify values.yaml. To download and untar the Helm chart use the following command:

helm fetch stable/traefik --untar

You will now have a traefik folder, which contains values.yaml. Modify values.yaml as follows:

[Screenshot: changes to values.yaml]

This instructs Helm to add the above annotations to the Traefik service object, which in turn tells the Azure cloud integration components to use an internal load balancer and to create it in the traefik subnet. Make sure that your AKS service principal has an RBAC role on the virtual network that allows this operation.
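Since the actual changes are only shown in the screenshot, here is a sketch of what the relevant part of values.yaml looks like. The annotation names are the standard Azure ones; the subnet name matches the traefik subnet created earlier:

service:
  annotations:
    # ask the Azure cloud provider for an internal load balancer...
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    # ...with its front-end IP in the traefik subnet
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: traefik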

Now you can install Traefik on AKS. Make sure you are in the traefik folder where the Helm chart was untarred:

helm install . --name traefik --set serviceType=LoadBalancer,rbac.enabled=true,dashboard.enabled=true --namespace kube-system

When the installation is finished, there should be an internal load balancer in the resource group that is behind your AKS cluster:

[Screenshot: ILB deployed]

Running kubectl get svc -n kube-system should show something like:

[Screenshot: EXTERNAL-IP is the front-end IP on the ILB for the traefik service]

We can now reach Traefik on the virtual network and create an A record that resolves to this IP. The name func.private.baeke.info, which I will use later, resolves to the above IP.
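Creating that record uses the same (old-style) syntax as before. A sketch, assuming the ILB front-end IP is 10.0.3.4 (yours will differ):

az network dns record-set a add-record \
   -g rg-ingress \
   -z private.baeke.info \
   -n func \
   -a 10.0.3.4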

Azure API Management

Deploy API Management from the portal. API Management will need access to the virtual network which means we need a version (SKU) that has virtual network support. This is needed simply because the APIs are not exposed on the public Internet.

For testing, use the Developer SKU. In production, you should use the Premium SKU although it is very expensive. Microsoft should really make the virtual network integration part of every SKU since it is such a common scenario! Come on Microsoft, you know it’s the right thing to do! 😉

[Screenshot: API Management virtual network integration]

Above, API Management is configured to use the apim subnet of the virtual network. It will also be able to resolve private DNS names via this integration. Note that configuring the network integration takes quite some time.

Deploy a service and ingress

I deployed the following sample API with a simple deployment and service. Save this as func.yaml and run kubectl apply -f func.yaml. You will end up with two pods running a super simple and stupid API plus a service object of type ClusterIP, which is only reachable inside Kubernetes:

apiVersion: v1
kind: Service
metadata:
  name: func
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: func
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: func
spec:
  replicas: 2
  selector:
    matchLabels:
      app: func
  template:
    metadata:
      labels:
        app: func
    spec:
      containers:
      - name: func
        image: gbaeke/ingfunc
        ports:
        - containerPort: 80

Next, deploy an ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: func
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: func.private.baeke.info
      http:
        paths:
        - path: /
          backend:
            serviceName: func
            servicePort: 80

Notice I used func.private.baeke.info! Naturally, that name should resolve to the IP address on the ILB that routes to Traefik.

Testing the API from API Management

In API Management, I created an API that uses func.private.baeke.info as the backend. Yes, I know, the API name is bad. It’s just a sample ok? 😎

[Screenshot: API with backend func.private.baeke.info]

Let’s test the GET operation I created:

[Screenshot: great success! API Management can reach the Kubernetes-hosted API via Traefik]

Conclusion

In this post, we looked at one way to expose Kubernetes-hosted APIs to the outside world via Azure API Management. The traffic flow is as follows:

Consumer --> Azure API Management public IP --> ILB (in private VNET) --> Traefik (in Kubernetes) --> API (in Kubernetes - ClusterIP service in front of a deployment)

Because we have to use host names in ingress definitions, we added a private DNS zone to the virtual network. We can create multiple A records, one for each API, and provide access to these APIs with ingress objects.

As stated above, you can also expose each API via an internal load balancer. In that case, you do not need an Ingress Controller such as Traefik. Alternatively, you could also replace Azure API Management with a solution such as Kong. I have used Kong in the past and it is quite good! The choice for one or the other will depend on several factors such as cost, features, ease of use, support, etc…
