Inspecting Web Application Firewall logs

In some of my previous posts, I talked about Azure Front Door and Web Application Firewall policies to protect a workload like one or more APIs running on Kubernetes or App Service. Although I enabled the Web Application Firewall policies, I did not show what happens when the rules are triggered. Let’s take a look at that! 🕶

Before we get started though, take the following diagram into account:

Azure web application firewall
From: https://docs.microsoft.com/en-us/azure/frontdoor/waf-overview

WAF for Front Door is a global solution. You create a WAF policy in the portal or via other means and attach it to a Front Door frontend. Rules are evaluated and acted upon at the edge, rather than on your application server.

Azure WAF supports custom rules and Azure-managed rule sets (based on OWASP). The custom rules are interesting because they allow you to restrict IP addresses, configure geography-based access control, and more.

There’s an additional rule type called bot protection rule as well. At the time of this writing (early June 2019), this feature is in public preview. It uses the Microsoft Intelligent Security Graph to do its magic, similar to Azure Firewall when you enable Threat Intelligence.

WAF Logs

Let’s first use a tool that can scan an endpoint for vulnerabilities to trigger the WAF rules. One such tool is OWASP ZAP, which you need to install on your workstation.

OWASP ZAP tool

Before we check the logs, note we have set the policy to Detection:

WAF policy set to Detection; start with detection to learn what the rules might block in your app

Now let’s take a look at the logs. Use the following query in Log Analytics and modify it for your own host (host_s field):

AzureDiagnostics
| where ResourceType == "FRONTDOORS" and Category == "FrontdoorWebApplicationFirewallLog"
| where action_s == "Block" 
| where host_s == "api.baeke.info"  

The result:

Blocked requests (or rather, requests that would have been blocked if the policy were set to Prevention at the global level)

Let’s look at a SQL Injection block:

Typical SQL Injection

The decoded requestUri_s is https://api.baeke.info:443/users?apikey=theapikey' AND '1'='1' --. Typical! It was blocked at the edge. This request went via the BRU location.
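
To see which rules are triggered most often for the host, you can extend the query with a summarize. A quick sketch; I am assuming the ruleName_s field here, so check the actual field names in your own workspace:

AzureDiagnostics
| where ResourceType == "FRONTDOORS" and Category == "FrontdoorWebApplicationFirewallLog"
| where host_s == "api.baeke.info"
| summarize hits = count() by ruleName_s, action_s
| order by hits desc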

As with any Log Analytics query, you can create alerts on log occurrences. Note that you need to do this from the Log Analytics workspace, not from the Logs section of Azure Front Door.
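
As a starting point for such an alert, the query below counts blocked requests in 5-minute bins; a minimal sketch that reuses the fields from the earlier query:

AzureDiagnostics
| where ResourceType == "FRONTDOORS" and Category == "FrontdoorWebApplicationFirewallLog"
| where action_s == "Block" and host_s == "api.baeke.info"
| summarize blockedRequests = count() by bin(TimeGenerated, 5m)

An alert rule on top of this query can then fire when blockedRequests exceeds a threshold you choose.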

Conclusion

Azure Web Application Firewall policies for Azure Front Door integrate with Azure Monitor and Log Analytics, like most other Azure services. With some KQL, the query language for Log Analytics, it is straightforward to query the logs and set alerts on them.

Securing your API with Kong and CloudFlare

In the previous post, we looked at API Management with Kong and the Kong Ingress Controller. We did not care about security and exposed a sample toy API over a public HTTP endpoint. It did require an API key, but that key was sent in the clear: no firewall, no WAF, nothing… 👎👎👎

In this post, we will expose the API over TLS and configure Kong to use a CloudFlare origin certificate. An origin certificate is issued and trusted by CloudFlare to connect to the origin, which in our case is an API hosted on Kubernetes.

The API consumer will not connect directly to the Kubernetes-hosted API exposed via Kong. Instead, the consumer connects to CloudFlare over TLS and uses a certificate issued by CloudFlare that is fully trusted by browsers and other clients.

The traffic flow is as follows:

Consumer --> CloudFlare (TLS with fully trusted cert, WAF, ...) --> Kong Ingress (TLS with origin cert) --> API (HTTP)

Configuring Kong

Refer to the previous post for installation instructions. The YAML files to configure the Ingress, KongIngress, Consumer, and so on are almost the same. The Ingress resource has the following changes:

  • We use a new hostname, api.baeke.info.
  • We configure TLS for api.baeke.info by referring to a secret called baeke.info.tls, which contains the CloudFlare origin certificate.
  • We use an additional Kong plugin that whitelists CloudFlare addresses, so only CloudFlare is allowed to connect to the Ingress.

Here is the full definition:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: func
  namespace: default
  annotations:
    kubernetes.io/ingress.class: kong
    plugins.konghq.com: http-auth, whitelist
spec:
  tls:
  - hosts:
    - api.baeke.info
    secretName: baeke.info.tls # cloudflare origin cert
  rules:
    - host: api.baeke.info
      http:
        paths:
        - path: /users
          backend:
            serviceName: func
            servicePort: 80
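
For reference, the http-auth plugin referenced in the plugins.konghq.com annotation is a KongPlugin resource as well. A minimal sketch, assuming key-auth with an apikey key name as in the previous post; adjust to your own setup:

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: http-auth
  namespace: default
config:
  key_names:
  - apikey
plugin: key-auth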

Here is the plugin definition for whitelisting with the current (June 15th, 2019) list of IP ranges used by CloudFlare. Note that you have to supply the addresses and ranges as an array. The documentation shows a comma-separated list! 🤷‍♂️

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: whitelist
  namespace: default
config:
  whitelist: 
  - 173.245.48.0/20
  - 103.21.244.0/22
  - 103.22.200.0/22
  - 103.31.4.0/22
  - 141.101.64.0/18
  - 108.162.192.0/18
  - 190.93.240.0/20
  - 188.114.96.0/20
  - 197.234.240.0/22
  - 198.41.128.0/17
  - 162.158.0.0/15
  - 104.16.0.0/12
  - 172.64.0.0/13
  - 131.0.72.0/22
plugin: ip-restriction 

I also made a change to the KongIngress resource to allow only https to the back-end service. Only the route section is shown below:

route:
  methods:
  - GET
  regex_priority: 0
  strip_path: true
  preserve_host: true
  protocols:
  - https

In the previous post, the protocols array contained the http value.

Note: for whitelisting to work, the Kong proxy service needs externalTrafficPolicy set to Local, which preserves the client source IP. Use kubectl edit svc kong-kong-proxy to modify that setting. You can also set this value at deployment time. This might or might not work for you; I used AKS, where it produces the desired outcome.
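
For reference, the relevant part of the service then looks like the snippet below; the rest of the spec is omitted:

apiVersion: v1
kind: Service
metadata:
  name: kong-kong-proxy
spec:
  type: LoadBalancer
  # Local preserves the client source IP, which the ip-restriction plugin needs
  externalTrafficPolicy: Local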

CloudFlare

Get the external IP of the kong-kong-proxy service and create a DNS entry for it. I created an A record for api.baeke.info:

Make sure the orange cloud is active. In this case, that means requests for api.baeke.info are proxied by CloudFlare, which allows us to enable caching, WAF (web application firewall), rate limiting, and more!

In the Firewall section, WAF is turned on. Note that this is a paid feature!

WAF to protect your API

In Crypto, Universal SSL is turned on and set to Full (strict).

Full (strict) means that CloudFlare connects to your origin over HTTPS and expects a valid certificate, which is checked. An origin certificate, issued by CloudFlare but not trusted by your operating system, is also considered valid. As stated above, I use such an origin certificate at the Ingress level.

The origin certificate can be issued and/or downloaded from the Crypto section:

Origin certs

I created an origin certificate for *.baeke.info and baeke.info and downloaded the certificate and private key in PEM format. I then encoded the contents of the certificate and key in base64 format and used them in a secret:

apiVersion: v1
kind: Secret
metadata:
  name: baeke.info.tls
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: base64-encoded-cert
  tls.key: base64-encoded-key
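
Tip: you can let kubectl do the base64 encoding for you. Assuming the certificate and key were saved as origin.pem and origin-key.pem (hypothetical file names), kubectl create secret tls baeke.info.tls --cert=origin.pem --key=origin-key.pem creates an equivalent secret.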

As you saw in the Ingress definition, the tls section refers to this secret by its name, baeke.info.tls.

When a consumer connects to the API, the fully trusted certificate issued by CloudFlare is used:

Universal SSL cert from CloudFlare

We also make sure consumers of the API need to use TLS:

Force HTTPS at the CloudFlare level

With the above configuration, consumers need to securely connect to https://api.baeke.info at CloudFlare. CloudFlare connects securely to the origin, which is the external IP of the ingress. Only CloudFlare is allowed to connect to that external IP because of the whitelisting configuration.

Testing the API

Let’s try the API with the http tool:

Connecting to the API
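
The call is similar to the hypothetical example below, with the API key passed via the apikey query parameter from the previous post:

http "https://api.baeke.info/users?apikey=your-api-key"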

CloudFlare adds all sorts of headers, which makes it clear that it is proxying the requests. When we don’t add a key or specify a wrong one:

Kong is still doing its work

The key is now securely sent from consumer to CloudFlare to origin. Phew! 😎

Conclusion

In this post, we hosted an API on Kubernetes, exposed it with Kong and secured it with CloudFlare. This example can easily be extended with multiple Kong proxies for high availability and multiple APIs (/users, /orders, /products, …) that are all protected by CloudFlare with end-to-end encryption and WAF. CloudFlare lends an extra helping hand by automatically generating both the “front-end” and origin certificates.

In a follow-up post, we will look at an alternative approach via Azure Front Door Service. Stay tuned!

Securing access to and from Azure Functions

I am often asked how to secure access to and from Azure Functions that are not running in an App Service Environment (ASE). An App Service Environment allows you to safeguard your apps in a subnet of your Azure Virtual Network. In a sense, it gives you a private deployment of Azure App Service that you can secure with Azure Firewall, Network Security Groups (NSGs) or Network Virtual Appliances (NVAs).

When you use Azure Functions in a regular App Service Plan or Premium plan, you will need to rely on Virtual Network Service Endpoints and App Service network integration to achieve similar results.

In this post, we will look at an example of an Azure Function, running in a Premium plan, that queries Cosmos DB. We will only allow incoming traffic to the Azure Function from a specific subnet, and only allow Cosmos DB to be queried by that same Azure Function. Here’s a diagram:

Incoming Traffic

To restrict incoming traffic to the Azure Function, navigate to the Function App in the portal and select Networking in Platform Features. You will see the following screen:

Azure Functions network features

We will configure the inbound restrictions via Configure Access Restrictions. You can configure restrictions for both the Function App itself and the scm site:

As soon as you add rules, a Deny All rule appears. In the rules above, I allowed my private IP address and the default subnet in the virtual network. The second rule configures the service endpoints. When you open the properties of the subnet, you will see:

Service Endpoint of type Microsoft.Web

Great! When you try to access the function from any other location, you will get a 403 error from the Azure Functions front-end. So don’t expect a connection timeout like with regular network security rules.

Outgoing traffic

The example Azure Function uses an HTTP trigger and a Cosmos DB input binding (cosmos). Documents contain a name property. The function outputs the name found in the first document:

// HTTP-triggered function with a Cosmos DB input binding named "cosmos"
module.exports = async function (context, req, cosmos) {
    // cosmos holds the documents returned by the input binding
    context.log(cosmos);
    context.res = {
        body: "hello " + cosmos[0].name
    };
};
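
For reference, the bindings for such a function could look like the function.json below. This is a sketch: the database name, collection name, and connection string setting are assumptions, not values from this setup:

{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [ "get" ],
      "authLevel": "function"
    },
    {
      "type": "cosmosDB",
      "direction": "in",
      "name": "cosmos",
      "databaseName": "mydb",
      "collectionName": "names",
      "connectionStringSetting": "CosmosDBConnection",
      "sqlQuery": "SELECT * FROM c"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}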

In order to secure access to Cosmos DB, two features were used:

  1. Azure Functions VNet Integration (VNet integration is currently in preview)
  2. Cosmos DB network service endpoints to restrict access to the subnet that provides the Azure Function hosts with an IP address

Configuring the VNet integration is straightforward, especially when compared to the old style of integration, which required a VPN tunnel:

App Service (including Azure Functions) VNet integration

As you can see in the above screenshot, you delegate a subnet to the App Service hosts. In my case, that is subnet func-sec:

Subnet delegated to a service (Microsoft.Web/serverFarms)

The bottom of the screenshot shows the subnet is delegated to the Microsoft.Web/serverFarms service. That is the result of the VNet integration.

You can also see the subnet has service endpoints configured for Cosmos DB. That is the result of the Cosmos DB configuration below:

Service endpoint config in Cosmos DB

In Cosmos DB, an existing virtual network was added. I did not enable the Accept connections from within public Azure datacenters option.

When you remove the service endpoint and run the Azure Function, the following error is thrown:

Unable to proceed with the request. Please check the authorization claims to ensure the required permissions to process the request. ActivityId: 03b2c11f-2b21-44c9-ab44-61b4864539fe, Microsoft.Azure.Documents.Common/2.2.0.0, Windows/10.0.14393 documentdb-netcore-sdk/2.2.0

Does it work from a VM in the default subnet?

If all went well, I should be able to call the Azure Function from the virtual machine in the default subnet. Let’s try with curl:

Yes, itsme!

The name field in the first document is set to itsme, so it worked! Great, the function can be called from the default subnet. In case you are wondering about the use of -p in the ssh command: this virtual machine sat behind an Azure Firewall, and the VM's SSH port was exposed via a DNAT rule over a random port.

From another location, the following error is shown (wrapped in some HTML, but this is the main error):

Error 403 - This web app is stopped

Conclusion

With virtual network service endpoints now available for most Azure PaaS (platform as a service) components, you can ensure those services are only accessed from intended locations. In this example, you saw how to secure access to Azure Functions and Cosmos DB. Service endpoints combined with the App Service VNet integration make it straightforward to secure a Function App end-to-end.

Windows Intune available on March 23rd

Windows Intune, Microsoft’s cloud-based PC management service, will be available for trial and purchase on March 23rd. It will become available in several countries, including Belgium.

The service can be acquired for around $11 per computer per month. This includes a Windows 7 Enterprise license. For $1 extra, you can acquire MDOP as well, which includes App-V.

For more technical content, see http://technet.microsoft.com/en-us/windows/ff472080.aspx?ITPID=mscomgl. The FAQ is here: http://www.microsoft.com/windows/windowsintune/windowsintune-faq.aspx.

Windows Intune