Here’s a quick overview of the steps you need to take to put Front Door in front of an Azure Web App. In this case, the web app runs a WordPress site.
Step 1: DNS
Suppose you deployed the Web App and its name is gebawptest.azurewebsites.net and you want to reach the site via wp.baeke.info. Traffic will flow like this:
user types wp.baeke.info ---CNAME to xyz.azurefd.net--> Front Door --- connects to gebawptest.azurewebsites.net using wp.baeke.info host header
It’s clear that later, in Front Door, you will have to specify the host header (wp.baeke.info in this case). More on that later…
If you have worked with Azure Web Apps before, you probably know that the host name sent by the browser needs to be configured as a custom domain on the web app. Something like this:
Custom domain in Azure Web App (no https configured – hence the red warning)
In this case, we do not want to resolve wp.baeke.info to the web app but to Front Door. To make the custom domain assignment work (because the web app will verify the custom name), add the following TXT record to DNS:
TXT awverify.wp gebawptest.azurewebsites.net
For example in CloudFlare:
awverify txt record in CloudFlare DNS
With the above TXT record, I could easily add wp.baeke.info as a custom domain to the gebawptest.azurewebsites.net web app.
Note: wp.baeke.info is a CNAME to your Front Door domain (see below)
Step 2: Front Door
My Front Door designer looks like this:
Front Door designer
When you create a Front Door, you need to give it a name. In my case that is gebafd.azurefd.net. With wp.baeke.info as a CNAME for gebafd.azurefd.net, you can easily add wp.baeke.info as an additional Frontend host.
The backend pool is the Azure Web App. It’s configured as follows:
Front Door backend host (only one in the pool); could also have used the Azure App Service backend type
Front Door connects to the web app using its original name (gebawptest.azurewebsites.net) but sends wp.baeke.info as the host header. Because that host name is configured as a custom domain on the web app, the request is accepted and WordPress sees the host it expects.
The last part of the Front Door config is a simple rule that connects the frontend wp.baeke.info to the backend pool using HTTP only.
Step 3: WordPress config
With the default Azure WordPress templates, you do not need to modify anything, because wp-config.php derives the site address (WP_SITEURL and WP_HOME) from the host header of the incoming request.
In some of my previous posts, I talked about Azure Front Door and Web Application Firewall policies to protect a workload like one or more APIs running on Kubernetes or App Service. Although I enabled the Web Application Firewall policies, I did not show what happens when the rules are triggered. Let’s take a look at that!
Before we get started though, take the following diagram into account:
WAF for Front Door is a global solution. You create a WAF policy in the portal or via other means and attach it to a Front Door frontend. Rules are evaluated and acted upon at the edge versus on your application server.
Azure WAF supports custom rules and Azure-managed rule sets (based on OWASP). The custom rules are interesting because they allow you to restrict IP addresses, configure geography-based access control, and more.
There’s an additional rule type called bot protection rule as well. At the time of this writing (beginning June 2019) this feature is in public preview. It uses the Microsoft Intelligent Security Graph to do its magic, similarly to Azure Firewall when you enable Threat Intelligence.
WAF Logs
Let’s first use a tool that can scan an endpoint for vulnerabilities to trigger the WAF rules. One such tool is OWASP ZAP, which you need to install on your workstation.
OWASP ZAP tool
Before we check the logs, note we have set the policy to Detection:
WAF policy set to Detection; start with detection to learn what the rules might block in your app
Now let’s take a look at the logs. Use the following query in Log Analytics and modify it for your own host (host_s field):
AzureDiagnostics
| where ResourceType == "FRONTDOORS" and Category == "FrontdoorWebApplicationFirewallLog"
| where action_s == "Block"
| where host_s == "api.baeke.info"
The result:
Blocked requests (these would be blocked at the edge if the policy were set to Prevention)
Like with any Log Analytics query, you can place alerts on log occurrences. You will need to be in the Log Analytics workspace, and not in the Logs section of Azure Front Door:
Conclusion
Azure Web Application Firewall policies for Azure Front Door integrate with Azure Monitor and Log Analytics, like most other Azure services. With some KQL, the query language for Log Analytics, it is straightforward to query the logs and set alerts on them.
In the previous post, we looked at publishing and securing an API with Azure Front Door and Azure Web Application Firewall. The API ran on Kubernetes, exposed by Kong and Kong Ingress Controller. Kong was configured to require an API key to call the /users API, allowing us to identify the consumer of the API. The traffic flow was as follows:
Consumer -- HTTPS --> Azure Front Door with WAF policy -- HTTPS --> Kong (exposed with Azure Load Balancer) -- HTTP --> API Kubernetes service --> API pods
Although Kubernetes makes the API(s) highly available, you might want to take extra precautions such as deploying the API in multiple regions. In this post, we will take a look at doing so. That means we will deploy the API in both West and North Europe, in two distinct Kubernetes clusters:
we-clu: Kubernetes cluster in West Europe
ne-clu: Kubernetes cluster in North Europe
The flow is very similar of course:
Consumer -- HTTPS --> Azure Front Door with WAF policy -- HTTPS --> Kong (exposed with Azure Load Balancer; region to connect to depends on Front Door configuration and health probes) -- HTTP --> API Kubernetes service --> API pods
We deploy a Kubernetes cluster in each region, install Helm, deploy Kong, deploy our API and configure ingresses and related Kong custom resource definitions (CRDs). The result is an external IP address in each region that leads to the Kong proxy. Search for “kong” on this blog to find posts with more details about this deployment.
Note that the API deployment specifies an environment variable that will contain the string WE or NE. This environment variable will be displayed in the output when we call the API. Here is the API deployment for West Europe:
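What follows is a minimal sketch rather than the exact manifest: the container image and the name of the environment variable (REGION) are assumptions, while the deployment name func and the value WE match what is used later in this post.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: func
spec:
  replicas: 2
  selector:
    matchLabels:
      app: func
  template:
    metadata:
      labels:
        app: func
    spec:
      containers:
      - name: func
        # placeholder image; use your own API container (an Azure Functions container in this post)
        image: myregistry/users-api:latest
        ports:
        - containerPort: 80
        env:
        # region string that the API echoes in its response ("NE" on the North Europe cluster)
        - name: REGION
          value: "WE"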
When we call the API and Azure Front Door uses the backend in West Europe, the result will be:
WE included in the response from the West Europe cluster
Origin APIs
The origin APIs need to be exposed on the public Internet using a DNS name. Azure Front Door requires a public backend to connect to. Naturally, the backend can be configured to only accept incoming requests from Front Door. In our case, the APIs are available on the public IP of the Kong proxy. The following names were used:
api-o-we.baeke.info: Kong proxy in West Europe
api-o-ne.baeke.info: Kong proxy in North Europe
Both endpoints are configured to accept TLS connections only, and use a Let’s Encrypt wildcard certificate for *.baeke.info.
Front Door Configuration
The Front Door designer looks the same as in the previous post:
Front Door Designer
However, the backend pool api-o now has two backends:
Two backend hosts, both enabled with same priority and weight
To determine the health of the backends, Front Door needs to be configured with a health probe, and the probed path must return status code 200 for a backend to be considered healthy. If we were to specify the probe below, it would fail:
Errrrrrr, this won’t work
The health probe would hit Kong’s proxy and return a 404 (Not Found). We did not create a route for /, only for /users. With Azure Front Door, when all health probes fail, all backends are considered healthy. Yes, you read that right.
Although we could create a route called /health that returns a 200, we will use the following probe just to make it work:
Fixing the health probe (quick and dirty fix): just use the /users API as the probe path
If you are exposing multiple APIs on each cluster, the health probe above would not make sense. Also note that the purpose of the health probe is to determine if the cluster is up or not. It will not fix one API behaving badly or being removed accidentally!
You can check the health probes from the portal:
Yep! Health probes in West and North Europe are at 100%
Connection test
When I connect from my home laptop in Belgium, I get the following response:
Connection to West Europe cluster
When I connect from my second home in Dublin 🤷‍♂️ I get:
Connection from a VM in North Europe (I was kidding about the second home)
If you enable logging to Log Analytics, you can check this in the FrontDoorAccessLog:
Connection from home via Brussels (West Europe would show DB in Tenant_x)
When I remove the /users API in West Europe (kubectl delete deploy func), my home laptop will connect to North Europe as expected:
I didn’t fake this! It’s 100% real, my laptop now connects to the North Europe cluster via Front Door as expected
Note that traffic will not fail over to the other region the moment you delete the /users API (which doubles as the health probe here). How quickly that happens depends on the following setting (in the backends section of the Front Door designer):
When should the backend be determined healthy or unhealthy; decrease the sample size and/or required successful samples to make it go faster
The backend health percentage graph indicates the probe failure as well:
We’ve lost West Europe folks!
Conclusion
When you are going for a multi-region deployment of services, Azure Front Door is one of the options. Of course, there is much more to a multi-region deployment than the “front-end stuff” described in this post. What do you do with databases for instance? Can you use active-active write regions (e.g. Cosmos DB) or does your database only support active/passive with read replicas?
As in other load balancing and fail over solutions, proper health probes are crucial in the design. Think about what a good health probe can be and what it means when it is not available. One option is to just write a health probe exposed via an endpoint such as /health that merely returns a 200 status code. But your health probe could also be designed to connect to backend systems such as databases or queues to determine the health of the system.
Hopefully, this post gives you some ideas to start! Follow me on Twitter for updates.
In the post, Securing your API with Kong and CloudFlare, I exposed a dummy API on Kubernetes with Kong and published it securely with CloudFlare. The breadth of features and its ease of use made CloudFlare a joy to work with. It didn’t take long before I got the question: “can’t you do that with Azure only?”. The answer is obvious: “Of course you can!”
In this post, the traffic flow is as follows:
Consumer -- HTTPS --> Azure Front Door with WAF policy -- HTTPS --> Kong (exposed with Azure Load Balancer) -- HTTP --> API Kubernetes service --> API pods
Similarly to CloudFlare, Azure Front Door provides a fully trusted certificate for consumers of the API. In contrast to CloudFlare, Azure Front Door does not provide origin certificates which are trusted by Front Door. That’s easy to solve though by using a fully trusted Let’s Encrypt certificate which is stored as a Kubernetes secret and used in the Kubernetes Ingress definition. For this post, I requested a wildcard certificate for *.baeke.info via https://www.sslforfree.com/
Let’s take it step-by-step, starting at the API and Kong level.
APIs and Kong
Just like in the previous posts, we have a Kubernetes service called func and back-end pods that host the API implemented via Azure Functions in a container. The API pods run in the default namespace and, for convenience, Kong is also deployed in that namespace (not recommended in production).
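The Ingress that exposes the func service through Kong could look like the minimal sketch below. The resource name and the service port are assumptions; the ingress class, host, path and TLS secret follow the rest of this post.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: func-ingress
  annotations:
    # hand this ingress to the Kong Ingress Controller
    kubernetes.io/ingress.class: kong
spec:
  tls:
  - hosts:
    - api-o.baeke.info
    # wildcard certificate for *.baeke.info, stored as a Kubernetes secret (see below)
    secretName: wildcard-baeke.info.tls
  rules:
  - host: api-o.baeke.info
    http:
      paths:
      - path: /users
        backend:
          serviceName: func
          servicePort: 80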
Kong will pick up the above definition and configure itself accordingly.
The API is exposed publicly via https://api-o.baeke.info where the o stands for origin. The Ingress references a secret, wildcard-baeke.info.tls, which contains the wildcard certificate for *.baeke.info.
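A standard kubernetes.io/tls secret does the job; a sketch with placeholder values:

apiVersion: v1
kind: Secret
metadata:
  name: wildcard-baeke.info.tls
  namespace: default
type: kubernetes.io/tls
data:
  # base64-encoded PEM certificate (chain) and private key
  tls.crt: certificate
  tls.key: key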
Naturally, certificate and key should be replaced with the base64-encoded strings of the certificate and key you have obtained (in this case from https://www.sslforfree.com).
At the DNS level, api-o.baeke.info should refer to the external IP address of the exposed Kong Ingress Controller (proxy):
The service kong-kong-proxy is exposed via a public IP address (service of type LoadBalancer)
For the rest, the Kong configuration is not very different from the configuration in Securing your API with Kong and CloudFlare. I did remove the whitelisting configuration, which needs to be updated for Azure Front Door.
Great, we now have our API listening on https://api-o.baeke.info but it is not exposed via Azure Front Door and it does not have a WAF policy. Let’s change that.
Web Application Firewall (WAF) Policy
You can create a WAF policy from the portal:
WAF Policy
The above policy is set to detection only. No custom rules have been defined, but a managed rule set is activated:
Managed rule set for OWASP
The WAF policy was saved as baekeapiwaf. It will be attached to an Azure Front Door frontend. When a policy is attached to a frontend, it will be shown in the policy:
Associated frontends (Front Door front-ends)
Azure Front Door
We will now add Azure Front Door to obtain the following flow:
Consumer ---> https://api.baeke.info (Front Door + WAF) --> https://api-o.baeke.info
The final configuration in Front Door Designer looks like this:
Front Door Designer
When a request comes in for api.baeke.info, the response from api-o.baeke.info is served. Caching was not enabled. The frontend and backend are tied together via the routing rule.
The first thing you need to do is to add the azurefd.net frontend which is baeke-api.azurefd.net in the above config. There’s not much to say about that. Just click the blue plus next to Frontend hosts and follow the prompts. I did not attach a WAF policy to that frontend because it will not forward requests to the backend. We will use a custom domain for that.
Next, click the blue plus again to add the custom domain (here api.baeke.info). In your DNS zone, create a CNAME record that maps api.yourdomain.com to the azurefd.net name:
Mapping of custom domain to azurefd.net domain in CloudFlare DNS
I attached the WAF policy baekeapiwaf to the front-end domain:
WAF policy with OWASP rules to protect the API
Next, I added a certificate. When you select Front Door managed, you will get a Digicert-managed certificate. If the CNAME mapping is not complete, you will get an e-mail from Digicert to approve certificate issuance. Make sure you check your e-mails if it takes long to issue the certificate. It will take a long time either way so be patient!
Now that we have the frontend, specify the backend that Front Door needs to connect to:
Backend pool
The backend pool uses the API exposed at api-o.baeke.info as defined earlier. With only one backend, priority and weight are of no importance. It should be clear that you can add multiple backends, potentially in different regions, and load balance between them.
You will also need a health probe to check for healthy and unhealthy backends:
Health probes of the backend
Note that the above health check does NOT return a 200 OK status code. That is the only status code that would result in a healthy endpoint. With the above config, Kong will respond with a “no Route matched” 404 Not Found error instead. That does not mean that Front Door will not route to this endpoint though! When all endpoints are in a failed state, Front Door considers them healthy anyway and routes traffic using round-robin. See the documentation for more info.
Now that we have the frontend and the backend, let’s tie the two together with a rule:
First part of routing rule
In the first part of the rule, we specify that we listen for requests to api.baeke.info (and not the azurefd.net domain) and that we only accept https. The pattern /* basically forwards everything to the backend.
In the route details, we specify the backend to route to:
Backend to route to
Clearly, we want to route to the api-o backend we defined earlier. We only connect to the backend via HTTPS. It only accepts HTTPS anyway, as defined at the Kong level via a KongIngress resource.
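A rough sketch of such a KongIngress is shown below; the resource name is an assumption and the exact schema depends on the Kong Ingress Controller version you run.

apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: https-only
route:
  # only accept https on the route(s) this resource is applied to
  protocols:
  - https

The KongIngress is then linked to the ingress (or service) via the configuration.konghq.com annotation, set to the name of the KongIngress resource.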
Note that it is possible to create an HTTP to HTTPS redirect rule. See the post Azure Front Door Revisited for more information. Without the rule, you will get the following warning:
Please disregard this warning
Test, test, test
Let’s call the API via the http tool:
Clearly, Azure Front Door has served this request as indicated by the X-Azure-Ref header. Let’s try plain http:
Azure Front Door throws the above error because the routing rule only accepts https on api.baeke.info!
Whitelisting Azure Front Door
To restrict calls to the backend to Azure Front Door, I used the following KongPlugin definition:
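A KongPlugin based on Kong’s ip-restriction plugin, roughly as sketched below, does the trick. The name whitelist-fd matches the annotation further down, and 147.243.0.0/16 is the Azure Front Door backend range as documented at the time of writing (newer Kong versions use allow instead of whitelist).

apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: whitelist-fd
plugin: ip-restriction
config:
  # Azure Front Door backend IP range; verify the current range in the Front Door documentation
  whitelist:
  - 147.243.0.0/16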
The IP range is documented here. Note that the IP range can and probably will change in the future.
In the ingress definition, I added the plugin via the annotations:
annotations:
  # hand the ingress to Kong and attach the plugins: http-auth (the API key plugin from the previous post) and whitelist-fd (IP restriction)
  kubernetes.io/ingress.class: kong
  plugins.konghq.com: http-auth, whitelist-fd
Calling the backend API directly will now fail:
That’s a no no! Please use the Front Door!
Conclusion
Publishing APIs (or any web app), whether they are running on Kubernetes or other systems, is easy to do with the combination of Azure Front Door and Web Application Firewall policies. Do take pricing into account though. It’s a mixture of relatively low fixed prices with variable pricing per GB and requests processed. In general, CloudFlare has the upper hand here, from both a pricing and features perspective. On the other hand, Front Door has advantages when it comes to automating its deployment together with other Azure resources. As always: plan, plan, plan and choose wisely!
In the previous post, I wrote about hosting a simple static website on an Azure Storage Account. To enable a custom URL such as https://blog.baeke.info, you can add Azure CDN. If you use the Verizon Premium tier, you can configure rules such as an http to https redirect rule. This is similar to hosting static sites in an Amazon S3 bucket with Amazon CloudFront although it needs to be said that the http to https redirect is way simpler to configure there.
On Twitter, Karim Vaes reminded me of the Azure Front Door service, which is currently in preview. The tagline of the Azure Front Door service is clear: “scalable and secure entry point for fast delivery of your global applications”.
Azure Front Door Service Preview
The Front Door service is quite advanced and has features like global HTTP load balancing with instant failover, SSL offload, application acceleration and even application firewalling and DDoS protection. The price is lower than the Verizon Premium tier of Azure CDN. Please note that preview pricing is in effect at this moment.
Configuring a Front Door with the portal is very easy with the Front Door Designer. The screenshot below shows the designer for the same website as the previous post but for a different URL:
Front Door Designer
During deployment, you create a name that ends in azurefd.net (here geba.azurefd.net). Afterwards you can add a custom name like deploy.baeke.info in the above example. Similar to the Azure CDN, Front Door will give you a Digicert-issued certificate if you enable HTTPS and choose Front Door managed:
Front Door managed SSL certificate
Naturally, the backend pool will refer to the https endpoint of the static website of your Azure Storage Account. I only have one such endpoint, but I could easily add another copy and start load balancing between the two.
In the routing rule, be sure you select the frontend host that matches the custom domain name you set up in the frontend hosts section:
Routing rule
It is still not as easy as in CloudFront to redirect http to https. For my needs, I can allow both http and https to Front Door and redirect to https in the browser.