Azure Kubernetes Service and Azure Private Link Integration

If you have done any work with Azure, you have probably come across terms such as Azure Private Link Service (PLS) and Private Endpoints (PEs). To quickly illustrate what Azure PLS is, let’s look at a diagram from the Microsoft documentation for Azure SQL database:

PLS with Azure SQL

In the diagram above, Azure SQL Database uses Azure Private Link Service (PLS) to provide connectivity to the database from inside a virtual network that you control. Without a private link, you would need to connect to Azure SQL via a public IP address over the Internet. To connect privately, a private endpoint (PE) is created inside a subnet in your virtual network; in the diagram, that interface gets IP address 10.0.0.5. Think of the PE as a network interface that is connected to Azure SQL Database via Azure PLS, with the green arrow from the PE to Azure SQL Database representing the private connection.

Azure SQL Database is not the only service offering this functionality. For example, when you deploy Azure Kubernetes Service (AKS) as a private cluster, a private endpoint is created to reach the Kubernetes API server (the managed control plane) via Azure PLS.
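To make this concrete, here is a minimal sketch of creating such a private cluster with the Azure CLI; the resource group and cluster names are placeholders, not values used elsewhere in this post:

# create a resource group and a private AKS cluster (placeholder names)
az group create --name rg-aks --location westeurope
az aks create \
  --resource-group rg-aks \
  --name aks-private \
  --node-count 1 \
  --enable-private-cluster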

When you go to Private Link Center in the Azure Portal, you can see all your private endpoints and their connection state. Below, a private endpoint for a private AKS cluster is shown. It shows as connected via private link.

Private endpoint to access the Microsoft managed AKS control plane

Creating your own Private Link services

In the two examples above, Azure SQL Database and AKS use Azure PLS to enable a private connection. But what if you build your own service and you want to offer private connectivity to consumers such as your customers or other Azure services? That is where the creation of your own private link services comes into play. These services can be created from Private Link Center by enabling private connections to a standard load balancer:

Creating your own private link service

More information about this process can be found in the documentation.

In summary, when you have a standard load balancer that load balances traffic to an application, you can offer a private connection to that load balancer via Azure Private Link Service.
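If you prefer scripting over the portal, the same can be done with the Azure CLI. A minimal sketch, assuming an existing standard load balancer myLB with frontend IP configuration LoadBalancerFrontEnd in VNet myVnet and subnet mySubnet (all placeholder names):

# private link service network policies must be disabled on the subnet
az network vnet subnet update \
  --resource-group myRG \
  --vnet-name myVnet \
  --name mySubnet \
  --disable-private-link-service-network-policies true

# create the private link service on the load balancer frontend
az network private-link-service create \
  --resource-group myRG \
  --name myPLS \
  --vnet-name myVnet \
  --subnet mySubnet \
  --lb-name myLB \
  --lb-frontend-ip-configs LoadBalancerFrontEnd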

The load balancer can be in front of traditional virtual machines or Kubernetes pods. In the next section, we’ll look at the second scenario: creating a private link service from an internal load balancer (ILB) that AKS creates for a Kubernetes service.

Creating a Private Link Service from an AKS internal load balancer

Although it was technically possible to create a Private Link Service from an internal load balancer controlled by AKS in the past, it was a cumbersome process. In addition, AKS was not aware of the Private Link Service configuration. A new capability in the Azure Cloud Provider changes this.

When you create a Kubernetes service of type LoadBalancer, you can now provide annotations that instruct the AKS Azure Cloud Provider to create a private link service from the internal load balancer it creates. Here’s an example:

apiVersion: v1
kind: Service
metadata:
  name: super-api
  annotations:
    # create ILB instead of ELB; this functionality predates the PLS functionality
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-pls-create: "true"
    service.beta.kubernetes.io/azure-pls-name: myPLS
    service.beta.kubernetes.io/azure-pls-ip-configuration-subnet: YOUR SUBNET
    service.beta.kubernetes.io/azure-pls-ip-configuration-ip-address-count: "1"
    service.beta.kubernetes.io/azure-pls-ip-configuration-ip-address: 10.224.10.10
    service.beta.kubernetes.io/azure-pls-proxy-protocol: "false"
    service.beta.kubernetes.io/azure-pls-visibility: "*"
    # does not apply here because we will use Front Door later
    service.beta.kubernetes.io/azure-pls-auto-approval: "YOUR SUBSCRIPTION ID"
spec:
  selector:
    app: super-api
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080

This works with both kubenet and Azure CNI. You can use the subnet that your AKS nodes are in. In the YAML above, replace YOUR SUBNET with the name of your subnet, not its resource ID.
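Submitting the manifest is a regular kubectl apply; for example, assuming the YAML above is saved as super-api.yaml:

kubectl apply -f super-api.yaml
# check the internal load balancer IP assigned to the service
kubectl get service super-api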

When the above YAML is submitted to Kubernetes, the private link service myPLS gets created. Record the alias for later use:

Creation of the PLS

Note that the annotation service.beta.kubernetes.io/azure-load-balancer-internal: "true" results in an internal load balancer, which is created in the AKS node resource group.

A private link service also creates a network interface in the subnet for NAT purposes. NAT ensures that the networking configuration of the consumer does not lead to IP address conflicts. The NAT IP above is 10.224.10.10. You can configure multiple NAT IP addresses to avoid port exhaustion.
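If you want to grab the alias from the CLI instead of the portal, something like the following sketch should work; the cluster and resource group names are placeholders:

# the load balancer and the PLS live in the AKS node resource group
NODE_RG=$(az aks show --resource-group rg-aks --name aks-private --query nodeResourceGroup -o tsv)
az network private-link-service show --resource-group "$NODE_RG" --name myPLS --query alias -o tsv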

The PLS will be visible in the Private Link Center without connections. Later, when you add services that use this private link service, the number of connections will be shown as below:

myPLS with one connection (from Azure Front Door, see below 😉)

But what can we connect to this? We already know the answer: a private endpoint. You could create a private endpoint in any network, in any subscription, and link it to myPLS. In fact, other customers from different Azure AD tenants can use myPLS as well, provided that you approve the connection. We will not do that in this example; instead, we will wire up Azure Front Door to our AKS service.
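For reference, this is roughly what such a consumer-side private endpoint would look like with the Azure CLI; a sketch, assuming a consumer VNet and subnet plus the resource ID of myPLS in $PLS_ID (all placeholders):

az network private-endpoint create \
  --resource-group consumer-rg \
  --name pe-to-mypls \
  --vnet-name consumer-vnet \
  --subnet pe-subnet \
  --private-connection-resource-id "$PLS_ID" \
  --connection-name mypls-connection \
  --manual-request true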

Azure Front Door Premium

Azure Front Door Premium supports private endpoints that connect to your own private link services. Those private endpoints are not owned by you but by the Front Door service. You will not be able to see those private endpoints in your subscription(s) because they do not live there. It’s as if someone from another organization and tenant connects to your private link service. In this case, that other organization is Microsoft! 😉

With the configuration of Front Door, we get the full picture below:

AKS service via ILB with PLS consumed by Front Door Premium Private Endpoint 🧠

The configuration of the private endpoint and wiring it up to your private link service is done in the origin group configuration, as shown above. When you add an origin to the origin group, one of the options is to connect to a private link service. Below, you see an already configured origin group:

Origin group with a private link service origin

Above, the origin host name is the alias of the private link service created earlier (myPLS).

Here’s a screenshot of the Add an origin UI:

Adding an origin using private link service

The Origin type should be custom, and the Host name should be the private link service alias. Then, you can check Enable private link service and select the private link that was created by AKS based on the service annotations.

Remember that you will still have to approve the usage of the private link service by Azure Front Door! Check Pending Connections in Private Link Center.
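Approval can also be scripted against the private link service itself; a sketch, assuming the node resource group is in $NODE_RG and CONNECTION_NAME is a placeholder:

# list the connections on the PLS to find the pending one from Front Door
az network private-link-service show --resource-group "$NODE_RG" --name myPLS --query privateEndpointConnections
# approve it by name
az network private-link-service connection update --resource-group "$NODE_RG" --service-name myPLS \
  --name CONNECTION_NAME --connection-status Approved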

Does it work?

In Front Door manager, you should have an endpoint and a route that uses the origin group. In my case, that is aksdemo-agfcfwgkgyctgyhs.z01.azurefd.net. The AKS service publishes a deployment of ghcr.io/gbaeke/super:1.0.7 which just prints Hello from Super API:
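A quick curl against the Front Door endpoint confirms the result:

curl https://aksdemo-agfcfwgkgyctgyhs.z01.azurefd.net
# Hello from Super API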

Tadaaa, it works!

Conclusion

This new feature makes it super easy to create Azure Private Link Services from internal load balancers created by AKS. Combined with Azure Front Door Premium, you can publish these services to the Internet without having to provide public connectivity at the AKS level. In addition, you can enable other Front Door features such as WAF (web application firewall). Maybe in the future, we’ll see some extra integration with Azure Front Door so it can act as an AKS Ingress Controller, all controlled from Kubernetes manifests? 😉

Approving a private endpoint connection with Azure CLI

In my previous post, I wrote about App Services with Private Link and used Azure Front Door to publish the web app. Azure Front Door Premium (in preview) can create a Private Endpoint and link it to your web app via Azure Private Link. When that happens, you need to approve the pending connection in Private Link Center.

The pending connection would be shown here, ready for approval

Although this is easy to do, you might want to automate the approval. Automation is possible via the REST API, but it is easier with the Azure CLI.

To do so, first list the private endpoint connections of your resource, in my case that is a web app:

az network private-endpoint-connection list --id /subscriptions/SUBID/resourceGroups/RGNAME/providers/Microsoft.Web/sites/APPSERVICENAME

The above command will return all private endpoint connections of the resource. For each connection, you get the following information:

{
  "id": "PE CONNECTION ID",
  "location": "East US",
  "name": "NAME",
  "properties": {
    "ipAddresses": [],
    "privateEndpoint": {
      "id": "PE ID",
      "resourceGroup": "RESOURCE GROUP NAME OF PE"
    },
    "privateLinkServiceConnectionState": {
      "actionsRequired": "None",
      "description": "Please approve this connection.",
      "status": "Pending"
    },
    "provisioningState": "Pending"
  },
  "resourceGroup": "RESOURCE GROUP NAME OF YOUR RESOURCE",
  "type": "YOUR RESOURCE TYPE"
}

To approve the above connection, use the following command:

az network private-endpoint-connection approve --id PE CONNECTION ID --description "Approved"

The --id in the approve command refers to the private endpoint connection ID, which looks like this for a web app:

/subscriptions/YOUR SUB ID/resourceGroups/YOUR RESOURCE GROUP/providers/Microsoft.Web/sites/YOUR APP SERVICE NAME/privateEndpointConnections/YOUR PRIVATE ENDPOINT CONNECTION NAME

After running the above command, the connection should show as approved:

Approved private endpoint connection

When you automate this in a pipeline, you can first list the private endpoint connections of your resource and filter on provisioningState="Pending" to find the ones you need to approve.
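A sketch of that approach, reusing the commands above with a JMESPath filter (the resource ID is a placeholder):

RESOURCE_ID="/subscriptions/SUBID/resourceGroups/RGNAME/providers/Microsoft.Web/sites/APPSERVICENAME"
# approve every pending private endpoint connection on the resource
for PEC_ID in $(az network private-endpoint-connection list --id "$RESOURCE_ID" \
    --query "[?properties.provisioningState=='Pending'].id" -o tsv); do
  az network private-endpoint-connection approve --id "$PEC_ID" --description "Approved"
done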

Hope it helps!

Trying Consul Connect on your local machine

In a previous post, I talked about installing Consul on Kubernetes and using some of its features. In that post, I did not look at the service mesh functionality. Before looking at that, it is beneficial to try out the service mesh features on your local machine.

You can easily install Consul on your local machine with Chocolatey for Windows or Homebrew for Mac. On Windows, a simple choco install consul is enough. Since Consul is just a single executable, you can start it from the command line with all the options you need.
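For local experimentation, the quickest way to get going is a dev agent; for example:

# start a single-node dev agent (state is kept in memory only)
consul agent -dev
# in another terminal, check that the agent is up
consul members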

In the video below, I walk through configuring two services running as containers on my local machine: a web app that talks to Redis. We will “mesh” both services and then use an intention to deny service-to-service traffic.

Consul Service Mesh on your local machine… speed it up! ☺
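To give an idea of what the deny intention from the video looks like on the CLI, here is a sketch; the service names web and redis are assumptions and depend on how you register the services:

# deny traffic from the web service to redis
consul intention create -deny web redis
# verify: should report that the connection is denied
consul intention check web redis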

In a later post and video, we will look at Consul Connect on Kubernetes. Stay tuned!

Exposing a local endpoint to the Internet with inlets

A while ago, I learned about inlets by Alex Ellis. It allows you to expose an endpoint on your internal network via a tunnel to an exit node. To actually reach your internal website, you navigate to the public IP and port of the exit node. Something like this:

Internet user --> public IP:port of exit node -- tunnel --> your local endpoint

On both the exit node and your local network, you need to run inlets. Let’s look at an example. Suppose I want to expose my Magnificent Image Classifier 😀 running on my local machine to the outside world. The classifier is actually just a container you can run as follows:

docker run -p 9090:9090 -d gbaeke/nasnet

The container image is big, so it will take a while to start. When the container is started, just navigate to http://localhost:9090 to see the UI. You can upload a picture to classify it.
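Before setting up the tunnel, it does not hurt to check that the container responds locally; for example:

# the classifier UI should respond on port 9090
curl -I http://localhost:9090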

So far so good. Now you need an exit node with a public IP. I deployed a small Azure B-series Linux VM (B1s; 7 euros/month). SSH into that VM and install the inlets CLI (yeah, I know piping a script to sudo sh is dangerous 😏):

curl -sLS https://get.inlets.dev | sudo sh

Now run the inlets server (from instructions here):

export token=$(head -c 16 /dev/urandom | shasum | cut -d" " -f1) 
inlets server --port=9090 --token="$token"

The first line just generates a random token. You can use any token you want or even omit a token (not recommended). The second command runs the server on port 9090. It’s the same port as my local endpoint but that is not required. You can use any valid port.

TIP: the Azure VM had a network security group (NSG) configured, so I had to add TCP port 9090 to the allow list.

Now that the server is running, let’s run the client. Install inlets like above or use brew install inlets on a Mac and run the following commands:

export REMOTE="IP OF EXIT NODE:9090"
export TOKEN="TOKEN FROM SERVER"
inlets client \
   --remote=$REMOTE \
   --upstream=http://127.0.0.1:9090 \
   --token $TOKEN

The inlets client establishes a WebSocket connection to the inlets server on the exit node. The --upstream option specifies the local endpoint; in my case, that's the classifier container (nasnet-go).

I can now browse to the public IP and port of the inlets server to see the classifier UI:

The inlets server will show the logs:

I think inlets is a fantastic tool that is useful in many scenarios. I have used ngrok in the past but it has some limits. You can pay to remove those limits. Inlets, on the other hand, is fully open source and not limited in any way. Be sure to check out the inlets GitHub page which has lots more details. Highly recommended!!!
