In my previous blog post, I looked at Azure API Management in combination with private APIs hosted on Kubernetes. The APIs were exposed via Traefik and an internal load balancer. To make that scenario work, the Azure API Management premium SKU is required, which is quite costly.
This post describes another approach where the APIs are exposed on the public Internet via an Ingress Controller that requires HTTPS in addition to restricting the API caller to the IP address of the Azure API Management instance. Something like this:
Internet client --> Azure API Management --> Ingress Controller (with IP whitelisting per ingress) --> API service (Kubernetes) --> API pods (Kubernetes, part of a Deployment)
Let’s see how this works, shall we?
Deploy Azure API Management from the portal. In this case, you can use the other SKUs such as Basic and Standard. Note the IP address of the Azure API Management instance on the Overview page:
As usual, let’s use Traefik. When you have Helm installed, use the following command:
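The command below is a sketch of a Helm 2 style install of the stable/traefik chart with SSL and Let's Encrypt (ACME) enabled; the e-mail address and exact flags are illustrative, so adapt them to your situation:

helm install stable/traefik --name traefik --namespace kube-system \
  --set rbac.enabled=true,dashboard.enabled=true \
  --set ssl.enabled=true \
  --set acme.enabled=true,acme.email=you@example.com,acme.challengeType=tls-alpn-01,acme.staging=false

With Traefik running, an ingress object along these lines exposes the API (YOURIP and the host name are placeholders, as explained below):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: func
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/whitelist-source-range: "YOURIP/32"
spec:
  rules:
  - host: api.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: func
          servicePort: 80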
The above ingress object exposes the internal service func via Traefik. The whitelist-source-range annotation is used to limit access to this resource to the IP address of Azure API Management. Replace YOURIP with that IP address. Obviously, replace the host api.domain.com with a host that resolves to the external IP of the load balancer that provides access to Traefik. The Let’s Encrypt configuration automatically provisions a valid certificate for the service.
When I navigate to the API on my local computer, the following happens:
When I test the API from API Management (after setting the back-end correctly):
What do you do when you do not want to spend money on the premium SKU? The answer is clear: use the lower SKUs if possible and restrict access to the back-end APIs with other means such as IP whitelisting. Other possibilities include using some form of authentication such as basic authentication etc…
Have you decided to host your APIs on Kubernetes in combination with an API management solution? You are surely not the only one! In an Azure context, one way of doing this is combining Azure API Management and Azure Kubernetes Service (AKS). This post describes one of the ways to get this done. We will use the following services:
Virtual Network: AKS will use advanced networking and Azure CNI
Private DNS: to host a private DNS zone (private.baeke.info); note that private DNS is in public preview
AKS: deployed in a subnet of the virtual network
Traefik: Ingress Controller deployed on AKS, configured to use an internal load balancer in a dedicated subnet of the virtual network
Azure API Management: with virtual network integration which requires Developer or Premium; note that Premium comes at a hefty price though
Let’s take it step by step but note that this post does not contain all the detailed steps. I might do a video later with more details. Check the YouTube channel for more information.
We will setup something like this:
Consumer --> Azure API Management public IP --> ILB (in private VNET) --> Traefik (in Kubernetes) --> API (in Kubernetes - ClusterIP service in front of a deployment)
Create a virtual network in a resource group. We will add a private DNS zone to this network. You should not add resources such as virtual machines to this virtual network before you add the private DNS zone.
I will call my network privdns and add a few subnets (besides default):
aks: used by AKS
traefik: for the internal load balancer (ILB) and the front-end IP addresses
apim: to give API management access to the virtual network
Add a private DNS zone to the virtual network with Azure CLI:
az network dns zone create -g rg-ingress -n private.baeke.info --zone-type Private --resolution-vnets privdns
You can now add records to this private DNS zone:
az network dns record-set a add-record \
 -g rg-ingress \
 -z private.baeke.info \
 -n test \
 -a <IP address the test record should resolve to>
To test name resolution, deploy a small Linux virtual machine and ping test.private.baeke.info:
Azure Kubernetes Service
Deploy AKS and use advanced networking. Use the aks subnet when asked. Each node you deploy will get 30 IP addresses in the subnet:
To expose the APIs over an internal IP we will use ingress objects, which require an Ingress Controller. Traefik is just one of the choices available. Any Ingress Controller will work.
Instead of using ingresses, you could also expose your APIs via services of type LoadBalancer and use an internal load balancer. The latter approach would require one IP per API, whereas the ingress approach only requires one IP in total. That IP resolves to Traefik, which uses the host header to route to the APIs.
We will install Traefik with Helm. Check my previous post for more info about Traefik and Helm. In this case, I will download and untar the Helm chart and modify values.yaml. To download and untar the Helm chart use the following command:
helm fetch stable/traefik --untar
You will now have a traefik folder, which contains values.yaml. Modify values.yaml as follows:
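The change boils down to adding annotations to the Traefik service in values.yaml; something like this (the subnet name is the traefik subnet created earlier):

service:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "traefik"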
This will instruct Helm to add the above annotations to the Traefik service object. It instructs the Azure cloud integration components to use an internal load balancer. In addition, the load balancer should be created in the traefik subnet. Make sure that your AKS service principal has the RBAC role on the virtual network to perform this operation.
Now you can install Traefik on AKS. Make sure you are in the traefik folder where the Helm chart was untarred:
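With Helm 2, installing from the local chart folder looks something like this (release name and namespace are up to you):

helm install . --name traefik --namespace kube-system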
When the installation is finished, there should be an internal load balancer in the resource group that is behind your AKS cluster:
Running kubectl get svc -n kube-system should show something like:
We can now reach Traefik on the virtual network and create an A record that resolves to this IP. The func.private.baeke.info name I will use later resolves to the above IP.
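Adding such a record is similar to the test record we created earlier; the IP below is a placeholder for the ILB front-end IP:

az network dns record-set a add-record \
 -g rg-ingress \
 -z private.baeke.info \
 -n func \
 -a <ILB front-end IP>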
Azure API Management
Deploy API Management from the portal. API Management will need access to the virtual network which means we need a version (SKU) that has virtual network support. This is needed simply because the APIs are not exposed on the public Internet.
For testing, use the Developer SKU. In production, you should use the Premium SKU although it is very expensive. Microsoft should really make the virtual network integration part of every SKU since it is such a common scenario! Come on Microsoft, you know it’s the right thing to do! 😉
Above, API Management is configured to use the apim subnet of the virtual network. It will also be able to resolve private DNS names via this integration. Note that configuring the network integration takes quite some time.
Deploy a service and ingress
I deployed the following sample API with a simple deployment and service. Save this as func.yaml and run kubectl apply -f func.yaml. You will end up with two pods running a super simple and stupid API plus a service object of type ClusterIP, which is only reachable inside Kubernetes:
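The manifest below is a sketch of what func.yaml contains: a deployment with two replicas, a ClusterIP service and the ingress that exposes the service via Traefik. The container image and port are placeholders for the actual API:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: func
spec:
  replicas: 2
  selector:
    matchLabels:
      app: func
  template:
    metadata:
      labels:
        app: func
    spec:
      containers:
      - name: func
        image: <your API image>
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: func
spec:
  type: ClusterIP
  selector:
    app: func
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: func
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: func.private.baeke.info
    http:
      paths:
      - path: /
        backend:
          serviceName: func
          servicePort: 80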
Notice I used func.private.baeke.info! Naturally, that name should resolve to the IP address on the ILB that routes to Traefik.
Testing the API from API Management
In API Management, I created an API that uses func.private.baeke.info as the backend. Yes, I know, the API name is bad. It’s just a sample ok? 😎
Let’s test the GET operation I created:
In this post, we looked at one way to expose Kubernetes-hosted APIs to the outside world via Azure API Management. The traffic flow is as follows:
Consumer --> Azure API Management public IP --> ILB (in private VNET) --> Traefik (in Kubernetes) --> API (in Kubernetes - ClusterIP service in front of a deployment)
Because we have to use host names in ingress definitions, we added a private DNS zone to the virtual network. We can create multiple A records, one for each API, and provide access to these APIs with ingress objects.
As stated above, you can also expose each API via an internal load balancer. In that case, you do not need an Ingress Controller such as Traefik. Alternatively, you could also replace Azure API Management with a solution such as Kong. I have used Kong in the past and it is quite good! The choice for one or the other will depend on several factors such as cost, features, ease of use, support, etc…
A while ago, the Azure DevOps blog posted an update about multi-stage YAML pipelines. The concept is straightforward: define both your build (CI) and release (CD) pipelines in a YAML file and stick that file in your source code repository.
In this post, we will look at a simple build and release pipeline that builds a container, pushes it to ACR, deploys it to Kubernetes linked to an environment. Something like this:
Note: I used a simple go app, a Dockerfile and a Kubernetes manifest as source files, check them out here.
Note: there is also a video version 😉
“Show me the YAML!!!”
The file, azure-pipelines.yaml contains the two stages. Check out the first stage (plus trigger and variables) below:
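The snippet below is a condensed sketch of that first stage; the image name is hypothetical and the task inputs are the ones discussed next:

trigger:
- master

variables:
  imageName: 'go-sample'
  registry: 'REGNAME.azurecr.io'

stages:
- stage: build
  jobs:
  - job: build
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: Docker@2
      inputs:
        containerRegistry: 'ACR'
        repository: '$(imageName)'
        command: 'buildAndPush'
        Dockerfile: '**/Dockerfile'
        tags: '$(Build.BuildId)'
    - task: PublishPipelineArtifact@1
      inputs:
        targetPath: '$(Build.SourcesDirectory)/manifests'
        artifact: 'manifests'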
The pipeline runs on a commit to the master branch. The variables imageName and registry are referenced later using $(imageName) and $(registry). Replace REGNAME with the name of your Azure Container Registry.
It’s a multi-stage pipeline, so we start with stages: and then define the first stage build. That stage has one job which consists of two steps:
Docker task (v2): build a Docker image based on the Dockerfile in the source code repository and push it to the container registry called ACR; ACR is a reference to a service connection defined in the project settings
PublishPipelineArtifact: the source code repository contains Kubernetes deployment manifests in YAML format in the manifests folder; the contents of that folder is published as a pipeline artifact, to be picked up in a later stage
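For reference, the second (release) stage could look like the sketch below; task versions, paths and the environment reference are illustrative:

- stage: deploy
  jobs:
  - deployment: deployToDev
    pool:
      vmImage: 'ubuntu-latest'
    # the AKS service connection comes from the Kubernetes resource linked to the dev environment
    environment: 'dev'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: DownloadPipelineArtifact@2
            inputs:
              artifact: 'manifests'
              path: '$(System.ArtifactsDirectory)/manifests'
          - task: KubernetesManifest@0
            inputs:
              action: 'deploy'
              manifests: '$(System.ArtifactsDirectory)/manifests/*.yaml'
              containers: '$(registry)/$(imageName):$(Build.BuildId)'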
The second stage uses a deployment job (quite new; see this). In a deployment job, you can specify an environment to link to. In the above job, the environment is called dev. In Azure DevOps, the environment is shown as below:
The environment functionality has Kubernetes integration which is pretty neat. You can drill down to the deployed objects such as deployments and services:
The deployment has two tasks:
DownloadPipelineArtifact: download the artifact published in the first stage to $(System.ArtifactsDirectory)/manifests
KubernetesManifest: this task can deploy Kubernetes manifests; it uses an AKS service connection that was created during creation of the environment; a service account was created in a specific namespace and with access rights to that namespace only; the manifests property will look for an image name in the Kubernetes YAML files and append the tag which is the build id here
Note that the release stage will actually download the pipeline artifact automatically. The explicit DownloadPipelineArtifact task gives additional control over the download location.
The KubernetesManifest task is relatively new at the time of this writing (end of May 2019). Its image substitution functionality could be enough in many cases, without having to revert to Helm or manual text substitution tasks. There is more to this task than what I have described here. Check out the docs for more info.
If you are just starting out building CI/CD pipelines in YAML, you will probably have a hard time getting used to the schema. I know I did! 😡 In the end though, doing it this way with the pipeline stored in source control will pay off in the long run. After some time, you will have built up a useful library of these pipelines to quickly get up and running in new projects. Recommended!!! 😉🚀🚀🚀
In this post, we will take a look at creating a simple machine learning model for text classification and deploying it as a container with Azure Machine Learning service. This post is not intended to discuss the finer details of creating a text classification model. In fact, we will use the Keras library and its Reuters newswire dataset to create a simple dense neural network. You can find many online examples based on this dataset. For further information, be sure to check out and buy 👍 Deep Learning with Python by François Chollet, the creator of Keras and now at Google. It contains a section that explains using this dataset in much more detail!
Together with the workspace, you also get a storage account, a key vault, application insights and a container registry. In later steps, we will create a container and store it in this registry. That all happens behind the scenes though. You will just write a few simple lines of code to make that happen!
Note the Authoring (Preview) section! These were added just before Build 2019 started. For now, we will not use them.
To create the model and interact with the workspace, we will use a free Jupyter notebook in Azure Notebooks. At this point in time (8 May 2019), Azure Notebooks is still in preview. To get started, find the link below in the Overview section of the Machine Learning service workspace:
When you open the notebook, you will see the following first four cells:
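Condensed, those cells boil down to something like this (num_words=10000 is the vocabulary size discussed below):

from keras.datasets import reuters

(train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words=10000)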
It’s always simple if a prepared dataset is handed to you like in the above example. Above, you simply import reuters from keras.datasets and call its load_data method to get the data, directly assigning it to variables that hold the train and test data plus labels.
In this case, the data consists of newswires with a corresponding label that indicates the category of the newswire (e.g. an earnings call newswire). There are 46 categories in this dataset. In the real world, you would have the newswire in text format. In this case, the newswire has already been converted (preprocessed) for you into an array of integers, with each integer corresponding to a word in a dictionary.
A bit further in the notebook, you will find a Vectorization section:
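The vectorization code follows the classic approach from the Deep Learning with Python book; a sketch:

import numpy as np
from keras.utils import to_categorical

def vectorize_sequences(sequences, dimension=10000):
    # one row per article; set position i to 1 for every word index i that appears in the article
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.
    return results

x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)

# one-hot encode the 46 category labels as well
one_hot_train_labels = to_categorical(train_labels)
one_hot_test_labels = to_categorical(test_labels)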
In this section, the train and test data is vectorized using a one-hot encoding method. Because we specified, in the very first cell of the notebook, to only use the 10000 most frequent words, each article can be converted to a vector with 10000 values. Each value is either 1 or 0, indicating the word is in the text or not.
This bag-of-words approach is one of the ways to represent text in a data structure that can be used in a machine learning model. Besides vectorizing the training and test samples, the categories are also one-hot encoded.
Now the dense neural network model can be created:
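A sketch of the model definition and compilation (layer sizes follow the book’s example):

from keras import models, layers

nn = models.Sequential()
nn.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
nn.add(layers.Dense(64, activation='relu'))
nn.add(layers.Dense(46, activation='softmax'))

nn.compile(optimizer='rmsprop',
           loss='categorical_crossentropy',
           metrics=['accuracy'])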
The above code defines a very simple dense neural network. A dense neural network is not necessarily the best type but that’s ok for this post. The specifics are not that important. Just note that the nn variable is our model. We will use this variable later when we convert the model to the ONNX format.
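Training itself is a single call to fit; the batch size and the use of the test set for validation are assumptions in this sketch:

history = nn.fit(x_train, one_hot_train_labels,
                 epochs=9,
                 batch_size=512,
                 validation_data=(x_test, one_hot_test_labels))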
The last cell (16 above) does the actual training in 9 epochs. Training will be fast because the dataset is relatively small and the neural network is simple. Using the Azure Notebooks compute is sufficient. After 9 epochs, this is the result:
Not exactly earth-shattering: 78% accuracy on the test set!
Saving the model in ONNX format
ONNX is an open format to store deep learning models. When your model is in that format, you can use the ONNX runtime for inference.
Converting the Keras model to ONNX is easy with the onnxmltools:
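Something along these lines, with nn being the Keras model from earlier:

import onnxmltools

onnx_model = onnxmltools.convert_keras(nn)
onnxmltools.utils.save_model(onnx_model, 'reuters.onnx')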
The result of the above code is a file called reuters.onnx in your notebook project.
Predict with the ONNX model
Let’s try to predict the category of the first newswire in the test set. Its real label is 3, which means it’s a newswire about an earnings call (earn class):
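A sketch of the prediction code with the ONNX runtime:

import numpy as np
import onnxruntime as rt

session = rt.InferenceSession('reuters.onnx')
input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name

# the model expects a batch of float32 vectors of length 10000
pred = session.run([output_name], {input_name: x_test[0:1].astype(np.float32)})

# index of the highest probability in the softmax output; should be 3 for this newswire
print(np.argmax(pred[0]))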
We will use similar code later in score.py, a file that will be used in a container we will create to expose the model as an API. The code is pretty simple: start an inference session based on the reuters.onnx file, grab the input and output and use run to predict. The resulting array is the output of the softmax layer and we use argmax to extract the category with the highest probability.
Saving the model to the workspace
With the model in reuters.onnx, we can add it to the workspace:
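The registration code is roughly the following (model name and description are up to you):

from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.from_config()

model = Model.register(workspace=ws,
                       model_path='reuters.onnx',
                       model_name='reuters',
                       description='Reuters newswire classifier (ONNX)')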
You will need a file in your Azure Notebook project called config.json with the following contents:
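The file simply points at your workspace; fill in your own values:

{
    "subscription_id": "<your subscription id>",
    "resource_group": "<resource group of the workspace>",
    "workspace_name": "<name of the workspace>"
}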
With that file in place, when you run cell 27 (see above), you will need to authenticate to Azure to be able to interact with the workspace. The code is pretty self-explanatory: the reuters.onnx model will be added to the workspace:
As you can see, you can save multiple versions of the model. This happens automatically when you save a model with the same name.
Creating the scoring container image
The scoring (or inference) container image is used to expose an API to predict categories of newswires. Obviously, you will need to give some instructions how scoring needs to be done. This is done via score.py:
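My actual score.py contained a few extra helper functions, but a minimal sketch looks like this (the model name must match the registered model):

import json
import numpy as np
import onnxruntime as rt
from azureml.core.model import Model

def init():
    global session, input_name, output_name
    # the registered model is copied into the image; look up its path at container start
    model_path = Model.get_model_path('reuters')
    session = rt.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name
    output_name = session.get_outputs()[0].name

def run(raw_data):
    try:
        # expects JSON like {"data": [[0, 1, 0, ...]]}: a one-hot encoded article
        data = np.array(json.loads(raw_data)['data']).astype(np.float32)
        result = session.run([output_name], {input_name: data})
        return json.dumps({'prediction': result[0].tolist()})
    except Exception as e:
        return json.dumps({'error': str(e)})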
The code is similar to the code we wrote earlier to test the ONNX model. score.py needs an init() and run() function. The other functions are helper functions. In init(), we need to grab a reference to the ONNX model. The ONNX model file will be placed in the container during the build process. Next, we start an InferenceSession via the ONNX runtime. In run(), the code is similar to our earlier example. It predicts via session.run and returns the result as JSON. We do not have to worry about the rest of the code that runs the API. That is handled by Machine Learning service.
Note: using ONNX is not a requirement; we could have persisted and used the native Keras model for instance
In this post, we only need score.py since we do not train our model via Azure Machine Learning service. If you train a model with the service, you would create a train.py file to instruct how training should be done based on data in a storage account for instance. You would also provision compute resources for training. In our case, that is not required so we train, save and export the model directly from the notebook.
Now we need to create an environment file to indicate the required Python packages and start the image build process:
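With the SDK of that time (mid 2019), the environment file and image build look roughly like this; package names and the image name are illustrative:

from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.image import ContainerImage

# required Python packages for the scoring container
myenv = CondaDependencies.create(pip_packages=['numpy', 'onnxruntime', 'azureml-core'])
with open('myenv.yml', 'w') as f:
    f.write(myenv.serialize_to_string())

image_config = ContainerImage.image_configuration(execution_script='score.py',
                                                  runtime='python',
                                                  conda_file='myenv.yml')

image = ContainerImage.create(name='reuters-image',
                              models=[model],
                              image_config=image_config,
                              workspace=ws)
image.wait_for_creation(show_output=True)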
The build process is handled by the service and makes sure the model file is in the container, in addition to score.py and myenv.yml. The result is a fully functional container that exposes an API that takes an input (a newswire) and outputs an array of probabilities. Of course, it is up to you to define what the input and output should be. In this case, you are expected to provide a one-hot encoded article as input.
The container image will be listed in the workspace, potentially multiple versions of it:
Deploy to Azure Container Instances
When the image is ready, you can deploy it via the Machine Learning service to Azure Container Instances (ACI) or Azure Kubernetes Service (AKS). To deploy to ACI:
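A sketch with the SDK of that time; one CPU core and 1 GB of memory is plenty for this model:

from azureml.core.webservice import AciWebservice, Webservice

aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

aci_service = Webservice.deploy_from_image(workspace=ws,
                                           name='reuters-svc',
                                           image=image,
                                           deployment_config=aci_config)
aci_service.wait_for_deployment(show_output=True)
print(aci_service.scoring_uri)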
When the deployment is finished, the deployment will be listed:
When you click on the deployment, the scoring URI will be shown (e.g. http://IPADDRESS:80/score). You can now use Postman or any other method to score an article. To quickly test the service from the notebook:
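For example (the payload format matches what score.py expects in the sketch above):

import json

# one-hot encoded test newswire as a nested list
test_sample = json.dumps({'data': x_test[0:1].tolist()})

prediction = aci_service.run(input_data=test_sample)
print(prediction)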
The helper method run of aci_service will post the JSON in test_sample to the service. It knows the scoring URI from the deployment earlier.
Containerizing a machine learning model and exposing it as an API is made surprisingly simple with Azure Machine Learning service. It saves time so you can focus on the hard work of creating a model that performs well in the field. In this post, we used a sample dataset and a simple dense neural network to illustrate how you can build such a model, convert it to ONNX format and use the ONNX runtime for scoring.
Deploying Azure Kubernetes Service (AKS) is, like most other Kubernetes-as-a-service offerings such as those from DigitalOcean and Google, very straightforward. It’s either a few clicks in the portal or one or two command lines and you are finished.
Using these services properly and in a secure fashion is another matter though. I am often asked how to secure access to the cluster and its applications. In addition, customers also want visibility and control of incoming and outgoing traffic. Combining Azure Firewall with AKS is one way of achieving those objectives.
This post will take a look at the combination of Azure Firewall and AKS. It is inspired by this post by Dennis Zielke. In that post, Dennis provides all the necessary Azure CLI commands to get to the following setup:
In what follows, I will keep referring to the subnet names and IP addresses as in the above diagram.
Azure Firewall is a stateful firewall, provided as a service with built-in high availability. You deploy it in a subnet of a virtual network. The subnet should have the name AzureFirewallSubnet. The firewall will get two IP addresses:
Internal IP: the first IP address in the subnet (here 10.0.3.4)
Public IP: a public IP address; in the above setup we will use it to provide access to a Kubernetes Ingress controller via a DNAT rule
As in the physical world, you will need to instruct systems to route traffic through the firewall. In Azure, this is done via a route table. The following route table was created:
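In CLI terms, creating and associating such a route table amounts to something like this (resource group, VNET and table names are placeholders; 10.0.3.4 is the firewall's private IP from the diagram):

az network route-table create -g <resource-group> -n fw-route-table

az network route-table route create -g <resource-group> \
 --route-table-name fw-route-table -n default-via-firewall \
 --address-prefix 0.0.0.0/0 \
 --next-hop-type VirtualAppliance \
 --next-hop-ip-address 10.0.3.4

az network vnet subnet update -g <resource-group> \
 --vnet-name <vnet-name> -n aks-5-subnet \
 --route-table fw-route-table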
The route table contains a route for 0.0.0.0/0 that points to the private IP of the firewall. That route is used when no other route applies! The route table is associated with just the aks-5-subnet, which is the subnet where AKS (with advanced networking) is deployed. It’s important to note that now, all external traffic originating from the Kubernetes cluster passes through the firewall.
When you compare Azure Firewall to the Network Virtual Appliances (NVAs) from vendors such as Check Point, you will notice that the capabilities are somewhat limited. On the flip side though, Azure Firewall is super simple to deploy when compared with a highly available NVA setup.
Before we look at the firewall rules, let’s take a look at the Kubernetes Ingress Controller.
Kubernetes Ingress Controller
In this example, I will deploy nginx-ingress as an Ingress Controller. It will provide access to HTTP-based workloads running in the cluster and it can route to various workloads based on the URL. I will deploy the nginx-ingress with Helm.
Think of an nginx-ingress as a reverse proxy. It receives http requests, looks at the hostname and path (e.g. mydomain.com/api/user) and routes the request to the appropriate Kubernetes service (e.g. the user service).
Normally, the nginx-ingress service is accessed via an Azure external load balancer. Behind the scenes, this is the result of the service object having spec.type set to the value LoadBalancer. If we want external traffic to nginx-ingress to pass through the firewall, we will need to tell Kubernetes to create an internal load balancer via an annotation. Let’s do that with Helm. First, you will need to install tiller, the server-side component of Helm; the procedure is described in the Microsoft documentation on using Helm with AKS.
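With tiller in place, installing nginx-ingress with an internal load balancer boils down to supplying the right service annotations. A sketch of a values file (save it as internal-ingress.yaml; the subnet name matches the diagram):

controller:
  service:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
      service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "ing-4-subnet"

Then install the chart with those values:

helm install stable/nginx-ingress --namespace kube-system -f internal-ingress.yaml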
This ensures an internal load balancer gets created. It gets created in the mc-* resource group that backs your AKS deployment:
Note that Kubernetes creates the load balancer, including the rules and probes for port 80 and 443 as defined in the service object that comes with the Helm chart. The load balancer is created in the ing-4-subnet as instructed by the service annotation. Its private IP address is 10.0.4.4 as in the diagram at the top of this post.
DNAT Rule to Load Balancer
To provide access to internal resources, Azure Firewall uses DNAT (destination network address translation) rules. The concept is simple: traffic to the firewall’s public IP on some port can be forwarded to an internal IP on the same or another port. In our case, traffic to the firewall’s public IP on port 80 and 443 is forwarded to the internal load balancer’s private IP on port 80 and 443. The load balancer will forward the request to nginx-ingress:
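With the CLI (this needs the azure-firewall extension), the rule for port 80 looks something like the sketch below; repeat for 443 and replace the placeholders:

az network firewall nat-rule create -g <resource-group> \
 --firewall-name <firewall-name> \
 --collection-name ingress-dnat --priority 100 --action Dnat \
 --name nginx-http --protocols TCP \
 --source-addresses '*' \
 --destination-addresses <firewall public IP> \
 --destination-ports 80 \
 --translated-address 10.0.4.4 \
 --translated-port 80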
If the installation of nginx-ingress was successful, you should end up at the default back-end when you go to http://firewallPublicIP.
If you configured Log Analytics and installed the Azure Firewall solution, you can look at the firewall logs. DNAT actions are logged and can be inspected:
Application and Network Rules
Azure Firewall application rules are rules that allow or deny outgoing HTTP/HTTPS traffic based on the URL. The following rules were defined:
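An application rule of that kind can be created like this (collection name, priority, source prefix and FQDNs are illustrative):

az network firewall application-rule create -g <resource-group> \
 --firewall-name <firewall-name> \
 --collection-name aks-egress --priority 200 --action Allow \
 --name allow-registries \
 --protocols Http=80 Https=443 \
 --source-addresses <aks subnet prefix> \
 --target-fqdns '*.docker.io' '*.cloudflare.com'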
The above rules allow http and https traffic to destinations such as docker.io, cloudflare and more.
Note that network rules, another Azure Firewall rule type, are evaluated first. If a match is found, rule evaluation is stopped. Suppose you have these network rules:
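For example, a network rule that allows 22 and 443 from any source to any destination (again with the CLI; placeholders as before):

az network firewall network-rule create -g <resource-group> \
 --firewall-name <firewall-name> \
 --collection-name allow-ssh-https --priority 100 --action Allow \
 --name ssh-https --protocols TCP \
 --source-addresses '*' \
 --destination-addresses '*' \
 --destination-ports 22 443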
The above network rule allows port 22 and 443 for all sources and destinations. This means that Kubernetes can actually connect to any https-enabled site on the default port, regardless of the defined application rules. See rule processing for more information.
Azure Firewall also comes with threat intelligence, a feature that alerts on and/or denies network traffic coming from known malicious IP addresses and domains. You can track this via Log Analytics:
Above, you see denied port scans, traffic from botnets or brute force credentials attacks all being blocked by Azure Firewall. This feature is currently in preview.
The AKS documentation has a best practices section that discusses networking. It contains useful information about the networking model (Kubenet vs Azure CNI), ingresses and WAF. It does not, at this point in time (May 2019), describe how to use Azure Firewall with AKS. It would be great if that were added in the near future.
Here are a couple of key points to think about:
WAF (Web Application Firewall): Azure Firewall threat intelligence is not WAF; to enable WAF, there are several options:
you can use cloud-native WAFs such as TwistLock (WAF is one of the features of this product; it also provides firewall and vulnerability assessment)
remote access to Kubernetes API: today, the API server is exposed via a public IP address; having the API server on a local IP will be available soon
remote access to Kubernetes hosts using SSH: only allow SSH on the private IP addresses; use a bastion host to enable connectivity
Azure Kubernetes Service (AKS) can be combined with Azure Firewall to control network traffic to and from your Kubernetes cluster. Log Analytics provides the dashboard and logs to report and alert on traffic patterns. Features such as threat intelligence provide an extra layer of defense. For HTTP/HTTPS workloads (so most workloads), you should complement the deployment with a WAF such as Azure Application Gateway or 3rd party.
Clearly, the function uses the eventHubTrigger for an Event Hub called hub-pg. In connection, EH refers to an Application Setting which contains the connection string to the Event Hub. Yes, I excel at naming stuff! The Event Hub has a consumer group called pg that we use in this function.
The cardinality setting is currently set to “one”, which means that the function can only process one message at a time. As a best practice, you should use a cardinality of “many” in order to process batches of messages. A setting of “many” is the default.
To make the required change, modify function.json and set cardinality to “many”. You will also have to modify the Azure Function to process a batch of messages versus only one:
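The binding in function.json then looks like this (the binding itself is language independent):

{
  "bindings": [
    {
      "type": "eventHubTrigger",
      "name": "eventHubMessages",
      "direction": "in",
      "eventHubName": "hub-pg",
      "connection": "EH",
      "consumerGroup": "pg",
      "cardinality": "many"
    }
  ]
}

The function itself then iterates over the batch; a sketch, assuming a JavaScript function:

module.exports = async function (context, eventHubMessages) {
    // with cardinality "many", eventHubMessages is an array of events
    eventHubMessages.forEach((message, index) => {
        context.log(`Processing message ${index}: ${JSON.stringify(message)}`);
    });
};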
The number of messages in a batch is controlled by maxBatchSize in host.json. By default, it is set to 64. Another setting, prefetchCount, determines how many messages are retrieved and cached before being sent to your function. When you change maxBatchSize, it is recommended to set prefetchCount to twice the maxBatchSize setting. For instance:
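For a Functions v2 app, these settings live under extensions.eventHubs in host.json; the values below are just an example with the batch size doubled and prefetchCount set to twice that:

{
  "version": "2.0",
  "extensions": {
    "eventHubs": {
      "eventProcessorOptions": {
        "maxBatchSize": 128,
        "prefetchCount": 256
      }
    }
  }
}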
It’s great to have these options but how should you set them? As always, the answer is in this book:
A great resource to get a feel for what these settings do is this article. It also comes with a Power BI report that allows you to set the parameters to see the results of load tests.
In this post, we used the function.json cardinality setting of “many” to process a batch of messages per function call. By default, Azure Functions will use batches of 64 messages without prefetching. With the host.json settings of maxBatchSize and prefetchCount, that can be changed to better handle your scenario.
Several years ago, when we started our first adventures in the wonderful world of IoT, we created an application for visualizing real-time streams of sensor data. The sensor data came from custom-built devices that used 2G for connectivity. IoT networks and protocols such as SigFox, NB-IoT or Lora were not mainstream at that time. We leveraged what were then new and often preview-level Azure services such as IoT Hub, Stream Analytics, etc… The architecture was loosely based on lambda architecture with a hot and cold path and stateful window-based stream processing. Fun stuff!
Kubernetes already existed but had not taken off yet. Managed Kubernetes services such as Azure Kubernetes Service (AKS) weren’t a thing.
The application (end-user UI and management) was loosely based on a micro-services pattern and we decided to run the services as Docker containers. At that time, Karim Vaes, now a Program Manager for Azure Storage, worked at our company and was very enthusiastic about Rancher. Rancher was still v1 back then and we decided to use it in combination with their own container orchestration framework called Cattle.
Our experience with Rancher was very positive. It was easy to deploy and run in production. The combination of GitHub, Shippable and the Rancher CLI made it extremely easy to deploy our code. Rancher, including Cattle, was very stable for our needs.
In recent years though, the growth of Kubernetes as a container orchestrator platform has far outpaced the others. Using an alternative orchestrator such as Cattle made less sense. Rancher 2.0 is now built around Kubernetes but maintains the same experience as earlier versions such as simple deployment and flexible configuration and management.
In this post, I will look at deploying Rancher 2.0 and importing an existing AKS cluster. This is a basic scenario but it allows you to get a feel for how it works. Indeed, besides deploying your cluster with Rancher from scratch (even on-premises on VMware), you can import existing Kubernetes clusters including managed clusters from Google, Amazon and Azure.
For evaluation purposes, it is best to just run Rancher on a single machine. I deployed an Azure virtual machine with the following properties:
Operating system: Ubuntu 16.04 LTS
Size: DS2v3 (2 vCPUs, 8GB of RAM)
Public IP with open ports 22, 80 and 443
DNS name: somename.westeurope.cloudapp.azure.com
In my personal DNS zone on CloudFlare, I created a CNAME record for the above DNS name. Later, when you install Rancher you can use the custom DNS name in combination with Let’s Encrypt support.
On the virtual machine, install Docker. Use the guide here. You can use the convenience script as a quick way to install Docker.
With Docker installed, install Rancher with the following command:
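Something like the standard single-node install command from the Rancher documentation; the --acme-domain flag enables the Let’s Encrypt support mentioned earlier (replace the domain with your own DNS name):

docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher:latest \
  --acme-domain somename.westeurope.cloudapp.azure.com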
More details about the single node installation can be found here. Note that Rancher uses etcd as a datastore. With the command above, the data will be in /var/lib/rancher inside the container. This is ok if you are just doing a test drive. In other cases, use external storage and mount it on /var/lib/rancher.
A single-node install is great for test and development. For production, use the HA install. This will actually run Rancher on Kubernetes. Rancher recommends a dedicated cluster in this scenario.
To get started, I added an existing three-node AKS cluster to Rancher. After you add the cluster and turn on monitoring, you will see the following screen when you navigate to Clusters and select the imported cluster:
To demonstrate the functionality, I deployed a 3-node cluster (1.11.9) with RBAC enabled and standard networking. After deployment, open up Azure Cloud shell and get your credentials:
az aks list -o table
az aks get-credentials -n cluster-name -g cluster-resource-group
kubectl cluster-info
The first command lists the clusters in your subscription, including their name and resource group. The second command configures kubectl, the Kubernetes command line admin tool, which is pre-installed in Azure Cloud Shell. To verify you are connected, the last command simply displays cluster information.
Now that the cluster is deployed, let’s try to import it. In Rancher, navigate to Global – Clusters and click Add Cluster:
Click Import, type a name and click Create. You will get a screen with a command to run:
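The command Rancher generates is specific to your installation; it has roughly this shape (the URL and token are generated for you, so this is just an illustration):

kubectl apply -f https://<your-rancher-host>/v3/import/<generated-token>.yaml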
Back in Rancher, the cluster will be added (by the components you deployed above):
Click on the cluster:
To see live metrics, you can click Enable Monitoring. This will install and configure Prometheus and Grafana. You can control several parameters of the deployment such as data retention:
Notice that by default, persistent storage for Grafana and Prometheus is not configured.
Note: with monitoring enabled or not, you will notice the following error in the dashboard:
The error is described here. In short, the components are probably healthy. The error is not related to a Rancher issue but an upstream Kubernetes issue.
When the monitoring API is ready, you will see live metrics and Grafana icons. Clicking on the Grafana icon next to Nodes gives you this:
Of course, Azure provides Container Insights for monitoring. The Grafana dashboards are richer though. On the other hand, querying and alerting on logs and metrics from Container Insights is powerful as well. You can of course enable them all and use the best of both worlds.
We briefly looked at Rancher 2.0 and how it can interact with an existing AKS cluster. An existing cluster is easy to add. Once it is added, adding monitoring is “easy peasy lemon squeezy” as my daughter would call it! 😉 As with Rancher 1.x, I am again pleasantly surprised at how Rancher is able to make complex matters simpler and more fun to work with. There is much more to explore and do of course. That’s for some follow-up posts!