Getting started with Consul on Kubernetes

Although I had heard a lot about HashiCorp's Consul, I had not had the opportunity to work with it and get acquainted with the basics until recently. In this post, I will share some of the basics I have learned, hopefully giving you a bit of a head start when you embark on this journey yourself.

Want to watch a video about this instead?

What is Consul?

Basically, Consul is a networking tool. It provides service discovery and allows you to store and retrieve configuration values. On top of that, it provides service-mesh capability by controlling and encrypting service-to-service traffic. Although that looks simple enough, in complex and dynamic infrastructure spanning multiple locations such as on-premises and cloud, this can become extremely complicated. Let’s stick to the basics and focus on three things:

  • Installation on Kubernetes
  • Using the key-value store for configuration
  • Using the service catalog to retrieve service information

We will use a small Go program to illustrate the use of the Consul API. Let’s get started… 🚀🚀🚀

Installation of Consul

I will install Consul using the provided Helm chart. Note that the installation I will perform is great for testing but should not be used for production. In production, there are many more things to think about. Look at the configuration values for hints: certificates, storage size and class, options to enable/disable, etc… That being said, the chart does install multiple servers and clients to provide high availability.

I installed Consul with Pulumi and Python. You can check the code here. You can use that code on Azure to deploy both Kubernetes and Consul in one step. The section in the code that installs Consul is shown below:
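Simplified, the Pulumi part boils down to something like this (a sketch with pulumi_kubernetes; the chart values are just examples, the real code is in the repo):

    from pulumi_kubernetes.helm.v3 import Chart, ChartOpts, FetchOpts

    # install the Consul Helm chart from the HashiCorp chart repository
    consul = Chart(
        "consul",
        ChartOpts(
            chart="consul",
            namespace="consul",
            fetch_opts=FetchOpts(repo="https://helm.releases.hashicorp.com"),
            values={
                "server": {"replicas": 3},   # example values only
                "client": {"enabled": True},
                "ui": {"enabled": True},
            },
        ),
    )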

The code above would be equivalent to this Helm chart installation (Helm v3):
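Roughly the following (the values are illustrative):

    helm repo add hashicorp https://helm.releases.hashicorp.com
    helm install consul hashicorp/consul --namespace consul --create-namespace \
      --set server.replicas=3 --set ui.enabled=true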

Connecting to the Consul UI

The chart installs Consul in the consul namespace. You can run the following command to get to the UI:
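A port forward to the UI service does the trick (the service name depends on your Helm release name; check with kubectl get svc -n consul):

    kubectl port-forward service/consul-ui 8080:80 --namespace consul

Then browse to http://localhost:8080.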

You will see the screen below. The list of services depends on the Kubernetes services in your system.

Consul UI with list of services

The services above include consul itself. The consul service also has health checks configured. The other services in the screenshot are Kubernetes services that were discovered by Consul. I have installed Redis in the default namespace and exposed Redis via a service called redisapp. This results in a Consul service called redisapp-default. Later, we will query this service from our Go application.

When you click Key/Value, you can see the configured keys. I have created one key called REDISPATTERN which is later used in the Go program to know the Redis channels to subscribe to. It’s just a configuration value that is retrieved at runtime.

A simple key/value pair: REDISPATTERN=*

The Key/Value pair can be created via the consul CLI, the HTTP API or via the UI (Create button in the main Key/Value screen). I created the REDISPATTERN key via the Create button.
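For reference, the CLI and HTTP API equivalents would look something like this:

    # via the consul CLI (e.g. after kubectl exec into a Consul pod)
    consul kv put REDISPATTERN '*'

    # via the HTTP API
    curl -X PUT -d '*' http://localhost:8500/v1/kv/REDISPATTERN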

Querying the Key/Value store

Let’s turn our attention to writing some code that retrieves a Consul key at runtime. The question of course is: “how does your application find Consul?”. Look at the diagram below:

Simplified diagram of Consul installation on Kubernetes via the Helm chart

Above, you see the Consul server agents, implemented as a Kubernetes StatefulSet. Each server pod has a volume (Azure disk in this case) to store data such as key/value pairs.

Your application will not connect to these servers directly. Instead, it will connect via the client agents. The client agents are implemented as a DaemonSet resulting in a client agent per Kubernetes node. The client agent pods expose a static port on the Kubernetes host (yes, you read that right). This means that your app can connect to the IP address of the host it is running on. Your app can discover that IP address via the Downward API.

The container spec contains the following code:
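Trimmed down, the relevant part of the spec looks like this:

    env:
      - name: HOST_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP
      - name: CONSUL_HTTP_ADDR
        value: http://$(HOST_IP):8500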

The HOST_IP will be set to the IP of the Kubernetes host via a reference to status.hostIP. Next, the environment variable CONSUL_HTTP_ADDR is set to the full HTTP address including port 8500. In your code, you will need to read that environment variable.

Retrieving a key/value pair

Here is some code to read a Consul key/value pair with Go. Full source code is here.
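The snippet below is a minimal sketch of that logic; check the repository for the real thing:

    package main

    import (
        "log"

        consulapi "github.com/hashicorp/consul/api"
    )

    func main() {
        // DefaultConfig picks up CONSUL_HTTP_ADDR from the environment
        config := consulapi.DefaultConfig()
        consul, err := consulapi.NewClient(config)
        if err != nil {
            log.Fatalf("could not create Consul client: %v", err)
        }

        // read the REDISPATTERN key from the key/value store
        kv := consul.KV()
        pair, _, err := kv.Get("REDISPATTERN", nil)
        if err != nil || pair == nil {
            log.Fatal("REDISPATTERN not set or could not be retrieved")
        }

        redisPattern := string(pair.Value)
        log.Printf("REDISPATTERN = %s", redisPattern)
    }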

The comments in the code should be self-explanatory. When the REDISPATTERN key is not set or another error occurs, the program will exit. If REDISPATTERN is set, we can use the value later:
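For example, with the go-redis client (github.com/go-redis/redis/v8), a fragment that assumes rdb and ctx were created earlier:

    // subscribe to the Redis channels that match the configured pattern
    pubsub := rdb.PSubscribe(ctx, redisPattern)
    defer pubsub.Close()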

Looking up a service

That’s great but how do you look up an address of a service? That’s easy with the following basic code via the catalog:
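Something along these lines (a fragment of the same program, so the consul client and the fmt and log imports are assumed):

    // consul is the *consulapi.Client from before
    catalog := consul.Catalog()
    services, _, err := catalog.Service("redisapp-default", "", nil)
    if err != nil || len(services) == 0 {
        log.Fatal("could not find service redisapp-default in the catalog")
    }

    redisAddr := fmt.Sprintf("%s:%d",
        services[0].ServiceAddress, services[0].ServicePort)
    log.Printf("connecting to Redis at %s", redisAddr)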

consul is a *consulapi.Client obtained earlier. You use the Catalog() function to obtain access to the catalog functionality. In this case, we simply retrieve the address and port of the Kubernetes service redisapp in the default namespace. We can use that information to connect to our Redis back-end.

Conclusion

It’s easy to get started with Consul on Kubernetes and to write some code that takes advantage of it. Be aware though that we only scratched the surface here and that this is both a sample deployment (without TLS, RBAC, etc…) and sample code. In addition, you should only use Consul in more complex application landscapes with many services to discover, traffic to secure and more. If you do think you need it, you should also take a look at managed Consul on Azure. It runs in your subscription but is fully managed by HashiCorp! It can be integrated with Azure Kubernetes Service as well.

In a later post, I will take a look at the service mesh capabilities with Connect.

Azure SQL, Azure Active Directory and Seamless SSO: An Overview

Instead of pure lift-and-shift migrations to the cloud, we often encounter lift-shift-tinker migrations. In such a migration, you modify some of the application components to take advantage of cloud services. Often, that’s the database but it could also be your web servers (e.g. replaced by Azure Web App). When you replace SQL Server on-premises with SQL Server or Managed Instance on Azure, we often get the following questions:

  • How does Azure SQL Database or Managed Instance integrate with Active Directory?
  • How do you authenticate to these databases with an Azure Active Directory account?
  • Is MFA (multi-factor authentication) supported?
  • If the user is logged on with an Active Directory account on a domain-joined computer, is single sign-on possible?

In this post, we will look at two distinct configuration options that can be used together if required:

  • Azure AD authentication to SQL Database
  • Single sign-on to Azure SQL Database from a domain-joined computer via Azure AD Seamless SSO

In what follows, I will provide an overview of the steps. Use the links to the Microsoft documentation for the details. There are many!!! 😉

Visually, it looks a bit like below. In the image, there’s an actual domain controller in Azure (extra Active Directory site) for local authentication to Active Directory. Later in this post, there is an example Python app that was run on a WVD host joined to this AD.

Azure AD Authentication

Both Azure SQL Database and Managed Instances can be integrated with Azure Active Directory. They cannot be integrated with on-premises Active Directory (ADDS) or Azure Active Directory Domain Services.

For Azure SQL Database, the configuration is at the SQL Server level:

SQL Database Azure AD integration

You should read the full documentation because there are many details to understand. The account you set as admin can be a cloud-only account. It does not need a specific role. When the account is set, you can log on with that account from Management Studio:

Authentication from Management Studio

There are several authentication schemes supported by Management Studio but the Universal with MFA option typically works best. If your account has MFA enabled, you will be challenged for a second factor as usual.

Once connected with the Azure AD “admin”, you can create contained database users with the following syntax:
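For example (replace the principal with a user or group from your own Azure AD tenant):

    CREATE USER [first.last@yourdomain.com] FROM EXTERNAL PROVIDER;
    -- afterwards, grant the user the database roles it needs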

Note that instead of a single user, you can work with groups here. Just use the group name instead of the user principal name. In the database, the user or group appears in Management Studio like so:

Azure AD user (or group) in list of database users

From an administration perspective, the integration steps are straightforward but you create your users differently. When you migrate databases to the cloud, you will have to replace the references to on-premises ADDS users with references to Azure AD users!

Seamless SSO

Now that Azure AD is integrated with Azure SQL Database, we can configure single sign-on for users that are logged on with Active Directory credentials on a domain-joined computer. Note that I am not discussing Azure AD joined or hybrid Azure AD joined devices. The case I am discussing applies to Windows Virtual Desktop (WVD) as well. WVD devices are domain-joined and need line-of-sight to Active Directory domain controllers.

Note: seamless SSO is of course optional but it is a great way to make it easier for users to connect to your application after the migration to Azure

To enable single sign-on to Azure SQL Database, we will use the Seamless SSO feature of Active Directory. That feature works with both password-synchronization and pass-through authentication. All of this is configured via Azure AD Connect. Azure AD Connect takes care of the synchronization of on-premises identities in Active Directory to an Azure Active Directory tenant. If you are not familiar with Azure AD Connect, please check the documentation as that discussion is beyond the scope of this post.

When Seamless SSO is configured, you will see a new computer account in Active Directory, called AZUREADSSOACC$. You will need to turn on advanced settings in Active Directory Users and Computers to see it. That account is important as it is used to provide a Kerberos ticket to Azure AD. For full details, check the documentation. Understanding the flow depicted below is important:

Seamless Single Sign On - Web app flow
Seamless SSO flow (from Microsoft @ https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-sso-how-it-works)

You should also understand the security implications and rotate the Kerberos secret as discussed in the FAQ.

Before trying SSO to Azure SQL Database, log on to a domain-joined device with an identity that is synced to the cloud. Make sure Internet Explorer is configured as follows:

Add https://autologon.microsoftazuread-sso.com to the Local Intranet zone

Check the docs for more information about the Internet Explorer setting and considerations for other browsers.

Note: you do not need to configure the Local Intranet zone if you want SSO to Azure SQL Database via ODBC (discussed below)

With the Local Intranet zone configured, you should be able to go to https://myapps.microsoft.com and only provide your Azure AD principal (e.g. first.last@yourdomain.com). You should not be asked to provide your password. If you use https://myapps.microsoft.com/yourdomain.com, you will not even be asked your username.

With that out of the way, let’s see if we can connect to Azure SQL Database using an ODBC connection. Make sure you have installed the latest ODBC Driver for SQL Server on the machine (in my case, ODBC Driver 17). Create an ODBC connection with the Azure SQL Server name. In the next step, you see the following authentication options:

ODBC Driver 17 authentication options

Although all the options for Azure Active Directory should work, we are interested in integrated authentication, based on the credentials of the logged on user. In the next steps, I only set the database name and accepted all the other options as default. Now you can test the data source:

Testing the connection

Great, but what about your applications? Depending on the application, there still might be quite some work to do and some code to change. Instead of opening that can of worms 🥫, let’s see how this integrated connection works from a sample Python application.

Integrated Authentication test with Python

The following Python program uses pyodbc to connect with integrated authentication:
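It goes something like this (server and database names are placeholders):

    import pyodbc

    conn = pyodbc.connect(
        "Driver={ODBC Driver 17 for SQL Server};"
        "Server=tcp:yourserver.database.windows.net,1433;"
        "Database=yourdatabase;"
        "Authentication=ActiveDirectoryIntegrated;"
        "Encrypt=yes;"
    )

    cursor = conn.cursor()
    cursor.execute("SELECT * FROM test")
    for row in cursor.fetchall():
        print(row)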

My SQL Database contains a simple table called test. The logged on user has read and write access. As you can see, there is no user and password specified. In the connection string, “authentication=ActiveDirectoryIntegrated” is doing the trick. The result is just my name (hey, it’s a test):

Result returned from table

Conclusion

In this post, I have highlighted how single sign-on works for domain-joined devices when you use Azure AD Connect password synchronization in combination with the Seamless SSO feature. This scenario is supported by SQL Server ODBC driver version 17 as shown with the Python code. Although I used SQL Database as an example, this scenario also applies to a managed instance.

Progressive Delivery on Kubernetes: what are your options?

If you have ever deployed an application to Kubernetes, even a simple one, you are probably familiar with deployments. A deployment describes the pods to run, how many of them to run and how they should be upgraded. That last point is especially important because the strategy you select has an impact on the availability of the deployment. A deployment supports the following two strategies:

  • Recreate: all existing pods are killed and new ones are created; this obviously leads to some downtime
  • RollingUpdate: pods are gradually replaced which means there is a period when old and new pods coexist; this can result in issues for stateful pods or if there is no backward compatibility

But what if you want to use other methods such as BlueGreen or Canary? Although you could do that with a custom approach that uses deployments, there are solutions that provide a more automated approach. Below, I discuss two of them briefly. The videos provide a more in-depth look.

Argo Rollouts

One of the solutions out there is Argo Rollouts. It is very easy to use. If you want to start slowly, with BlueGreen deployments and manual approval for instance, Argo Rollouts is recommended. It has a nice kubectl plugin and integration with Argo CD, a GitOps solution.

The following video demonstrates BlueGreen deployments:

BlueGreen deployments with Argo Rollouts

This video discusses a canary deployment with Argo Rollouts albeit a simple one without metric analysis:

Canary deployments with Argo Rollouts

This video shows the integration between Argo Rollouts and Argo CD:

Argo CD and Argo Rollouts integration

One thing to note is that, instead of a deployment, you will create a rollout object. It is easy to convert an existing deployment into a rollout. Other tools, such as Flagger (see below), provide their functionality on top of an existing deployment.

For traffic splitting and metrics analysis, Argo Rollouts does not support Linkerd. More information about traffic splitting and management can be found here.

Flagger

Flagger, by Weaveworks, is another solution that provides BlueGreen and Canary deployment support to Kubernetes. In the video below, I demonstrate the basic look and feel of doing a canary deployment that includes metric analysis. Linkerd is used for gradual traffic shifting to the canary based on the built-in success rate metric of Linkerd:

Canary release with Flagger and Linkerd

If you want to get started with canary releases and easy traffic splitting and metrics, I suggest using the Flagger and Linkerd combination. This is based simply on the fact that Linkerd is much easier to install and use than Istio. Argo Rollouts in combination with Istio and Prometheus could be used to achieve exactly the same result.

Which one to use?

If you just want BlueGreen deployments with manual approvals, I would suggest using Argo Rollouts. When you integrate it with Argo CD, you can even use the Argo CD UI to promote your deployment. If you are comfortable with Istio and Prometheus, you can go a step further and add metrics analysis to automatically progress your deployment. You can also use a simple Kubernetes job to validate your deployment. Also, note that other metrics providers are supported.

Flagger supports more options for traffic splitting and metrics, due to its support for both Linkerd and Istio. Because Linkerd is so easy to use, Flagger is simpler to get started with canary releases and metrics analysis.

Trying Windows Virtual Desktop

Update: Windows Virtual Desktop 2020 Spring Update brings a new, fully ARM-based deployment method and portal experience. As of May 2020, these features are in public preview. Check the documentation. Links to the documentation in this article will also reflect the update.

It was Sunday afternoon. I had some time. So I had this crazy idea to try out Windows Virtual Desktop. Just so you know, I am not really into “the desktop” and all its intricacies. I have done my fair share of Remote Desktop Services, App-V and Citrix in the past but that is probably already a decade ago.

Before moving on, review the terminology like tenants, host pools, app groups, etc…. That can be found here: https://docs.microsoft.com/en-us/azure/virtual-desktop/environment-setup

To start, I will share the (wrong) assumptions I made:

  • “I will join the virtual desktops to Azure Active Directory directly! That way I do not have to set up domain controllers and stuff. That must be supported!” – NOPE, that is not supported. You will need domain controllers synced to the Azure AD instance you will be using. You can use Azure AD Domain Services as well.
  • “I will just use the portal to provision a host pool. I have seen there’s a marketplace item for that called Windows Virtual Desktop – Provision a host pool” – NOPE, it’s not meant to work just like that. Move to the next point…
  • “I will not read the documentation. Why would I? It’s desktops we’re talking about, not Kubernetes or Istio or something!” 😉

Let’s start with that last assumption shall we? You should definitely read the documentation, especially the following two pages:

The WVD Tenant

Use the link above to create the tenant and follow the instructions to the letter. A tenant is a group of one or more host pools. The host pools contain desktops and servers that your users will connect to. The host pool provisioning wizard in the portal will ask for this tenant.

You will need Global Admin rights in your Azure AD tenant to create this Windows Virtual Desktop tenant. That’s another problem I had since I do not have those access rights at my current employer. I used an Azure subscription tied to my employer’s Azure AD tenant. To fix that, I created an Azure trial with a new Azure AD tenant where I had full control.

Another problem I bumped into is that the account I created the Azure AD tenant with is an account in the baeke.info domain which will become an external account in the directory. You should not use such an account in the host pool provisioning wizard in the portal because it will fail.

When the tenant is created, you can use PowerShell to get information about it (Ids were changed to protect the innocent):
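With the Microsoft.RDInfra.RDPowerShell module, that looks similar to this:

    Add-RdsAccount -DeploymentUrl "https://rdbroker.wvd.microsoft.com"
    Get-RdsTenant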

Great, my tenant is called BaekeTenant and the tenant group is Default Tenant Group. During host pool provisioning, Default Tenant Group is automatically proposed as the tenant group.

Service Principals and role assignments

The service principal is used to automate certain Windows Virtual Desktop management tasks. Follow https://docs.microsoft.com/en-us/azure/virtual-desktop/create-service-principal-role-powershell to the letter to create this service principal. It will need a specific role called RDS Owner, via the following PowerShell command (where $svcPrincipal and $myTenantName are variables from previous steps):
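The assignment looks similar to this (see the linked documentation for the full script):

    New-RdsRoleAssignment -RoleDefinitionName "RDS Owner" `
      -ApplicationId $svcPrincipal.AppId -TenantName $myTenantName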

If that role is not granted, all sorts of things can go wrong. I had an error after domain join, in the dscextension resource. You can check the ARM deployment for that. It’s green below but it was red quite a few times because I used an account that did not have the role:

So, if that step fails for you, you know where to look. If the joindomain resource fails, it probably has to do with the WVD hosts not being able to find the domain controllers. Check the DNS settings! Also check the AD Join section below.

During host pool provisioning, you will need the following in the last step of the wizard (see further down below):

  • The application ID: the user name of the service principal ($svcPrincipal.AppId)
  • The secret: the password of the service principal
  • The Azure AD tenant ID: you can find it in the Properties page of your Azure AD tenant

Active Directory Join

When you provision the host pool, one of the provisioning steps is joining the Windows 10 or Windows Server system to Active Directory. An Azure Active Directory Join is not supported. Because I did not want to install and configure domain controllers, I used Azure AD Domain Services to create a domain that syncs with my new Azure AD tenant:

Using Azure AD Domain Services: Windows 10 systems in the host pool will join this AD

Of course, you need to make sure that the Windows Virtual Desktop machines use DNS servers that can resolve requests for this domain. You can find this in Properties:

DNS servers to use are 10.0.0.4 and 10.0.0.5 in my case

The virtual network that contains the subnet with the virtual desktops has the following setting for DNS:

DNS servers of VNET so virtual desktop hosts in the VNET can be joined to the Azure AD Domain Services domain

During host pool provisioning, you will be asked for an account that can do domain joins. You can also specify the domain to join if the domain suffix of the account you specify is different. The account you use should be member of the following group as well:

Member of AAD DC Administrators group

When you have all of this stuff in place, you are finally ready to provision the host pool. To recap:

  • Create a WVD tenant
  • Create a Service Principal that has the RDS Owner role
  • Make sure WVD hosts can join Active Directory (Azure AD join not supported); you can use Azure AD Domain Services if you want
  • Make sure WVD hosts use DNS servers that can resolve the necessary DNS records to find the domain controllers; I used the IP addresses of the Azure AD Domain Services domain controllers here
  • Make sure the account you use to join the domain is member of the AAD DC Administrators group

When all this is done, you are finally ready to use the marketplace item in the portal to provision the host pool:

Use the search bar to find the WVD marketplace item

Here are the screens of how I filled them out:

The basics

You can specify more users. Use a comma as separator. I only added my own account here. You can add more users later with Add-RdsAppGroupUser, which adds a user to the default Desktop Application Group. For example:
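Something like this, with placeholder names for the host pool and user:

    Add-RdsAppGroupUser -TenantName "BaekeTenant" -HostPoolName "baekepool" `
      -AppGroupName "Desktop Application Group" -UserPrincipalName "user@azurebaeke.onmicrosoft.com"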

And now the second step:

I only want 1 machine to test the features

Next, virtual machine settings:

VM settings

I use the Windows 10 Enterprise multi-session image from the gallery. Note you can use your own images as well. I don’t need to specify a domain because the suffix of the AD domain join UPN will be used: azurebaeke.onmicrosoft.com. The virtual network has DNS configured to use the Azure AD Domain Services servers. The domain join user is member of the AAD DC Administrators group.

And now the final screen:

At last….

The WVD BaekeTenant was created earlier with PowerShell. We also created a service principal with the RDS Owner role so we use that here. Part of this process is the deployment of the WVD session host(s) in the subnet you specified:

Deployed WVD session host

If you followed the Microsoft documentation and some of the tips in this post, you should be able to get to your desktop fairly easily. But how? Just check this to connect from your desktop, or this to connect from the browser. Just don’t try to use mstsc.exe, it’s not supported. End result: finally able to logon to the desktop…

Soon, a more integrated Azure Portal experience is coming that will (hopefully) provide better guidance with sufficient checks and tips to make this whole process a lot smoother.

Update to IoT Simulator

Quite a while ago, I wrote a small IoT Simulator in Go that creates or deletes multiple IoT devices in IoT Hub and sends telemetry at a preset interval. However, when you use version 0.4 of the simulator, you will encounter issues in the following cases:

  • You create a route to store telemetry in an Azure Storage account: the telemetry will be base 64 encoded
  • You create an Event Grid subscription that forwards the telemetry to an Azure Function or other target: the telemetry will be base 64 encoded

For example, in Azure Storage, when you store telemetry in JSON format, you will see something like this with versions 0.4 and older:
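Schematically, a routed record looks like this (values are illustrative and shortened):

    {
      "EnqueuedTimeUtc": "2020-05-01T10:00:00.0000000Z",
      "Properties": {},
      "SystemProperties": {
        "connectionDeviceId": "sim000001",
        "contentType": "application/json",
        "contentEncoding": ""
      },
      "Body": "eyJkZXZpY2Ui..."
    }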

Note that the body is base 64 encoded. The encoding stems from the fact that UTF-8 encoding was not specified as can be seen in the JSON. contentEncoding is indeed empty and the contentType does not mention the character set.

To fix that, a small code change was required. Note that the code uses HTTP to send telemetry, not MQTT or AMQP:
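Conceptually (this is not the literal diff; see the repository for the actual change), it comes down to including the charset wherever the content type of the HTTP request is set:

    // before: no character set specified
    // req.Header.Set("Content-Type", "application/json")

    // after: explicitly mark the body as UTF-8 encoded JSON
    req.Header.Set("Content-Type", "application/json; charset=utf-8")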

Setting the character set as part of the content type

With the character set as UTF-8, the telemetry in the Storage Account will look like this:
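Again schematically (only the relevant fields are shown; the body now appears as plain text):

    {
      "SystemProperties": {
        "contentType": "application/json; charset=utf-8",
        "contentEncoding": ""
      },
      "Body": {"device":"sim000001","temperature":21.5}
    }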

Note that contentEncoding is still empty here, but contentType includes the charset. That is enough for the body to be in plain text.

The change will also allow you to use queries on the body in IoT Hub message routing filters or Event Grid subscription filters.

Enjoy the new version 0.5! All three of you… 😉😉😉

GitOps with Kubernetes: a better way to deploy?

I recently gave a talk at TechTrain, a monthly event in Mechelen (Belgium), hosted by Cronos. The talk is called “GitOps with Kubernetes: a better way to deploy” and is an introduction to GitOps with Weaveworks Flux as an example.

You can find a re-recording of the presentation on Youtube:

Writing a Kubernetes operator with Kopf

In today’s post, we will write a simple operator with Kopf, which is a Python framework created by Zalando. A Kubernetes operator is a piece of software, running in Kubernetes, that does something application specific. To see some examples of what operators are used for, check out operatorhub.io.

Our operator will do something simple in order to easily grasp how it works:

  • the operator will create a deployment that runs nginx
  • nginx will serve a static website based on a git repository that you specify; we will use an init container to grab the website from git and store it in a volume
  • you can control the number of instances via a replicas parameter

That’s great but how will the operator know when it has to do something, like creating or updating resources? We will use custom resources for that. Read on to learn more…

Note: source files are on GitHub

Custom Resource Definition (CRD)

Kubernetes allows you to define your own resources. We will create a resource of type (kind) DemoWeb. The CRD is created with the YAML below:
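The CRD looks more or less like this (a sketch using apiextensions.k8s.io/v1 with a minimal schema):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: demowebs.baeke.info
    spec:
      group: baeke.info
      scope: Namespaced
      names:
        kind: DemoWeb
        plural: demowebs
        singular: demoweb
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              properties:
                spec:
                  type: object
                  properties:
                    gitrepo:
                      type: string
                    replicas:
                      type: integer
          additionalPrinterColumns:
            - name: Replicas
              type: integer
              jsonPath: .spec.replicas
            - name: GitRepo
              type: string
              jsonPath: .spec.gitrepo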

For more information (and there is a lot) about CRDs, see the documentation.

Once you create the above resource with kubectl apply (or create), you can create a custom resource based on the definition:
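For example:

    apiVersion: baeke.info/v1
    kind: DemoWeb
    metadata:
      name: demoweb1
    spec:
      replicas: 2
      gitrepo: https://github.com/gbaeke/static-web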

Note that we specified our own API and version in the CRD (baeke.info/v1) and that we set the kind to DemoWeb. In the additionalPrinterColumns, we defined some properties that can be set in the spec and that are also printed on screen. When you list resources of kind DemoWeb, you will then see the replicas and gitrepo columns:

Custom resources based on the DemoWeb CRD

Of course, creating the CRD and the custom resources is not enough. To actually create the nginx deployment when the custom resource is created, we need to write and run the operator.

Writing the operator

I wrote the operator on a Mac with Python 3.7.6 (64-bit). On Windows, for best results, make sure you use Miniconda instead of Python from the Windows Store. First install Kopf and the Kubernetes package:
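    pip install kopf kubernetes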

Verify you can run kopf:
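    kopf --help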

Running kopf

Let’s write the operator. You can find it in full here. Here’s the first part:
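It looks roughly like this (a condensed sketch of the real operator; names such as git-clone and /web are illustrative):

    import kopf
    import yaml
    import kubernetes

    @kopf.on.create('baeke.info', 'v1', 'demowebs')
    def create_fn(spec, **kwargs):
        # read our custom properties, with defaults
        replicas = spec.get('replicas', 1)
        gitrepo = spec.get('gitrepo', 'https://github.com/gbaeke/static-web')

        # name of the custom resource (e.g. demoweb1); used to derive the
        # deployment name and the pod labels
        name = kwargs['body']['metadata']['name']

        # the deployment: an init container clones the git repo into a volume,
        # nginx serves the content of that volume
        doc = yaml.safe_load(f"""
            apiVersion: apps/v1
            kind: Deployment
            metadata:
              name: {name}-deployment
            spec:
              replicas: {replicas}
              selector:
                matchLabels:
                  app: {name}
              template:
                metadata:
                  labels:
                    app: {name}
                spec:
                  initContainers:
                  - name: git-clone
                    image: alpine/git
                    args: ["clone", "{gitrepo}", "/web"]
                    volumeMounts:
                    - name: web
                      mountPath: /web
                  containers:
                  - name: nginx
                    image: nginx
                    volumeMounts:
                    - name: web
                      mountPath: /usr/share/nginx/html
                  volumes:
                  - name: web
                    emptyDir: {{}}
        """)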

Naturally, we import kopf and other necessary packages. As noted before, kopf and kubernetes will have to be installed with pip. Next, we define a handler that runs whenever a resource of our custom type is spotted by the operator (with the @kopf.on.create decorator). The handler has two parameters:

  • spec object: allows us to retrieve our custom properties with spec.get (e.g. spec.get(‘replicas’, 1) – the second parameter is the default value)
  • **kwargs: a dictionary with lots of extra values we can use; we use it to retrieve the name of our custom resource (e.g. demoweb1); we can use that name to derive the name of our deployment and to set labels for our pods

Note: instead of using **kwargs to retrieve the name, you can also define an extra name parameter in the handler like so: def create_fn(spec, name, **kwargs); see the docs for more information

Our deployment is just yaml stored in the doc variable with some help from the Python yaml package. We use spec.get and the name variable to customise it.

After the doc variable, the following code completes the event handler:
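Roughly the following (still inside create_fn, right after building the doc variable; the default namespace is an assumption):

    # make the deployment a child of the DemoWeb resource: deleting the
    # custom resource will then also delete the deployment
    kopf.adopt(doc)

    # create the deployment via the apps/v1 API; minimal error checking only
    api = kubernetes.client.AppsV1Api()
    try:
        api.create_namespaced_deployment(namespace="default", body=doc)
    except kubernetes.client.rest.ApiException as e:
        raise kopf.PermanentError(f"creating the deployment failed: {e}")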

The rest of the operator

With kopf.adopt, we make sure the deployment we create is a child of our custom resource. When we delete the custom resource, its children are also deleted.

Next, we simply use the kubernetes client to create a deployment via the apps/v1 api. The method create_namespaced_deployment takes two required parameters: the namespace and the deployment specification. Note there is only minimal error checking here. There is much more you can do with regards to error checking, retries, etc…

Now we can run the operator with:
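Assuming the file is called operator.py:

    kopf run operator.py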

You can perfectly run this on your local workstation if you have a working kube config pointing at a running cluster with the CRD installed. Kopf will automatically use that for authentication:

Running the operator on your workstation

Running the operator in your cluster

To run the operator in your cluster, create a Dockerfile that produces an image with Python, kopf, kubernetes and your operator in Python. In my case:
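A minimal Dockerfile could look like this (assuming the operator lives in operator.py):

    FROM python:3.7-slim
    RUN pip install kopf kubernetes
    COPY operator.py /operator.py
    CMD ["kopf", "run", "/operator.py", "--verbose"]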

We added the verbose parameter for extra logging. Next, run the following commands to build and push the image (example with my image name):
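The image name below is just an example; use your own registry and tag:

    docker build -t gbaeke/demoweb-operator:1.0.0 .
    docker push gbaeke/demoweb-operator:1.0.0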

Now you can deploy the operator to the cluster:
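A deployment along these lines will do (names and image are placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demoweb-operator
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: demoweb-operator
      template:
        metadata:
          labels:
            app: demoweb-operator
        spec:
          serviceAccountName: demoweb-operator
          containers:
            - name: operator
              image: gbaeke/demoweb-operator:1.0.0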

The above is just a regular deployment but the serviceAccountName is extremely important. It gives kopf and your operator the required access rights to create the deployment in the target namespace. Check out the documentation to find out more about the creation of the service account and the required roles. Note that you should only run one instance of the operator!

Once the operator is deployed, you will see it running as a normal pod:

The operator is running

To see what is going on, check the logs. Let’s show them with octant:

Your operator logs

At the bottom, you see what happens when a creation event is detected for a resource of type DemoWeb. The spec is shown with the git repository and the number of replicas.

Now you can create resources of kind DemoWeb and see what happens. If you have your own git repository with some HTML in it, try to use that. Otherwise, just use mine at https://github.com/gbaeke/static-web.

Conclusion

Writing an operator is easy to do with the Kopf framework. Do note that we only touched on the basics to get started. We only have an on.create handler, and no on.update handler. So if you want to increase the number of replicas, you will have to delete the custom resource and create a new one. Based on the example though, it should be pretty easy to fix that. The git repo contains an example of an operator that also implements the on.update handler (with_update.py).

Azure Security Center and Azure Kubernetes Service

Quick post and note to self today… Azure Security Center checks many of your resources for vulnerabilities or attacks. For a while now, it also does so for Azure Kubernetes Service (AKS). In my portal, I saw the following:

Attacked resources?!? Now what?

There are many possible alerts. These are the ones I got:

Some of the alerts for AKS in Security Center

The first one, for instance, reports that a container has mounted /etc/kubernetes/azure.json on the AKS worker node where it runs. That is indeed a sensitive path because azure.json contains the credentials of the AKS security principal. In this case, it’s Azure Key Vault Controller that has been configured to use this principal to connect to Azure Key Vault.

Another useful one is the alert for new high privilege roles. In my case, these alerts are the result of installing Helm charts that include such a role. For example, the helm-operator chart includes a role which uses a ClusterRoleBinding for [{"resources":["*"],"apiGroups":["*"],"verbs":["*"]}]. Yep, that's high privilege indeed.

Remember, you will need Azure Security Center Standard for these capabilities. Azure Kubernetes Service is charged per AKS core at $2/VM core/month in the preview (according to what I see in the portal).

Security Center Standard pricing in preview for AKS

Be sure to include Azure Security Center Standard when you are deploying Azure resources (not just AKS). The alerts you get are useful. In most cases, you will also learn a thing or two about the software you are deploying! 😆

Giving Argo CD a spin

If you have followed my blog a little, you have seen a few posts about GitOps with Flux CD. This time, I am taking a look at Argo CD which, like Flux CD, is a GitOps tool to deploy applications from manifests in a git repository.

Don’t want to read this whole thing?

Here’s the video version of this post

There are several differences between the two tools:

  • At first glance, Flux appears to use a single git repo for your cluster where Argo immediately introduces the concept of apps. Each app can be connected to a different git repo. However Flux can also use multiple git repositories in the same cluster. See https://github.com/fluxcd/multi-tenancy for more information
  • Flux has the concept of workloads which can be automated. This means that image repositories are scanned for updates. When an update is available (say from tag v1.0.0 to v1.0.1), Flux will update your application based on filters you specify. As far as I can see, Argo requires you to drive the update from your CI process, which might be preferred.
  • By default, Argo deploys an administrative UI (next to a CLI) with a full view on your deployment and its dependencies
  • Argo supports RBAC and integrates with external identity providers (e.g. Azure Active Directory)

The Argo CD admin interface is shown below:

Argo CD admin interface… not too shabby

Let’s take a look at how to deploy Argo and deploy the app you see above. The app is deployed using a single yaml file. Nothing fancy yet such as kustomize or jsonnet.

Deployment

The getting started guide is pretty clear, so do have a look over there as well. To install, just run (with a deployed Kubernetes cluster and kubectl pointing at the cluster):
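From the getting started guide:

    kubectl create namespace argocd
    kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml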

Note that I installed Argo CD on Azure (AKS).

Next, install the CLI. On a Mac, that is simple (with Homebrew):
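    brew install argocd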

You will need access to the API server, which is not exposed over the Internet by default. For testing, port forwarding is easiest. In a separate shell, run the following command:
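    kubectl port-forward svc/argocd-server -n argocd 8080:443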

You can now connect to https://localhost:8080 to get to the UI. You will need the admin password by running:
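Depending on your Argo CD version, the initial admin password is either the name of the argocd-server pod or stored in a secret:

    # older versions: the password is the argocd-server pod name
    kubectl get pods -n argocd -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f 2

    # newer versions: the password is stored in a secret
    kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d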

You can now login to the UI with the user admin and the displayed password. You should also login from the CLI and change the password with the following commands:
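    argocd login localhost:8080
    argocd account update-password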

Great! You are all set now to deploy an application.

Deploying an application

We will deploy an application that has a couple of dependencies. Normally, you would install those dependencies with Argo CD as well but since I am using a cluster that has these dependencies installed via Azure DevOps, I will just list what you need (Helm commands):
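Roughly the following (chart names and repositories reflect their current locations and may differ from what I originally used; the static IP is a placeholder):

    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo add jetstack https://charts.jetstack.io
    helm repo add bitnami https://charts.bitnami.com/bitnami

    # Nginx Ingress with an Azure static IP
    helm install nginx ingress-nginx/ingress-nginx \
      --set controller.service.loadBalancerIP=<your static IP>

    # cert-manager for the Let's Encrypt certificates
    helm install cert-manager jetstack/cert-manager \
      --namespace cert-manager --create-namespace --set installCRDs=true

    # externaldns to manage DNS records in an Azure DNS zone (needs extra Azure configuration)
    helm install externaldns bitnami/external-dns --set provider=azure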

To know more about these dependencies and use an Azure DevOps YAML pipeline to deploy them, see this post. If you want, you can skip the externaldns installation and create a DNS record yourself that resolves to the public IP address of Nginx Ingress. If you do not want to use an Azure static IP address, you can remove the loadBalancerIP parameter from the first command.

The manifests we will deploy with Argo CD can be found in the following public git repository: https://github.com/gbaeke/argo-demo. The application is in three YAML files:

  • Two YAML files that create a certificate cluster issuer based on custom resource definitions (CRDs) from cert-manager
  • realtime.yaml: Redis deployment, Redis service (ClusterIP), realtime web app deployment (based on this), realtime web app service (ClusterIP), ingress resource for https://real.baeke.info (record automatically created by externaldns)

It’s best that you fork my repo and modify realtime.yaml’s ingress resource with your own DNS name.

Create the Argo app

Now you can create the Argo app based on my forked repo. I used the following command with my original repo:
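Something along these lines (the destination namespace is an example):

    argocd app create realtime \
      --repo https://github.com/gbaeke/argo-demo.git \
      --path manifests \
      --dest-server https://kubernetes.default.svc \
      --dest-namespace default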

The command above creates an app called realtime based on the specified repo. The app should use the manifests folder and apply (kubectl apply) all the manifests in that folder. The manifests are deployed to the cluster that Argo CD runs in. Note that you can run Argo CD in one cluster and deploy to totally different clusters.

The above command does not configure the repository to be synced automatically, although that is an option. To sync manually, use the following command:
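    argocd app sync realtime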

The application should now be synced and viewable in the UI:

Application installed and synced

In my case, this results in the following application at https://real.baeke.info:

Not Secure because we use Let’s Encrypt staging for this app

Set up auto-sync

Let’s set up this app to automatically sync with the repo (default = every 3 minutes). This can be done from both the CLI and the UI. Let’s do it from the UI. Click on the app and then click App Details. You will find a Sync Policy in the app details where you can enable auto-sync.

Setting up auto-sync from the UI

You can now make changes to the git repo like changing the image tag for gbaeke/fluxapp (yes, I used this image with the Flux posts as well 😊 ) to 1.0.6 and wait for the sync to happen. Or sync manually from the CLI or the UI.

Conclusion

This was a quick tour of Argo CD. There is much more you can do but the above should get you started quickly. I must say I quite like the solution and am eager to see what the collaboration of Flux CD, Argo CD and Amazon comes up with in the future.

Kustomize and Flux

Flux has a feature called manifest generation that works together with Kustomize. Instead of just picking YAML files from a git repo and applying them, customisation is performed with the kustomize build command. The resulting YAML then gets applied to your cluster.

If you don’t know how customisation works (without Flux), take a look at the article I wrote earlier. Or look at the core docs.

You need to be aware of a few things before you get started. In order for Flux to use this method, you need to turn on manifest generation. With the Flux Helm chart, just pass the following parameter:
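If I recall the chart values correctly, that is:

    --set manifestGeneration=true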

In my case, I have plain YAML files without customisation in a config folder. I want the files that use customisation in a different folder, say kustomize, like so:

Two folders to pass as git.path

To pass these folders to the Helm chart, use the following parameter:
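For example (note the escaped comma that Helm requires for multiple values):

    --set git.path="config\,kustomize"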

The kustomize folder contains the following files:

base files with environments dev and prod

There is nothing special about the base folder here. It is as explained in my previous post. The dev and prod folders are similar so I will focus only on dev.

The dev folder contains a .flux.yaml file, which is required by Flux. In this simple example, it contains the following:
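It is just a generator definition plus a reference to the patch file (the patch file name is mine; use whatever you called it):

    version: 1
    patchUpdated:
      generators:
        - command: kustomize build .
      patchFile: flux-patch.yaml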

The file specifies the generator to use, in this case Kustomize. The kustomize executable is in the Flux image. I specify one patchFile which contains patches for several resources, separated by ---:
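Trimmed down, the dev patch file looks something like this (resource names and namespaces are examples; yours will differ):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: realtime
      namespace: realtime-dev
      annotations:
        fluxcd.io/automated: "true"
        fluxcd.io/tag.realtime: semver:~1
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: realtime
      namespace: realtime-dev
    spec:
      tls:
        - hosts:
            - realdev.baeke.info
          secretName: realdev-baeke-info-tls
      rules:
        - host: realdev.baeke.info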

Above, you see the patches for the dev environment:

  • the workload should be automated by Flux, installing new images based on the semantic version filter ~1
  • the ingress should use host realdev.baeke.info with a different name for the secret as well (the secret will be created by cert-manager)

The prod folder contains a similar configuration. Perhaps naively, I thought that specifying the kustomize folder in git.path was sufficient for Flux to scan the folders and run customisation wherever a .flux.yaml file was found. Sadly, that is not the case. ☹️ With just the kustomize folder specified, Flux finds conflicts between the base, dev and prod folders because they contain similar files. That is expected behaviour for regular YAML files but, in my opinion, should not happen in this case. There is a bit of a clunky way to make this work though. Just specify the following as git.path:
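That comes down to something like:

    --set git.path="config\,kustomize/dev\,kustomize/prod"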

With the above parameter, Flux will find no conflicts and will happily apply the customisations.

As a side note, you should also specify the namespace in the patch file explicitly. It is not added automatically even though kustomization.yaml contains the namespace.

Let’s look at the cluster when Flux has applied the changes.

Namespaces for dev and prod created via Flux & Kustomize

And here is the deployed “production app”:

Who chose that ugly colour!

The way customisations are handled could be improved. It’s unwieldy to specify every “customisation” folder in the git.path parameter. Just give me a --git-kustomize-path parameter and scan the paths in that parameter for .flux.yaml files. On the other hand, maybe I am missing something here so remarks are welcome.