Running Kafka with GitOps – sharing some tips and tricks we’ve gained by experience
26 May, 2022

Setting up and getting Kafka running on Kubernetes can seem almost magical when using Strimzi. Managing and maintaining the cluster configuration, however, is discussed far less. Kafka is obviously stateful, and the configuration lives within the cluster. But how do you keep track of all changes to topic, cluster and user configuration? What happens if you want to move or duplicate your cluster configuration to a new instance of managed Kafka? At Irori, we created one of our customers’ Kafka clusters and are managing it with GitOps. In this post, we will share some of the experiences and learnings we’ve gained since launching this concept.

Why we want to manage Kafka with GitOps

By managing Kafka with GitOps, we can automate the management of topics and ACLs from version-controlled code. When we implemented this concept, we wanted to be able to deliver reliable Kafka clusters to multiple developer teams within the organization.

Cluster configuration updates should be easy to implement while ensuring that recent changes are not overwritten. Preferably, the setup should also scale and be able to run in more than one isolated environment at the same time.

How do we implement Kafka with GitOps then?

In an earlier blog post, What is GitOps, we described the basic concepts and benefits of GitOps. A production Kafka cluster with brokers, topics, users, monitoring rules, etc. can be quite complex to manage. It makes sense to keep all configuration for a Kafka cluster in a single Git repository if you want to keep track of changes. From the repository, you can find out what configuration is (supposed to be) running in a specific Kafka environment. If a new Kafka cluster is needed, the Git repository can be forked and reused with minor modifications for another Kafka installation. And because Git is the industry standard, most software developers should be able to use it right away.

It is of course possible to structure the Git repository differently based on the use case, but when running Kafka on Kubernetes we recommend creating the basic structure as a Helm chart, as illustrated below.
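As a rough sketch (the layout and file names below are our illustration, not a fixed convention), such a repository might look like this:

    kafka-cluster/                  # Git repository root
    └── helm/
        ├── Chart.yaml
        ├── values.yaml             # defaults shared by all environments
        ├── values-test.yaml        # environment-specific overrides
        ├── values-prod.yaml
        └── templates/
            ├── kafka.yaml          # Strimzi Kafka CR for the brokers
            ├── topics.yaml         # renders KafkaTopic CRs from values
            └── users.yaml          # renders KafkaUser CRs (ACLs) from values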

The final piece to complete GitOps is automating the provisioning of the Kafka cluster with ArgoCD, a topic covered in an earlier post. It only takes a few lines of configuration to have ArgoCD synchronize the Helm chart whenever there is an update in the repository and apply any changes. We can then ensure that the latest version of the code in Git is the configuration applied to Kubernetes.
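As a minimal sketch (the repository URL, names and namespaces below are placeholders, not our customer’s actual setup), the ArgoCD Application manifest could look something like this:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: kafka-test
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://git.example.com/kafka-cluster.git   # placeholder
        targetRevision: main
        path: helm
        helm:
          valueFiles:
            - values.yaml
            - values-test.yaml     # environment-specific values
      destination:
        server: https://kubernetes.default.svc
        namespace: kafka
      syncPolicy:
        automated:
          prune: true              # remove resources that are deleted from Git
          selfHeal: true           # revert manual changes made directly in the cluster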

The ArgoCD configuration for a (test) environment does not have to be more complicated than the sketch above. ArgoCD will provision all template files under the ‘helm/templates’ folder. In addition to the values.yaml file, we added environment-specific values files for topics, users and the environment.
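To give an idea of what those values files can look like (a hedged sketch; the topic names, fields and the clusterName value are our assumptions), topics can be declared as plain data and rendered into Strimzi KafkaTopic resources by a template:

    # values-topics-test.yaml (hypothetical)
    clusterName: my-cluster
    topics:
      - name: orders
        partitions: 6
        replicas: 3
      - name: payments
        partitions: 3
        replicas: 3

    # helm/templates/topics.yaml (hypothetical)
    {{- range .Values.topics }}
    ---
    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaTopic
    metadata:
      name: {{ .name }}
      labels:
        strimzi.io/cluster: {{ $.Values.clusterName }}
    spec:
      partitions: {{ .partitions }}
      replicas: {{ .replicas }}
    {{- end }}

With a structure like this, adding a topic becomes a small change to a values file, reviewed through an ordinary pull request.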

Keys to success:

  1. Transparency. Kafka cluster configuration files are now available in Git to everyone authorized, rather than hidden deep somewhere in the Kubernetes maze.
  2. Simplicity. Organizing the cluster configuration with Helm makes it easy to get a good overview of exactly what’s included, arguably even easier to read than the standard Kafka CRDs.
  3. Helm. One major upside of Helm is that cluster configuration for multiple clusters (dev, test, prod) can be stored as templates in the same repo.

We hope this post gave you a better understanding of how to manage and implement Kafka with GitOps. Don’t forget to check out our previous blog posts about GitOps and ArgoCD.

Also, we are currently looking for .NET developers who want to join our Irori team – find out more and submit your application here.

Authors:
Gustav Norbäcker
Solution Architect and Kafka Expert

Kristoffer Thorin
Software Engineer