Tools for Kafka

Simple steps to enhance your Kafka Toolbox

In this blog post, I will share some useful tools tailored for developers working with Kafka. That said, whether you identify as an SRE, DevOps engineer, infrastructure specialist, or administrator, you're warmly invited to explore these resources. This curated list focuses on the practical, day-to-day tasks developers encounter and intentionally excludes components of Kafka's ecosystem such as Cruise Control, Kafka Bridge, and Kafka Exporter. I may delve into those in a future post.

By the end of this post, I aim for you to gain a comprehensive understanding of what these tools offer and how you can seamlessly integrate them into your daily workflow.


Kcat, formerly known as kafkacat, is a versatile command-line interface (CLI) tool for interacting with Kafka, functioning primarily as both a producer and a consumer. Apache Kafka does ship with CLI tools such as kafka-console-producer.sh and kafka-console-consumer.sh, which offer similar functionality to kcat, so you might wonder: why opt for kcat instead?

The difference lies in user experience; the built-in bin tools can sometimes prove cumbersome to navigate. Conversely, kcat offers a more intuitive and user-friendly interface, making interactions with Kafka seamless and straightforward. Its ease of learning and smooth usability set it apart, providing a compelling reason to incorporate it into your Kafka workflow.

Let’s go through some basics of kcat and talk a bit about the difference with the /bin tools.

Kcat’s versatility lies in its various modes, serving as the foundation of the tool’s functionality.


-C: Consume mode
-P: Produce mode
-L: Metadata List mode
-Q: Query mode

In the bin tools, consumption, production, and metadata listing are spread across separate tools, which can be cumbersome to navigate. Additionally, each tool comes with its own set of flags, so you end up learning several configurations. For instance, flags like --consumer.config and --producer.config could simply be consolidated into a single --config-file flag, since both serve the same purpose of pointing at a configuration file.

Kcat streamlines this by offering greater flexibility. You can pass a configuration file with -F, or point the $KCAT_CONFIG environment variable at its path. This makes switching between clusters and credential sets effortless, significantly simplifying your Kafka interactions.
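As a quick sketch (broker address and credentials below are placeholders), kcat reads librdkafka-style properties, so a single file can hold all your connection settings:

```
# Write connection settings once (librdkafka property names; placeholder values)
cat > prod.config <<'EOF'
bootstrap.servers=broker1:9092
security.protocol=SASL_SSL
sasl.mechanisms=PLAIN
sasl.username=alice
sasl.password=secret
EOF

# Use it explicitly per invocation...
kcat -F prod.config -L
# ...or export it once and drop the flag entirely
export KCAT_CONFIG=$PWD/prod.config
kcat -L
```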

Consume messages

kcat -C -b kafka_broker -t topic_name

Produce messages

kcat -P -b kafka_broker -t topic_name
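In produce mode, kcat reads stdin, one message per line. A useful detail: the -K flag sets a delimiter that splits each line into a key and a value, and the same flag prints keys when consuming (broker and topic names below are placeholders):

```
# Produce three keyed messages, using ':' as the key/value delimiter
printf 'user1:login\nuser2:logout\nuser1:purchase\n' \
  | kcat -P -b kafka_broker -t topic_name -K:

# Consume them back, printing the key alongside each value
kcat -C -b kafka_broker -t topic_name -K: -o beginning -e
```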

Metadata listing

kcat -L -b kafka_broker

The true advantage of kcat lies in its seamless piping functionality, allowing for effortless message transmission between various processes and files. For instance, consider the following command: it streams messages from the beginning of one Kafka topic (topic_foo), filters them to select only those containing the string “foo”, and then produces the filtered messages to another Kafka topic (topic_bar).

kcat -C -t topic_foo -o beginning \
| grep "foo" \
| kcat -P -t topic_bar

Here’s another example that uses jq to filter messages within a specific time range into a file (this assumes each message is a JSON object carrying a timestamp field):

kcat -C -t topic -o beginning -e \
| jq 'select(.timestamp | (. >= "2024-02-16T11:23:00Z") and (. < "2024-02-16T11:24:00Z"))' > filtered_messages.json

Kcat is a standout tool, versatile and indispensable for every Kafka developer’s toolkit, facilitating swift verification, querying, and testing. While it’s already invaluable, the addition of consumer group offset manipulation capabilities in the future would undoubtedly elevate its utility to new heights.


Kafka Connect is the backbone of many Kafka deployments, providing a robust framework for seamlessly integrating streaming data between Kafka and external systems. Whether you’re syncing Kafka with databases, cloud services, or other data repositories, Kafka Connect offers a treasure trove of connectors tailored to your integration needs.

As a Kafka Engineer, mastering Kafka Connect is essential for orchestrating data pipelines efficiently. Typically, this involves interfacing with the Kafka Connect REST API, a process often marred by the complexity of curl commands or the overhead of tools like Postman.

Enter kcctl, a game-changer in Kafka Connect management. Inspired by the user-friendly semantics of kubectl for Kubernetes, kcctl streamlines the process, letting you switch seamlessly between Kafka Connect clusters without the mental overhead of hand-crafting HTTP requests. Gone are the days of inadvertently mixing up GET and POST, or fumbling through incorrect URL paths.
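To illustrate the difference, here is restarting a connector through the raw REST API versus kcctl (the host, port, and connector name are placeholders):

```
# Raw REST API: you must remember the method, path, and port
curl -X POST http://connect-host:8083/connectors/my-connector/restart

# kcctl: the intent is the command
kcctl restart connector my-connector
```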

Here are the available commands:
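A rough sketch of the day-to-day subcommands (connector and context names are placeholders; run kcctl help for the authoritative list in your installed version):

```
# Point kcctl at a Connect cluster, stored as a named context
kcctl config set-context local --cluster http://localhost:8083

# Inspect the cluster and its connectors
kcctl info
kcctl get connectors
kcctl describe connector my-connector

# Lifecycle operations
kcctl apply -f my-connector.json      # create or update from a config file
kcctl restart connector my-connector
kcctl pause connector my-connector
kcctl resume connector my-connector
kcctl delete connector my-connector
```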

What’s more, kcctl boasts a gentle learning curve, particularly for those with prior experience navigating Kubernetes with kubectl. With its intuitive interface, mastering kcctl feels like a walk in the park. Plus, it offers a comprehensive set of operations, from restart, pause, resume, to apply, empowering you to manage Kafka Connect deployments with confidence and ease.


While k9s isn’t tailored exclusively for Kafka, it’s an invaluable tool for Kubernetes cluster management that significantly enhances your Kafka deployment experience when paired with the Strimzi operator or a similar stack. So, what exactly is k9s?

k9s stands as a Text-based User Interface (TUI) designed specifically for navigating and managing Kubernetes clusters. It alleviates the necessity of manually inputting commands with kubectl. Now, this isn’t to discredit kubectl’s utility; however, k9s truly elevates cluster interaction by offering a smoother and faster experience.

With k9s, you gain the capability to effortlessly execute virtually any command you’d typically employ with kubectl. From fundamental tasks like context switching, listing pods/nodes, and restarting pods to more intricate operations such as port-forwarding, and accessing pod shells, k9s streamlines Kubernetes management tasks with unparalleled ease.
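A brief sketch of how this feels in practice (key bindings can vary slightly between k9s versions, so treat these as indicative):

```
# Launch k9s against the current kubeconfig context
k9s

# Inside k9s, command mode (":") jumps between resource views:
#   :pods      list pods
#   :nodes     list nodes
#   :ctx       switch context
#   :helm      browse Helm releases
#
# Common key bindings on a selected pod:
#   s          open a shell in the pod
#   l          view logs
#   shift-f    set up a port-forward
#   ctrl-d     delete the pod (with confirmation)
```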

Furthermore, if you’re utilizing Helm charts within your Kubernetes ecosystem, k9s seamlessly integrates with them, enabling you to inspect and manage Helm resources.

Below, you’ll find a visual representation of a Strimzi Custom Resource Definition (CRD) which represents a Kafka user and its permissions to the cluster.

This tool undoubtedly stands out as my top pick among all the tools discussed in this post. If you find yourself working with Kubernetes resources on a regular basis, I highly recommend giving it a try!


This last entry isn’t merely a tool; it’s a methodology, effectively a toolkit in which X as Code serves as the cornerstone of GitOps.

Implementing GitOps for Kafka management offers developers several advantages. First, it brings consistency by treating infrastructure configuration, including Kafka cluster settings, as code stored in version-controlled repositories. This ensures uniformity across environments and reduces the likelihood of configuration drift. It also improves reproducibility: because every change is versioned, deployments can be replicated easily and rolled back to a previous state with confidence.

Collaboration benefits too, as team members can collectively review, modify, and improve configurations via pull requests and code reviews. Moreover, GitOps enables automation within Kafka workflows, streamlining deployments by applying configuration changes through continuous integration and continuous deployment (CI/CD) pipelines.

Lastly, storing configuration as code provides a clear audit trail, strengthening governance and compliance by meticulously tracking changes and making policy adherence verifiable. Overall, adopting GitOps for Kafka development improves deployment efficiency and reliability while fostering collaboration, automation, and governance within development teams.

X as Code practices underpin GitOps by promoting consistency, reproducibility, collaboration, automation, and auditability in Kafka cluster management, ultimately improving deployment efficiency and reliability. For more insights, visit our blog post: Running Kafka with GitOps: Sharing Tips and Tricks.
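As a concrete sketch of what X as Code looks like for a Strimzi-managed cluster, a topic can be declared as a Kubernetes custom resource and managed through Git like any other manifest (the topic and cluster names here are placeholders):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: orders
  labels:
    # Tells the Strimzi Topic Operator which Kafka cluster owns this topic
    strimzi.io/cluster: my-cluster
spec:
  partitions: 12
  replicas: 3
  config:
    retention.ms: 604800000   # 7 days
    cleanup.policy: delete
```

Once this file lives in a Git repository, a CI/CD pipeline (or a tool like Argo CD or Flux) can apply it, giving you reviewed, versioned, and auditable topic management.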


From the intuitive command-line interactions facilitated by kcat and kcctl, to the comprehensive Kubernetes cluster management with k9s, and the robust governance and collaboration offered by gitOps practices, developers are equipped with a powerful suite of resources. These tools not only enhance the daily workflow of developers working with Kafka but also promise a more streamlined, understandable, and collaborative environment. Whether you’re directly managing Kafka clusters, integrating streaming data, or navigating Kubernetes resources, the tools discussed here aim to ease the complexities of Kafka development.

Saulo Valenzuela
Platform Engineer