Many Kafka journeys start from a technical standpoint. It might be that our old messaging platform just can’t keep up with our ever-increasing need for faster access to more information to support our services, which makes us look at Kafka’s horizontal scaling as a savior. Or it might be that we have seen how “events”, as an information abstraction, have very nice properties compared with transient web-service responses or old-school file transfers. Or perhaps we’ve discovered that tying our microservices together with nested, synchronous REST calls is turning our integration landscape into more and more of a spaghetti solution, where a single service failure can have huge consequences upstream in the call stack.
Common things you need to think about when making your Kafka Cluster ready for production
One Kafka PoC later, we think we’re home: a platform that enables asynchronous information sharing, scales very nicely, and is pretty easy to get started with from a developer’s perspective. Time to move to production – right?!
Well, have you thought about the ops perspective? How do we get alerted when a consumer is lagging behind? Will there be data loss when disaster hits and half of our brokers disappear? What about the security aspects? “When I said all of the organization’s data should be available everywhere, I didn’t mean that the email-notification system should be able to read the balance of everyone’s accounts!” What will the developers say the 48th time they manually have to create and configure a new topic in production as part of the deployment procedure? And from an architectural view, do we have any patterns and guidelines for how to leverage the platform for event-sourced applications, and how do we handle failure scenarios?
Of course we want to move to production as soon as possible to start benefiting from the platform. Investing too much time and money in a technology before gaining any production effect can raise eyebrows and goes somewhat against an agile approach. But looking at the aspects above, we clearly have some homework to do before we can call our cluster “production ready”. Luckily, at Irori we’ve worked hard to help you with this. Based on journeys we have made several times with different clients, each with their own unique yet similar list of requirements, we are happy to present our product – we call it the Irori Streaming Data Platform.
This is the Irori Streaming Data Platform – the backbone for making your Kafka cluster ready for production
Our experience and know-how have now been condensed into our Irori Streaming Data Platform (ISDP) offering. The story behind it is pretty simple and something that is ingrained in us as developers: don’t repeat yourself – or, put in other words, don’t reinvent the wheel. Making a Kafka platform production ready usually requires hundreds of hours on top of installing Kafka itself, spent figuring out disaster recovery, security practices, topic management, development processes and so on. Building and gaining that experience from scratch is both time-consuming and complex. At Irori, we can offer a production-grade platform as a shortcut to all of this, as shown in the picture below.
The Irori Streaming Data Platform (ISDP) consists of three parts:
- Features: a production-ready, Kubernetes-based Kafka installation built on Red Hat AMQ Streams or Strimzi, with monitoring, security, governance and developer friendliness already taken care of.
- Support: someone with experience to call when things go south. It’s worth a lot.
- ISDP Knowledge Base: a collection of know-how, patterns, architectural guidelines and gotchas for building asynchronous solutions on Kafka.
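As a taste of what the Features part looks like in practice, the manual topic-creation pain mentioned earlier can be removed with declarative topic management: with Strimzi (and Red Hat AMQ Streams), topics are described as Kubernetes custom resources and applied as part of the ordinary deployment pipeline instead of being clicked or typed into production. A minimal sketch, assuming a Strimzi Topic Operator is running and the Kafka cluster is named `my-cluster` (the topic name, cluster name and settings below are placeholders):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: orders
  labels:
    # Tells the Topic Operator which Kafka cluster this topic belongs to.
    strimzi.io/cluster: my-cluster
spec:
  partitions: 6
  replicas: 3
  config:
    retention.ms: 604800000   # keep records for 7 days
    min.insync.replicas: 2    # survive one broker failure without data loss
```

Because the topic definition lives in version control next to the application, the 48th deployment looks exactly like the first one.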
We’ll come back with more details on each and every one of these parts in future blog posts. Keep consuming!
Solution Architect and Kafka Expert