The launch of Apache Kafka 3.4.0 brought several new features to the table. The main focus of this release is the ability to monitor and adjust cluster settings, and CloudKarafka has spent some time digging into the details.
One plan to rule them all! CloudKarafka has released a new flexible plan that is far more adaptable than our previous hosting options. Running Apache Kafka has never been easier.
We have always been proud of the simplicity of CloudKarafka's pricing model: a fixed monthly price that includes every feature. However, it needs to change to keep up with modern cloud infrastructure.
Most big data projects have to deal with scalability at some point, whether scaling up or down. Apache Kafka sits at the heart of many "big data" pipelines and offers several ways to scale.
Having trouble with disk space? It may be caused by your retention and segment settings. It is vital to keep disk usage under control.
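As a rough illustration, the broker settings that most often drive disk usage look like this (the property names are standard Kafka broker configuration; the values are examples, not recommendations):

```properties
# server.properties -- broker-wide defaults (can be overridden per topic)

# Delete log segments older than 7 days (168 hours is the Kafka default)
log.retention.hours=168

# Additionally cap each partition's log at ~1 GB; -1 disables the size limit
log.retention.bytes=1073741824

# Roll to a new segment file once the active one reaches 1 GB.
# Only closed segments are eligible for deletion, so oversized
# segments can delay retention-based cleanup.
log.segment.bytes=1073741824
```

Note that retention is applied per partition and only to closed segments, which is why segment size and retention limits interact when you are trying to rein in disk usage.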
Is there a "perfect number" to choose as Kafka's ultimate replication factor? The answer is: it depends on your setup. However, to guarantee high availability, a few tips and tricks can help you along the way.
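A common high-availability baseline can be sketched with two broker settings (example values, to be tuned for your own cluster size and durability needs):

```properties
# server.properties -- a common high-availability baseline

# Keep three copies of every partition by default
default.replication.factor=3

# With producers using acks=all, a write is only acknowledged
# once at least 2 replicas have it
min.insync.replicas=2
```

With three replicas and `min.insync.replicas=2`, the cluster tolerates the loss of one broker without rejecting writes or losing acknowledged messages.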
When a consumer wants to read data from Kafka, it reads messages sequentially from a topic's partitions. A marker called the 'consumer offset' is recorded per partition to keep track of what has already been read.
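The mechanics can be shown with a toy, in-memory sketch of the offset idea; this is purely illustrative and is not the Kafka client API:

```python
# A toy sketch of consumer offsets: a partition is an append-only list,
# and the "offset" is simply the index of the next message to read,
# committed after processing.

partition = ["m0", "m1", "m2", "m3", "m4"]  # an append-only log
committed_offset = 0                         # nothing consumed yet

def poll(log, offset, max_records=2):
    """Return up to max_records messages starting at offset."""
    return log[offset:offset + max_records]

# First poll: read two messages, then commit the new offset.
batch = poll(partition, committed_offset)
committed_offset += len(batch)               # commit: remember progress

# A restarted consumer resumes from the committed offset,
# so cleanly committed messages are not re-read.
resumed = poll(partition, committed_offset)
print(batch, resumed, committed_offset)
```

The key property this sketch mirrors is that progress is a single integer per partition, which is what makes resuming after a crash cheap in Kafka.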
A Kafka topic is essentially a named category to which messages are sent, and it is the only detail producers and consumers have to agree upon. Topics are not deleted by default; however, this blog post describes how to delete them.
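As a sketch of the procedure, assuming the stock Kafka CLI tools and a broker reachable at `localhost:9092` (the topic name and address are placeholders), deletion looks like this:

```shell
# Topic deletion must be allowed on the broker:
# delete.topic.enable=true in server.properties (the default in modern Kafka).

# Delete the topic:
kafka-topics.sh --bootstrap-server localhost:9092 \
  --delete --topic my-topic

# Verify it is gone:
kafka-topics.sh --bootstrap-server localhost:9092 --list
```

Note these commands require a running broker; older Kafka versions used a `--zookeeper` flag instead of `--bootstrap-server`.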
A high-severity security vulnerability has been identified in Apache Kafka versions 2.8.0 and later; all users are advised to update to the latest patch release.
Safe, solid systems are often built and configured to minimize the risk of losing data. However, sometimes you might want to remove data from your system. This article teaches you how that is done in Apache Kafka by showing how to purge a Kafka topic.
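One common purge technique can be sketched as follows, assuming the stock CLI tools and a broker at `localhost:9092` (topic name and address are placeholders): temporarily shrink the topic's retention so existing data becomes eligible for deletion, then restore it.

```shell
# 1. Set a very short retention (1 second) on the topic:
kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config retention.ms=1000

# 2. Wait for the broker to clean up old segments (how often it checks
#    is governed by log.retention.check.interval.ms), then remove the
#    override so the topic falls back to its normal retention:
kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name my-topic \
  --delete-config retention.ms
```

This empties the topic without deleting it, so producers and consumers can keep using it without recreating the topic or its configuration.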