One plan to rule them all! CloudKarafka has released a new plan that is more flexible than any of our previous hosting options. Running Apache Kafka has never been easier.
We have been very proud of the simplicity of the pricing model on CloudKarafka, with a fixed monthly price that includes every feature. However, that model needs to change to keep up with modern cloud infrastructure.
Most big data projects have to deal with scalability at some point, whether scaling up or down. Apache Kafka sits at the heart of many “big data” pipelines and offers several ways to scale.
Having trouble with disk space? It is often due to retention and segment settings. It is vital to keep disk usage under control.
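To give a concrete idea (not taken from the post itself), the following Java AdminClient sketch tightens retention and segment settings on a topic; the broker address localhost:9092, the topic name my-topic, and the chosen values are placeholders to adapt to your own cluster.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class RetentionConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");

            // Keep data for 7 days and roll segments at 1 GiB so old segments
            // become eligible for deletion sooner; adjust to your own workload.
            Collection<AlterConfigOp> ops = List.of(
                new AlterConfigOp(new ConfigEntry("retention.ms", "604800000"), AlterConfigOp.OpType.SET),
                new AlterConfigOp(new ConfigEntry("segment.bytes", "1073741824"), AlterConfigOp.OpType.SET)
            );

            admin.incrementalAlterConfigs(Map.of(topic, ops)).all().get();
        }
    }
}
```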
Is there a "perfect number" to choose as Kafka's replication factor? The answer is that it depends on your setup. However, a few tips and tricks can help you guarantee high availability.
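As one illustration of such a tip, a commonly used high-availability setup is a replication factor of 3 combined with min.insync.replicas=2. The Java sketch below creates a topic configured that way; the broker address, topic name, and partition count are assumed placeholders rather than recommendations from the post.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.List;
import java.util.Map;
import java.util.Properties;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // Three replicas allow one broker to fail without losing availability;
            // min.insync.replicas=2 (with producers using acks=all) keeps writes
            // durable while that broker is down.
            NewTopic topic = new NewTopic("orders", 6, (short) 3)
                    .configs(Map.of("min.insync.replicas", "2"));

            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```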
When a consumer reads data from Kafka, it reads the messages in each partition sequentially. A marker called the 'consumer offset' is recorded to keep track of what has already been read.
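A minimal Java consumer sketch, with placeholder broker address, group id, and topic name, shows the offset in action: auto-commit is disabled so the consumer offset is committed explicitly after each batch.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class OffsetTrackingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("group.id", "example-group");           // placeholder consumer group
        props.put("enable.auto.commit", "false");         // commit offsets ourselves
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // The offset identifies the record's position within its partition.
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
                // Persist the consumer offset so a restart resumes where we left off.
                consumer.commitSync();
            }
        }
    }
}
```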
A Kafka topic is the named feed to which messages are sent, and it is essentially the only detail producers and consumers have to agree upon. Topics are not deleted by default, but this blog post describes how to delete them.
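For readers who prefer the programmatic route, a topic can also be deleted with the Java AdminClient, roughly as sketched below; this assumes delete.topic.enable=true on the brokers and uses a placeholder broker address and topic name.

```java
import org.apache.kafka.clients.admin.AdminClient;

import java.util.List;
import java.util.Properties;

public class DeleteTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // Requires delete.topic.enable=true on the brokers
            // (the default in recent Kafka versions).
            admin.deleteTopics(List.of("my-topic")).all().get();
        }
    }
}
```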
A high-severity security vulnerability has been identified in Apache Kafka >= 2.8.0; all users are advised to update to the latest patch release.
Safe, solid systems are often built and configured to minimize the risk of losing data. Sometimes, however, you might want to remove data from your system. This article will teach you how it's done in Apache Kafka by showing how to purge a Kafka topic.
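One way to purge a topic without deleting it is the AdminClient deleteRecords call, which removes all records before a given offset in each partition. The Java sketch below illustrates the idea with placeholder names; it assumes a recent clients library (3.1+ for allTopicNames) and a topic with the default delete cleanup policy.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartition;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class PurgeTopic {
    public static void main(String[] args) throws Exception {
        String topic = "my-topic"; // placeholder topic name
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // Find every partition of the topic and its current end offset.
            TopicDescription description =
                    admin.describeTopics(List.of(topic)).allTopicNames().get().get(topic);

            Map<TopicPartition, OffsetSpec> latest = new HashMap<>();
            description.partitions().forEach(p ->
                    latest.put(new TopicPartition(topic, p.partition()), OffsetSpec.latest()));

            Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> endOffsets =
                    admin.listOffsets(latest).all().get();

            // Delete everything before the current end offset in each partition,
            // emptying the topic without removing the topic itself.
            Map<TopicPartition, RecordsToDelete> toDelete = new HashMap<>();
            endOffsets.forEach((tp, info) ->
                    toDelete.put(tp, RecordsToDelete.beforeOffset(info.offset())));

            admin.deleteRecords(toDelete).all().get();
        }
    }
}
```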
In most Apache Kafka use cases, message compression is used to quickly send large amounts of data. However, you might still want to manually read the content of a compressed message for debugging or other purposes. This article will teach you how.
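As a rough sketch of that debugging step: a plain Java consumer can read compressed messages directly, because the client library decompresses gzip, snappy, lz4, and zstd record batches transparently. The broker address, group id, and topic name below are placeholders.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class ReadCompressedMessages {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("group.id", "debug-reader");            // throwaway group for inspection
        props.put("auto.offset.reset", "earliest");       // start from the beginning of the log
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));
            // The client decompresses compressed record batches before returning them,
            // so the values printed here are the original, uncompressed payloads.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}
```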