Having trouble with disk space? The cause may lie in Kafka's retention and segment settings. Keeping disk usage under control is vital.
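For context, these are the broker-side knobs involved; a sketch of the relevant `server.properties` entries (the values are illustrative, not recommendations):

```
# Delete log segments older than 7 days (168 hours is the default)
log.retention.hours=168
# ...or cap retained data per partition; -1 disables the size limit
log.retention.bytes=1073741824
# Roll a new segment once the active one reaches 1 GiB
log.segment.bytes=1073741824
# How often the broker checks for segments eligible for deletion
log.retention.check.interval.ms=300000
```

Note that retention is applied to closed segments only, so segment size and retention settings interact when estimating actual disk usage.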
Is there a "perfect number" to choose as Kafka's ultimate replication factor? The answer is that it depends on your setup. However, a few tips and tricks can help you guarantee high availability along the way.
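As a sketch of the kind of settings involved (values are a common starting point for high availability, not a universal answer): a replication factor of 3 combined with a minimum in-sync replica count of 2 tolerates one broker failure without losing acknowledged writes.

```
# Broker-side defaults applied to automatically created topics
default.replication.factor=3
# A write is acknowledged only once this many replicas are in sync
# (takes effect for producers that use acks=all)
min.insync.replicas=2
```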
When a consumer reads data from Kafka, it reads messages sequentially within each partition of a topic. A marker called the 'consumer offset' is recorded to keep track of what has already been read.
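The offset mechanism can be illustrated with a toy model in plain Python (no Kafka client involved; the class and names are invented for illustration): the partition is an ordered log, and the committed offset marks the next record to read.

```python
class ToyPartitionConsumer:
    """Toy model of per-partition offset tracking (not a real Kafka client)."""

    def __init__(self, log):
        self.log = log          # the partition's ordered, append-only record log
        self.committed = 0      # offset of the next record to read

    def poll(self, max_records=2):
        """Return up to max_records starting at the committed offset."""
        batch = self.log[self.committed:self.committed + max_records]
        # Committing the advanced offset records how far we have read,
        # so a restarted consumer resumes here instead of re-reading everything.
        self.committed += len(batch)
        return batch


consumer = ToyPartitionConsumer(["m0", "m1", "m2"])
first = consumer.poll()    # ["m0", "m1"]
second = consumer.poll()   # ["m2"] -- resumes where the last poll left off
```

A real Kafka consumer stores this committed offset in Kafka itself (the `__consumer_offsets` topic), but the bookkeeping idea is the same.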
A Kafka topic is essentially a named channel to which messages are sent, and it is practically the only detail producers and consumers have to agree upon. Topics are not deleted by default; this blog post describes how to delete them.
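As a sketch of what deletion looks like with the stock CLI (the topic name and broker address are placeholders; the broker must have `delete.topic.enable=true`, which is the default in recent versions):

```
# Delete a topic -- irreversible once the brokers process the request
kafka-topics.sh --bootstrap-server localhost:9092 \
  --delete --topic my-topic
```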
A high-severity security vulnerability has been identified in Apache Kafka versions 2.8.0 and later; all users are advised to update to the latest patch release.
Safe, solid systems are often built and configured to minimize the risk of losing data. However, sometimes you need to remove data from your system. This article will teach you how it's done in Apache Kafka by showing how to purge a Kafka topic.
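One common purge technique (a sketch; the topic name and broker address are placeholders) is to temporarily lower the topic's retention so the brokers delete the existing segments, then restore the original setting:

```
# Retain data for only 1 second so existing segments become eligible for deletion
kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config retention.ms=1000

# ...wait for the cleanup to run, then remove the override
# so the topic falls back to its previous retention
kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name my-topic \
  --delete-config retention.ms
```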
In most Apache Kafka use cases, message compression is used to send large amounts of data efficiently. However, you might still want to manually read the content of a compressed message for debugging or other purposes. This article will teach you how.
Learn what a Kafka topic is and how it works. Gain insights about partitions, a concept closely related to topics. Together with other notions, this article shows how easy it is to query your Kafka data using the official tools available.
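For instance, one of the official tools is the console consumer, which reads a topic straight from the command line (a sketch; the topic name and broker address are placeholders):

```
# Read every message in the topic, starting from the earliest available offset
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic my-topic --from-beginning
```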
One of the most powerful aspects of Kafka is its ability to guarantee data persistence. In order to leverage this powerful feature, it is important to have a deep understanding of how it works and to avoid mistakes that might put your data at risk.
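On the producer side, those persistence guarantees hinge on a few settings; an illustrative producer configuration fragment (a common durable baseline, not a universal recommendation):

```
# Wait for all in-sync replicas to acknowledge each write
acks=all
# Retry transient failures without introducing duplicate records
enable.idempotence=true
# Upper bound on how long a send may take before it is reported as failed
delivery.timeout.ms=120000
```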
Learn how to get the best out of your Kafka investment and be prepared to scale smartly!