Getting started

CloudKarafka provides managed Apache Kafka servers in the cloud. Kafka is a distributed publish-subscribe messaging system designed to be fast, scalable, and durable. It's an open-source message broker written in Scala that can support a large number of consumers and retain large amounts of data with very little overhead.

CloudKarafka automates every part of the setup, running and scaling of Apache Kafka. We have support staff available 24/7 to help in any event or with any questions. CloudKarafka lets developers focus on the core parts of their applications, instead of managing and maintaining servers.

To get started with CloudKarafka, you need to sign up for an account and create a CloudKarafka instance.

Create a CloudKarafka instance

Create an account, log in to the control panel and press Create+ to create a new instance. Choose a name for the instance and the datacenter that will host it. To get started you need to sign up for a customer plan; which plan to choose depends on your needs. We offer five different plans for different needs, and you can try CloudKarafka for free with the Developer Duck plan.

Create CloudKarafka instance

All plans are billed by the second, so you can try out even the largest instance types for mere pennies. As soon as you delete the instance you will no longer be charged for it. Billing occurs at the end of each month, and you are only charged for the time an instance has been available to you.

Apache Kafka control panel

The instance is provisioned immediately after creation, and you can view the instance details on its details page.

Getting started with Apache Kafka and CloudKarafka

When you have signed up and created your instance, it is time to get started with CloudKarafka.

Shared instances - Plan Developer Duck

Our shared instances run on a multi-tenant Kafka server, which you share with other users. To get started with your shared instance you need a URL, a username and a password, all of which can be found on the details page for your instance. Once you have that information, have a look at any of the demo projects we have created to get you up and running quickly and simply. The examples can be found in the menu at the top of this page; they are working projects that you can download, configure with your URL, username and password, and run.
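To give a feel for what the demo projects do, here is a minimal producer sketch using the confluent-kafka Python client. The hostnames, SCRAM mechanism and topic name below are placeholders and assumptions; take the real values from the details page for your instance.

from confluent_kafka import Producer

# Connection settings - placeholders, copy the real values from your instance details page.
conf = {
    'bootstrap.servers': 'hostname1:9094,hostname2:9094,hostname3:9094',
    'security.protocol': 'SASL_SSL',
    'sasl.mechanisms': 'SCRAM-SHA-256',   # assumption; check which mechanism your instance uses
    'sasl.username': 'your-username',
    'sasl.password': 'your-password',
}

producer = Producer(conf)

def on_delivery(err, msg):
    # Called for each message to report delivery success or failure.
    if err is not None:
        print('Delivery failed: {}'.format(err))
    else:
        print('Delivered to {} [{}] at offset {}'.format(msg.topic(), msg.partition(), msg.offset()))

producer.produce('your-topic', value=b'Hello CloudKarafka!', callback=on_delivery)
producer.flush()   # block until all outstanding messages have been delivered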

Dedicated instances

A secure connection has to be set up between your clients and your CloudKarafka server. If you host your servers in the same cloud as your CloudKarafka server you can peer the VPCs; otherwise you can use SASL/SCRAM, which authenticates you with a username and password and gives you an encrypted connection.
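As a rough illustration of the SASL/SCRAM option, a consumer configuration boils down to something like the sketch below (Python, confluent-kafka client). The hostnames, mechanism, group id and topic are assumptions; use the values shown for your instance.

from confluent_kafka import Consumer

conf = {
    'bootstrap.servers': 'hostname1:9094,hostname2:9094',  # placeholder
    'security.protocol': 'SASL_SSL',
    'sasl.mechanisms': 'SCRAM-SHA-256',   # assumption; your instance may use SCRAM-SHA-512
    'sasl.username': 'your-username',
    'sasl.password': 'your-password',
    'group.id': 'example-group',
    'auto.offset.reset': 'earliest',
}

consumer = Consumer(conf)
consumer.subscribe(['your-topic'])

try:
    while True:
        msg = consumer.poll(timeout=1.0)   # wait up to one second for a message
        if msg is None:
            continue
        if msg.error():
            print('Consumer error: {}'.format(msg.error()))
            continue
        print('Received: {}'.format(msg.value()))
finally:
    consumer.close()   # commit offsets and leave the consumer group cleanly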

VPC Peering

Open the details page for your instance and go to the VPC Peering tab. CloudKarafka will request a VPC peering connection to you as soon as you have saved your VPC peering details. After that, you need to accept the VPC peering connection request from us; the request can be accepted from the Amazon VPC console at https://console.aws.amazon.com/vpc/. Please note that the subnet you enter must be the same as your VPC subnet.

Kafka VPC Peering

CloudKarafka Kafka MGMT

Kafka MGMT is a user-friendly management interface for Apache Kafka. The interface lets you monitor and manage your Apache Kafka server from a web browser, in a much simpler way than previous versions.

Once your instance is provisioned, you can view the number of consumers, create and delete topics, view broker information, create and delete users, and see topic size and total topic message count, among other things.
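The same kind of administration can also be done programmatically through Kafka's Admin API. The sketch below uses the confluent-kafka Python client; the connection settings, topic name, partition count and replication factor are assumptions for illustration only.

from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({
    'bootstrap.servers': 'hostname1:9094',   # placeholder
    'security.protocol': 'SASL_SSL',
    'sasl.mechanisms': 'SCRAM-SHA-256',
    'sasl.username': 'your-username',
    'sasl.password': 'your-password',
})

# Ask the cluster to create a topic with 3 partitions and replication factor 3 (assumed values).
futures = admin.create_topics([NewTopic('your-topic', num_partitions=3, replication_factor=3)])

for topic, future in futures.items():
    try:
        future.result()   # raises an exception if topic creation failed
        print('Topic {} created'.format(topic))
    except Exception as e:
        print('Failed to create topic {}: {}'.format(topic, e))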

Kafka Concepts

There are some concepts that are good to be familiar with before you get started with your first CloudKarafka instance.

Kafka cluster
Each Kafka cluster consists of one or more servers called Brokers.
Message Broker
The Kafka server, a queue manager that can handle a large number of reads and writes per second from many clients. Message data is replicated and persisted on the brokers.
Messages
Information that is sent from a producer to a consumer through Kafka. Messages are byte arrays that can store data in any format, with strings and JSON being the most common.
Topics
Message queues in Kafka are called topics; a topic is a category or feed name to which messages are published. Producers write data to topics and consumers read from topics.
Producers and Consumers
Producers publish data to the topics of their choice, and consumers consume messages from topics. Producers and consumers can simultaneously write to and read from multiple topics.
Distributed
Kafka is a distributed system; topics are partitioned and replicated across multiple nodes. From Wikipedia: "A distributed system is a software system in which components located on networked computers communicate and coordinate their actions by passing messages."
Partitioned
A topic consists of one or more partitions on different brokers in the cluster. For each topic, the Kafka cluster maintains a partitioned log. Each message in a partition is assigned a sequential id number called the offset, which identifies the message within the partition. The producer is responsible for choosing which partition within the topic each message is assigned to, and the consumer needs to keep track of which messages have been consumed (see the sketch after this list).
Replicated
Each partition is replicated across a configurable number of servers for fault tolerance.
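To make partitions and offsets concrete, here is a small sketch (Python, confluent-kafka client, placeholder connection settings) that pins a consumer to one partition of a topic at a chosen offset and prints where each message lives.

from confluent_kafka import Consumer, TopicPartition

consumer = Consumer({
    'bootstrap.servers': 'hostname1:9094',   # placeholder
    'group.id': 'offset-demo',
    'enable.auto.commit': False,             # this sketch tracks offsets manually
})

# Assign partition 0 of the topic, starting at offset 42 (an arbitrary example offset).
consumer.assign([TopicPartition('your-topic', 0, 42)])

msg = consumer.poll(timeout=5.0)
if msg is not None and not msg.error():
    # Every message carries the partition it was written to and its offset within that partition.
    print('partition={} offset={} value={}'.format(msg.partition(), msg.offset(), msg.value()))

consumer.close()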

Don't hesitate to contact us at support@cloudkarafka.com if you have any questions!