Documentation Getting started

CloudKarafka provides managed Apache Kafka servers in the cloud. Kafka is a distributed publish-subscribe messaging system designed to be fast, scalable, and durable. It's an open-source message broker written in Scala that can support a large number of consumers and retain large amounts of data with very little overhead.

CloudKarafka automates every part of the setup, running, and scaling of Apache Kafka. We have support staff available 24/7 to help in any event or with any questions. CloudKarafka lets developers focus on the core part of their applications instead of managing and maintaining servers.

To get started with CloudKarafka, you need to sign up for an account and create a CloudKarafka instance.

Create a CloudKarafka instance

Create an account, log in to the control panel, and press Create+ to create a new instance. Choose a name for the instance and the datacenter to host it. To get started you need to sign up for a customer plan. Which plan to use depends on your needs; we offer five different plans. You can try CloudKarafka for free with the Developer Duck plan.

Create CloudKarafka instance

All plans are billed by the second, so you can try out even the largest instance types for mere pennies. As soon as you delete the instance you won't be charged for it anymore. Billing occurs at the end of each month, and you are only charged for the time an instance has been available to you.

Apache Kafka control panel

The instance is provisioned immediately after creation, and you can view the instance details on the details page.

Getting started with Apache Kafka and CloudKarafka

When you have signed up and created your instance, it is time to get started with CloudKarafka. Before you start coding you need to either download certificates or set up VPC peering to your AWS VPC.

Shared instances - Plan Developer Duck

Our shared instances run on a multi-tenant Kafka server: you share the server with other users. To get started with your shared instance, download the Certificates (connection environment variables) for the instance from the instance overview page. You will find the Certs download button as in the picture above. Press the button and save the given .env file into your project. If you signed up via a provider, you can get all the connection variables you need from the provider.

The language-specific documentation pages show examples of how to handle the environment variables (certificates) in different languages. Some languages require a path to the cert file; in those cases you first need to write the .env certificate information to a file. Other languages can read and handle the certificates straight from the environment variables. See the language-specific documentation for more information.
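Loading the downloaded .env file into your process environment can be sketched in Python as below. The `CLOUDKARAFKA_*` variable names are illustrative assumptions; check the .env file you actually downloaded for the exact names your plan uses.

```python
import os

def load_env_file(path):
    """Parse a KEY=VALUE .env file (as downloaded from the control
    panel) into os.environ. Blank lines and '#' comments are skipped."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))

def kafka_config():
    """Build a librdkafka-style client config dict from the environment.
    Variable names below are assumptions for illustration."""
    return {
        "bootstrap.servers": os.environ["CLOUDKARAFKA_BROKERS"],
        "security.protocol": "SASL_SSL",
        "sasl.mechanism": "SCRAM-SHA-256",
        "sasl.username": os.environ["CLOUDKARAFKA_USERNAME"],
        "sasl.password": os.environ["CLOUDKARAFKA_PASSWORD"],
    }
```

The resulting dict can be passed to a client such as confluent-kafka's `Producer`; see the language-specific pages for full connection examples.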

Add and delete topics

When you create your instance, you get a topic by default. You are allowed to add up to 5 topics to your instance. A new topic can be added from the details page for your instance.
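Before requesting a new topic (in the control panel or through an admin client), it can be useful to sanity-check the request against the plan limit described above. This helper is purely illustrative, not part of any CloudKarafka API; adjust `max_topics` to your plan.

```python
def validate_new_topic(name, existing_topics, max_topics=5, partitions=1):
    """Sanity-check a topic request before creating it.
    The default max_topics=5 mirrors the per-instance limit above."""
    if len(existing_topics) >= max_topics:
        raise ValueError(f"plan allows at most {max_topics} topics")
    if name in existing_topics:
        raise ValueError(f"topic {name!r} already exists")
    if partitions < 1:
        raise ValueError("a topic needs at least one partition")
    return {"name": name, "num_partitions": partitions}
```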

Apache Kafka Topic

Dedicated instances

A secure connection has to be set up between your clients and your CloudKarafka server. You either need to create certificates or peer your instance with your AWS VPC.

VPC Peering

Go to the details page for your instance and open the VPC Peering tab. CloudKarafka will request to set up a VPC connection to you as soon as you have saved your VPC peering details. After that you need to accept the VPC peering connection request from us. The request can be accepted from the Amazon VPC console. Please note that the subnet given must be the same as your VPC subnet.

Kafka VPC Peering

Cert Management

We allow connections via certificate-based authentication. You can find information about how to set up the certificates on the details page of your instance, under the Cert Management tab. When you have followed all steps in the Cert Management guide you will have a private key, the CA cert, and a signed client cert, which allow you to connect. Language-specific code can be found in the documentation menu.
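With those three files in hand, a client config for certificate-based authentication might look like the sketch below. It uses librdkafka-style option names (as consumed by confluent-kafka); the file paths and broker address are placeholders for whatever you saved from the Cert Management guide.

```python
def ssl_config(brokers, ca_path, cert_path, key_path):
    """Client config for certificate-based authentication.
    ca_path:   the CA cert from the Cert Management guide
    cert_path: your signed client cert
    key_path:  your private key"""
    return {
        "bootstrap.servers": brokers,
        "security.protocol": "SSL",
        "ssl.ca.location": ca_path,
        "ssl.certificate.location": cert_path,
        "ssl.key.location": key_path,
    }
```

Pass the dict to your Kafka client's constructor; consult the language-specific documentation for the equivalent option names in other client libraries.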

Log Available on dedicated instances.

Apache Kafka log stream shows the live log from Kafka.

Metrics Available on dedicated instances.

Server Metrics help you measure the performance of your server. CloudKarafka monitors CPU usage, memory usage, and disk usage.

Kafka Manager Enabled for dedicated instances by default.

Kafka Manager is a tool for managing Apache Kafka.

Kafka Manager

CloudKarafka Integrated monitoring services Available on dedicated instances.

You can ship Apache Kafka's server logs to an external log provider. CloudKarafka integrates with the following log services: Papertrail, Loggly, Logentries, and Splunk.

integrations in control panel

Kafka Concepts

There are some concepts that are good to be familiar with before you get started with your first CloudKarafka instance.

Kafka cluster
Each Kafka cluster consists of one or more servers called Brokers.
Message Broker
The Kafka server, a queue manager that can handle a large amount of reads and writes per second from a lot of clients. Message data is replicated and persisted on the Brokers.
Message
Information that is sent from a producer to a consumer through Kafka. Messages are byte arrays that can store any object format, with strings and JSON being the most common.
Topic
Message queues in Kafka are called topics: a topic is a category or feed name to which messages are published. Producers write data to topics and consumers read from topics.
Producers and Consumers
Producers publish data to the topics of their choice, and consumers consume messages from topics. Producers and consumers can simultaneously write to and read from multiple topics.
Distributed system
Kafka is a distributed system: topics are partitioned and replicated across multiple nodes. From Wikipedia: "A distributed system is a software system in which components located on networked computers communicate and coordinate their actions by passing messages."
Partition
A topic consists of one or more partitions on different brokers in the cluster. For each topic, the Kafka cluster maintains a partitioned log. The messages in a partition are each assigned a sequential id number called the offset, which uniquely identifies each message within the partition. The producer is responsible for choosing which partition to assign each message to, and the consumer needs to track which messages it has consumed.
Each partition is replicated across a configurable number of servers for fault tolerance.
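The partition and offset model above can be sketched with a toy in-memory "topic" that appends messages to partitions chosen by key and hands out sequential offsets. This is purely illustrative bookkeeping, not a Kafka client: real brokers assign offsets server-side, and real producers use a murmur2 hash of the key by default.

```python
class ToyTopic:
    """In-memory illustration of Kafka's partitioned log."""
    def __init__(self, num_partitions):
        self.partitions = [[] for _ in range(num_partitions)]

    def produce(self, key, value):
        # Messages with the same key always land in the same partition.
        # (A byte sum stands in for the producer's real murmur2 hash.)
        p = sum(key.encode()) % len(self.partitions)
        log = self.partitions[p]
        offset = len(log)  # sequential id within the partition
        log.append((offset, value))
        return p, offset

topic = ToyTopic(num_partitions=3)
p1, o1 = topic.produce("user-1", "login")
p2, o2 = topic.produce("user-1", "logout")
assert p1 == p2      # same key -> same partition
assert o2 == o1 + 1  # offsets grow sequentially per partition
```

Note that offsets are only ordered within a partition; messages in different partitions have independent offset sequences, which is why consumers track their position per partition.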

Don't hesitate to contact us if you have any questions!