Please note that if you use our service via a marketplace, the answers to some of these FAQs might not be applicable.
Kafka is a publish-subscribe based messaging system that exchanges data between processes, applications, and servers.
Apache Kafka organizes messages into topics (think of a topic as a category). Applications connect to the system and publish messages to a topic. A message can include any kind of information; for example, information about an event that has happened on your website. Alternatively, it could just be a simple text message that is supposed to trigger an event. Another application connects to the system and processes messages from a topic.
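The flow above can be sketched conceptually. The following is a plain-Python illustration of the topic-and-offset model, not the Kafka client API; the topic name and events are made up:

```python
# Conceptual sketch: a topic modeled as an append-only log, with each
# consumer tracking its own read offset (this is how Kafka consumers
# keep their place independently of one another).
class Topic:
    def __init__(self, name):
        self.name = name
        self.log = []            # messages in arrival order

    def publish(self, message):
        self.log.append(message)

    def consume(self, offset):
        """Return messages at and after `offset`, plus the next offset."""
        return self.log[offset:], len(self.log)

clicks = Topic("website-clicks")                         # a "category"
clicks.publish({"event": "page_view", "path": "/pricing"})
clicks.publish({"event": "signup"})

# A consumer reads from the start and remembers where it left off.
messages, next_offset = clicks.consume(0)
```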
A complete beginner's guide to Apache Kafka can be found here.
CloudKarafka automates every part of the setup, running, and scaling of Apache Kafka. CloudKarafka offers hosted publish-subscribe messaging systems in the cloud. Thanks to the ease of using CloudKarafka, fully managed Kafka clusters can be up and running within two minutes, including a managed internal Zookeeper cluster on all nodes.
Zookeeper is a top-level Apache project that acts as a centralized service, keeping track of the status of the Kafka cluster nodes and of Kafka topics, partitions, etc. All of our plans include a managed Zookeeper cluster. More information about Zookeeper can be found here.
You should use the following ports:
You should list all brokers’ URLs in your client configuration, not only the cluster hostname. For example, if your cluster name is test-speedcar and you have three nodes in the cluster, your broker list should look like this:
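As an illustration, a bootstrap configuration listing all three brokers might look like the following. The domain and port are placeholders; check the details page of your instance for the exact hostnames and port:

```ini
bootstrap.servers=test-speedcar-01.example.com:9094,test-speedcar-02.example.com:9094,test-speedcar-03.example.com:9094
```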
The application cannot verify the certificate from the Kafka broker, which could mean three things:
First, verify that you have entered the correct URLs for all the brokers.
The CA certificates installed on your server live in different directories depending on which OS or Linux distribution you are using. For example, for Debian and Ubuntu, the certificates are installed in the directory
Make sure that your application uses the correct path to the installed CA certificates.
If you are using a Kafka library that uses librdkafka, it may be compiled with the wrong configuration and looking for the CA certificates in the wrong directory. This can be fixed by setting the configuration option
which makes librdkafka look for CA certificates in that directory.
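For librdkafka-based clients, the property in question is typically ssl.ca.location, which accepts either a CA file or a directory. The Debian/Ubuntu path shown here is a common default, but verify the path on your system:

```ini
ssl.ca.location=/etc/ssl/certs
```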
If your application still cannot connect due to the above error, it's probable that the CA root certificate is not installed on your server. To address this, consider installing a package like
on your server (it should exist for most systems). Alternatively, you can get the
Mozilla CA certificate bundle
directly. For both options, you can use the
configuration option to specify the path to the certificates.
As a last option, if you are unable to install the CA certificate bundle, you can point the
configuration option to our certificate chain directly:
Please note that the certificate chain that CloudKarafka uses will change over time and you will need to update this file in the future.
Put the certificate file in the same folder as the application source code, and configure your Kafka client with this option:
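As a sketch in Python, assuming a librdkafka-based client such as confluent-kafka, the configuration might look like the following. The hostname, port, security protocol, and file name are placeholders; adjust them to your instance:

```python
# Client configuration pointing ssl.ca.location at a CA bundle stored
# alongside the application source code. Only the dict is built here;
# pass it to your client's constructor, e.g. confluent_kafka.Producer(conf).
conf = {
    "bootstrap.servers": "broker-01.example.com:9094",  # placeholder hostname
    "security.protocol": "SASL_SSL",                    # assumed; match your plan
    "ssl.ca.location": "cloudkarafka.ca",               # certificate file in the app folder
}
```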
If you still see the error, despite everything above, your OpenSSL version might be too old. Please upgrade OpenSSL to a newer version and try again.
Yes, in the CloudKarafka Control Panel you can edit the size of your cluster and add additional nodes. Currently, nodes cannot be removed.
Yes, you can upgrade to a larger plan. In the CloudKarafka Control Panel, you can edit the plan and select a bigger one. Upgrades do not cause downtime for multi-node instances. You cannot downgrade, and you cannot migrate from a shared plan to a dedicated plan.
We guarantee at least 99.95% availability on all dedicated plans. CloudKarafka will refund 50% of the cost of the plan for longer outages. Requests for refunds must be submitted in writing, within 30 days from the outage to which they refer, via email to firstname.lastname@example.org
We guarantee a maximum 30-minute initial response time on critical issues correctly submitted to our support system.
Complete SLA information can be found here.
Yes, for plans with VPC peering you can connect to the Zookeeper CLI using the local IP addresses. Connect from a VPC peered with the CloudKarafka VPC, then connect using zkCli.sh -server PRIVATEIP:2181, where PRIVATEIP is the IP of the Zookeeper you want to connect to.
SASL/SCRAM or certificate-based authentication methods are available. Examples are found in the different language sections.
Alarms: VictorOps, OpsGenie
Logging: Papertrail, Loggly, LogEntries, Splunk, Stackdriver, CloudWatch
Metrics: CloudWatch, Librato, DataDog
Yes, for dedicated plans.
Kafka Connect integrates Kafka with other systems. You can add a data source and consume data from that source into Kafka; alternatively, all data in a topic can be sent to another system for processing or storage. Kafka Connect opens up many possibilities, and it's easy to get started since a lot of connectors are already available.
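As an illustration, a minimal source-connector configuration might look like the following. The FileStreamSource connector ships with Apache Kafka; the connector name, file path, and topic here are made up:

```json
{
  "name": "demo-file-source",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "tasks.max": "1",
    "file": "/tmp/input.txt",
    "topic": "demo-topic"
  }
}
```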
More information can be found here.
Yes, we can install your custom Kafka connector for you, just send us an email. The connector will only be available for internal use on your cluster.
Yes, for dedicated plans.
The Kafka Schema Registry integration acts as a standalone component that interacts with both producers and consumers and provides a serving layer for your metadata.
Schema Registry reduces the number of possible conflicts between producer and consumer messages, such as malformed data or sudden format changes, without affecting Kafka itself. Because Schema Registry is a standalone component, the Kafka broker remains a powerful player in the field of message streaming.
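For example, a producer and consumer might agree on a small Avro schema like the following, so that format changes are caught at the registry rather than breaking consumers. This is a hypothetical schema, shown only to illustrate what gets registered:

```json
{
  "type": "record",
  "name": "PageView",
  "fields": [
    {"name": "path", "type": "string"},
    {"name": "timestamp", "type": "long"}
  ]
}
```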
Schema Registry runs on port 8081 by default. This port is only open within the VPC, so to access the Schema Registry, you need to run your service in Amazon Web Services or Google Cloud and peer your VPC with the Kafka cluster.
More information about Schema registry can be found here.
Yes, for dedicated plans.
The Kafka REST Proxy lets you fetch metadata from a cluster and produce and consume messages over a simple REST API. This feature can easily be enabled from the Control Panel for any cluster.
More information can be found here.
Yes, for dedicated plans.
MirrorMaker is a tool for maintaining a replica of an existing Kafka cluster.
When MirrorMaker is enabled, all messages are consumed from the source cluster and re-published on the target cluster; i.e. data is read from topics in the source cluster and written to topics with the same names in the destination cluster. This lets you send data to one cluster and read it from both. MirrorMaker can run on one or multiple nodes: if you have a five-node cluster, you can enable MirrorMaker on one node or on all five. More nodes means faster processing and better odds of keeping the clusters in sync.
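Conceptually, the classic MirrorMaker tool is driven by two small config files, one telling it where to consume from and one where to produce to. The hostnames below are placeholders, and on CloudKarafka this setup is managed for you from the Control Panel:

```ini
# consumer.properties -- the source cluster to read from
bootstrap.servers=source-01.example.com:9092
group.id=mirrormaker

# producer.properties -- the target cluster to write to
bootstrap.servers=target-01.example.com:9092
```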
No discounts are available at this time.
No. There is no shipping cost since the service is shipped electronically.
No. The service is non-returnable.
You can choose to pay by credit card (due on charge date) or via wire transfer (NET15). If you would like us to enable manual invoicing via wire transfer, send us an email once you have added all your information and we will enable it for you. Please note that we don’t accept checks.
Email email@example.com to receive an official quote. Include your plan selection, if the quote should include VPC or not, and the subscription period.
The service will be provided off-premise in a data center and region chosen on behalf of the customer. The data centers and regions currently provided can be found at the bottom of this page: https://www.cloudkarafka.com/plans.html.
Our billing is pro-rated, which means that our customers only pay for the time the service has been available to them, billed the month after delivery. Thus, you won’t receive your first invoice when the account has been created; instead, you will receive it at the beginning of the following month.
No. Our customers often change their plan while they are using our service, therefore it’s not convenient to pay for a year upfront. However, we do allow prepayments with credit.
No. We don’t need any documents with signatures.
No. To safeguard customer data, active subscriptions aren’t deleted automatically. For example, resellers that provide us with a PO for two months of the Happy Hippo plan are responsible for deleting the plan after the subscription expires. Otherwise, you will be charged for the extra time your data remains on the system.
It’s best to extend the current PO. If you need two separate POs for the subscriptions, you need to open a new account so that the subscriptions can be billed separately.
Go to https://customer.cloudkarafka.com/login and enter your email address. Fill out all the information in the billing section, such as billing address, email, etc. Please note that it’s important that we have your billing information registered and not the end-user information since you are our direct customer and not the end-user.
The PO number can be specified in the billing section under “billing notes”. Or send it to us, and we will add it for you.
You are free to create and delete instances once the billing information is set up. It’s up to you and the end-user to decide who will create the subscription specified in the PO.
Invite the end-user to the account via https://customer.cloudkarafka.com/team so that they can start using the service.
Change the role of the person that created your account to “Billing Manager”. By doing so, you can access all invoices of the account and update the billing information. But you will not be able to edit the customer's subscription. See more information here: https://www.cloudkarafka.com/blog/manage-instance-access-acl.html
Yes. You can read more here.
No separate agreement is required. Our Data Processing Agreement (DPA) for GDPR is an exhibit to our Terms of Service. Thus, our business relationship is automatically covered by a DPA when signing up for an account.
You have the right to see what personal information 84codes AB holds about you. You are entitled to be given a description of the information, what we use it for, who we might pass it onto, and any information we might have about the source of the information.
Subject access requests should be made via email to firstname.lastname@example.org.
As a data controller, you decide for yourself where you want to host your data by choosing a data center and region. The data will not leave that region unless you choose to move it. In CloudKarafka’s role as data controller, we may collect and store contact information, such as email address, and physical address, when customers sign up for our services or seek support help.
Your personal customer data (email and billing information) is stored in the US.
We are proud to be compliant with SOC 2 by AICPA. We have been audited against the Security (common criteria) and Availability Trust Services Criteria.
Our SOC 2 Type 2 report can be obtained under an NDA per request. Please send an email to email@example.com.