Frequently Asked Questions about CloudKarafka

Please note that if you use our service via a marketplace, the answers to some of these FAQs might not be applicable.

General

What is Apache Kafka?

Kafka is a publish-subscribe based messaging system that exchanges data between processes, applications, and servers.

Apache Kafka is software built around topics (think of a topic as a category). Applications can connect to the system and publish messages to a topic. A message can include any kind of information; for example, information about an event that has happened on your website, or a simple text message meant to trigger an event. Another application connects to the system and processes messages from a topic.

A complete beginner's guide to Apache Kafka can be found here.

What is CloudKarafka?

CloudKarafka automates every part of setting up, running, and scaling Apache Kafka, offering hosted publish-subscribe messaging systems in the cloud. With CloudKarafka, a fully managed Kafka cluster can be up and running within two minutes, including a managed internal Zookeeper cluster on all nodes.

What is Zookeeper?

Zookeeper is a top-level Apache project that acts as a centralized service, keeping track of the status of the Kafka cluster nodes as well as Kafka topics, partitions, etc. All of our plans include a managed Zookeeper cluster. More information about Zookeeper can be found here.

Connect to the Apache Kafka cluster

Which port should I use when connecting to my Kafka cluster?

You should use the following ports:

  • 9092 for connections inside a peered VPC
  • 9093 for encrypted connections using SSL
  • 9094 for encrypted connections using SASL/SCRAM over SSL

Which URL should I use when connecting to my Kafka cluster?

You should give your client the URLs of all brokers, not just the cluster hostname. For example, if your cluster is named test-speedcar and has three nodes, your broker list should look like this:

test-speedcar-01.srvs.cloudkafka.com:9094,test-speedcar-02.srvs.cloudkafka.com:9094,test-speedcar-03.srvs.cloudkafka.com:9094
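
To illustrate, here is a minimal connection sketch using the Python confluent-kafka client (built on librdkafka). The cluster name matches the example above; the SASL mechanism, username, password, and topic name are placeholders that may differ on your cluster:

  from confluent_kafka import Producer

  # Broker list for the hypothetical test-speedcar cluster, port 9094
  # (SASL/SCRAM over SSL). Replace with your own brokers and credentials.
  conf = {
      'bootstrap.servers': ','.join([
          'test-speedcar-01.srvs.cloudkafka.com:9094',
          'test-speedcar-02.srvs.cloudkafka.com:9094',
          'test-speedcar-03.srvs.cloudkafka.com:9094',
      ]),
      'security.protocol': 'SASL_SSL',
      'sasl.mechanisms': 'SCRAM-SHA-256',  # assumption; check your cluster details
      'sasl.username': '<username>',
      'sasl.password': '<password>',
  }

  p = Producer(conf)
  p.produce('<topic>', value=b'hello from CloudKarafka')
  p.flush()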

What does "Failed to verify broker certificate: unable to get local issuer certificate,” mean?

The application cannot verify the certificate from the Kafka broker, which can mean one of three things:

  1. The broker URL is wrong, so the application is connecting to the wrong machine.
  2. The application is looking for CA certificates in the wrong directory.
  3. The CA root certificate that the server certificates are signed with isn’t installed on your server.

First, verify that you have entered the correct URLs for all the brokers.

Second, the CA certificates installed on your server live in different folders depending on which OS or Linux distribution you are using. On Debian and Ubuntu, for example, the certificates are installed in the directory /etc/ssl/certs. Make sure that your application uses the correct path to the installed CA certificates.
If you are using a Kafka library built on librdkafka, it may have been compiled with the wrong configuration and be looking for the CA certificates in the wrong directory. This can be fixed by setting the configuration option ssl.ca.location: '/etc/ssl/certs', which makes librdkafka look for CA certificates in that directory.

Third, if your application still cannot connect due to the above error, the CA root certificate is probably not installed on your server. To address this, install a package like ca-certificates (it exists for most systems). Alternatively, you can get the Mozilla CA certificate bundle directly. In both cases, use the ssl.ca.location configuration option to specify the path to the certificates.
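
For instance, in a Python client you could use the certifi package (an assumption on our part, not a CloudKarafka requirement) to locate the Mozilla CA bundle:

  import certifi

  # certifi ships the Mozilla CA certificate bundle as a single PEM file;
  # certifi.where() returns its path on the local system.
  conf = {
      'bootstrap.servers': 'test-speedcar-01.srvs.cloudkafka.com:9094',
      'ssl.ca.location': certifi.where(),
      # ... security.protocol, SASL settings, etc. as before
  }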

As a last option, if you are unable to install the CA certificate bundle, you can point the ssl.ca.location configuration option directly to our certificate chain.

Please note that the certificate chain CloudKarafka uses will change over time, so you will need to update this file in the future. Put the certificate file in the same folder as the application source code and configure your Kafka client with this option: ssl.ca.location: './<filename>.ca'

If you still see the error, despite everything above, your OpenSSL version might be too old. Please upgrade OpenSSL to a newer version and try again.

Handle instances

Can I add nodes to my cluster?

Yes, in the CloudKarafka Control Panel you can edit the size of your cluster and add additional nodes. Currently, nodes cannot be removed.

Can I migrate between plans?

Yes, you can upgrade to a larger plan. In the CloudKarafka Control Panel, you can edit the plan and select a bigger one. Upgrading does not cause downtime for multi-node instances. You cannot downgrade, and you cannot migrate from a shared plan to a dedicated plan.

Integration and availability

What is the service-level agreement (SLA) for CloudKarafka?

We guarantee at least 99.95% availability on all dedicated plans. CloudKarafka will refund 50% of the plan cost for longer outages. Requests for refunds must be submitted in writing, within 30 days of the outage to which they refer, via email to contact@cloudkarafka.com.

We guarantee a maximum 30-minute initial response time on critical issues correctly submitted to our support system.

Complete SLA information can be found here.

Is it possible to connect to the Zookeeper command-line interface (CLI)?

Yes, for plans with VPC peering you can connect to the Zookeeper CLI using the local IP addresses. Connect from a VPC peered with the CloudKarafka VPC, then run zkCli.sh -server PRIVATEIP:2181, where PRIVATEIP is the IP address of the Zookeeper node you want to connect to.
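
If you prefer doing this programmatically, a third-party ZooKeeper client such as Python's kazoo (our example choice, not a CloudKarafka requirement) works the same way over the peering:

  from kazoo.client import KazooClient

  # PRIVATEIP is the private IP of the Zookeeper node, reachable via VPC peering.
  zk = KazooClient(hosts='PRIVATEIP:2181')
  zk.start()

  # Kafka registers its brokers under /brokers/ids.
  print(zk.get_children('/brokers/ids'))
  zk.stop()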

Which authentication mechanisms are available?

SASL/SCRAM and certificate-based authentication are available. Examples can be found in the different language sections.
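
As a sketch of certificate-based authentication with a librdkafka-based client (the file paths are placeholders for the certificates you download from the Control Panel, and we assume port 9093 for SSL):

  from confluent_kafka import Consumer

  conf = {
      'bootstrap.servers': 'test-speedcar-01.srvs.cloudkafka.com:9093',
      'security.protocol': 'SSL',
      'ssl.ca.location': './ca.pem',             # placeholder file names
      'ssl.certificate.location': './client.pem',
      'ssl.key.location': './client.key',
      'group.id': 'example-group',
      'auto.offset.reset': 'earliest',
  }

  c = Consumer(conf)
  c.subscribe(['<topic>'])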

Supported features

What service integrations do you support?

Alarms: VictorOps, OpsGenie

Logging: Papertrail, Loggly, LogEntries, Splunk, Stackdriver, CloudWatch

Metrics: CloudWatch, Librato, DataDog

Do you support Kafka Connect?

Yes, for dedicated plans.

Kafka Connect lets you integrate Kafka with other systems. You can add a data source, consume data from that source, and store it in Kafka. Alternatively, all data in a topic can be sent to another system for processing or storage. Kafka Connect opens up many possibilities, and it's easy to get started since a lot of connectors are already available.
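
Connectors are managed through Kafka Connect's REST API. As a hedged sketch from Python (the Connect host and the connector configuration below are illustrative placeholders):

  import requests

  # Placeholder endpoint; check with support for the Connect URL on your cluster.
  connect_url = 'https://<connect-host>:8083/connectors'

  connector = {
      'name': 'example-file-sink',
      'config': {
          'connector.class': 'org.apache.kafka.connect.file.FileStreamSinkConnector',
          'topics': '<topic>',
          'file': '/tmp/sink.txt',
      },
  }

  resp = requests.post(connect_url, json=connector)
  resp.raise_for_status()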

More information can be found here.

Can I connect a custom Kafka connector?

Yes, we can install your custom Kafka connector for you; just send us an email. The connector will only be available for internal use on your cluster.

Do you support Schema Registry?

Yes, for dedicated plans.

The Kafka Schema Registry runs as a standalone component that interacts with both producers and consumers and provides a serving layer for your metadata.

Schema Registry reduces the number of possible conflicts between producer and consumer messages, such as bad data or sudden changes of message format, while leaving Kafka itself unaffected. Because it is a standalone component, the Kafka broker can keep doing what it does best: message streaming.

Schema Registry runs on port 8081 by default. Port 8081 is only open within the VPC, so to access the Schema Registry you need to run your service in Amazon Web Services or Google Cloud and peer your VPC with the Kafka cluster.
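
To illustrate the REST interface, here is a hedged example of registering an Avro schema from inside the peered VPC (the host name is a placeholder; the request format follows the standard Schema Registry API):

  import json
  import requests

  # Placeholder host; port 8081 is only reachable through the VPC peering.
  registry_url = 'http://<schema-registry-host>:8081'

  schema = {
      'type': 'record',
      'name': 'Event',
      'fields': [{'name': 'id', 'type': 'string'}],
  }

  resp = requests.post(
      registry_url + '/subjects/<topic>-value/versions',
      headers={'Content-Type': 'application/vnd.schemaregistry.v1+json'},
      json={'schema': json.dumps(schema)},
  )
  print(resp.json())  # e.g. {'id': 1}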

More information about Schema Registry can be found here.

Do you support REST proxy?

Yes, for dedicated plans.

The Kafka REST Proxy lets you retrieve metadata from the cluster and produce and consume messages over a simple REST API. This feature can easily be enabled from the Control Panel for any cluster.
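
For example, once the feature is enabled, producing a JSON message through the REST Proxy could look like this (the endpoint and credentials are placeholders; the request format follows the standard Kafka REST Proxy v2 API):

  import requests

  # Placeholder REST Proxy endpoint for your cluster.
  proxy_url = 'https://<rest-proxy-host>/topics/<topic>'

  payload = {'records': [{'value': {'greeting': 'hello'}}]}

  resp = requests.post(
      proxy_url,
      headers={'Content-Type': 'application/vnd.kafka.json.v2+json'},
      json=payload,
      auth=('<username>', '<password>'),
  )
  print(resp.json())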

More information can be found here.

Do you support MirrorMaker?

Yes, for dedicated plans.

MirrorMaker is a tool for maintaining a replica of an existing Kafka cluster.

When MirrorMaker is enabled, all messages are consumed from the source cluster and re-published on the target cluster; i.e., data is read from topics in the source cluster and written to topics with the same names in the destination cluster. This lets you send data to one cluster and read it from both. MirrorMaker can run on one or multiple nodes: if you have a five-node cluster, you can enable MirrorMaker on one node or on all five. More nodes means faster processing and a cluster that stays in sync more reliably.

Reseller

Do you offer any reseller discounts?

No discounts are available at this time.

Are there any shipping costs?

No. There is no shipping cost since the service is delivered electronically.

Is the service returnable?

No. The service is non-returnable.

What payment options and terms do you offer?

You can choose to pay by credit card (due on the charge date) or via wire transfer (NET15). If you would like us to enable manual invoicing via wire transfer, send us an email once you have added all your information and we will enable it for you. Please note that we don’t accept checks.

We need an official quote, how do we get that?

Email sales@cloudkarafka.com to receive an official quote. Include your plan selection, whether the quote should include VPC peering, and the subscription period.

The service will be provided off-premise in a data center and region chosen on behalf of the customer. The data centers and regions currently provided can be found at the bottom of this page: https://www.cloudkarafka.com/plans.html.

How does your billing work?

Our billing is pro-rated, which means that our customers only pay for the time the service has been available to them, billed the month after delivery. Thus, you won’t receive your first invoice when the account is created; instead, you will receive it at the beginning of the following month.

Our PO is set up for a year, could we get an annual invoice?

No. Our customers often change their plan while using our service, so it isn’t practical to pay for a year upfront. However, we do allow prepayments with credit.

Please advise if any documents with signatures will be required in addition to the PO we would submit, in the event a PO is executed.

No. We don’t need any documents with signatures.

Does the service have an automatic renewal once the current subscription on the PO expires?

No. To safeguard customer data, active subscriptions aren’t deleted when a PO expires. For example, a reseller that provides us with a PO for two months of the Happy Hippo plan is responsible for deleting the plan after the subscription expires. Otherwise, you will be charged for the extra time your data remains on the system.

We have a customer using your service with an active subscription and PO that wants to add one more subscription. Should we extend the current PO or issue a new one?

It’s best to extend the current PO. If you need two separate POs for the subscriptions, you will need to open a new account so that the subscriptions can be billed separately.

I’m a reseller and want to set up an account, how do I do that?

  1. Go to https://customer.cloudkarafka.com/login and enter your email address. Fill out all the information in the billing section, such as billing address and email. Please note that it’s important that we have your billing information registered, not the end-user’s, since you are our direct customer.

  2. The PO number can be specified in the billing section under “billing notes”, or you can send it to us and we will add it for you.

  3. You are free to create and delete instances once the billing information is set up. It’s up to you and the end-user to decide who will create the subscription specified in the PO.

  4. Invite the end-user to the account via https://customer.cloudkarafka.com/team so that they can start using the service.

  5. Change the role of the person who created the account to “Billing Manager”. This lets you access all invoices on the account and update the billing information, but you will not be able to edit the customer’s subscriptions. See more information here: https://www.cloudkarafka.com/blog/manage-instance-access-acl.html

GDPR and SOC2

Is CloudKarafka GDPR-compliant?

Yes. You can read more here.

Can we sign a DPA with CloudKarafka?

No separate agreement is required. Our Data Processing Agreement (DPA) for GDPR is an exhibit to our Terms of Service. Thus, our business relationship is automatically covered by a DPA when signing up for an account.

How do you make a subject access request to 84codes AB?

You have the right to see what personal information 84codes AB holds about you. You are entitled to be given a description of the information, what we use it for, who we might pass it onto, and any information we might have about the source of the information.

A subject access request should be made via email to compliance@84codes.com.

Where is my data located?

As a data controller, you decide where to host your data by choosing a data center and region. The data will not leave that region unless you choose to move it. In its own role as data controller, CloudKarafka may collect and store contact information, such as email and physical addresses, when customers sign up for our services or request support.

Your personal customer data (email and billing information) is stored in the US.

Is CloudKarafka SOC 2 compliant?

We are proud to be compliant with AICPA’s SOC 2. We have been audited against the Security (common criteria) and Availability Trust Services Criteria.

Our SOC 2 Type 2 report can be obtained under an NDA per request. Please send an email to compliance@cloudkarafka.com.