Apache Kafka

Step-by-step guide to send event data from RudderStack to Apache Kafka

Apache Kafka is a popular distributed streaming platform. It allows you to handle large-scale workloads with high throughput and low latency. Apache Kafka is highly available and is used across the world for building real-time data pipelines and streaming applications.

RudderStack allows you to configure Apache Kafka as a destination to which you can send your event data seamlessly.

Find the open-source transformer code for this destination in our GitHub repo.

Getting Started

To send data to Kafka, you will first need to add it as a destination to the source from which you are sending your event data. Once the destination is enabled, events from RudderStack will start flowing to Kafka.

Before configuring your source and destination on the dashboard, please check whether the platform you are working on is supported by this destination. Please refer to the table below:

[Table: supported source platforms for the Device mode and Cloud mode connection modes]
To know more about the difference between Cloud mode and Device mode in RudderStack, read the RudderStack connection modes guide.

Once you have confirmed that the platform supports sending events to Kafka, perform the steps below:

  • Choose a source to which you would like to add Kafka as a destination.

Please follow our guide on How to Add a Source and Destination in RudderStack for more information.

  • Select Kafka as the destination for your source. Give your destination a name and then click on Next.

  • Next, in the Connection Settings, fill in all the fields with the relevant information and click on Next.

Kafka Connection Settings
  • Host Name: The host name of your Kafka broker.

  • Port: The port to connect to on the broker.

  • Topic Name: The name of the topic to which you want to send the data.

  • SSL Enabled: Enable this option if SSL is enabled on your broker.

  • CA Certificate: If you have enabled SSL, provide the CA certificate in this field.
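For reference, these settings roughly correspond to the following Kafka client properties. The host, port, and file paths below are hypothetical placeholders, not defaults:

```properties
# Hypothetical values -- substitute your own broker details
bootstrap.servers=my-broker.example.com:9092      # Host Name + Port
security.protocol=SSL                             # SSL Enabled
ssl.truststore.location=/path/to/truststore.jks   # trust chain including the CA certificate
# The Topic Name is supplied per message when producing, not in the connection properties.
```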

Partition Key

RudderStack uses userId as the partition key of the message.

If userId is not present in the payload, then anonymousId is used instead.

So, if you have a multi-partitioned topic, records with the same userId (or anonymousId, in the absence of userId) will always go to the same partition.
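The effect of key-based partitioning can be sketched as follows. This is a simplified model, not RudderStack's actual implementation: Kafka's default partitioner hashes the key with murmur2, while this sketch uses MD5. The property being illustrated is the same, though: an identical key always maps to the same partition.

```python
import hashlib

NUM_PARTITIONS = 6  # example topic with 6 partitions


def partition_for(event: dict) -> int:
    """Pick a partition from the event's userId, falling back to anonymousId."""
    key = event.get("userId") or event.get("anonymousId")
    digest = hashlib.md5(key.encode("utf-8")).digest()
    # Map the first 4 bytes of the hash onto the partition range.
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS


# Events sharing a userId always land in the same partition...
a = partition_for({"userId": "user-42", "event": "Order Completed"})
b = partition_for({"userId": "user-42", "event": "Cart Viewed"})
assert a == b

# ...and anonymousId is used only when userId is absent.
c = partition_for({"anonymousId": "anon-7", "event": "Page Viewed"})
assert 0 <= c < NUM_PARTITIONS
```

Because all events for a given user hash to the same partition, Kafka preserves the ordering of that user's events within the topic.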


Does my Kafka server require Client Authentication?

If you have enabled 2-way SSL, that is, your server requires client authentication, then you need to add RudderStack's CA certificate to your server's truststore.

How can I enable the 2-way SSL in Kafka and connect to RudderStack?

Please follow the steps below that make use of Java's keytool utility.

  1. Generate Key and Certificates: keytool -keystore kafka.server.keystore.jks -alias localhost -keyalg RSA -genkey

  2. Create your own CA

    1. Generate a CA, which is simply a public-private key pair and a certificate intended to sign other certificates. You need to provide this certificate as the CA certificate in the RudderStack web app.

      openssl req -new -x509 -keyout ca-key -out ca-cert -days {validity}

    2. Add the generated CA to the broker's truststore so that the brokers can trust this CA.

      keytool -keystore kafka.server.truststore.jks -alias CARoot -importcert -file ca-cert

  3. Sign the certificates

    1. Export the certificate from the keystore:

      keytool -keystore kafka.server.keystore.jks -alias localhost -certreq -file cert-file

    2. Sign it with the CA:

      openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}

    3. Import both the certificate of the CA and the signed certificate into the broker keystore:

      keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert
      keytool -keystore kafka.server.keystore.jks -alias localhost -import -file cert-signed

Putting all the steps described above together, the full script to create the CA and the broker keystore and truststore is as shown:

keytool -keystore kafka.server.keystore.jks -alias localhost -keyalg RSA -validity {validity} -genkey
openssl req -new -x509 -keyout ca-key -out ca-cert -days {validity}
keytool -keystore kafka.server.truststore.jks -alias CARoot -importcert -file ca-cert
keytool -keystore kafka.server.keystore.jks -alias localhost -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}
keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore kafka.server.keystore.jks -alias localhost -import -file cert-signed
  • Add the following parameters to your server.properties file:

ssl.keystore.location=<keystore location>
ssl.keystore.password=<keystore password>
ssl.key.password=<key password>
ssl.truststore.location=<truststore location>
ssl.truststore.password=<truststore password>
  • You also need to import RudderStack's CA certificate into your truststore, as shown:

keytool -keystore kafka.server.truststore.jks -alias CARootRudder -import -file ca-cert-rudder
# here, ca-cert-rudder is the RudderStack CA certificate

Here is the CA certificate that you need to add to your trust store:


How can you connect to RudderStack if your Kafka server is running in a Kubernetes cluster?

You will need to expose one public address to which RudderStack can connect, and we recommend securing it with SSL. Please note that you should allow only authenticated clients on this exposed address. If you use PLAINTEXT for the internal services within your cluster, you can keep that listener as it is and expose the public address over SSL in addition to it. To do so, update advertised.listeners in your server.properties.

A sample entry is as shown below. The host names here are hypothetical; substitute your own in-cluster service and load balancer addresses:

# Hostname and port the broker will advertise to producers and consumers.
# Here, the INTERNAL listener is your in-cluster Kafka service host
# and the EXTERNAL listener is the public load balancer for the Kafka server.
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
advertised.listeners=INTERNAL://kafka.default.svc.cluster.local:9092,EXTERNAL://kafka.mydomain.com:9093
# Maps listener names to security protocols; the default is for them to be the same.
# See the config documentation for more details.
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
inter.broker.listener.name=INTERNAL

Contact Us

If you come across any issues while configuring or using Kafka with RudderStack, please feel free to contact us. You can also start a conversation on our Slack channel; we will be happy to talk to you!