Can't write data to Kafka cluster


I deployed a three-node Kafka cluster with docker-compose.

Here is my compose file:

version: '2'
services:
  zookeeper:
    image: bitnami/zookeeper
    container_name: zook_kafka
    ports:
      - "192.168.0.90:2181:2181"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes

  kafka:
    image: bitnami/kafka
    container_name: kafka
    volumes:
      - /home/nnz/voystrik/kafka/data:/bitnami/kafka
    ports:
      - "192.168.0.90:9092:9092"
    user: root
    environment:
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_ZOOKEEPER_CONNECT=192.168.0.90:2181
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.0.90:9092
    depends_on:
      - zookeeper

  kafka_2:
    image: bitnami/kafka
    container_name: kafka2
    volumes:
      - /home/nnz/voystrik/kafka/data2:/bitnami/kafka
    ports:
      - "9092"
    user: root
    environment:
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_ZOOKEEPER_CONNECT=192.168.0.90:2181
    depends_on:
      - zookeeper

  kafka_3:
    image: bitnami/kafka
    container_name: kafka3
    volumes:
      - /home/nnz/voystrik/kafka/data3:/bitnami/kafka
    ports:
      - "9092"
    user: root
    environment:
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_ZOOKEEPER_CONNECT=192.168.0.90:2181
    depends_on:
      - zookeeper



  kafka-ui:
    image: provectuslabs/kafka-ui
    container_name: web_kafka
    ports:
      - "192.168.0.90:8844:8080"
    environment:
      - KAFKA_CLUSTERS_0_NAME=local
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=192.168.0.90:9092
      - KAFKA_CLUSTERS_0_ZOOKEEPER=192.168.0.90:2181
    depends_on:
      - kafka

I create a topic with the command:

./kafka-topics.sh --create --zookeeper 192.168.0.90:2181 --topic first --partitions 3 --replication-factor 3
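(Aside: on newer Kafka releases the --zookeeper flag of kafka-topics.sh is deprecated and topics are created through a broker instead. Assuming a broker is reachable on 192.168.0.90:9092, the equivalent command would be:)

```shell
./kafka-topics.sh --create --bootstrap-server 192.168.0.90:9092 \
  --topic first --partitions 3 --replication-factor 3
```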

There are no problems with creation. However, when trying to write data to a topic with the command:

./kafka-console-producer.sh --topic first --bootstrap-server 192.168.0.90:9092

the following errors appear:

[2021-06-04 13:48:48,716] WARN [Producer clientId=console-producer] Error connecting to node 8d76612df858:9092 (id: 1001 rack: null) (org.apache.kafka.clients.NetworkClient)
java.net.UnknownHostException: 8d76612df858
    at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797)
    at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1509)
    at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1368)
    at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1302)
    at org.apache.kafka.clients.DefaultHostResolver.resolve(DefaultHostResolver.java:27)
    at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:111)
    at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:512)
    at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:466)
    at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:172)
    at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:985)
    at org.apache.kafka.clients.NetworkClient.access$600(NetworkClient.java:73)
    at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1158)
    at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1046)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:559)
    at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:327)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:242)
    at java.base/java.lang.Thread.run(Thread.java:829)
[2021-06-04 13:48:49,069] WARN [Producer clientId=console-producer] Error connecting to node b66168595792:9092 (id: 1002 rack: null) (org.apache.kafka.clients.NetworkClient)
java.net.UnknownHostException: b66168595792
    at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797)
    at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1509)
    at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1368)
    at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1302)
    at org.apache.kafka.clients.DefaultHostResolver.resolve(DefaultHostResolver.java:27)
    at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:111)
    at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.currentAddress(ClusterConnectionStates.java:512)
    at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.access$200(ClusterConnectionStates.java:466)
    at org.apache.kafka.clients.ClusterConnectionStates.currentAddress(ClusterConnectionStates.java:172)
    at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:985)
    at org.apache.kafka.clients.NetworkClient.access$600(NetworkClient.java:73)
    at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1158)
    at org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:1046)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:559)
    at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:327)
    at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:242)
    at java.base/java.lang.Thread.run(Thread.java:829)

I also tried listing the brokers via the --broker-list flag, but the result is the same. What could be the cause of these errors?

2 Answers

Answer 1:

First off, running three broker nodes on one machine is actually less performant than running just one. And if you're using Swarm, you should be running 3 ZooKeeper servers as well.

Secondly, kafka_2 and kafka_3 are missing the KAFKA_CFG_ADVERTISED_LISTENERS variable. Without it, each broker advertises its Docker container ID as its hostname, which is exactly the hexadecimal hostname (8d76612df858, b66168595792) your producer fails to resolve in the error.
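A minimal sketch of the fix for one of the extra brokers. Since all three brokers are published on the same host IP, each needs its own port; 9093 here is an illustrative choice, not taken from the question, and kafka_3 would get the same treatment with a third port such as 9094:

```yaml
  kafka_2:
    image: bitnami/kafka
    ports:
      - "192.168.0.90:9093:9093"
    environment:
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_ZOOKEEPER_CONNECT=192.168.0.90:2181
      # listen on a unique port and advertise an address
      # that is resolvable and reachable from outside Docker
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9093
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://192.168.0.90:9093
```

Clients can then pass all three addresses (192.168.0.90:9092,9093,9094) as the bootstrap list.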

Answer 2:

I believe it's because you used the ZooKeeper reference in the --bootstrap-server section of the kafka-console-producer.sh command.

As per the README.md in the bitnami repo, you should use the Kafka broker endpoint as the --bootstrap-server:

kafka-console-producer.sh --broker-list kafka:9092 --topic test

I also suggest checking the port-forwarding settings described in the Development setup environment section.
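One way to check what addresses the brokers actually hand out to clients is to dump cluster metadata; a sketch using kcat (formerly kafkacat), assuming it is installed on the host:

```shell
# The broker list printed in the metadata contains the exact
# host:port pairs the producer will try to connect to next.
kcat -L -b 192.168.0.90:9092
```

If Docker container IDs show up in that broker list, the advertised listeners are still misconfigured.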