Simple Kafka Consumer not receiving messages

Posted 2020-06-19 00:26

Question:

I am a newbie to Kafka and am running the simple consumer/producer example given in the KafkaConsumer and KafkaProducer docs. When I run the consumer from the terminal it receives messages, but I am not able to receive them from my Java code. I have also searched for similar issues on Stack Overflow (Links: Link1, Link2) and tried those solutions, but nothing seems to work for me. Kafka version: kafka_2.10-0.10.2.1, and the corresponding Maven dependency is used in the pom.
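For reference, the terminal consumer that does receive messages would be the console consumer; the exact command is not included in the question, but it would have been something along these lines (topic and port assumed to match the code below):

bin/kafka-console-consumer.sh --bootstrap-server localhost:9094 --topic topic3 --from-beginning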

Java Code for producer and consumer:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimpleProducer {

    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9094");
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(props);
        for (int i = 0; i < 10; i++)
            producer.send(new ProducerRecord<String, String>("topic3", Integer.toString(i), Integer.toString(i)));

        producer.close();
    }
}

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SimpleConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9094");
        props.put("group.id", "test");
        props.put("zookeeper.connect", "localhost:2181");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("topic3", "topic2"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records)
                System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
        }
    }
}

Starting Kafka: bin/kafka-server-start.sh config/server.properties (I have already set the port and broker.id in the properties file)
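For reference, the relevant entries in config/server.properties would look roughly like the sketch below (the exact values were not shown in the question, so the broker id and listener port here are assumptions matching the bootstrap.servers used in the code above):

# config/server.properties (assumed values)
broker.id=1
listeners=PLAINTEXT://localhost:9094
zookeeper.connect=localhost:2181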

Answer 1:

First, check which consumer groups are available:

./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list

Then check which group your topic belongs to with the command below:

./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group <your group name> --describe

Once you find your topic and its associated group name (just replace group.id with your group if it does not belong to the default group), try the properties below and let me know if it works:

  props.put("bootstrap.servers", "localhost:9092");
  props.put("group.id", "test-consumer-group"); // default topic name
  props.put("enable.auto.commit", "true");
  props.put("auto.commit.interval.ms", "1000");
  props.put("session.timeout.ms", "30000");
  props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
  props.put("value.deserializer","org.apache.kafka.common.serialization.StringDeserializer");
  KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);

  //Kafka Consumer subscribes list of topics here.
  consumer.subscribe(Arrays.asList(topicName));  // replace you topic name

  //print the topic name

  java.util.Map<String,java.util.List<PartitionInfo>> listTopics = consumer.listTopics();
  System.out.println("list of topic size :" + listTopics.size());

  for(String topic : listTopics.keySet()){
      System.out.println("topic name :"+topic);
  }
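Note that the snippet above only subscribes and lists topics; to actually receive messages, a poll loop still has to follow the subscription. A minimal sketch, reusing the loop from the question:

  while (true) {
      ConsumerRecords<String, String> records = consumer.poll(100);
      for (ConsumerRecord<String, String> record : records)
          System.out.printf("offset = %d, key = %s, value = %s%n",
                  record.offset(), record.key(), record.value());
  }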


Answer 2:

Run the consumer before running the producer so that the consumer registers with the group coordinator first. When you then run the producer, the consumer consumes the messages. The first time you run the consumer it registers with the group coordinator. To find out up to which offset the consumer has consumed the messages, use kafka-consumer-offset-checker.bat --group group-1 --topic testing-1 --zookeeper localhost:2181 which shows the last offset of the topic the consumer has consumed.
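If you cannot control the start order, a related setting worth checking is auto.offset.reset: a new consumer group with no committed offsets defaults to "latest" and will skip records produced before it joined. A minimal sketch of the extra property:

// Minimal sketch: let a brand-new group read existing records from the start.
// "earliest" only applies while the group has no committed offsets yet.
props.put("auto.offset.reset", "earliest");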



Answer 3:

Clear the 'tmp' folder on the drive from which you are accessing Kafka, then open a new 'cmd' command window. Restart the server freshly, and run .\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic H1 --from-beginning in the command window to run the consumer without any error.



Answer 4:

Try setting the enable.partition.eof parameter to false:

props.put("enable.partition.eof", "false");

That worked for me.



Answer 5:

Try this one; this code worked for me.

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringDeserializer");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringDeserializer");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-consumer-group");
KafkaConsumer<String, String> myConsumer = new KafkaConsumer<>(props);
myConsumer.subscribe(Arrays.asList(topicName)); // topicName: the topic you want to consume

try {
    while (true) {
        ConsumerRecords<String, String> records = myConsumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            System.out.println(String.format("Topic: %s, Partition: %d, Offset: %d, key: %s, value: %s",
                    record.topic(), record.partition(), record.offset(), record.key(), record.value()));
        }
    }
} catch (Exception e) {
    System.out.println(e.getMessage());
} finally {
    myConsumer.close();
}