I am a new student studying Kafka and I've run into some fundamental issues with understanding multiple consumers that articles, documentation, etc. haven't helped me resolve so far.
One thing I have tried to do is write my own high level Kafka producer and consumer and run them simultaneously, publishing 100 simple messages to a topic and having my consumer retrieve them. I have managed to do this successfully, but when I try to introduce a second consumer to consume from the same topic that messages were just published to, it receives no messages.
It was my understanding that for each topic, you could have consumers from separate consumer groups and each of these consumer groups would get a full copy of the messages produced to some topic. Is this correct? If not, what would be the proper way for me to set up multiple consumers? This is the consumer class that I have written so far:
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AlternateConsumer extends Thread {

    private final KafkaConsumer<Integer, String> consumer;
    private final String topic;

    public AlternateConsumer(String topic, String consumerGroup) {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "localhost:9092");
        properties.put("group.id", consumerGroup);
        properties.put("partition.assignment.strategy", "roundrobin");
        properties.put("enable.auto.commit", "true");
        properties.put("auto.commit.interval.ms", "1000");
        properties.put("session.timeout.ms", "30000");
        properties.put("key.deserializer", "org.apache.kafka.common.serialization.IntegerDeserializer");
        properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumer = new KafkaConsumer<>(properties);
        consumer.subscribe(Collections.singletonList(topic));
        this.topic = topic;
    }

    @Override
    public void run() {
        while (true) {
            // Wait up to 100 ms for new records rather than busy-spinning with poll(0)
            ConsumerRecords<Integer, String> records = consumer.poll(100);
            for (ConsumerRecord<Integer, String> record : records) {
                System.out.println("We received message: " + record.value()
                        + " from topic: " + record.topic());
            }
        }
    }
}
Furthermore, I noticed that I was originally testing the above consumption on a topic 'test' with only a single partition. When I added another consumer to an existing consumer group, say 'testGroup', this triggered a Kafka rebalance which increased the latency of my consumption significantly, on the order of seconds. I thought this was a rebalancing issue caused by my having only a single partition, but when I created a new topic 'multiplepartitions' with, say, 6 partitions, similar issues arose: adding more consumers to the same consumer group still caused latency problems. I have looked around and people are telling me I should be using a multi-threaded consumer -- can anyone shed light on that?
I think your problem lies with the auto.offset.reset property. When a new consumer reads from a partition and there is no previously committed offset, the auto.offset.reset property decides what the starting offset should be. With the default ("largest" in the old high-level consumer, "latest" in the new KafkaConsumer your code uses), the consumer starts at the latest (last) message, so it never sees anything published before it joined. If you set it to "smallest" ("earliest" in the new consumer), you get the first available message.
So add:
properties.put("auto.offset.reset", "earliest");
and try again.
In the documentation here it says: "if you provide more threads than there are partitions on the topic, some threads will never see a message". Can you add partitions to your topic? I have my consumer group thread count equal to the number of partitions in my topic, and each thread is getting messages.
Here's my topic config:
buffalo-macbook10:kafka_2.10-0.8.2.1 aakture$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic recent-wins
Topic:recent-wins PartitionCount:3 ReplicationFactor:1 Configs:
Topic: recent-wins Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic: recent-wins Partition: 1 Leader: 0 Replicas: 0 Isr: 0
Topic: recent-wins Partition: 2 Leader: 0 Replicas: 0 Isr: 0
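The quoted rule about idle threads can be illustrated with a small, self-contained sketch of how a fixed pool of consumer threads splits a topic's partitions. This models range-style assignment with plain arithmetic; it is not Kafka's actual assignor code, and the class name is made up:

```java
import java.util.Arrays;

public class PartitionSpread {

    // Models how `partitions` partitions are spread over `threads` consumer
    // threads in one group: each thread gets the base share, and the first
    // (partitions % threads) threads get one extra.
    static int[] assign(int partitions, int threads) {
        int[] counts = new int[threads];
        int base = partitions / threads;
        int extra = partitions % threads;
        for (int i = 0; i < threads; i++) {
            counts[i] = base + (i < extra ? 1 : 0);
        }
        return counts;
    }

    public static void main(String[] args) {
        // 3 partitions, 3 threads: one partition per thread
        System.out.println(Arrays.toString(assign(3, 3))); // [1, 1, 1]
        // 3 partitions, 4 threads: the fourth thread owns no partition
        // and will never see a message
        System.out.println(Arrays.toString(assign(3, 4))); // [1, 1, 1, 0]
    }
}
```

With the 3-partition topic above and a thread count of 3, every thread owns exactly one partition.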
And my consumer:
package com.cie.dispatcher.services;

import com.cie.dispatcher.model.WinNotification;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.inject.Inject;
import io.dropwizard.lifecycle.Managed;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/**
 * This will create three threads, assign them to a "group" and listen for notifications on a topic.
 * The current setup has three partitions in Kafka, so we need a thread per partition (as recommended
 * by the Kafka folks). This implements the dropwizard Managed interface, so it can be started and
 * stopped by the lifecycle manager in dropwizard.
 * <p/>
 * Created by aakture on 6/15/15.
 */
public class KafkaTopicListener implements Managed {

    private static final Logger LOG = LoggerFactory.getLogger(KafkaTopicListener.class);

    private final ConsumerConnector consumer;
    private final String topic;
    private ExecutorService executor;
    private int threadCount;
    private WinNotificationWorkflow winNotificationWorkflow;
    private ObjectMapper objectMapper;

    @Inject
    public KafkaTopicListener(String a_zookeeper,
                              String a_groupId, String a_topic,
                              int threadCount,
                              WinNotificationWorkflow winNotificationWorkflow,
                              ObjectMapper objectMapper) {
        consumer = kafka.consumer.Consumer.createJavaConsumerConnector(
                createConsumerConfig(a_zookeeper, a_groupId));
        this.topic = a_topic;
        this.threadCount = threadCount;
        this.winNotificationWorkflow = winNotificationWorkflow;
        this.objectMapper = objectMapper;
    }

    /**
     * Creates the config for a connection
     *
     * @param zookeeper the host:port for zookeeper, "localhost:2181" for example.
     * @param groupId   the group id to use for the consumer group. Can be anything; it's used by
     *                  Kafka to organize the consumer threads.
     * @return the config props
     */
    private static ConsumerConfig createConsumerConfig(String zookeeper, String groupId) {
        Properties props = new Properties();
        props.put("zookeeper.connect", zookeeper);
        props.put("group.id", groupId);
        props.put("zookeeper.session.timeout.ms", "400");
        props.put("zookeeper.sync.time.ms", "200");
        props.put("auto.commit.interval.ms", "1000");
        return new ConsumerConfig(props);
    }

    public void stop() {
        if (consumer != null) consumer.shutdown();
        if (executor != null) {
            executor.shutdown();
            try {
                if (!executor.awaitTermination(5000, TimeUnit.MILLISECONDS)) {
                    LOG.info("Timed out waiting for consumer threads to shut down, exiting uncleanly");
                }
            } catch (InterruptedException e) {
                LOG.info("Interrupted during shutdown, exiting uncleanly");
            }
        }
        LOG.info("{} shutdown successfully", this.getClass().getName());
    }

    /**
     * Starts the listener
     */
    public void start() {
        Map<String, Integer> topicCountMap = new HashMap<>();
        topicCountMap.put(topic, threadCount);
        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer.createMessageStreams(topicCountMap);
        List<KafkaStream<byte[], byte[]>> streams = consumerMap.get(topic);
        executor = Executors.newFixedThreadPool(threadCount);
        int threadNumber = 0;
        for (final KafkaStream<byte[], byte[]> stream : streams) {
            executor.submit(new ListenerThread(stream, threadNumber));
            threadNumber++;
        }
    }

    private class ListenerThread implements Runnable {
        private KafkaStream<byte[], byte[]> m_stream;
        private int m_threadNumber;

        public ListenerThread(KafkaStream<byte[], byte[]> a_stream, int a_threadNumber) {
            m_threadNumber = a_threadNumber;
            m_stream = a_stream;
        }

        public void run() {
            try {
                String message = null;
                LOG.info("started listener thread: {}", m_threadNumber);
                ConsumerIterator<byte[], byte[]> it = m_stream.iterator();
                while (it.hasNext()) {
                    try {
                        message = new String(it.next().message());
                        LOG.info("received message by {} : {}", m_threadNumber, message);
                        WinNotification winNotification = objectMapper.readValue(message, WinNotification.class);
                        winNotificationWorkflow.process(winNotification);
                    } catch (Exception ex) {
                        LOG.error("error processing queue for message: " + message, ex);
                    }
                }
                LOG.info("Shutting down listener thread: {}", m_threadNumber);
            } catch (Exception ex) {
                LOG.error("error:", ex);
            }
        }
    }
}
If you want multiple consumers to consume the same messages (like a broadcast), you can spawn them with different consumer groups, and also set auto.offset.reset to smallest ("earliest" in the new consumer) in the consumer config.
If you want multiple consumers to divide the work and consume in parallel, you should create at least as many partitions as consumers. One partition can be consumed by at most one consumer in a group, but one consumer can consume more than one partition.
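The broadcast case can be sketched as two consumer configs that differ only in group.id. This is a minimal sketch using the new KafkaConsumer config keys (where the reset value is spelled "earliest"); the broker address, group names, and class name are placeholders:

```java
import java.util.Properties;

public class BroadcastConsumerConfig {

    // Builds the config for one consumer; each distinct groupId keeps its own
    // committed offsets, so every group receives a full copy of the topic.
    static Properties consumerProps(String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", groupId);
        // Start from the beginning when the group has no committed offset yet
        // (the old high-level consumer spells this value "smallest")
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.IntegerDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        // Two consumers built from these configs are in different groups,
        // so both would see every message on the topic
        Properties first = consumerProps("groupA");
        Properties second = consumerProps("groupB");
        System.out.println(first.getProperty("group.id"));  // groupA
        System.out.println(second.getProperty("group.id")); // groupB
    }
}
```

Feeding each Properties object to its own KafkaConsumer (and subscribing both to the same topic) gives the broadcast behavior described above.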