I've seen a lot of examples of using the high level consumer (consumer group) to consume a topic using many threads within the same process. Can you have multiple processes (on different machines) split the partitions and consume in parallel? If so, do you have any examples?
Tags: apache-kafka
The short answer is yes. With the high-level consumer, each thread handles one or more partitions, and ZooKeeper is used to coordinate the group. Since coordination goes through ZooKeeper, it's fine to spread the consumers across separate processes and machines. The Kafka wiki has an example using the high-level consumer; you can run that on multiple machines to see it in action. The high-level consumer will automatically rebalance partitions across consumers when they are added or removed. Remember that partitions define the level of parallelism for a topic, so if you have more consumer threads than partitions, some of those threads will simply sit idle.
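As a rough sketch of what the wiki example looks like, here is a minimal consumer using the 0.8-era high-level consumer API. The ZooKeeper address, topic name, and group id below are placeholders; running the same program on several machines with the same `group.id` causes the topic's partitions to be divided among them.

```java
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class GroupConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk-host:2181");   // placeholder ZooKeeper address
        props.put("group.id", "my-consumer-group");       // same group id on every machine
        props.put("zookeeper.session.timeout.ms", "6000");
        props.put("auto.commit.interval.ms", "1000");

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // One stream (thread) in this process; other processes/machines running
        // the same code in the same group get the remaining partitions.
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("my-topic", 1));

        ConsumerIterator<byte[], byte[]> it = streams.get("my-topic").get(0).iterator();
        while (it.hasNext()) {
            // Print each message's payload as a string
            System.out.println(new String(it.next().message()));
        }
    }
}
```

If you want more parallelism within a single process, raise the stream count in `createMessageStreams` and hand each returned `KafkaStream` to its own thread, keeping the total number of streams across all machines no larger than the number of partitions.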
It's also worth noting that Kafka does not provide any sort of distributed framework for running the consumer applications across machines. That's where systems like Storm or Spark are useful, since they can consume from Kafka and manage the processes doing the consuming. The folks behind Kafka also recently open-sourced a project called Samza, which provides higher-level Kafka-based stream processing on Hadoop/YARN.