Killing node with __consumer_offsets leads to no message consumption at consumers


I have a 3-node (node0, node1, node2) Kafka cluster (broker0, broker1, broker2) with replication factor 2, and ZooKeeper (the one packaged with the Kafka tar) running on a separate node (node4).

I started broker 0 after starting ZooKeeper, and then started the remaining brokers. The broker 0 logs show it loading __consumer_offsets partitions, so it appears they are stored on broker 0. Below are sample logs:

Kafka Version: kafka_2.10-0.10.2.0

    [2017-06-30 10:50:47,381] INFO [GroupCoordinator 0]: Loading group metadata for console-consumer-85124 with generation 2 (kafka.coordinator.GroupCoordinator)
    [2017-06-30 10:50:47,382] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from __consumer_offsets-41 in 23 milliseconds. (kafka.coordinator.GroupMetadataManager)
    [2017-06-30 10:50:47,382] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from __consumer_offsets-44 (kafka.coordinator.GroupMetadataManager)
    [2017-06-30 10:50:47,387] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from __consumer_offsets-44 in 5 milliseconds. (kafka.coordinator.GroupMetadataManager)
    [2017-06-30 10:50:47,387] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from __consumer_offsets-47 (kafka.coordinator.GroupMetadataManager)
    [2017-06-30 10:50:47,398] INFO [Group Metadata Manager on Broker 0]: Finished loading offsets from __consumer_offsets-47 in 11 milliseconds. (kafka.coordinator.GroupMetadataManager)
    [2017-06-30 10:50:47,398] INFO [Group Metadata Manager on Broker 0]: Loading offsets and group metadata from __consumer_offsets-1 (kafka.coordinator.GroupMetadataManager)

Also, I can see GroupCoordinator messages in the same broker 0 logs.

    [2017-06-30 14:35:22,874] INFO [GroupCoordinator 0]: Preparing to restabilize group console-consumer-34472 with old generation 1 (kafka.coordinator.GroupCoordinator)
    [2017-06-30 14:35:22,877] INFO [GroupCoordinator 0]: Group console-consumer-34472 with generation 2 is now empty (kafka.coordinator.GroupCoordinator)
    [2017-06-30 14:35:25,946] INFO [GroupCoordinator 0]: Preparing to restabilize group console-consumer-6612 with old generation 1 (kafka.coordinator.GroupCoordinator)
    [2017-06-30 14:35:25,946] INFO [GroupCoordinator 0]: Group console-consumer-6612 with generation 2 is now empty (kafka.coordinator.GroupCoordinator)
    [2017-06-30 14:35:38,326] INFO [GroupCoordinator 0]: Preparing to restabilize group console-consumer-30165 with old generation 1 (kafka.coordinator.GroupCoordinator)
    [2017-06-30 14:35:38,326] INFO [GroupCoordinator 0]: Group console-consumer-30165 with generation 2 is now empty (kafka.coordinator.GroupCoordinator)
    [2017-06-30 14:43:15,656] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 3 milliseconds. (kafka.coordinator.GroupMetadataManager)
    [2017-06-30 14:53:15,653] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)

While testing fault tolerance for the cluster using kafka-console-consumer.sh and kafka-console-producer.sh, I see that when broker 1 or broker 2 is killed, the consumer still receives new messages from the producer, and rebalancing happens correctly.

However, killing broker 0 stops all consumption of new and old messages, no matter how many consumers are running. Below is the state of the topic before and after broker 0 is killed.

Before

Topic:test-topic    PartitionCount:3    ReplicationFactor:2 Configs:
    Topic: test-topic   Partition: 0    Leader: 2   Replicas: 2,0   Isr: 0,2
    Topic: test-topic   Partition: 1    Leader: 0   Replicas: 0,1   Isr: 0,1
    Topic: test-topic   Partition: 2    Leader: 1   Replicas: 1,2   Isr: 1,2

After

Topic:test-topic    PartitionCount:3    ReplicationFactor:2 Configs:
    Topic: test-topic   Partition: 0    Leader: 2   Replicas: 2,0   Isr: 2
    Topic: test-topic   Partition: 1    Leader: 1   Replicas: 0,1   Isr: 1
    Topic: test-topic   Partition: 2    Leader: 1   Replicas: 1,2   Isr: 1,2
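
For reference, the before/after state above is the output of the topic describe tool shipped with the Kafka tar; a command along these lines (with the ZooKeeper address as a placeholder) produces it:

    bin/kafka-topics.sh --zookeeper XXX:2181 --describe --topic test-topic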

The following WARN messages are seen in the consumer logs after broker 0 is killed:

    [2017-06-30 14:19:17,155] WARN Auto-commit of offsets {test-topic-2=OffsetAndMetadata{offset=4, metadata=''}, test-topic-0=OffsetAndMetadata{offset=5, metadata=''}, test-topic-1=OffsetAndMetadata{offset=4, metadata=''}} failed for group console-consumer-34472: Offset commit failed with a retriable exception. You should retry committing offsets. (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
    [2017-06-30 14:19:10,542] WARN Auto-commit of offsets {test-topic-2=OffsetAndMetadata{offset=4, metadata=''}, test-topic-0=OffsetAndMetadata{offset=5, metadata=''}, test-topic-1=OffsetAndMetadata{offset=4, metadata=''}} failed for group console-consumer-30165: Offset commit failed with a retriable exception. You should retry committing offsets. (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)

Broker properties. The remaining default properties are unchanged.

broker.id=0
delete.topic.enable=true

auto.create.topics.enable=false
listeners=PLAINTEXT://XXX:9092
advertised.listeners=PLAINTEXT://XXX:9092
log.dirs=/tmp/kafka-logs-test1
num.partitions=3
zookeeper.connect=XXX:2181

Producer properties. The remaining default properties are unchanged.

bootstrap.servers=XXX,XXX,XXX
compression.type=snappy

Consumer properties. The remaining default properties are unchanged.

zookeeper.connect=XXX:2181
zookeeper.connection.timeout.ms=6000
group.id=test-consumer-group

As far as I understand, if the node hosting the GroupCoordinator and the __consumer_offsets partitions dies, then the consumers cannot resume normal operation even though new leaders are elected for the topic partitions.
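
For what it's worth, this can be checked while broker 0 is down by describing the internal topic itself (a sketch using the script from the Kafka tar, with a placeholder ZooKeeper address; the coordinator for a group is the leader of the __consumer_offsets partition that the group id hashes to):

    bin/kafka-topics.sh --zookeeper XXX:2181 --describe --topic __consumer_offsets

If every __consumer_offsets partition lists only broker 0 under Replicas, no other broker can take over the coordinator role when broker 0 dies.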

I have seen something similar described in another post, which suggests restarting the dead broker node. However, in a production environment that would mean delayed message consumption until broker 0 is restarted, despite having more nodes available.

Q1: How can the above situation be mitigated?

Q2: Is there a way to move the GroupCoordinator and the __consumer_offsets partitions to another node?

Any suggestions/help is appreciated.

1 Answer

Aperson

Check the replication factor on the __consumer_offsets topic. If it's not 3 then that's your problem.

Run the following command and check whether the first line of the output says "ReplicationFactor:1" or "ReplicationFactor:3":

    kafka-topics --zookeeper localhost:2181 --describe --topic __consumer_offsets

This is a common problem when doing trials: you first set up a single node, and the __consumer_offsets topic gets created with a replication factor of 1. Later, when you expand to 3 nodes, you forget to change the topic-level settings on this existing topic, so even though the topics you are producing to and consuming from are fault tolerant, the offsets topic is still stuck on broker 0 only.
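
This is not part of the original answer, but here is a sketch of one way to repair the existing internal topic, assuming the default 50 partitions for __consumer_offsets, broker ids 0/1/2, and the reassignment tool shipped with the Kafka tar (adjust the ZooKeeper address and partition count to your cluster). It builds a reassignment plan that puts 3 replicas of every __consumer_offsets partition across the brokers, then executes it:

    # Build a reassignment plan: 3 replicas per partition, preferred leaders
    # rotated across brokers 0, 1 and 2. The partition count 50 is the default
    # (offsets.topic.num.partitions); check your cluster before running this.
    {
      echo '{"version":1,"partitions":['
      for p in $(seq 0 49); do
        sep=$([ "$p" -lt 49 ] && echo "," || true)
        echo "  {\"topic\":\"__consumer_offsets\",\"partition\":$p,\"replicas\":[$((p%3)),$(((p+1)%3)),$(((p+2)%3))]}$sep"
      done
      echo ']}'
    } > /tmp/offsets-rf3.json

    bin/kafka-reassign-partitions.sh --zookeeper XXX:2181 \
      --reassignment-json-file /tmp/offsets-rf3.json --execute

    # Re-run with --verify until it reports the reassignment as completed,
    # then describe the topic again to confirm ReplicationFactor:3.
    bin/kafka-reassign-partitions.sh --zookeeper XXX:2181 \
      --reassignment-json-file /tmp/offsets-rf3.json --verify

To avoid this in the first place, the broker-side setting offsets.topic.replication.factor (set it to 3 on all brokers) controls how __consumer_offsets is created on first use; on this Kafka version the topic can end up with fewer replicas than configured if fewer brokers are alive at creation time, and an existing topic keeps whatever replication factor it was created with.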
