Handling broker down in Kafka

Posted 2019-06-07 01:12

Question:

I'm using the Kafka producer in async mode, but when all brokers are down it behaves synchronously and blocks until metadata.fetch.timeout.ms expires, which is 60 seconds in my case. My first question: is this normal behaviour, or am I doing something wrong?

Since transactions in my logic should finish within 100 ms at most, this timeout is a very long delay for me. Setting metadata.fetch.timeout.ms to 10 ms might solve my problem, but I'm not sure how this affects my system. Could it cause a bottleneck or high CPU consumption somewhere?
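Roughly what I mean is something like the following sketch. The broker list, topic, and class name are placeholders, and depending on the client version the relevant setting may be max.block.ms rather than metadata.fetch.timeout.ms:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FastFailProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker list and topic are placeholders.
        props.put("bootstrap.servers", "broker1:9092,broker2:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Cap how long send() may block waiting for metadata when no broker answers.
        props.put("metadata.fetch.timeout.ms", "100"); // older clients
        props.put("max.block.ms", "100");              // newer clients use this name instead

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"), (metadata, exception) -> {
                if (exception != null) {
                    // With all brokers down, this fires after roughly the configured timeout.
                    System.err.println("Send failed: " + exception.getMessage());
                }
            });
        }
    }
}
```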

Another possible solution is producing messages through an ExecutorService, which makes the producing truly asynchronous, but I don't want to make things more complex. Has anyone tried this before?
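Something like this is what I have in mind (the class name is illustrative, and the producer would be the one configured above):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AsyncSender {
    private final ExecutorService pool = Executors.newSingleThreadExecutor();
    private final KafkaProducer<String, String> producer;

    public AsyncSender(KafkaProducer<String, String> producer) {
        this.producer = producer;
    }

    /** Hands the (potentially blocking) send() off to a worker thread so the caller returns immediately. */
    public void sendAsync(String topic, String key, String value) {
        pool.submit(() -> producer.send(new ProducerRecord<>(topic, key, value)));
    }
}
```

My concern is that if the brokers stay down, submitted tasks just pile up in the executor's unbounded queue, so a bounded queue with a rejection policy would probably be needed.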

My last question: maybe I could use a switch mechanism to disable producing to Kafka when all brokers are down and re-enable it when they come back up. Is there any heartbeat functionality in Kafka for this?
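The switch I'm picturing is just a flag checked before each send, toggled by whatever broker-availability check ends up being used; names here are illustrative:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class GuardedProducer {
    private final AtomicBoolean brokersUp = new AtomicBoolean(true);
    private final KafkaProducer<String, String> producer;

    public GuardedProducer(KafkaProducer<String, String> producer) {
        this.producer = producer;
    }

    /** Flipped from the outside by a broker-availability check. */
    public void setBrokersUp(boolean up) {
        brokersUp.set(up);
    }

    /** Skips the send entirely while the cluster is marked down, so nothing blocks on metadata. */
    public void send(String topic, String key, String value) {
        if (!brokersUp.get()) {
            return; // or buffer/log the message, depending on delivery guarantees
        }
        producer.send(new ProducerRecord<>(topic, key, value));
    }
}
```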

Thanks.

Answer 1:

The best way to do this is to hook directly into ZooKeeper. I'm not sure what language you use, but there should be a ZooKeeper client available. I use Node, which has node-zookeeper-client. In Node, you first call createClient(), then call getChildren() on the ZooKeeper path /brokers/ids. At least in Node, you can register a watcher that fires every time the array of ids changes. When there are no children, all brokers are down; as long as there are any children, at least one broker is up.
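Since you seem to be on the Java client, a rough equivalent using the plain ZooKeeper Java client might look like the sketch below. The connection string is a placeholder (point it at the same ZooKeeper ensemble Kafka uses), and the class name is illustrative. Each live broker registers an ephemeral node under /brokers/ids, so the children of that path reflect the live broker set:

```java
import java.util.List;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class BrokerWatcher {
    public static void main(String[] args) throws Exception {
        // Connection string is a placeholder.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 10000, event -> { });
        watchBrokers(zk);
        Thread.sleep(Long.MAX_VALUE); // keep the process alive to receive watch events
    }

    static void watchBrokers(ZooKeeper zk) throws Exception {
        // getChildren registers a one-shot watch; re-register it every time it fires.
        List<String> ids = zk.getChildren("/brokers/ids", event -> {
            if (event.getType() == Watcher.Event.EventType.NodeChildrenChanged) {
                try {
                    watchBrokers(zk);
                } catch (Exception ignored) { }
            }
        });
        System.out.println(ids.isEmpty() ? "All brokers are down" : "Live brokers: " + ids);
    }
}
```

You could then flip your producing switch from inside that callback instead of printing.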