Kafka consumer offsets out of range with no config

Posted 2019-04-22 18:52

Question:

I'm receiving an exception when starting my Kafka consumer.

org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions{test-0=29898318}

I'm using Kafka 0.9.0.0 with Java 7.

Answer 1:

You are trying to read offset 29898318 from partition 0 of topic test, and that offset is not currently available on the broker.

There could be two reasons for this:

  1. Partition 0 of your topic may simply not contain that many messages yet
  2. The message at offset 29898318 may already have been deleted because it fell outside the retention period

To avoid this you can do one of the following:

  1. Set the auto.offset.reset config to earliest or latest (the old Scala consumer uses the equivalent values smallest and largest); see the Kafka consumer configuration docs for details, and the short Java sketch after the command below
  2. Find the smallest offset still available for a topic partition by running the following Kafka command line tool

command:

bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list <broker-ip:9092> --topic <topic-name> --time -2
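
For option 1, here is a minimal Java sketch of the consumer configuration. The broker address and group id are placeholders, and the topic name is taken from the question; on the 0.9+ Java consumer the valid reset values are earliest, latest and none:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ResetPolicyExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-ip:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // placeholder group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // When the committed offset is out of range, "earliest" rewinds to the
        // oldest offset still on the broker; "latest" skips to the end of the log.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(Collections.singletonList("test"));
        ConsumerRecords<String, String> records = consumer.poll(1000L);
        System.out.println("fetched " + records.count() + " records");
        consumer.close();
    }
}

On client versions 0.10.1 and newer you can also query the oldest available offset programmatically with consumer.beginningOffsets(...), which gives the same information as the GetOffsetShell command above.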

Hope this helps!



Answer 2:

I hit this SO question while running a Kafka Streams application whose state store changelog topic had a specific config:

  • cleanup.policy=compact,delete
  • a retention of 4 days

When my application was down for more than 4 days, Kafka Streams still had a local checkpoint pointing to an offset that no longer existed on the broker (Kafka had removed it because it fell outside the retention window). The restore consumer is configured to fail in that case; it does not fall back to the earliest offset.
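
As a concrete illustration of such a changelog config, here is a hypothetical Streams DSL snippet. The topic name input-topic, the store name count-store, and the count aggregation are made up; only the per-store changelog overrides (cleanup.policy and retention.ms) reflect the setup described above:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

public class ChangelogConfigExample {
    public static void main(String[] args) {
        // Topic-level overrides applied to the store's backing changelog topic.
        Map<String, String> changelogConfig = new HashMap<String, String>();
        changelogConfig.put("cleanup.policy", "compact,delete");
        changelogConfig.put("retention.ms", String.valueOf(4L * 24 * 60 * 60 * 1000)); // 4 days

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey()
               .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("count-store")
                       .withLoggingEnabled(changelogConfig));

        Topology topology = builder.build();
        System.out.println(topology.describe());
    }
}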

Since I'm not interested in data older than 4 days, I used the Streams application reset tool to clear the changelog topic (an example invocation is sketched below).
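
An invocation of the reset tool looks roughly like this. All values are placeholders, and flag spelling differs slightly across Kafka versions (newer releases accept --bootstrap-server):

bin/kafka-streams-application-reset.sh --application-id <streams-application-id> --input-topics <input-topic> --bootstrap-servers <broker-ip:9092>

The tool deletes the application's internal topics (changelog and repartition topics) and resets the committed offsets on the input topics; you typically also call KafkaStreams#cleanUp() on the next start (or delete the state directory) so the local state is rebuilt from scratch.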