I'm receiving an exception when starting my Kafka consumer:

org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions {test-0=29898318}

I'm using Kafka 0.9.0.0 with Java 7.
So you are trying to access offset (29898318) in topic (test), partition (0), which is not available right now.

There could be two cases for this:

1. Partition 0 may not have that many messages.
2. The message at offset 29898318 might have already been deleted by the retention period.

To avoid this you can do one of the following:

1. Set the auto.offset.reset config to either earliest or latest (the new-consumer equivalents of the old smallest and largest). You can find more info regarding this here.
2. Get the smallest offset available for a topic partition by running the following Kafka command line tool:
bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list <broker-ip:9092> --topic <topic-name> --time -2
Hope this helps!
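To illustrate the first option, here is a minimal sketch of a consumer configuration with the reset policy set. It uses plain java.util.Properties with string keys so it runs without the Kafka client on the classpath; the broker address and group id are placeholders:

```java
import java.util.Properties;

public class ConsumerConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address; point this at your cluster.
        props.put("bootstrap.servers", "localhost:9092");
        // Placeholder consumer group id.
        props.put("group.id", "test-group");
        // With the new consumer API (0.9+), valid values are "earliest",
        // "latest", or "none". "none" is what raises the
        // OffsetOutOfRangeException above when the stored offset is gone.
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // In a real application these props would be passed to
        // new KafkaConsumer<>(props).
        System.out.println(props.getProperty("auto.offset.reset"));
    }
}
```

With "earliest", a consumer whose committed offset has aged out of the log simply restarts from the oldest retained message instead of failing.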
I hit this SO question when running a Kafka Streams state store with a specific changelog topic config:
cleanup.policy=compact,delete
When my application was down for more than 4 days, Kafka Streams still had a local snapshot pointing to an offset that no longer existed (removed by Kafka because it fell outside the retention window). The restore consumer is configured to fail in those cases; it does not fall back to the earliest offset.

Since I'm not interested in data older than 4 days, I used the streams-application-reset tool to clear the changelog topic.
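For reference, the reset looks roughly like this. The application id and input topic are placeholders, and all instances of the Streams application must be stopped before running it; the tool also deletes the application's internal topics, which includes the changelog:

```shell
# <application-id> and my-input-topic are placeholders for your app's values.
bin/kafka-streams-application-reset.sh \
  --application-id <application-id> \
  --bootstrap-servers localhost:9092 \
  --input-topics my-input-topic
```

Each instance should also wipe its local state store (for example via KafkaStreams#cleanUp() before restarting), so the restored state matches the now-reset changelog.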