I am using batch mode to pull records from a Kinesis stream, using the Spring Cloud Stream AWS Kinesis binder.
Most of the time we are not able to pull messages from the stream; only occasionally does it work.
My config looks like below:
spring:
  cloud:
    stream:
      kinesis:
        binder:
          locks:
            leaseDuration: 30
            readCapacity: 1
            writeCapacity: 1
          checkpoint:
            readCapacity: 1
            writeCapacity: 1
        bindings:
          InStreamGroupOne:
            consumer:
              listenerMode: batch
              idleBetweenPolls: 30000
              recordsLimit: 5000
              consumer-backoff: 1000
      bindings:
        InStreamGroupOne:
          group: in-stream-group
          destination: stream-1
          content-type: application/json
        OutboundStreamOne:
          destination: stream-2
          content-type: application/json
        OutboundStreamTwo:
          destination: stream-3
          content-type: application/json
        OutboundStreamThree:
          destination: stream-4
          content-type: application/json
When I enable debug logging, I can see this exception:
Received error response: com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException: The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ProvisionedThroughputExceededException;
I tried reducing the batch size (recordsLimit) to 150 and idleBetweenPolls to 1 second. I also updated readCapacity and writeCapacity to 10, but I get the same error.
From the AWS console, I can see that the SpringIntegrationLockRegistry table has crossed its read capacity threshold.
Can you please help us understand what's wrong? It works sometimes and fails at other times.
Here is what you can do with regard to DynamoDB on AWS: How to solve throughput error for dynamodb?
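The error message itself points at the UpdateTable API, and since your console shows SpringIntegrationLockRegistry crossing its read threshold, raising that table's provisioned throughput (or switching it to on-demand billing) is the quickest fix on the AWS side. A minimal AWS CLI sketch, using the table name from your logs and purely illustrative capacity values:

# Raise the provisioned throughput on the lock table (values are illustrative)
aws dynamodb update-table \
    --table-name SpringIntegrationLockRegistry \
    --provisioned-throughput ReadCapacityUnits=25,WriteCapacityUnits=25

# Or switch the table to on-demand billing so it scales with actual usage
aws dynamodb update-table \
    --table-name SpringIntegrationLockRegistry \
    --billing-mode PAY_PER_REQUEST

The same applies to the checkpoint table if it starts throttling too.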
From the application perspective, you can play with options for the locks: https://github.com/spring-cloud/spring-cloud-stream-binder-aws-kinesis/blob/master/spring-cloud-stream-binder-kinesis-docs/src/main/asciidoc/overview.adoc#lockregistry
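For example, here is a sketch using only the lock and checkpoint properties already present in your config (the numbers are illustrative, not tuned recommendations):

spring:
  cloud:
    stream:
      kinesis:
        binder:
          locks:
            leaseDuration: 60    # longer lease -> less frequent lock renewal traffic
            readCapacity: 10
            writeCapacity: 10
          checkpoint:
            readCapacity: 10
            writeCapacity: 10

Note that, as far as I know, these readCapacity/writeCapacity values are only applied when the binder creates the DynamoDB tables. If SpringIntegrationLockRegistry already exists, changing them in the application does not change the table's actual capacity, which would explain why bumping them to 10 had no effect for you: you still need to update the existing table itself, as in the CLI example above.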