Spring AWS Kinesis Binder ProvisionedThroughputExceededException

Posted 2020-05-05 18:18

I am using batch mode to pull records from a Kinesis stream, with the Spring AWS Kinesis binder.

Most of the time we are not able to pull messages from the stream; only occasionally do we get messages.

My config looks like this:

spring:
  cloud:
    stream:
      kinesis:
        binder:
          locks:
            leaseDuration: 30
            readCapacity: 1
            writeCapacity: 1
          checkpoint:
            readCapacity: 1
            writeCapacity: 1
        bindings:
          InStreamGroupOne:
            consumer:
              listenerMode: batch
              idleBetweenPolls: 30000
              recordsLimit: 5000
              consumer-backoff: 1000
      bindings:
        InStreamGroupOne:
          group: in-stream-group
          destination: stream-1
          content-type: application/json
        OutboundStreamOne:
          destination: stream-2
          content-type: application/json
        OutboundStreamTwo:
          destination: stream-3
          content-type: application/json
        OutboundStreamThree:
          destination: stream-4
          content-type: application/json

When I enable debug logging, I can see this exception:

Received error response: com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException: The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API. (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ProvisionedThroughputExceededException; 

I tried reducing the batch size to 150 and idleBetweenPolls to 1 second. I also raised readCapacity and writeCapacity to 10, but I get the same error. The changes I tried are shown below.
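For reference, here is how those changes map onto the configuration keys above (a sketch of what I tried, assuming the same binder version and property names as in my original config):

spring:
  cloud:
    stream:
      kinesis:
        binder:
          locks:
            readCapacity: 10       # raised from 1
            writeCapacity: 10      # raised from 1
          checkpoint:
            readCapacity: 10       # raised from 1
            writeCapacity: 10      # raised from 1
        bindings:
          InStreamGroupOne:
            consumer:
              listenerMode: batch
              idleBetweenPolls: 1000   # 1 second instead of 30000
              recordsLimit: 150        # batch size reduced from 5000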

From the AWS console, I can see that the SpringIntegrationLockRegistry table has crossed its read capacity threshold.

Can you please help us understand what's wrong?

It works sometimes and fails at other times.

1 Answer

Melony · 2020-05-05 18:38

Here is what you can do in regard to DynamoDB on AWS: How to solve throughput error for DynamoDB?
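If you stay on provisioned billing, the most direct fix is to give the tables the binder uses (SpringIntegrationLockRegistry in particular) more capacity. A minimal sketch, assuming the binder creates the tables from these properties; the numbers are illustrative and need to be sized to your observed consumption. If the tables already exist, as far as I know these values only take effect at table creation, so you would raise the throughput on the existing table itself (UpdateTable / AWS console), or switch it to on-demand (PAY_PER_REQUEST) billing, which usually makes ProvisionedThroughputExceededException go away without guessing capacity numbers:

spring:
  cloud:
    stream:
      kinesis:
        binder:
          locks:
            readCapacity: 25     # illustrative; SpringIntegrationLockRegistry is the table being throttled
            writeCapacity: 25
          checkpoint:
            readCapacity: 25
            writeCapacity: 25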

From the application perspective, you can play with options for the locks: https://github.com/spring-cloud/spring-cloud-stream-binder-aws-kinesis/blob/master/spring-cloud-stream-binder-kinesis-docs/src/main/asciidoc/overview.adoc#lockregistry

leaseDuration

The length of time that the lease for the lock will be granted for. If this is set to, for example, 30 seconds, then the lock will expire if the heartbeat is not sent for at least 30 seconds (which would happen if the box or the heartbeat thread dies, for example.)

Default: 20

heartbeatPeriod

How often to update DynamoDB to note that the instance is still running (recommendation is to make this at least 3 times smaller than the leaseDuration - for example heartBeatPeriod=1 second, leaseDuration=10 seconds could be a reasonable configuration, make sure to include a buffer for network latency.)

Default: 5

refreshPeriod

How long to wait before trying to get the lock again (if set to 10 seconds, for example, it would attempt to do so every 10 seconds)

Default: 1000
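Putting those options together, here is a sketch of a less chatty lock configuration (assuming heartbeatPeriod and refreshPeriod are exposed under the same locks section as leaseDuration, per the docs linked above). Heartbeats and lock-acquisition retries each hit DynamoDB, so spacing them out reduces the read/write load on SpringIntegrationLockRegistry:

spring:
  cloud:
    stream:
      kinesis:
        binder:
          locks:
            leaseDuration: 60      # seconds; the lock expires if no heartbeat arrives for this long (default 20)
            heartbeatPeriod: 15    # seconds; keep it several times smaller than leaseDuration (default 5)
            refreshPeriod: 10000   # how long to wait before retrying a lock (default 1000, per the docs above)
            readCapacity: 10
            writeCapacity: 10

The trade-off is that a longer leaseDuration means a dead consumer's shard lock takes longer to be taken over by another instance.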
