Does Kafka support request/response messaging?

Published 2020-02-26 04:12

I am investigating Kafka 0.9 as a hobby project and have completed a few "Hello World" type examples.

I have got to thinking about real-world Kafka applications based on request-response messaging in general, and more specifically about how to link a Kafka request message to its response message.

I was thinking along the lines of using a generated UUID as the request message key and employing this same UUID as the key of the associated response message, much like WebSphere MQ's message correlation ID mechanism.

My end-to-end process would be:

1. The Kafka client generates a random UUID and sends a single Kafka request message, keyed with that UUID.
2. The server consumes this request message and extracts and stores the request UUID value.
3. The server completes a business process using the message payload.
4. The server responds with a response message keyed with the stored UUID from the request.
5. The Kafka client polls the response topic until it either times out or retrieves a message keyed with the original request UUID.
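In rough code, steps 1 and 5 would look something like the sketch below. This uses the Java client; the topic names `requests` and `responses`, the payload, and the per-client group ID are just placeholders I made up for illustration.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.UUID;

public class RequestingClient {

    public static void main(String[] args) {
        // Step 1: the correlation key is a freshly generated UUID.
        String correlationId = UUID.randomUUID().toString();

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", StringSerializer.class.getName());
        producerProps.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            // Send the request keyed with the generated UUID.
            producer.send(new ProducerRecord<>("requests", correlationId, "my request payload"));
        }

        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        // A consumer group private to this client, so its committed offsets
        // do not affect any other client reading the same response topic.
        consumerProps.put("group.id", "client-" + correlationId);
        consumerProps.put("auto.offset.reset", "earliest");
        consumerProps.put("key.deserializer", StringDeserializer.class.getName());
        consumerProps.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(Collections.singletonList("responses"));

            // Step 5: poll until the matching response arrives or the deadline passes.
            long deadline = System.currentTimeMillis() + 30_000;
            while (System.currentTimeMillis() < deadline) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    if (correlationId.equals(record.key())) {
                        System.out.println("Got response: " + record.value());
                        return;
                    }
                    // Responses for other clients are skipped, not removed from the topic.
                }
            }
            System.out.println("Timed out waiting for response " + correlationId);
        }
    }
}
```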

What I am concerned about is that my Kafka consumer's polling will consume other clients' messages from the response topic and advance the offsets, making those other clients fail.

Am I trying to apply Kafka in a use case it was never designed for?

Is it possible to implement request/response messaging in Kafka?

4 Answers
乱世女痞 · 2020-02-26 04:38

I think you need a well-defined shard key for the service that invokes the request. Your request should contain this shard key and the name of the topic where the response should be posted. You should also create some sort of state machine, so that when a message regarding your task arrives you transition to the next state. This would be a strictly asynchronous design.
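As a rough sketch, the request envelope and state machine could look something like this; the class, field, and state names are just hypothetical examples, not a fixed API.

```java
// Hypothetical request envelope carrying the shard key and reply topic,
// plus a minimal state machine for the asynchronous task lifecycle.
public class PendingRequest {

    enum State { SENT, PROCESSING, COMPLETED, TIMED_OUT }

    final String correlationId;   // UUID generated by the caller
    final String shardKey;        // identifies the calling service instance
    final String replyTopic;      // where the server should post the response
    private State state = State.SENT;

    PendingRequest(String correlationId, String shardKey, String replyTopic) {
        this.correlationId = correlationId;
        this.shardKey = shardKey;
        this.replyTopic = replyTopic;
    }

    // Transition when a message regarding this task arrives.
    void onMessage(State next) {
        this.state = next;
    }

    State state() {
        return state;
    }
}
```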

Root(大扎) · 2020-02-26 04:43

Even though Kafka provides convenience methods to persist the committed offsets for a given consumer group, you're not required to use that behavior and can write your own if you feel the need. Even so, the use of Kafka the way you've described it is a bit awkward for the use case as each client needs to repeatedly search the topic for a specific response. That's inefficient at best.

You could break the problem into two parts, continuing to use Kafka to deliver requests to and responses from your server. The only piece you'd need to add would be some sort of API layer that your clients talk to and which hides the Kafka-specific logic from your clients. This layer would need a local DB (relational or NoSQL) that could store responses by UUID, making it very fast and easy for the API to answer whether a response is available for a specific UUID.
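Something along these lines, as a sketch only: I use an in-memory map where you would use a real DB, and the topic name, group ID, and method names are my own placeholders.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Optional;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ResponseStore implements Runnable {

    // Stand-in for the local relational/NoSQL store, keyed by the request UUID.
    private final ConcurrentMap<String, String> responsesByUuid = new ConcurrentHashMap<>();

    // The API method clients call instead of touching Kafka directly.
    public Optional<String> findResponse(String uuid) {
        return Optional.ofNullable(responsesByUuid.get(uuid));
    }

    @Override
    public void run() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "response-store");   // single consumer group owned by the API layer
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("responses"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    // Index every response by its UUID key so lookups are a simple get().
                    responsesByUuid.put(record.key(), record.value());
                }
            }
        }
    }
}
```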

We Are One · 2020-02-26 04:43

I never tried this, but in theory: before producing any requests, produce some seed messages keyed with the numbers 0 through the number of partitions of the answer topic. Since your producers are also consumers of that topic, every producer should receive at least one of those seed messages and can store the key it received. Each producer then publishes its requests together with the UUID and that stored key. After processing, the consumer publishes the answer (on the answer topic) with the UUID, keyed with the same key that was sent with the request, so the answer will be received by the same producer that sent the request, since all messages with the same key are published to the same partition.
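Roughly, the seeding and discovery could look like the sketch below. I send the seed markers explicitly to each partition rather than relying on key hashing to cover every partition, which the description above glosses over; the topic name, key, and method names are just assumptions.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ReplyPartitionDiscovery {

    // One-off seeding: put one marker message into every partition of the answer topic.
    static void seedAnswerTopic(KafkaProducer<String, String> producer, String answerTopic) {
        int partitions = producer.partitionsFor(answerTopic).size();
        for (int p = 0; p < partitions; p++) {
            // Send explicitly to partition p; the value just echoes the partition number.
            producer.send(new ProducerRecord<>(answerTopic, p, "seed", Integer.toString(p)));
        }
    }

    // Each requester is also a consumer of the answer topic; the marker it receives
    // tells it which partition it owns, and it can advertise that in its requests.
    static int discoverOwnPartition(KafkaConsumer<String, String> consumer, String answerTopic) {
        consumer.subscribe(Collections.singletonList(answerTopic));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            for (ConsumerRecord<String, String> record : records) {
                if ("seed".equals(record.key())) {
                    return record.partition();   // a partition assigned to this requester
                }
            }
        }
    }
}
```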

何必那么认真 · 2020-02-26 05:00

Easier: you could simply record in ZooKeeper that UUID X should be answered on partition Y, and have the producer that sent that UUID consume partition Y. Does that make sense?
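A very rough sketch of that idea, assuming a znode path of my own invention (`/reply-routes/<uuid>`, whose parent must already exist) and a `responses` topic name; this is only an illustration, not a worked-out design.

```java
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

import java.nio.charset.StandardCharsets;
import java.util.Collections;

public class ZkReplyRouting {

    // Requester side: record "UUID X is answered on partition Y" and listen only on that partition.
    static void registerAndListen(ZooKeeper zk, KafkaConsumer<String, String> consumer,
                                  String uuid, int replyPartition) throws Exception {
        zk.create("/reply-routes/" + uuid,
                  Integer.toString(replyPartition).getBytes(StandardCharsets.UTF_8),
                  ZooDefs.Ids.OPEN_ACL_UNSAFE,
                  CreateMode.EPHEMERAL);   // goes away with the requester's session

        // Consume exactly the partition this requester has claimed for its replies.
        consumer.assign(Collections.singletonList(new TopicPartition("responses", replyPartition)));
    }

    // Server side: look up which partition the reply for this UUID should be produced to.
    static int lookupReplyPartition(ZooKeeper zk, String uuid) throws Exception {
        byte[] data = zk.getData("/reply-routes/" + uuid, false, null);
        return Integer.parseInt(new String(data, StandardCharsets.UTF_8));
    }
}
```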
