What happens when the shared MemoryAwareThreadPoolExecutor's threshold is reached?

Posted 2019-09-08 05:20

What happens to the channels when the MemoryAwareThreadPoolExecutor's per-channel or total threshold is reached? The MemoryAwareThreadPoolExecutor is wrapped in an ExecutionHandler that sits in every pipeline before the I/O-Handler.

My current state of information:

I found that channel.setReadable(false) is called. Does that mean all read operations on all channels are stopped, so incoming data will not be delivered to any pipeline? If I got that right, you should divide the code at the end of a pipeline into a non-blocking business handler and a blocking business handler, with an ExecutionHandler before the blocking one. Example: -> Decoder, Encoder, NonBlockingHandler, ExecutionHandler, I/O-Handler

That's why I think it would be better if messages still reached the last handler before the ExecutionHandler. If I am right, then messages that do not even need to be processed by the I/O-Handler will not reach the NonBlockingHandler until the ExecutionHandler's thread pool is below its threshold again.

I admit that this does not guarantee that messages are executed in the order they were received per channel, but let's assume that is not necessary here.
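For reference, the pipeline layout described in the question could be wired up roughly like this (a sketch assuming Netty 3.x; MyDecoder, MyEncoder, NonBlockingHandler, and BlockingBusinessHandler are hypothetical handler classes, and the thread-pool sizes are illustrative):

```java
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.handler.execution.ExecutionHandler;
import org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor;

public class MyPipelineFactory implements ChannelPipelineFactory {

    // One shared executor for all pipelines: 16 threads,
    // 1 MiB per-channel threshold, 16 MiB total threshold.
    private final ExecutionHandler executionHandler =
            new ExecutionHandler(
                    new OrderedMemoryAwareThreadPoolExecutor(16, 1 << 20, 16 << 20));

    public ChannelPipeline getPipeline() {
        ChannelPipeline pipeline = Channels.pipeline();
        pipeline.addLast("decoder", new MyDecoder());               // hypothetical
        pipeline.addLast("encoder", new MyEncoder());               // hypothetical
        pipeline.addLast("nonBlocking", new NonBlockingHandler());  // hypothetical
        pipeline.addLast("execution", executionHandler);            // shared instance
        pipeline.addLast("blocking", new BlockingBusinessHandler());// hypothetical
        return pipeline;
    }
}
```

Note that the ExecutionHandler instance is shared by every pipeline, which is what makes the total threshold apply across all channels.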

Best regards and cheers to Netty!

Tags: netty

2 Answers
等我变得足够好
Answer 2 · 2019-09-08 05:56

Channel.setReadable(false) will only affect the Channel on which it was called, and no other channels.

Evening l夕情丶
Answer 3 · 2019-09-08 06:10

When the per-channel threshold for a given channel is reached, channel.setReadable(false) is called, preventing further reads from that channel. Once enough queued data has been processed, channel.setReadable(true) is called and data can be read again. In the meantime, any unread data is buffered in the OS network stack or backs up to the sending host.

When the total threshold is reached, the I/O thread trying to queue the event is blocked until enough data has been processed. You have to be really careful with this, because it can lead to deadlock in the following situation:

  1. Channel (or channels) receives data faster than can be processed
  2. Channel (or channels) queue more than the total threshold limit, blocking the IO thread
  3. The thread pool thread writes some data back to the channel and waits for it to complete.

The thread pool thread is never released: its write can only be flushed by the I/O thread, but the I/O thread is blocked waiting for the executor queue to drain, which in turn requires the pool thread to finish. Neither can make progress.
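One way to break the cycle in step 3 is to never block on the write future from a pool thread and register a completion listener instead. A sketch, assuming Netty 3.x (the handler class and its process method are hypothetical):

```java
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.ChannelFutureListener;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

// Hypothetical blocking business handler that runs on an ExecutionHandler
// pool thread. It must NOT wait for the write to complete: the write is
// flushed by the I/O thread, which may itself be blocked trying to queue
// more work into the full executor.
public class BlockingBusinessHandler extends SimpleChannelUpstreamHandler {

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        Object response = process(e.getMessage()); // hypothetical blocking work

        ChannelFuture f = e.getChannel().write(response);
        // Wrong: f.await() here can deadlock once the total threshold is hit.
        // Right: react to completion asynchronously.
        f.addListener(new ChannelFutureListener() {
            public void operationComplete(ChannelFuture future) {
                // handle success/failure here without blocking a pool thread
            }
        });
    }

    private Object process(Object msg) { return msg; } // placeholder
}
```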

One other thing: unless the objects you queue through the thread pool are ChannelBuffers, you really need to provide a custom ObjectSizeEstimator implementation so the thread pool can track the memory thresholds properly.
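A minimal sketch of such an estimator, assuming Netty 3.x (MyMessage and its payloadLength method are hypothetical, and the 64-byte overhead is a guess):

```java
import org.jboss.netty.util.DefaultObjectSizeEstimator;

// Sizes decoded messages by their payload so the executor's per-channel
// and total thresholds reflect actual memory usage, falling back to the
// default estimator for everything else.
public class MyMessageSizeEstimator extends DefaultObjectSizeEstimator {

    @Override
    public int estimateSize(Object o) {
        if (o instanceof MyMessage) {
            // payload plus a rough per-object overhead (assumption)
            return ((MyMessage) o).payloadLength() + 64;
        }
        return super.estimateSize(o);
    }
}
```

The estimator is passed to the executor via the constructor overload that accepts an ObjectSizeEstimator, e.g. `new OrderedMemoryAwareThreadPoolExecutor(16, 1 << 20, 16 << 20, 30, TimeUnit.SECONDS, new MyMessageSizeEstimator(), Executors.defaultThreadFactory())`.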
