Broker Network flooded with unconsumed ActiveMQ.Advisory.TempQueue messages

Posted 2019-04-08 04:48

Question:

I'm currently investigating a memory problem in my broker network. According to JConsole the ActiveMQ.Advisory.TempQueue is taking up 99% of the configured memory when the broker starts to block messages.

A few details about the config

Default config for the most part. One open stomp+nio connector, one open openwire connector. All brokers form a hypercube (one one-way connection to every other broker, which is easier to auto-generate). No flow control.

Problem details

The web console shows something like 1974234 enqueued and 45345 dequeued messages at 30 consumers (6 brokers, one consumer, and the rest are clients that use the Java connector). As far as I know the dequeue count should not be much smaller than enqueued * consumers, so in my case a big bunch of advisories is not consumed and starts to fill my temp message space (currently I have several GB configured as temp space).

Since no client actively uses temp queues I find this very strange. After taking a look at the temp queue I'm even more confused. Most of the messages look like this (msg.toString):

ActiveMQMessage {commandId = 0, responseRequired = false, messageId = ID:srv007210-36808-1318839718378-1:1:0:0:203650, originalDestination = null, originalTransactionId = null, producerId = ID:srv007210-36808-1318839718378-1:1:0:0, destination = topic://ActiveMQ.Advisory.TempQueue, transactionId = null, expiration = 0, timestamp = 0, arrival = 0, brokerInTime = 1318840153501, brokerOutTime = 1318840153501, correlationId = null, replyTo = null, persistent = false, type = Advisory, priority = 0, groupID = null, groupSequence = 0, targetConsumerId = null, compressed = false, userID = null, content = null, marshalledProperties = org.apache.activemq.util.ByteSequence@45290155, dataStructure = DestinationInfo {commandId = 0, responseRequired = false, connectionId = ID:srv007210-36808-1318839718378-2:2, destination = temp-queue://ID:srv007211-47019-1318835590753-11:9:1, operationType = 1, timeout = 0, brokerPath = null}, redeliveryCounter = 0, size = 0, properties = {originBrokerName=broker.coremq-behaviortracking-675-mq-01-master, originBrokerId=ID:srv007210-36808-1318839718378-0:1, originBrokerURL=stomp://srv007210:61612}, readOnlyProperties = true, readOnlyBody = true, droppable = false}

After seeing these messages I have several questions:

  1. Do I understand correctly that the origin of the message is a stomp connection?
  2. If yes, how can a stomp connection create temp queues?
  3. Is there a simple reason why the advisories are not consumed?

Currently I have sort of postponed the problem by deactivating the bridgeTempDestinations property on the network connectors (see the sketch after the questions below). This way the messages are not spread and they fill the temp space much more slowly. If I cannot fix the source of these messages, I would at least like to stop them from filling the store:

  1. Can I drop these unconsumed messages after a certain time?
  2. What consequences can this have?
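
For reference, here is a minimal sketch of the bridgeTempDestinations workaround mentioned above, assuming the broker is configured programmatically (broker name, ports, and the static peer URI are placeholders; the same property can be set as bridgeTempDestinations="false" on a networkConnector element in activemq.xml):

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.network.NetworkConnector;

    public class BrokerWithoutTempBridge {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();
            broker.setBrokerName("broker-01");                 // placeholder name
            broker.addConnector("tcp://0.0.0.0:61616");        // openwire connector
            broker.addConnector("stomp+nio://0.0.0.0:61612");  // stomp+nio connector

            // One one-way network connector per peer of the hypercube (placeholder URI).
            NetworkConnector nc =
                    broker.addNetworkConnector("static:(tcp://other-broker:61616)");
            // Do not forward temp destinations (and their advisories) across the bridge.
            nc.setBridgeTempDestinations(false);

            broker.start();
        }
    }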

UPDATE: I monitored my cluster some more and found out that the messages are consumed. They are enqueued and dispatched, but the consumers (the other cluster nodes as well as Java consumers that use the ActiveMQ lib) fail to acknowledge the messages, so they stay in the dispatched messages queue and this queue grows and grows.

Answer 1:

This is an old thread but in case somebody runs into it having the same problem, you might want to check out this post: http://forum.spring.io/forum/spring-projects/integration/111989-jms-outbound-gateway-temporary-queues-never-deleted

The problem in that link sounds similar, i.e. temp queues producing a large volume of advisory messages. In my case, we were using temp queues to implement synchronous request/response messaging, but the volume of advisory messages caused ActiveMQ to spend most of its time in GC and eventually throw a GC Overhead Limit Exceeded exception. This was on v5.11.1. Even though we closed the connection, session, producer, and consumer, the temp queue would not be GC'd and would continue receiving advisory messages.

The solution was to explicitly delete the temp queues when cleaning up the other resources (see https://docs.oracle.com/javaee/7/api/javax/jms/TemporaryQueue.html).
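
A minimal sketch of that cleanup, assuming a plain JMS request/reply client against ActiveMQ (broker URL, request queue name, and timeout are placeholders):

    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class TempQueueRequestReply {
        public static String request(String brokerUrl, String requestQueue, String text) throws JMSException {
            ConnectionFactory factory = new ActiveMQConnectionFactory(brokerUrl);
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // Creating the temp queue is what produces the ActiveMQ.Advisory.TempQueue advisory.
            TemporaryQueue replyQueue = session.createTemporaryQueue();
            MessageConsumer consumer = null;
            try {
                MessageProducer producer = session.createProducer(session.createQueue(requestQueue));
                TextMessage request = session.createTextMessage(text);
                request.setJMSReplyTo(replyQueue);
                producer.send(request);

                consumer = session.createConsumer(replyQueue);
                TextMessage reply = (TextMessage) consumer.receive(5000); // placeholder timeout
                return reply == null ? null : reply.getText();
            } finally {
                if (consumer != null) {
                    consumer.close();   // close receivers before deleting the temp queue
                }
                replyQueue.delete();    // explicit delete, in addition to closing the resources
                session.close();
                connection.close();
            }
        }
    }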



Answer 2:

If you are not using this advisory topic, you may want to turn it off, as suggested at http://activemq.2283324.n4.nabble.com/How-to-disable-advisory-for-gt-topic-ActiveMQ-Advisory-TempQueue-td2356134.html
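
For example, a minimal sketch of turning advisories off on a programmatically configured broker (equivalent to advisorySupport="false" on the broker element in activemq.xml; the broker name and connector URI are placeholders):

    import org.apache.activemq.broker.BrokerService;

    public class BrokerWithoutAdvisories {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();
            broker.setBrokerName("broker-01");          // placeholder name
            broker.addConnector("tcp://0.0.0.0:61616"); // placeholder connector
            // Disable all advisory topics, including ActiveMQ.Advisory.TempQueue.
            broker.setAdvisorySupport(false);
            broker.start();
        }
    }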

Dropping the advisory messages will not have any consequences, since those are just messages meant for system health analysis and statistics.



Tags: activemq