KahaDB db.data and ActiveMQ 5.6 - java

I am looking for answers to the following questions related to KahaDB.
I have an app that uses ActiveMQ; it receives about 500,000 to 1 million messages, which are written to an ActiveMQ queue and picked up by consumers.
I observed that the db.data file size varies anywhere from a few KB to almost 600 or 700 MB.
I noticed that when the db.data file grows, messages get stuck, forcing me to shut down and restart my app so that new consumers connect to the broker and drain all the messages.
Ideally, what should the size of db.data be? If an app produces and consumes at the same time, irrespective of the number of messages, I would expect db.data to stay small.
Why does the file sometimes grow, and what triggers db.data to grow?
I have ActiveMQ 5.6 and I see this issue at least 2 or 3 times a month. On the brokers that are running smoothly, the db.data file stays in the KB range, not at 500 MB or 700 MB.
Here is the sample activemq.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:amq="http://activemq.apache.org/schema/core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<!-- Allows us to use system properties as variables in this configuration file -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<value>file:${activemq.conf}/credentials.properties</value>
</property>
</bean>
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" dataDirectory="/foo/activemq_5.6/data">
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" producerFlowControl="true" memoryLimit="1mb">
<pendingSubscriberPolicy>
<vmCursor />
</pendingSubscriberPolicy>
</policyEntry>
<policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb" useCache="false" />
</policyEntries>
</policyMap>
</destinationPolicy>
<managementContext>
<managementContext createConnector="true" />
</managementContext>
<persistenceAdapter>
<kahaDB directory="/foo/activemq_5.6/data/kahadb" />
</persistenceAdapter>
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage limit="64 mb" />
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb" />
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb" />
</tempUsage>
</systemUsage>
</systemUsage>
<transportConnectors>
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616" />
</transportConnectors>
</broker>
<import resource="jetty.xml" />
</beans>

Generally the kahadb directory holds journal segments for messages that haven't been consumed yet. If any message in a segment is still unconsumed, the whole segment is kept. So if you have nothing in any queue, you generally won't have much of anything in the kahadb directory.
If you have the Jetty web console enabled, you can use it to see which queues have messages in them.
If you sometimes have queues that are consumed slowly, you probably want to lower the segment size from its default (around 32 MB) to something smaller, to avoid filling up your disk or hitting the storage limit set in the config file.
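For illustration, a minimal sketch of lowering the journal segment size via KahaDB's journalMaxFileLength attribute (the 8mb value is only an example, not a recommendation):
<persistenceAdapter>
<!-- smaller journal files mean that a handful of unconsumed messages pin less disk space -->
<kahaDB directory="/foo/activemq_5.6/data/kahadb" journalMaxFileLength="8mb" />
</persistenceAdapter>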

Related

ActiveMQ with JDBC Persistence results in OutOfMemory on the broker

I'm using ActiveMQ 5.16.0; the broker is started with a 512 MB heap size (-Xms512m -Xmx512m).
Configuration file activemq.xml looks like this:
<bean id="postgresql-ds" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="org.postgresql.Driver" />
<property name="url" value="jdbc:postgresql://localhost:5432/postgres?currentSchema=amq" />
<property name="username" value="..." />
<property name="password" value="..." />
</bean>
(...)
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage percentOfJvmHeap="70" />
</memoryUsage>
<storeUsage>
<storeUsage limit="2 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="5 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
(...)
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry queue=">" producerFlowControl="true" memoryLimit="50 mb" maxPageSize="100" maxBrowsePageSize="100" lazyDispatch="true"></policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
I'm using a single producer which pushes 500 durable messages, about 1 MB each, into a single queue. After the producer finishes, everything seems to be OK: 500 records are in the activemq_msgs table and the ActiveMQ web console shows about 9% memory used, which (if my calculations are correct) should be about right:
70% * 512MB = 358.4MB - total available memory
70% * 50MB = 35MB - total memory available for single queue
35MB / 358.4MB * 100 = ~9% - memory used
Now, the problem occurs when I restart the broker. Shortly after AMQ is started I'm getting an exception:
org.postgresql.util.PSQLException: Ran out of memory retrieving query results.
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2255)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:322)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:481)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:401)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:164)
at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:114)
at org.apache.commons.dbcp2.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:122)
at org.apache.commons.dbcp2.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:122)
at org.apache.activemq.store.jdbc.adapter.DefaultJDBCAdapter.doRecoverNextMessages(DefaultJDBCAdapter.java:1081)
at org.apache.activemq.store.jdbc.JDBCMessageStore.recoverNextMessages(JDBCMessageStore.java:365)
at org.apache.activemq.store.ProxyMessageStore.recoverNextMessages(ProxyMessageStore.java:110)
at org.apache.activemq.broker.region.cursors.QueueStorePrefetch.doFillBatch(QueueStorePrefetch.java:127)
at org.apache.activemq.broker.region.cursors.AbstractStoreCursor.fillBatch(AbstractStoreCursor.java:476)
at org.apache.activemq.broker.region.cursors.AbstractStoreCursor.reset(AbstractStoreCursor.java:179)
at org.apache.activemq.broker.region.cursors.StoreQueueCursor.reset(StoreQueueCursor.java:166)
at org.apache.activemq.broker.region.Queue.doPageInForDispatch(Queue.java:2047)
at org.apache.activemq.broker.region.Queue.pageInMessages(Queue.java:2283)
at org.apache.activemq.broker.region.Queue.doBrowse(Queue.java:1183)
at org.apache.activemq.broker.region.Queue.expireMessages(Queue.java:957)
at org.apache.activemq.broker.region.Queue.access$100(Queue.java:108)
at org.apache.activemq.broker.region.Queue$2.run(Queue.java:152)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.postgresql.core.PGStream.receiveTupleV3(PGStream.java:561)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2251)
... 23 more
I've checked the PostgreSQL logs and noticed a SELECT query which pretty much retrieves the whole table, so it looks like AMQ tries to load everything from the table; possibly this is the cause of the OOM:
LOG: execute <unnamed>: SELECT ID, MSG FROM ACTIVEMQ_MSGS WHERE CONTAINER=$1 AND ID < $2 AND ID > $3 AND XID IS NULL ORDER BY ID LIMIT 32767
DETAIL: parameters: $1 = 'queue://TEST', $2 = '501', $3 = '-1'
I tried to play a bit with destination policy properties like maxPageSize and lazyDispatch, but they don't seem to have any impact on this issue.
Maybe I'm doing something wrong? I'd appreciate any ideas, thanks!
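One knob that may be worth checking is the JDBC persistence adapter's maxRows property, which (assuming it behaves as its name suggests) caps how many rows a single recovery query, such as the recoverNextMessages call in the stack trace, may pull back; both the property name and the value below are assumptions rather than something verified in this post:
<persistenceAdapter>
<!-- maxRows is assumed to limit the rows fetched per recovery query; 200 is an arbitrary example value -->
<jdbcPersistenceAdapter dataDirectory="${activemq.data}" dataSource="#postgresql-ds" maxRows="200"/>
</persistenceAdapter>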

activemq oom after enabling stomp

After enabling the STOMP protocol on the ActiveMQ server (before, only the default protocol was enabled), it started to fail with OOM. I have only 1 client using STOMP. It can work for a week without failures, or fail a day after a restart. Here is the config file:
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<bean id="logQuery" class="io.fabric8.insight.log.log4j.Log4jLogQuery"
lazy-init="false" scope="singleton"
init-method="start" destroy-method="stop">
</bean>
<!--
The <broker> element is used to configure the ActiveMQ broker.
-->
<broker useJmx="true" xmlns="http://activemq.apache.org/schema/core" brokerName="cms-mq" dataDirectory="${activemq.data}">
<destinationInterceptors>
<virtualDestinationInterceptor>
<virtualDestinations>
<virtualTopic name="VirtualTopic.>" selectorAware="true"/>
</virtualDestinations>
</virtualDestinationInterceptor>
</destinationInterceptors>
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" producerFlowControl="false">
</policyEntry>
<policyEntry queue=">" producerFlowControl="false">
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage percentOfJvmHeap="70" />
</memoryUsage>
<storeUsage>
<storeUsage limit="4 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="4 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<transportConnectors>
<transportConnector name="auto" uri="auto+nio://0.0.0.0:61616?maximumConnections=1000&auto.protocols=default,stomp"/>
</transportConnectors>
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
</shutdownHooks>
<plugins>
... security plugins config...
</plugins>
</broker>
<import resource="jetty.xml"/>
</beans>
start args:
/usr/java/default/bin/java -Xms256M -Xmx1G -Dorg.apache.activemq.UseDedicatedTaskRunner=false -XX:HeapDumpPath=/var/logs/heapDumps -XX:+HeapDumpOnOutOfMemoryError -Dcom.sun.management.jmxremote.port=8162 -Dcom.sun.management.jmxremote.rmi.port=8162 -Dcom.sun.management.jmxremote.password.file=/opt/apache-activemq-5.13.0//conf/jmx.password -Dcom.sun.management.jmxremote.access.file=/opt/apache-activemq-5.13.0//conf/jmx.access -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote -Djava.awt.headless=true -Djava.io.tmpdir=/opt/apache-activemq-5.13.0//tmp -Dactivemq.classpath=/opt/apache-activemq-5.13.0//conf:/opt/apache-activemq-5.13.0//../lib/ -Dactivemq.home=/opt/activemq -Dactivemq.base=/opt/activemq -Dactivemq.conf=/opt/apache-activemq-5.13.0//conf -Dactivemq.data=/opt/apache-activemq-5.13.0//data -jar /opt/activemq/bin/activemq.jar start
UPD:
From Eclipse Memory Analyzer:
Leak Suspects
247,036 instances of "org.apache.activemq.command.ActiveMQBytesMessage", loaded by "java.net.URLClassLoader @ 0xc02e9470" occupy 811,943,360 (76.92%) bytes.
81 instances of "org.apache.activemq.broker.region.cursors.FilePendingMessageCursor", loaded by "java.net.URLClassLoader @ 0xc02e9470" occupy 146,604,368 (13.89%) bytes.
UPD:
Before the OOM error there are several errors in the log like the following:
| ERROR | Could not accept connection from null: java.lang.IllegalStateException: Timer already cancelled. | org.apache.activemq.broker.TransportConnector | ActiveMQ BrokerService[cms-mq] Task-13707
| INFO | The connection to 'null' is taking a long time to shutdown. | org.apache.activemq.broker.TransportConnection | ActiveMQ BrokerService[cms-mq] Task-13738
Would appreciate any help in debugging it.
Can provide additional info if needed.
One guess is that you are flooding the broker with messages from the producer over STOMP and eventually blowing the broker memory. You have turned producer flow control off, which can lead to this even with the default JMS client, and with STOMP it is even easier to get into this situation: by default there is no ack going back to the producer to allow for a flow-control mechanism, so you have to request a receipt on each send to get that.
To debug this, start examining your broker logs and the destination and usage stats via the console or another tool of your choosing to see what state the broker is in.
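For reference, a minimal sketch of what re-enabling producer flow control with per-destination memory limits could look like (the limits are example values, not tuned for this broker):
<destinationPolicy>
<policyMap>
<policyEntries>
<!-- with producerFlowControl="true" the broker throttles producers once a destination hits its memoryLimit -->
<policyEntry topic=">" producerFlowControl="true" memoryLimit="64 mb"/>
<policyEntry queue=">" producerFlowControl="true" memoryLimit="64 mb"/>
</policyEntries>
</policyMap>
</destinationPolicy>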
I examined the client code (a Ruby STOMP client) and it turned out that it set the 'activemq.subscriptionName' subscribe header without the 'client-id' connect header. Setting the 'activemq.subscriptionName' subscription header means you want your subscriber to be durable, but you should also set the 'client-id' connect header, because otherwise it is autogenerated. When the 'client-id' header is not set, the broker can't identify the STOMP client by its client id when it reconnects. As a result there were a lot of offline durable topic subscribers, and messages were piling up for every client-id => OOM error.
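A related broker-side safeguard, mentioned here only as an assumption and not something confirmed in this thread, is letting the broker expire durable subscriptions that stay offline, so abandoned ones cannot accumulate messages indefinitely:
<!-- values are in milliseconds and are only examples: drop subscriptions offline for ~3 days, checked hourly -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="cms-mq" dataDirectory="${activemq.data}" offlineDurableSubscriberTimeout="259200000" offlineDurableSubscriberTaskSchedule="3600000">
<!-- ... rest of the broker configuration unchanged ... -->
</broker>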

Is it possible to embed an ActiveMQ-like instance in Mulesoft

I would like to embed an ActiveMQ broker into a Mule flow. I have read that the messages are saved to a file (KahaDB). I read the book "ActiveMQ in Action", so I used the "Embedding ActiveMQ using Spring" approach and configured it using the BrokerFactoryBean:
<spring:beans>
<spring:bean class="org.apache.activemq.xbean.BrokerFactoryBean"
id="broker">
<spring:property value="classpath:activemq.xml" name="config" />
<spring:property value="true" name="start" />
</spring:bean>
</spring:beans>
(with the library activemq-all-5.8.0.jar added)
And the configuration file is activemq.xml:
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:amq="http://activemq.apache.org/schema/core"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd
http://activemq.apache.org/camel/schema/spring http://activemq.apache.org/camel/schema/spring/camel-spring.xsd">
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer" />
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost">
<!-- Destination specific policies using destination names or wildcards -->
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry queue=">" memoryLimit="50mb"/>
<policyEntry topic=">" memoryLimit="50mb"/>
</policyEntries>
</policyMap>
</destinationPolicy>
<!-- Use the following to configure how ActiveMQ is exposed in JMX -->
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<!-- The store and forward broker networks ActiveMQ will listen to -->
<networkConnectors>
<!-- by default just auto discover the other brokers -->
<networkConnector name="default-nc" uri="multicast://default"/>
<!-- Example of a static configuration:
<networkConnector name="host1 and host2" uri="static://(tcp://host1:61616,tcp://host2:61616)"/>
-->
</networkConnectors>
<!-- KahaDB definition -->
<persistenceAdapter>
<kahaDB directory="{user.dir}/activemq-data/KahaDB" />
</persistenceAdapter>
<!-- The maximum amount of space the broker will use before slowing down producers -->
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage limit="200 mb"/>
</memoryUsage>
<storeUsage>
<storeUsage limit="10 gb" name="foo"/>
</storeUsage>
<tempUsage>
<tempUsage limit="1000 mb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<!-- The transport connectors ActiveMQ will listen to -->
<transportConnectors>
<transportConnector name="openwire" uri="tcp://localhost:61616"/>
</transportConnectors>
</broker>
The questions are:
1) I have the flow with JMS; how can I visualize the messages in the queue with a web interface? Is it possible to visualize the queue from a web interface, like when the broker URL is tcp://localhost:61616 (the URL used to connect to the JMS server) in ActiveMQ?
2) I would like to store the messages into a DB from the Mule console; is that possible?
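On question 1, one possible direction, offered only as a sketch: host the ActiveMQ web console the same way the standalone broker does, by importing its Jetty configuration into activemq.xml. This assumes the console webapp and jetty.xml from the ActiveMQ distribution are available to the embedding application, which may not hold inside Mule:
<!-- inside the <beans> of activemq.xml, after the <broker> element; requires the web console webapp and jetty.xml from the ActiveMQ distribution on the classpath -->
<import resource="jetty.xml"/>
With the standalone broker this console is served at http://localhost:8161/admin. Otherwise, queue depths can still be inspected over JMX (useJmx="true" plus a JMX connector) with a tool such as jconsole.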

Connect active mq to my standalone JVM

I have downloaded Apache ActiveMQ and I am able to run it via the activemq.xml config file. How can I monitor my JVM's ESB data via this XML file?
I need to expose these attributes: Name, Enqueue Count, Dequeue Count, Consumer Count, etc.
The XML file is as under:
<!-- START SNIPPET: example -->
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<!-- Allows us to use system properties as variables in this configuration file -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<value>file:${activemq.conf}/credentials.properties</value>
</property>
</bean>
<!--
The <broker> element is used to configure the ActiveMQ broker.
-->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="BROKER1" useJmx="true" dataDirectory="${activemq.data}">
<destinationPolicy>
<policyMap>
<policyEntries>
<!--
Limit ALL queues and topics to using 5mb of memory and turn on producer flow control
-->
<policyEntry queue=">" producerFlowControl="true" memoryLimit="5mb"/>
<policyEntry topic=">" producerFlowControl="true" memoryLimit="5mb">
<dispatchPolicy>
<!--
Use total ordering, see:
http://activemq.apache.org/total-ordering.html
-->
<strictOrderDispatchPolicy/>
</dispatchPolicy>
<subscriptionRecoveryPolicy>
<!--
Upon subscription, receive the last image sent
on the destination.
-->
<lastImageSubscriptionRecoveryPolicy/>
</subscriptionRecoveryPolicy>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<!--
The managementContext is used to configure how ActiveMQ is exposed in
JMX. By default, ActiveMQ uses the MBean server that is started by
the JVM. For more information, see:
http://activemq.apache.org/jmx.html
-->
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<!--
Configure message persistence for the broker. The default persistence
mechanism is the KahaDB store (identified by the kahaDB tag).
For more information, see:
http://activemq.apache.org/persistence.html
-->
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
<!--
The systemUsage controls the maximum amount of space the broker will
use before disabling caching and/or slowing down producers. For more information, see:
http://activemq.apache.org/producer-flow-control.html
-->
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage percentOfJvmHeap="70" />
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<!--
The transport connectors expose ActiveMQ over a given protocol to
clients and other brokers. For more information, see:
http://activemq.apache.org/configuring-transports.html
-->
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="stomp" uri="stomp://0.0.0.0:61613? maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="mqtt" uri="mqtt://0.0.0.0:1883?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="ws" uri="ws://0.0.0.0:61614?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
<!-- destroy the spring context on shutdown to stop jetty -->
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
</shutdownHooks>
</broker>
<!--
Enable web consoles, REST and Ajax APIs and demos
The web consoles requires by default login, you can disable this in the jetty.xml file
Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
-->
<import resource="jetty.xml"/>
</beans>
<!-- END SNIPPET: example -->
This article looks like it should be helpful for you. I imagine that you would want to use jconsole (run from the command line) to connect to ActiveMQ via JMX, which gives you access to the data you need.
http://activemq.apache.org/jmx.html
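A minimal sketch of enabling a broker-managed JMX connector so jconsole can attach remotely (replacing the createConnector="false" block in the config above; the port is only an example):
<managementContext>
<!-- createConnector="true" makes the broker start its own JMX connector on connectorPort -->
<managementContext createConnector="true" connectorPort="1099"/>
</managementContext>
The jconsole connection URL would then be of the form service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi.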

ActiveMQ producer generating heapdump

We are using JBoss Fuse ESB ActiveMQ on an IBM AIX machine. We are sending up to 10,000 requests per day to the consumer. Each request is about 10 KB in size. We are getting a heap dump every 3 days. Here is an image from IBM Heap Analyzer.
Right before the heap dump is generated, JMS messages appear to be getting processed slowly by the consumer. What are strategies to increase the processing speed of the JMS consumer?
Edit:
Here is an image from Heap Analyzer:
Edit:
We are using an embedded broker. Here is the activemq.xml that we are using:
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:amq="http://activemq.apache.org/schema/core"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org /schema/beans/spring-beans-2.0.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<!-- Allows us to use system properties and fabric as variables in this configuration file -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="properties">
<bean class="org.fusesource.mq.fabric.ConfigurationProperties"/>
</property>
</bean>
<broker xmlns="http://activemq.apache.org/schema/core"
brokerName="${broker-name}"
dataDirectory="${data}"
start="false">
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" producerFlowControl="true">
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
<policyEntry queue=">" producerFlowControl="true" queuePrefetch="1">
<deadLetterStrategy>
<individualDeadLetterStrategy queuePrefix="DLQ."/>
</deadLetterStrategy>
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<persistenceAdapter>
<kahaDB directory="${data}/kahadb"/>
</persistenceAdapter>
<!--
<plugins>
<jaasAuthenticationPlugin configuration="karaf" />
</plugins>
-->
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage limit="128 mb"/>
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<transportConnectors>
<transportConnector name="openwire" uri="tcp://0.0.0.0:0?maximumConnections=1000"/>
</transportConnectors>
</broker>
</beans>
Is it possible that there are queue and/or durable topic subscribers that are not started, so that their messages are persisted in the JMS broker?
You should browse the JMS destinations, for example using JMX, and identify the messages and their consumers. If you have checked that every consumer is correctly configured and running, debug why the messages are being consumed so slowly.
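One configuration detail worth double-checking while debugging the slow consumption (an observation on the posted config, not part of the answer above): the queue policy uses queuePrefetch="1", which dispatches messages to each consumer one at a time. As an experiment, the prefetch could be raised, for example:
<policyEntry queue=">" producerFlowControl="true" queuePrefetch="100">
<!-- a larger prefetch lets the broker push batches of messages to each consumer instead of one at a time -->
<deadLetterStrategy>
<individualDeadLetterStrategy queuePrefix="DLQ."/>
</deadLetterStrategy>
</policyEntry>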
