Spring-amqp - Message processing delayed - java

We have a Java/Spring/Tomcat application deployed on a RHEL 7.0 VM. It uses AlejandroRivera/embedded-rabbitmq, which starts the RabbitMQ server as soon as the WAR is deployed, and then connects to it. We have multiple queues that we use to handle and filter events.
The flow is something like this:
event received -> published to the event queue -> a listener class filters the events -> publishes to another queue for processing -> we publish to yet another queue for logging.
The issue is:
Processing starts normally and we can see messages flowing through the queues, but after some time the listener class stops receiving events. It seems we are able to publish to the RabbitMQ channel, but the message never gets off the queue to the listener.
The delay seems to degrade over time, growing to minutes. The load isn't high: around 200 events, of which only a handful are of interest to us.
What we tried:
Initially the queues had prefetch set to 1 and consumers set to a minimum of 2 and a maximum of 5. We removed the prefetch setting and added more consumers via the max-concurrency setting, but the issue is still there; the delay just takes longer to appear, and after a few minutes processing starts to take around 20-30 seconds.
We see in the logs that we published the event to the queue, and we see the log line showing we took it off the queue only after a delay, so there is nothing in our code running in between that could cause the delay.
As far as we can tell, the rest of the queues process messages properly; it's only this one that gets into this stuck mode.
The errors I see are the following, but I'm unsure what they mean and whether they are related:
Jun 4 11:16:04 server: [pool-3-thread-10] ERROR com.rabbitmq.client.impl.ForgivingExceptionHandler - Consumer org.springframework.amqp.rabbit.listener.BlockingQueueConsumer$InternalConsumer@70dfa413 (amq.ctag-VaWc-hv-VwcUPh9mTQTj7A) method handleDelivery for channel AMQChannel(amqp://agent@127.0.0.1:5672/,198) threw an exception for channel AMQChannel(amqp://agent@127.0.0.1:5672/,198)
Jun 4 11:16:04 server: java.io.IOException: Unknown consumerTag
Jun 4 11:16:04 server: at com.rabbitmq.client.impl.ChannelN.basicCancel(ChannelN.java:1266)
Jun 4 11:16:04 server: at sun.reflect.GeneratedMethodAccessor180.invoke(Unknown Source)
Jun 4 11:16:04 server: at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Jun 4 11:16:04 server: at java.lang.reflect.Method.invoke(Method.java:498)
Jun 4 11:16:04 server: at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$CachedChannelInvocationHandler.invoke(CachingConnectionFactory.java:955)
Jun 4 11:16:04 server: at com.sun.proxy.$Proxy119.basicCancel(Unknown Source)
Jun 4 11:16:04 server: at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer$InternalConsumer.handleDelivery(BlockingQueueConsumer.java:846)
Jun 4 11:16:04 server: at com.rabbitmq.client.impl.ConsumerDispatcher$5.run(ConsumerDispatcher.java:149)
Jun 4 11:16:04 server: at com.rabbitmq.client.impl.ConsumerWorkService$WorkPoolRunnable.run(ConsumerWorkService.java:100)
Jun 4 11:16:04 server: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
Jun 4 11:16:04 server: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
Jun 4 11:16:04 server: at java.lang.Thread.run(Thread.java:748)
This one happens on shutdown of the application, but I've also seen it happen while the application is still running:
2018-06-05 13:22:45,443 ERROR CachingConnectionFactory$DefaultChannelCloseLogger - Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - unknown delivery tag 109, class-id=60, method-id=120)
I'm not sure how to address these two errors, nor whether they are related.
Here's my Spring config:
<!-- Queues -->
<rabbit:queue id="monitorIncomingEventsQueue" name="MonitorIncomingEventsQueue"/>
<rabbit:queue id="interestingEventsQueue" name="InterestingEventsQueue"/>
<rabbit:queue id="textCallsEventsQueue" name="TextCallsEventsQueue"/>
<rabbit:queue id="callDisconnectedEventQueue" name="CallDisconnectedEventQueue"/>
<rabbit:queue id="incomingCallEventQueue" name="IncomingCallEventQueue"/>
<rabbit:queue id="eventLoggingQueue" name="EventLoggingQueue"/>
<!-- listeners -->
<bean id="monitorListener" class="com.example.rabbitmq.listeners.monitorListener"/>
<bean id="interestingEventsListener" class="com.example.rabbitmq.listeners.InterestingEventsListener"/>
<bean id="textCallsEventListener" class="com.example.rabbitmq.listeners.TextCallsEventListener"/>
<bean id="callDisconnectedEventListener" class="com.example.rabbitmq.listeners.CallDisconnectedEventListener"/>
<bean id="incomingCallEventListener" class="com.example.rabbitmq.listeners.IncomingCallEventListener"/>
<bean id="eventLoggingEventListener" class="com.example.rabbitmq.listeners.EventLoggingListener"/>
<rabbit:listener-container connection-factory="connectionFactory" message-converter="defaultMessageConverter" concurrency="5" max-concurrency="40" acknowledge="none">
    <rabbit:listener queues="interestingEventsQueue" ref="interestingEventsListener" method="handleIncomingMessage"/>
</rabbit:listener-container>
<rabbit:listener-container connection-factory="connectionFactory" message-converter="defaultMessageConverter" concurrency="5" max-concurrency="20" acknowledge="none">
    <rabbit:listener queues="textCallsEventsQueue" ref="textCallsEventListener" method="handleIncomingMessage"/>
</rabbit:listener-container>
<rabbit:listener-container connection-factory="connectionFactory" message-converter="defaultMessageConverter" concurrency="5" max-concurrency="20" acknowledge="none">
    <rabbit:listener queues="callDisconnectedEventQueue" ref="callDisconnectedEventListener" method="handleIncomingMessage"/>
</rabbit:listener-container>
<rabbit:listener-container connection-factory="connectionFactory" message-converter="defaultMessageConverter" concurrency="5" max-concurrency="30" acknowledge="none">
    <rabbit:listener queues="incomingCallEventQueue" ref="incomingCallEventListener" method="handleIncomingMessage"/>
</rabbit:listener-container>
<rabbit:listener-container connection-factory="connectionFactory" message-converter="defaultMessageConverter" concurrency="1" max-concurrency="3" acknowledge="none">
    <rabbit:listener queues="monitorIncomingEventsQueue" ref="monitorListener" method="handleIncomingMessage"/>
</rabbit:listener-container>
<rabbit:listener-container connection-factory="connectionFactory" message-converter="defaultMessageConverter" concurrency="5" max-concurrency="10" acknowledge="none">
    <rabbit:listener queues="EventLoggingQueue" ref="eventLoggingEventListener" method="handleLoggingEvent"/>
</rabbit:listener-container>
<rabbit:connection-factory id="connectionFactory" host="${host.name}" port="${port.number}" username="${user.name}" password="${user.password}" connection-timeout="20000"/>
I've read here that the processing delay could be caused by a network problem, but in this case the server and the app are on the same VM. It's a locked-down environment, so most ports aren't open, but I doubt that's the problem.
More logs: https://pastebin.com/4QMFDT7A
Any help is appreciated,
Thanks,

I need to see much more log than that - this is the smoking gun:
Storing...Storing delivery for Consumer@a2ce092: tags=[{}]
The (consumer) tags map is empty, which means the consumer had already been canceled at that time (for some reason, which should appear earlier in the log).
If there's any chance you could reproduce this with 1.7.9.BUILD-SNAPSHOT, I added some TRACE-level logging there that should help diagnose this.
EDIT
In reply to your recent comment on rabbitmq-users...
Can you try with fixed concurrency? Variable concurrency in Spring AMQP's container is often not very useful, because the number of consumers is typically only reduced when the entire container has been idle for some time.
It might explain, however, why you are seeing consumers being canceled.
Perhaps there are/were some race conditions in that logic; using a fixed number of consumers (don't specify max-concurrency) will avoid that. If you can try it, it will at least eliminate that as a possibility; see the sketch below.
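For example, a fixed-consumer container, based on your existing configuration, might look like this (a minimal sketch; the concurrency value is illustrative):
<rabbit:listener-container connection-factory="connectionFactory" message-converter="defaultMessageConverter" concurrency="10" acknowledge="none">
    <rabbit:listener queues="interestingEventsQueue" ref="interestingEventsListener" method="handleIncomingMessage"/>
</rabbit:listener-container>
With no max-concurrency attribute, the container keeps a constant 10 consumers and never goes through the scale-down/cancel path.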
That said, I am confused (I didn't notice this in your Stack Overflow configuration): with acknowledge="none" there should be no acks sent to the broker at all (NONE is used to set autoAck):
String consumerTag = this.channel.basicConsume(queue, this.acknowledgeMode.isAutoAck(), ...
and
public boolean isAutoAck() {
    return this == NONE;
}
Are you sending acks from your own code? If so, the ack mode should be MANUAL. I can't see a scenario where the container would send an ack when the ack mode is NONE.
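If you are acking manually, the supported pattern looks something like this (a sketch, not your actual code; it assumes a ChannelAwareMessageListener and a container configured with acknowledge="manual"):
import com.rabbitmq.client.Channel;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.core.ChannelAwareMessageListener; // 1.x package; moved in 2.0

public class InterestingEventsListener implements ChannelAwareMessageListener {
    @Override
    public void onMessage(Message message, Channel channel) throws Exception {
        long deliveryTag = message.getMessageProperties().getDeliveryTag();
        try {
            // ... filter/process the event ...
            channel.basicAck(deliveryTag, false);        // ack this single delivery
        } catch (Exception e) {
            channel.basicNack(deliveryTag, false, true); // requeue on failure
        }
    }
}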

Related

Cluster instability with TCPPING protocol

I have 8 different processes distributed across 6 different servers with the following TCP/TCPPING protocol configuration:
<config xmlns="urn:org:jgroups" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups.xsd">
<TCP
bind_port="${jgroups.tcp.bind_port:16484}"
bind_addr="${jgroups.tcp.bind_addr:127.0.0.1}"
recv_buf_size="20M"
send_buf_size="20M"
max_bundle_size="64K"
sock_conn_timeout="300"
use_fork_join_pool="true"
thread_pool.min_threads="10"
thread_pool.max_threads="100"
thread_pool.keep_alive_time="30000" />
<TCPPING
async_discovery="true"
initial_hosts="${jgroups.tcpping.initial_hosts:127.0.0.1[16484]}"
port_range="5" #/>
<MERGE3 min_interval="10000" max_interval="30000" />
<FD_SOCK get_cache_timeout="10000"
cache_max_elements="300"
cache_max_age="60000"
suspect_msg_interval="10000"
num_tries="10"
sock_conn_timeout="10000"/>
<FD timeout="10000" max_tries="10" />
<VERIFY_SUSPECT timeout="10000" num_msgs="5"/>
<BARRIER />
<pbcast.NAKACK2
max_rebroadcast_timeout="5000"
use_mcast_xmit="false"
discard_delivered_msgs="true" />
<UNICAST3 />
<pbcast.STABLE
stability_delay="1000"
desired_avg_gossip="50000"
max_bytes="4M" />
<AUTH
auth_class="com.qfs.distribution.security.impl.CustomAuthToken"
auth_value="distribution_password"
token_hash="SHA" />
<pbcast.GMS
print_local_addr="true"
join_timeout="10000"
leave_timeout="10000"
merge_timeout="10000"
num_prev_mbrs="200"
view_ack_collection_timeout="10000"/>
</config>
The cluster keeps splitting into subgroups, then merging again and again, which results in high memory usage. I can also see in the logs a lot of "suspect" warnings resulting from the periodic heartbeats sent by all the other cluster members. Am I missing something?
EDIT
After enabling GC logs, nothing suspicious appeared to me. On the other hand, I've noticed these JGroups logs appearing a lot:
WARN: lonlx21440_FrtbQueryCube_QUERY_29302: I was suspected by woklxp00330_Sba-master_DATA_36219; ignoring the SUSPECT message and sending back a HEARTBEAT_ACK
DEBUG: lonlx21440_FrtbQueryCube_QUERY_29302: closing expired connection for redlxp00599_Sba-master_DATA_18899 (121206 ms old) in send_table
DEBUG: I (redlxp00599_Sba-master_DATA_18899) will be the merge leader
DEBUG: redlxp00599_Sba-master_DATA_18899: heartbeat missing from lonlx21503_Sba-master_DATA_2175 (number=1)
DEBUG: redlxp00599_Sba-master_DATA_18899: suspecting [lonlx21440_FrtbQueryCube_QUERY_29302]
DEBUG: lonlx21440_FrtbQueryCube_QUERY_29302: removed woklxp00330_Sba-master_DATA_36219 from xmit_table (not member anymore)
and this one:
2020-08-31 16:35:34.715 [ForkJoinPool-3-worker-11] org.jgroups.protocols.pbcast.GMS:116
WARN: lonlx21440_FrtbQueryCube_QUERY_29302: failed to collect all ACKs (expected=6) for view [redlxp00599_Sba-master_DATA_18899|104] after 2000ms, missing 6 ACKs from (6) lonlx21503_Sba-master_DATA_2175, lonlx11179_DRC-master_DATA_15999, lonlx11184_Rrao-master_DATA_31760, lonlx11179_Rrao-master_DATA_25194, woklxp00330_Sba-master_DATA_36219, lonlx11184_DRC-master_DATA_49264
I still can't figure out where the instability comes from.
Thanks
Any instability is not due to the TCPPING protocol - it belongs to the discovery protocol family, and its purpose is to find new members, not to kick them out of the cluster.
You use both FD_SOCK and FD to detect whether members have left, and then VERIFY_SUSPECT to confirm that a suspected node is really unreachable. The settings look pretty normal.
The first thing to check is your GC logs. If you experience stop-the-world pauses longer than, say, 15 seconds, chances are the cluster disconnects because of unresponsiveness caused by GC.
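For example, on a Java 8 JVM, GC logging can be enabled with flags along these lines (a sketch; the log path is illustrative):
-Xloggc:/var/log/app/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime
The PrintGCApplicationStoppedTime output in particular shows total stop-the-world time, which is what matters for failure detection.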
If your GC logs are fine, increase the logging level for FD, FD_SOCK, and VERIFY_SUSPECT to TRACE and see what's going on.
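With log4j2 as the backend, for instance, that could look like this (a sketch; adjust for whichever logging framework JGroups is bound to in your setup):
<Loggers>
    <Logger name="org.jgroups.protocols.FD" level="TRACE"/>
    <Logger name="org.jgroups.protocols.FD_SOCK" level="TRACE"/>
    <Logger name="org.jgroups.protocols.VERIFY_SUSPECT" level="TRACE"/>
</Loggers>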

CRLF not found before max message length: 2048 with payload-deserializing-transformer

I am using a payload-deserializing-transformer in my TCP client as follows.
<context:property-placeholder />
<int:gateway id="gw"
service-interface="myGateway"
default-request-channel="objectIn"
default-reply-channel="objectOut" />
<int-ip:tcp-connection-factory id="client"
type="client"
host="${client.server.TCP.host}"
port="${client.server.TCP.port}"
single-use="true"
so-timeout="10000" />
<int:channel id="objectIn" />
<int:payload-serializing-transformer input-channel="objectIn" output-channel="bytesOut"/>
<int:channel id="bytesOut" />
<int-ip:tcp-outbound-gateway id="outGateway"
request-channel="bytesOut"
reply-channel="bytesIn"
connection-factory="client"
request-timeout="10000"
reply-timeout="10000" />
<int:channel id="bytesIn" />
<int:payload-deserializing-transformer input-channel="bytesIn" output-channel="objectOut" />
<int:channel id="objectOut" />
The above works fine for message lengths under 2048, but if a message exceeds this limit I get the following error.
Caused by: java.io.IOException: CRLF not found before max message length: 2048
at org.springframework.integration.ip.tcp.serializer.ByteArrayCrLfSerializer.fillToCrLf(ByteArrayCrLfSerializer.java:66)
at org.springframework.integration.ip.tcp.serializer.ByteArrayCrLfSerializer.deserialize(ByteArrayCrLfSerializer.java:44)
at org.springframework.integration.ip.tcp.serializer.ByteArrayCrLfSerializer.deserialize(ByteArrayCrLfSerializer.java:31)
at org.springframework.integration.ip.tcp.connection.TcpNetConnection.getPayload(TcpNetConnection.java:120)
at org.springframework.integration.ip.tcp.connection.TcpMessageMapper.toMessage(TcpMessageMapper.java:113)
at org.springframework.integration.ip.tcp.connection.TcpNetConnection.run(TcpNetConnection.java:165)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
How can I set the maxMessageSize property on the payload-deserializing-transformer in this case?
This has nothing to do with the transformer; the error is in the outbound gateway.
First of all, you should not be using text-based delimiting for inbound TCP messages; a serialized object contains binary data and might contain CRLF (0x0d0a) somewhere in the middle.
You should be using one of the binary-capable deserializers in the gateway.
You can read about TCP serializers/deserializers in the reference manual.
You should configure the connection factory used by the outbound gateway with a ByteArrayLengthHeaderSerializer in its serializer and deserializer attributes; it can handle binary payloads.
The remote system will also need to be changed to use a length header instead of using CRLF to detect the end of a message. If the remote system is also Spring Integration, simply change its serializer/deserializer too.
For other readers who are using text-based messaging: the ByteArrayCrLfSerializer can be configured with a maxMessageSize, which defaults to 2048.
The ByteArrayLengthHeaderSerializer also has a configurable maxMessageSize (again defaulting to 2048); this exists to prevent out-of-memory conditions when a bad message is received.
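Applied to the configuration above, that might look like this (a sketch; the maxMessageSize value is illustrative):
<bean id="lengthSerializer" class="org.springframework.integration.ip.tcp.serializer.ByteArrayLengthHeaderSerializer">
    <property name="maxMessageSize" value="65536"/>
</bean>
<int-ip:tcp-connection-factory id="client"
    type="client"
    host="${client.server.TCP.host}"
    port="${client.server.TCP.port}"
    single-use="true"
    so-timeout="10000"
    serializer="lengthSerializer"
    deserializer="lengthSerializer" />
The same serializer bean can safely be referenced from both attributes.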

How to force RabbitMQ to accumulate and send messages again?

I have several Spring Integration elements configured in an XML file (see below).
From the AMQP inbound channel adapter, messages are directed to the router integrationSecondaryRouter, whose implementation is integrationRouterImpl.
If there is an uncaught exception in integrationRouterImpl, I expect RabbitMQ to send the message again and again. However, this does not happen: the RabbitMQ monitor does not show any message accumulation. Is there an error in my configuration?
<int-amqp:inbound-channel-adapter
channel="integrationFrontDoorQueueChannel"
queue-names="${integration.creation.orders.queue.name}"
header-mapper="integrationHeaderMapper"
connection-factory="connectionFactory"
error-channel="errorChannel"
/>
<int:chain
id="integrationFrontDoorQueueChain"
input-channel="integrationFrontDoorQueueChannel"
output-channel="integrationRouterChannel">
<int:transformer ref="integrationJsonPayloadTransformer" method="transformMessagePayload"/>
<int:filter ref="integrationNonDigitalCancellationFilter" method="filter"/>
<int:filter ref="integrationPartnerFilter" method="filter"/>
<int:filter ref="integrationOrderDtoDgcAndGoSelectFilter" method="filter"/>
</int:chain>
<int:header-value-router
id="integrationPrimaryRouter"
input-channel="integrationRouterChannel"
default-output-channel="integrationFrontDoorRouterChannel"
resolution-required="false"
header-name="#{T(com.smartdestinations.constants.SdiConstants).INTEGRATION_PAYLOAD_ACTION_HEADER_KEY}">
<int:mapping
value="#{T(com.smartdestinations.service.integration.dto.IntegrationAction).EXCLUSION_SCAN.name()}"
channel="integrationExclusionChannel"
/>
</int:header-value-router>
<int:router
id="integrationSecondaryRouter"
ref="integrationRouterImpl"
input-channel="integrationFrontDoorRouterChannel"
method="route"
resolution-required="false"
default-output-channel="nullChannel"
/>
Look, you have error-channel="errorChannel", and the documentation on the matter points out:
The default "errorChannel" is a PublishSubscribeChannel.
Yes, there is one subscriber, but it is just the _org.springframework.integration.errorLogger.
Since there is no one to re-throw your exception to the SimpleMessageListenerContainer, there is no reason to nack the message and redeliver it; see the sketch below.
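Under that reading, a minimal sketch of the fix is to drop the error-channel attribute so the exception propagates back to the listener container, which then nacks and requeues the message (assuming the container's default AUTO acknowledge mode):
<int-amqp:inbound-channel-adapter
    channel="integrationFrontDoorQueueChannel"
    queue-names="${integration.creation.orders.queue.name}"
    header-mapper="integrationHeaderMapper"
    connection-factory="connectionFactory"
/>
Alternatively, keep the error-channel but subscribe your own handler that re-throws the exception.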

Spring JMS listener-container concurrency attribute not working

Hi, I am learning Spring JMS with ActiveMQ.
In my example scenario, a producer application sends around 50 messages to a queue, and when I start the consumer application it starts to consume those messages.
Now I want multiple consumer threads to consume messages from the queue.
I am using the JMS listener-container. When I googled it, I found there is a concurrency attribute.
According to the Spring JMS docs, the concurrency attribute specifies:
The number of concurrent sessions/consumers to start for each listener. Can either be a simple number indicating the maximum number (e.g. "5") or a range indicating the lower as well as the upper limit (e.g. "3-5"). Note that a specified minimum is just a hint and might be ignored at runtime. Default is 1; keep concurrency limited to 1 in case of a topic listener or if queue ordering is important; consider raising it for general queues.
But in my configuration I set this attribute to 5, yet it seems to fail to start 5 concurrent listeners.
Configuration for listener:
consumer-applicationContext.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jms="http://www.springframework.org/schema/jms"
xmlns:p="http://www.springframework.org/schema/p"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/jms
http://www.springframework.org/schema/jms/spring-jms-3.0.xsd">
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory" p:brokerURL="tcp://localhost:61616" />
<bean id="listener" class="com.jms.example.MyMessageListener"></bean>
<jms:listener-container container-type="default" concurrency="5"
connection-factory="connectionFactory">
<jms:listener destination="MyQueue" ref="listener"
method="onMessage"></jms:listener>
</jms:listener-container>
</beans>
And if I use a DefaultMessageListenerContainer bean instead of jms:listener-container, with these properties:
<bean id="msgListenerContainer"
class="org.springframework.jms.listener.DefaultMessageListenerContainer"
p:connectionFactory-ref="connectionFactory"
p:destination-ref="destination"
p:messageListener-ref="listener"
p:concurrentConsumers="10"
p:maxConcurrentConsumers="50" />
Then in the ActiveMQ console I can see 10 consumers, but in reality it starts 3 consumers simultaneously, or sometimes 6, or only 1.
EDIT:
Consumer code:
public class MyMessageListener implements MessageListener {
    public void onMessage(Message m) {
        TextMessage message = (TextMessage) m;
        try {
            System.out.println("Start = " + message.getText());
            Thread.sleep(5000);
            System.out.println("End = " + message.getText());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
I print consumed messages to the console; the output is explained in the scenarios below:
OBSERVATION:
I observed some weird behavior.
My producer and consumer are two independent applications.
Scenario - 1:
I start the producer and send messages (meanwhile the consumer is NOT running).
Then I start the consumer to consume the messages.
The problem here is that it does not load all 10 consumers; sometimes it loads 3, or 1.
Start = hello jms 1 // consumer 1 started
Start = hello jms 2 // consumer 2 started
Start = hello jms 3 // consumer 3 started
End = hello jms 1 // consumer 1 ended
Start = hello jms 4 // consumer 4 started and hence always 3 consumers and not 10
End = hello jms 2
Start = hello jms 5
End = hello jms 3
Start = hello jms 6
Scenario - 2:
I start the producer and send messages (meanwhile the consumer IS running).
Since the consumer is running, it starts to consume the messages as they arrive.
So it loads all the consumers properly, as expected. The output is:
Start = hello jms 1 // consumer 1 started
Start = hello jms 2 // consumer 2 started
Start = hello jms 3 // consumer 3 started
Start = hello jms 4 // consumer 4 started
Start = hello jms 5 // consumer 5 started
Start = hello jms 6 // consumer 6 started
Start = hello jms 7 // consumer 7 started
Start = hello jms 8 // consumer 8 started
Start = hello jms 9 // consumer 9 started
Start = hello jms 10 // consumer 10 started. Hence all them started at same time as expected.
End = hello jms 1
Start = hello jms 11
End = hello jms 2
Start = hello jms 12
End = hello jms 3
Start = hello jms 13
Why is this happening? It is really eating my brain.
I don't want to keep the consumer running forever; I want to keep both applications detached.
Please help.
As Strelok pointed out, this was about prefetching of messages. I created a prefetchPolicy bean with the queuePrefetch property set to 1, and referenced it from the connectionFactory.
I made some changes to the configuration, shown below:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jms="http://www.springframework.org/schema/jms"
xmlns:p="http://www.springframework.org/schema/p"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/jms
http://www.springframework.org/schema/jms/spring-jms-3.0.xsd">
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory" p:brokerURL="tcp://localhost:61616"
p:prefetchPolicy-ref="prefetchPolicy" />
<bean id="prefetchPolicy" class="org.apache.activemq.ActiveMQPrefetchPolicy"
p:queuePrefetch="1" />
<bean id="listener" class="com.javatpoint.MyMessageListener"></bean>
<jms:listener-container concurrency="10-15" connection-factory="connectionFactory">
<jms:listener destination="javatpointQueue" ref="listener"
method="onMessage"></jms:listener>
</jms:listener-container>
<!-- The JMS destination -->
<bean id="destination" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg value="javatpointQueue" />
</bean>
</beans>
I just met this problem on a Spring Boot 1.5.9 application.
As pointed out by @Strelok and @mahendra kawde, the issue is due to the prefetchPolicy parameter. The default value is 1000.
Large prefetch values are recommended for high performance with high message volumes. However, for lower message volumes, where each message takes a long time to process, the prefetch should be set to 1. This ensures that a consumer is only processing one message at a time. Specifying a prefetch limit of zero, however, will cause the consumer to poll for messages, one at a time, instead of the message being pushed to the consumer.
Take a look at http://activemq.apache.org/what-is-the-prefetch-limit-for.html
You can change the prefetchPolicy parameter as follows:
In the application.properties file (working example):
spring.activemq.broker-url=tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=1
In the DefaultMessageListenerContainer, by modifying the destinationName parameter (working example):
<bean id="cons-even" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="destinationName" value="queue-name?consumer.prefetchSize=1"/>
...
</bean>
In the ConnectionFactory bean (working example):
@Bean
public ConnectionFactory jmsConnectionFactory() {
    ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(brokerUrl);
    ActiveMQPrefetchPolicy policy = new ActiveMQPrefetchPolicy();
    policy.setQueuePrefetch(1);
    factory.setPrefetchPolicy(policy);
    return factory;
}
Related topics:
How do I make Spring JMSListener burst to max concurrent threads?
Dynamic scaling of JMS consumer in spring boot
JMS can work in concurrent mode. Below is a sample snippet that sets concurrentConsumers to 100; see the Spring JMS documentation for details.
<bean id="listenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<property name="concurrentConsumers">
<value>100</value>
</property>
<property name="connectionFactory" ref="connectionFactory" />
<property name="destination" ref="queue" />
<property name="messageListener" ref="messageListener" />
<property name="sessionTransacted" value="false" />
<property name="sessionAcknowledgeMode" value="1" />
</bean>

Spring Integration - Queue channel + Service activator Poller exhausts threadpool

I have a simple Spring Integration app where I'm attempting to publish a task to a queue channel and then have a worker pick up the task and execute it (from a pool with multiple concurrent workers available).
I'm finding that the thread pool is quickly exhausted and tasks are rejected.
Here's my config:
<int:annotation-config />
<task:annotation-driven executor="executor" scheduler="scheduler"/>
<task:executor id="executor" pool-size="5-20" rejection-policy="CALLER_RUNS" />
<task:scheduler id="scheduler" pool-size="5"/>
<int:gateway service-interface="com.example.MyGateway">
<int:method name="queueForSync" request-channel="worker.channel" />
</int:gateway>
<int:channel id="worker.channel">
<int:queue />
</int:channel>
<bean class="com.example.WorkerBean" id="workerBean" />
<int:service-activator ref="workerBean" method="doWork" input-channel="worker.channel">
<int:poller fixed-delay="50" task-executor="executor" receive-timeout="0" />
</int:service-activator>
This question is very similar to another I asked a while back, here. The main difference is that I'm not using an AMQP message broker here, just internal Spring message channels.
I haven't been able to find an analogy for the concurrent-consumer concept in vanilla Spring channels.
Moreover, I've adopted Gary Russell's suggested config:
To avoid this, simply set the receive-timeout to 0 on the <poller/>
Despite that, I'm still seeing the pool exhausted.
What's the correct configuration for this goal?
As an aside, two other smells here suggest that my config is wrong:
Why am I getting rejected exceptions when the rejection-policy is CALLER_RUNS?
The exceptions start occurring when queued tasks = 1000. Given there's no queue-capacity on the executor, shouldn't the queue be unbounded?
Exception stack trace shown:
[Mon Dec 2013 17:44:57.172] ERROR [task-scheduler-6] (org.springframework.integration.handler.LoggingHandler:126) - org.springframework.core.task.TaskRejectedException: Executor [java.util.concurrent.ThreadPoolExecutor@48e83911[Running, pool size = 20, active threads = 20, queued tasks = 1000, completed tasks = 48]] did not accept task: org.springframework.integration.util.ErrorHandlingTaskExecutor$1@a5798e
at org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor.execute(ThreadPoolTaskExecutor.java:244)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.execute(ErrorHandlingTaskExecutor.java:49)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller.run(AbstractPollingEndpoint.java:231)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:53)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.util.concurrent.RejectedExecutionException: Task org.springframework.integration.util.ErrorHandlingTaskExecutor$1@a5798e rejected from java.util.concurrent.ThreadPoolExecutor@48e83911[Running, pool size = 20, active threads = 20, queued tasks = 1000, completed tasks = 48]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
at org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor.execute(ThreadPoolTaskExecutor.java:241)
... 11 more
My best guess is that you have another executor bean somewhere else in the context.
Turn on debug logging and look for ...DefaultListableBeanFactory] Overriding bean definition for bean 'executor'.
The default queue capacity is Integer.MAX_VALUE.
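For example, a definition like this somewhere else in the context (hypothetical; it's the clashing id that matters) would override yours and explain both the bounded queue of 1000 and the abort-style rejections:
<task:executor id="executor" pool-size="5-20" queue-capacity="1000"/>
With no rejection-policy specified, the default is ABORT, which matches the RejectedExecutionException in your stack trace; your CALLER_RUNS setting would be on the definition that lost.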
