Spring Integration: queue channel + service activator poller exhausts the thread pool

I have a simple Spring Integration app in which I publish a task to a queue channel and then have a worker pick the task up and execute it, from a pool with multiple concurrent workers available.
I'm finding that the thread pool is quickly exhausted and tasks are rejected.
Here's my config:
<int:annotation-config />
<task:annotation-driven executor="executor" scheduler="scheduler"/>
<task:executor id="executor" pool-size="5-20" rejection-policy="CALLER_RUNS" />
<task:scheduler id="scheduler" pool-size="5"/>
<int:gateway service-interface="com.example.MyGateway">
    <int:method name="queueForSync" request-channel="worker.channel" />
</int:gateway>
<int:channel id="worker.channel">
    <int:queue />
</int:channel>
<bean class="com.example.WorkerBean" id="workerBean" />
<int:service-activator ref="workerBean" method="doWork" input-channel="worker.channel">
    <int:poller fixed-delay="50" task-executor="executor" receive-timeout="0" />
</int:service-activator>
This question is very similar to another I asked a while back, here. The main difference is that I'm not using an AMQP message broker here, just internal Spring message channels.
I haven't been able to find an analogy for the concurrent-consumer concept in vanilla Spring channels.
Moreover, I've adopted Gary Russell's suggested config:
To avoid this, simply set the receive-timeout to 0 on the <poller/>
Despite that, I'm still getting the pool exhausted.
What's the correct configuration for this goal?
As an aside, two other smells here suggest that my config is wrong:
Why am I getting rejected exceptions when the rejection-policy is CALLER_RUNS?
The exceptions start occurring when the queued tasks = 1000. Given there's no queue-capacity on the executor, shouldn't the queue be unbounded?
Exception stack trace shown:
[Mon Dec 2013 17:44:57.172] ERROR [task-scheduler-6] (org.springframework.integration.handler.LoggingHandler:126) - org.springframework.core.task.TaskRejectedException: Executor [java.util.concurrent.ThreadPoolExecutor#48e83911[Running, pool size = 20, active threads = 20, queued tasks = 1000, completed tasks = 48]] did not accept task: org.springframework.integration.util.ErrorHandlingTaskExecutor$1#a5798e
at org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor.execute(ThreadPoolTaskExecutor.java:244)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.execute(ErrorHandlingTaskExecutor.java:49)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller.run(AbstractPollingEndpoint.java:231)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:53)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.util.concurrent.RejectedExecutionException: Task org.springframework.integration.util.ErrorHandlingTaskExecutor$1#a5798e rejected from java.util.concurrent.ThreadPoolExecutor#48e83911[Running, pool size = 20, active threads = 20, queued tasks = 1000, completed tasks = 48]
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
at org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor.execute(ThreadPoolTaskExecutor.java:241)
... 11 more

Best guess is you have another executor bean somewhere else in the context.
Turn on debug logging and look for ...DefaultListableBeanFactory] Overriding bean definition for bean 'executor'.
The default queue capacity is Integer.MAX_VALUE.
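For illustration, here's a hypothetical second definition elsewhere in the context that would produce exactly these symptoms; this bean is an assumption, not something taken from the posted config. With queue-capacity="1000" and no rejection-policy, the JDK default ABORT policy applies, which matches both the queued tasks = 1000 and the RejectedExecutionException in the trace:
<!-- hypothetical duplicate; Spring silently overrides the earlier 'executor'
     bean, discarding CALLER_RUNS and the unbounded queue -->
<task:executor id="executor" pool-size="5-20" queue-capacity="1000" />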

Related

Camel route publisher not publishing to JMS outbound queue fails with java.util.concurrent.RejectedExecutionException

So I have a route consuming a JMS message from one queue, and I want to send it to another queue on the same broker. The Blueprint XML is pretty simple:
<camel:route id="from-jms-consumer-to-jms" trace="false">
    <from uri="jms:queue:test" />
    <log message="Message consumed :: ${body}" />
    <to uri="jms:queue:other_test" />
</camel:route>
But it doesn't work! It consumes and logs the message (so there's no issue with the connection factory), but when it tries to publish, it fails with java.util.concurrent.RejectedExecutionException.
Why? I tried everything:
separate connection factories
pooling/no-pooling
transacted/non-transacted
But even a simple route like this fails on the publisher.
This is Camel 3.0.0-M4, and I'm running on Karaf 4.2.1 on JDK 9. There are no other issues; all other components work and have been individually tested.
I even looked at the code for JmsProducer in the Camel source. There's an isRunAllowed() check that returns false for the producer. I don't know what I could be missing: the thread pool? A JMS header on the message? I'm stumped!
Error stack trace:
Message History
---------------------------------------------------------------------------------------------------------------------------------------
RouteId ProcessorId Processor Elapsed (ms)
[from-jms-consumer-] [from-jms-consumer-] [jmsFrom://queue:test ] [ 2]
[from-jms-consumer-] [log7 ] [log ] [ 1]
[from-jms-consumer-] [to7 ] [jms:queue:other_test ] [ 0]
Stacktrace
---------------------------------------------------------------------------------------------------------------------------------------
java.util.concurrent.RejectedExecutionException: null
at org.apache.camel.component.jms.JmsProducer.process(JmsProducer.java:140) ~[127:org.apache.camel.camel-jms:3.0.0.M4]
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:130) ~[110:org.apache.camel.camel-base:3.0.0.M4]
at org.apache.camel.processor.errorhandler.RedeliveryErrorHandler$RedeliveryState.run(RedeliveryErrorHandler.java:480) [110:org.apache.camel.camel-base:3.0.0.M4]
at org.apache.camel.impl.engine.DefaultReactiveExecutor$Worker.schedule(DefaultReactiveExecutor.java:185) [110:org.apache.camel.camel-base:3.0.0.M4]
at org.apache.camel.impl.engine.DefaultReactiveExecutor.scheduleMain(DefaultReactiveExecutor.java:59) [110:org.apache.camel.camel-base:3.0.0.M4]
at org.apache.camel.processor.Pipeline.process(Pipeline.java:87) [110:org.apache.camel.camel-base:3.0.0.M4]
at org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:222) [110:org.apache.camel.camel-base:3.0.0.M4]
at org.apache.camel.impl.engine.DefaultAsyncProcessorAwaitManager.process(DefaultAsyncProcessorAwaitManager.java:77) [110:org.apache.camel.camel-base:3.0.0.M4]
at org.apache.camel.support.AsyncProcessorSupport.process(AsyncProcessorSupport.java:40) [143:org.apache.camel.camel-support:3.0.0.M4]
at org.apache.camel.component.jms.EndpointMessageListener.onMessage(EndpointMessageListener.java:111) [127:org.apache.camel.camel-jms:3.0.0.M4]
at org.springframework.jms.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:736) [167:org.apache.servicemix.bundles.spring-jms:5.0.8.RELEASE_1]
at org.springframework.jms.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:696) [167:org.apache.servicemix.bundles.spring-jms:5.0.8.RELEASE_1]
at org.springframework.jms.listener.AbstractMessageListenerContainer.doExecuteListener(AbstractMessageListenerContainer.java:674) [167:org.apache.servicemix.bundles.spring-jms:5.0.8.RELEASE_1]
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:318) [167:org.apache.servicemix.bundles.spring-jms:5.0.8.RELEASE_1]
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:245) [167:org.apache.servicemix.bundles.spring-jms:5.0.8.RELEASE_1]
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1189) [167:org.apache.servicemix.bundles.spring-jms:5.0.8.RELEASE_1]
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:1179) [167:org.apache.servicemix.bundles.spring-jms:5.0.8.RELEASE_1]
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:1076) [167:org.apache.servicemix.bundles.spring-jms:5.0.8.RELEASE_1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) [?:?]
at java.lang.Thread.run(Thread.java:844) [?:?]
I got the publisher to work using "toD" instead of "to" on the route (https://camel.apache.org/message-endpoint.html).
The transaction manager is still throwing an ERROR about not being able to log a transaction, something about the connection not being a "NamedXAResource" (whatever that means), but at least the publisher is publishing.
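For reference, here's a minimal sketch of the working route with the dynamic endpoint, assuming the same queue names as above:
<camel:route id="from-jms-consumer-to-jms" trace="false">
    <from uri="jms:queue:test" />
    <log message="Message consumed :: ${body}" />
    <!-- toD resolves the endpoint per exchange instead of binding one producer at startup -->
    <toD uri="jms:queue:other_test" />
</camel:route>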

Spring-amqp - Message processing delayed

We have a Java/Spring/Tomcat application deployed on a RHEL 7.0 VM. It uses AlejandroRivera/embedded-rabbitmq to start the RabbitMQ server as soon as the WAR is deployed, and then connects to it. We have multiple queues that we use to handle and filter events.
The flow is something like this:
event received -> published to event queue -> listener class filters events -> published to another queue for processing -> published to yet another queue for logging
The issue is:
Processing starts normally and we can see messages flowing through the queues, but after some time the listener class stops receiving events. We can see that we published to the RabbitMQ channel, but the message never gets off the queue to the listener.
Things degrade from there: events are processed with a growing delay that eventually reaches minutes. The load isn't high, around 200 events, of which only a handful matter to us.
What we tried:
Initially the queues had prefetch set to 1 and consumers set to a min of 2 and a max of 5. We removed the prefetch and raised the max-concurrency setting to add more consumers, but the issue is still there; the delay just takes longer to appear, and after a few minutes processing starts taking around 20-30 seconds.
We see in the logs that we published the event to the queue, and we see the log line where we got it off the queue, with the delay in between, so there's nothing in our code running in the middle that could generate this delay.
As far as we can tell, the rest of the queues process messages properly; it's just this one that gets into this stuck mode.
The errors I see are the following, but I'm unsure what they mean and whether they're related:
Jun 4 11:16:04 server: [pool-3-thread-10] ERROR com.rabbitmq.client.impl.ForgivingExceptionHandler - Consumer org.springframework.amqp.rabbit.listener.BlockingQueueConsumer$InternalConsumer#70dfa413 (amq.ctag-VaWc-hv-VwcUPh9mTQTj7A) method handleDelivery for channel AMQChannel(amqp://agent#127.0.0.1:5672/,198) threw an exception for channel AMQChannel(amqp://agent#127.0.0.1:5672/,198)
Jun 4 11:16:04 server: java.io.IOException: Unknown consumerTag
Jun 4 11:16:04 server: at com.rabbitmq.client.impl.ChannelN.basicCancel(ChannelN.java:1266)
Jun 4 11:16:04 server: at sun.reflect.GeneratedMethodAccessor180.invoke(Unknown Source)
Jun 4 11:16:04 server: at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
Jun 4 11:16:04 server: at java.lang.reflect.Method.invoke(Method.java:498)
Jun 4 11:16:04 server: at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$CachedChannelInvocationHandler.invoke(CachingConnectionFactory.java:955)
Jun 4 11:16:04 server: at com.sun.proxy.$Proxy119.basicCancel(Unknown Source)
Jun 4 11:16:04 server: at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer$InternalConsumer.handleDelivery(BlockingQueueConsumer.java:846)
Jun 4 11:16:04 server: at com.rabbitmq.client.impl.ConsumerDispatcher$5.run(ConsumerDispatcher.java:149)
Jun 4 11:16:04 server: at com.rabbitmq.client.impl.ConsumerWorkService$WorkPoolRunnable.run(ConsumerWorkService.java:100)
Jun 4 11:16:04 server: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
Jun 4 11:16:04 server: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
Jun 4 11:16:04 server: at java.lang.Thread.run(Thread.java:748)
This one happens on shutdown of the application, but I've seen it happen while the application is still running:
2018-06-05 13:22:45,443 ERROR CachingConnectionFactory$DefaultChannelCloseLogger - Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - unknown delivery tag 109, class-id=60, method-id=120)
I'm not sure how to address these two errors, nor whether they are related.
Here's my Spring config:
<!-- Queues -->
<rabbit:queue id="monitorIncomingEventsQueue" name="MonitorIncomingEventsQueue"/>
<rabbit:queue id="interestingEventsQueue" name="InterestingEventsQueue"/>
<rabbit:queue id="textCallsEventsQueue" name="TextCallsEventsQueue"/>
<rabbit:queue id="callDisconnectedEventQueue" name="CallDisconnectedEventQueue"/>
<rabbit:queue id="incomingCallEventQueue" name="IncomingCallEventQueue"/>
<rabbit:queue id="eventLoggingQueue" name="EventLoggingQueue"/>
<!-- listeners -->
<bean id="monitorListener" class="com.example.rabbitmq.listeners.monitorListener"/>
<bean id="interestingEventsListener" class="com.example.rabbitmq.listeners.InterestingEventsListener"/>
<bean id="textCallsEventListener" class="com.example.rabbitmq.listeners.TextCallsEventListener"/>
<bean id="callDisconnectedEventListener" class="com.example.rabbitmq.listeners.CallDisconnectedEventListener"/>
<bean id="incomingCallEventListener" class="com.example.rabbitmq.listeners.IncomingCallEventListener"/>
<bean id="eventLoggingEventListener" class="com.example.rabbitmq.listeners.EventLoggingListener"/>
<rabbit:listener-container connection-factory="connectionFactory" message-converter="defaultMessageConverter" concurrency="5" max-concurrency="40" acknowledge="none">
    <rabbit:listener queues="interestingEventsQueue" ref="interestingEventsListener" method="handleIncomingMessage"/>
</rabbit:listener-container>
<rabbit:listener-container connection-factory="connectionFactory" message-converter="defaultMessageConverter" concurrency="5" max-concurrency="20" acknowledge="none">
    <rabbit:listener queues="textCallsEventsQueue" ref="textCallsEventListener" method="handleIncomingMessage"/>
</rabbit:listener-container>
<rabbit:listener-container connection-factory="connectionFactory" message-converter="defaultMessageConverter" concurrency="5" max-concurrency="20" acknowledge="none">
    <rabbit:listener queues="callDisconnectedEventQueue" ref="callDisconnectedEventListener" method="handleIncomingMessage"/>
</rabbit:listener-container>
<rabbit:listener-container connection-factory="connectionFactory" message-converter="defaultMessageConverter" concurrency="5" max-concurrency="30" acknowledge="none">
    <rabbit:listener queues="incomingCallEventQueue" ref="incomingCallEventListener" method="handleIncomingMessage"/>
</rabbit:listener-container>
<rabbit:listener-container connection-factory="connectionFactory" message-converter="defaultMessageConverter" concurrency="1" max-concurrency="3" acknowledge="none">
    <rabbit:listener queues="monitorIncomingEventsQueue" ref="monitorListener" method="handleIncomingMessage"/>
</rabbit:listener-container>
<rabbit:listener-container connection-factory="connectionFactory" message-converter="defaultMessageConverter" concurrency="5" max-concurrency="10" acknowledge="none">
    <rabbit:listener queues="EventLoggingQueue" ref="eventLoggingEventListener" method="handleLoggingEvent"/>
</rabbit:listener-container>
<rabbit:connection-factory id="connectionFactory" host="${host.name}" port="${port.number}" username="${user.name}" password="${user.password}" connection-timeout="20000"/>
I've read here that the processing delay could be caused by a network problem, but in this case the server and the app are on the same VM. It's a locked-down environment, so most ports aren't open, but I doubt that's what's wrong.
More logs: https://pastebin.com/4QMFDT7A
Any help is appreciated, thanks.
I need to see much more log than that - this is the smoking gun:
Storing...Storing delivery for Consumer#a2ce092: tags=[{}]
The consumer tags collection is empty, which means the consumer had already been canceled at that time (for some reason, which should appear earlier in the log).
If there's any chance you could reproduce with 1.7.9.BUILD-SNAPSHOT, I added some TRACE level logging which should help diagnosing this.
EDIT
In reply to your recent comment on rabbitmq-users...
Can you try with fixed concurrency? Variable concurrency in Spring AMQP's container is often not very useful because consumers will typically only be reduced if the entire container is idle for some time.
It might explain, however, why you are seeing consumers being canceled.
Perhaps there are/were some race conditions in that logic; using a fixed number of consumers (don't specify max...) will avoid that; if you can try it, it will at least eliminate that as a possibility.
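As an illustration of that suggestion (not the poster's actual config), one of the containers above with a fixed consumer count would look like this:
<!-- fixed concurrency: no max-concurrency attribute, so consumers are never scaled down -->
<rabbit:listener-container connection-factory="connectionFactory" message-converter="defaultMessageConverter" concurrency="10" acknowledge="none">
    <rabbit:listener queues="interestingEventsQueue" ref="interestingEventsListener" method="handleIncomingMessage"/>
</rabbit:listener-container>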
That said, I am confused (I didn't notice this in your Stack Overflow configuration): with acknowledge="none" there should be no acks being sent to the broker at all (NONE is what sets the autoAck flag):
String consumerTag = this.channel.basicConsume(queue, this.acknowledgeMode.isAutoAck(), ...
and
public boolean isAutoAck() {
    return this == NONE;
}
Are you sending acks from your code? If so, the ack mode should be MANUAL. I can't see a scenario where the container would send an ack for a NONE ack mode.
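If acks really are being sent from the listener code, the container would need to be declared with manual acknowledgement; a sketch against one of the containers above:
<rabbit:listener-container connection-factory="connectionFactory" message-converter="defaultMessageConverter" concurrency="5" acknowledge="manual">
    <rabbit:listener queues="interestingEventsQueue" ref="interestingEventsListener" method="handleIncomingMessage"/>
</rabbit:listener-container>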

GC overhead limit exceeded when using Spring Integration executor channel

I got the exception below. I suspect the heap was full, so the GC error was thrown. Please suggest any other perspective on the application design described below.
2017:06:07 21:18:36.275 [loginputtaskexecutor-7] ERROR o.s.i.handler.LoggingHandler - org.springframework.messaging.MessageHandlingException: nested exception is java.lang.IllegalStateException: Cannot process message
at org.springframework.integration.handler.MethodInvokingMessageProcessor.processMessage(MethodInvokingMessageProcessor.java:96)
at org.springframework.integration.handler.ServiceActivatingHandler.handleRequestMessage(ServiceActivatingHandler.java:89)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:109)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:148)
at org.springframework.integration.dispatcher.UnicastingDispatcher.access$000(UnicastingDispatcher.java:53)
at org.springframework.integration.dispatcher.UnicastingDispatcher$3.run(UnicastingDispatcher.java:129)
at org.springframework.integration.util.ErrorHandlingTaskExecutor$1.run(ErrorHandlingTaskExecutor.java:55)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: Cannot process message
at org.springframework.integration.util.MessagingMethodInvokerHelper.processInternal(MessagingMethodInvokerHelper.java:333)
at org.springframework.integration.util.MessagingMethodInvokerHelper.process(MessagingMethodInvokerHelper.java:155)
at org.springframework.integration.handler.MethodInvokingMessageProcessor.processMessage(MethodInvokingMessageProcessor.java:93)
... 11 more
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
Application flow in detail:
The Spring Integration application is built to listen for messages from ActiveMQ. After a message is consumed from ActiveMQ, it is handed over to an input channel (an executor channel) whose subscriber is a service activator. In the service activator the message is converted to JSON and then stored in Cassandra. The transaction is declared on the service activator method.
With this design I intended to break the message transaction flow by using an executor channel: after a message is consumed it is handed over to the executor channel and the transaction ends; the executor channel's threads then take care of performing the parallel writes to Cassandra.
Is there a better way to write a large volume of data to Cassandra as fast as possible using Java Spring Integration?
If the data sink can't keep up, add a limit to the queue size in the TaskExecutor and use a CallerRunsPolicy or CallerBlocksPolicy when the queue is full.
That will naturally throttle the workload at the rate the sink can deal with.
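In the XML style used elsewhere here, that suggestion might look like the sketch below. The pool sizes and capacity are placeholders; note that CallerBlocksPolicy (a Spring Integration class) can't be set via the rejection-policy attribute, which only exposes the standard JDK policies, so it would have to be set as a RejectedExecutionHandler property on a ThreadPoolTaskExecutor bean instead:
<!-- bounded queue + CALLER_RUNS: when the queue is full, the JMS container thread
     performs the Cassandra write itself, naturally throttling consumption -->
<task:executor id="loginputtaskexecutor" pool-size="4-8" queue-capacity="100" rejection-policy="CALLER_RUNS" />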

Jedis Get Data: JedisConnectionFailureException iterating a section of code over a long period of time

So I have code that gets values from Redis using the Jedis client. At one point Redis was at its maximum connection count, and these exceptions were being thrown:
org.springframework.data.redis.RedisConnectionFailureException
Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.fetchJedisConnector(JedisConnectionFactory.java:140)
at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.getConnection(JedisConnectionFactory.java:229)
...
org.springframework.data.redis.RedisConnectionFailureException
java.net.SocketTimeoutException: Read timed out; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: java.net.SocketTimeoutException: Read timed out
at org.springframework.data.redis.connection.jedis.JedisExceptionConverter.convert(JedisExceptionConverter.java:47)
at org.springframework.data.redis.connection.jedis.JedisExceptionConverter.convert(JedisExceptionConverter.java:36)
...
org.springframework.data.redis.RedisConnectionFailureException
Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.fetchJedisConnector(JedisConnectionFactory.java:140)
at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.getConnection(JedisConnectionFactory.java:229)
When I checked an AppDynamics analysis of this scenario, I saw some calls iterating over a long period of time (1772 seconds). The calls are shown in the snips.
Can anyone explain what's happening here? Why didn't Jedis stop after the timeout setting (500 ms)? Can I prevent this from happening?
This is what my bean definitions for Jedis look like:
<bean id="jedisConnectionFactory" class="org.springframework.data.redis.connection.jedis.JedisConnectionFactory" p:host-name="100.90.80.70" p:port="6382" p:timeout="500" p:use-pool="true" p:poolConfig-ref="jedisPoolConfig" p:database="3" />
<bean id="jedisPoolConfig" class="redis.clients.jedis.JedisPoolConfig">
<property name="maxTotal" value="1000" />
<property name="maxIdle" value="10" />
<property name="maxWaitMillis" value="500" />
<property name="testOnBorrow" value="true" />
<property name="testOnReturn" value="true" />
<property name="testWhileIdle" value="true" />
<property name="numTestsPerEvictionRun" value="10" />
</bean>
I'm not familiar with the AppDynamics output. I assume it's a cumulative view of threads and their sleep times: threads get reused, so the sleep times add up. In some cases a thread gets a connection directly, without any waiting; in another call the same thread has to wait until a connection can be provided. The wait duration depends on when a connection becomes available, or on when the wait limit is hit.
Let's have a practical example:
Your screenshot shows a thread which waited 172 ms. Assuming the sleep is only called within the Jedis/Redis invocation path, the thread waited 172 ms in total to get a connection.
Another thread, which waited 530 ms, looks to me as if the first attempt to get a connection wasn't successful (which explains the first 500 ms) and a second attempt had to wait another 30 ms. It could also have waited twice for 265 ms.
Sidenote:
1000+ connections could severely limit scalability. Spring Data Redis also supports other drivers which don't require pooling but work with fewer connections (see Spring Data Redis Connectors and here).
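For example, a hypothetical switch to the Lettuce driver, which shares a single connection across threads so the 1000-connection pool disappears entirely; the property names mirror the Jedis factory above and should be verified against the Spring Data Redis version in use:
<bean id="lettuceConnectionFactory" class="org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory"
      p:host-name="100.90.80.70" p:port="6382" p:timeout="500" p:database="3" />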

What is causing this JMS error connecting to OracleAQ?

I am getting sporadic errors from a Java service that is listening to OracleAQ.
It seems to happen each night, and I can't be sure what is going on. Could it really be a database connection problem?
Or does "Dequeue failed" suggest that it was connected and something else happened?
Here is the exception:
[2013-11-04 18:16:16,508] WARN org.springframework.jms.listener.DefaultMessageListenerContainer - Setup of JMS message listener invoker failed for destination 'MYCOMPANY_INFO_QUEUE' - trying to recover. Cause: JMS-120: Dequeue failed; nested exception is java.sql.SQLException: Io exception: Socket read timed out
oracle.jms.AQjmsException: JMS-120: Dequeue failed
at oracle.jms.AQjmsError.throwEx(AQjmsError.java:311)
at oracle.jms.AQjmsConsumer.dequeue(AQjmsConsumer.java:2234)
at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:1028)
at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:951)
at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:929)
at oracle.jms.AQjmsConsumer.receive(AQjmsConsumer.java:781)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveMessage(AbstractPollingMessageListenerContainer.java:430)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:310)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:263)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1096)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:1088)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:985)
at java.lang.Thread.run(Thread.java:662)
[Linked-exception]
java.sql.SQLException: Io exception: Socket read timed out
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:255)
at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:976)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1168)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3285)
at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3329)
at oracle.jms.AQjmsConsumer.dequeue(AQjmsConsumer.java:1732)
at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:1028)
at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:951)
at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:929)
at oracle.jms.AQjmsConsumer.receive(AQjmsConsumer.java:781)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveMessage(AbstractPollingMessageListenerContainer.java:430)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:310)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:263)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1096)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:1088)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:985)
at java.lang.Thread.run(Thread.java:662)
[2013-11-04 18:16:16,569] INFO org.springframework.jms.listener.DefaultMessageListenerContainer - Successfully refreshed JMS Connection
The JMS receive timeout should be configured in seconds (while the DB timeout is in milliseconds), so make sure your JMS value is less. For example, here is my working Spring config:
<bean id="xxxJmsTemplate" class="org.springframework.jms.core.JmsTemplate102">
<property name="connectionFactory" ref="xxxJmsConnectionFactory"/>
<property name="defaultDestinationName" value="some_queue"/>
<property name="receiveTimeout" value="10"/><!-- seconds -->
</bean>
PS: The special Spring constant RECEIVE_TIMEOUT_NO_WAIT (-1) does not seem to work for this setting, but a reasonably short time in seconds should do the trick.
I suggest looking at your dequeue options for wait time.
import oracle.AQ.AQDequeueOption;
...
AQDequeueOption options = new AQDequeueOption();
options.setWaitTime(AQDequeueOption.WAIT_NONE);
// WAIT_NONE    = do not wait if messages are not available
// WAIT_FOREVER = wait "forever"; the default value
...
WAIT_FOREVER is the default and will wait until a message is available on the queue; however, this holds the database connection open.
I believe that's why you experience the errors "sporadically": most of the time messages are being enqueued and everything runs smoothly, and then when messages are not enqueued (each night) your database connection times out.
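If you stay on the Spring listener container from your stack trace, the equivalent knob is its receiveTimeout property (in milliseconds), which keeps each blocking receive short so the connection is never held open indefinitely. A hedged sketch with placeholder bean names (aqConnectionFactory and myListener are assumptions):
<bean id="messageListenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="aqConnectionFactory"/> <!-- placeholder -->
    <property name="destinationName" value="MYCOMPANY_INFO_QUEUE"/>
    <property name="messageListener" ref="myListener"/> <!-- placeholder -->
    <property name="receiveTimeout" value="1000"/> <!-- ms; keep below the socket read timeout -->
</bean>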
