Recreate all connections of spring rabbit CachingConnectionFactory - java

I have a CachingConnectionFactory with multiple addresses. When one broker goes down, it connects to the second. Now, when the first broker comes up again, I need to destroy the existing connections and reconnect to it. But CachingConnectionFactory doesn't have any start or stop methods; it only has destroy, which might render the factory unusable(?).
Config:
<bean id="testConnFactory" class="org.springframework.amqp.rabbit.connection.CachingConnectionFactory">
<property name="addresses" value="rabbit1,rabbit2" />
<property name="cacheMode" value="CONNECTION" />
<property name="connectionCacheSize" value="${connection.cache.size}" />
</bean>
Is there any way to do this programmatically?

Calling destroy() is safe; the connection(s) will be reset and re-established the next time a component wants one.
Bear in mind, though, that this will impact any in-process operations.
We should probably add a less scary method, such as resetConnection() like we have with the Spring JMS connection factory.
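For example (a minimal sketch; how you detect that the preferred broker is back is up to you, and the injected bean matches the testConnFactory config above):
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;

public class BrokerRecoveryHandler {

    private final CachingConnectionFactory testConnFactory;

    public BrokerRecoveryHandler(CachingConnectionFactory testConnFactory) {
        this.testConnFactory = testConnFactory;
    }

    // invoke this from whatever monitoring detects the preferred broker is back
    public void onPrimaryBrokerRestored() {
        // closes the cached connection(s) and channels; the factory stays usable
        // and opens a fresh connection the next time a component asks for one
        testConnFactory.destroy();
    }
}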

Related

Spring JMSTemplate with ibm.mq.jms.MQQueueConnectionFactory

I'm working on a JMS-intensive application that sends/receives hundreds of thousands of messages. I found that performance wasn't all that great and narrowed the issue down to the one line below; the root cause, from what I can tell, is that it doesn't play well with IBM MQ.
jmsTemplate.receive(queueName);
After wrapping this code in a simple timer, I found that receive was taking anywhere from 20-50 milliseconds, and given the sheer amount of throughput I'm dealing with, that will surely add up over time. After a bit of googling I stumbled upon Spring's CachingConnectionFactory, which I wired in as shown below, more or less on blind luck (I wasn't sure whether it would even work with the IBM MQ connection factory I was already using). Note that some code is omitted for legibility...
<bean id="jmsContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
...
</bean>
<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
<property name="connectionFactory">
<ref bean="cacheFactory" />
</property>
...
</bean>
<!--This seems to be the magic piece-->
<bean id="cacheFactory"
class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="ibmMQConnectionFactory" />
<property name="sessionCacheSize" value="100" />
</bean>
<bean id="ibmMQConnectionFactory" class="com.ibm.mq.jms.MQQueueConnectionFactory">
...
</bean>
To my surprise, this cut my jmsTemplate.receive() calls down from anywhere between 20-50+ milliseconds to about 1-2 milliseconds per message. I'm not able to find any solid information about how exactly this works behind the scenes and how "sessionCacheSize" affects performance. In my first test I used a value of 50 and in the second 100, with the second option proving much faster. So my question is: what is an ideal "sessionCacheSize" for an application with a massive amount of throughput, and what drawbacks should I consider with this approach?
I look forward to what you guys have to say on this one...
My knowledge of Spring is limited, but from your description I believe Spring is doing the following for every message it receives:
1) Creating a connection to IBM MQ Queue Manager
2) Opening specified queue
3) Getting message from queue
4) Closing the queue
5) Closing the connection.
Because of all the above operations, the time taken to receive a single message is high. But when sessions are cached, Spring reuses the cached connection and session instead of repeating those steps, hence the better message-receive throughput.
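To illustrate, the equivalent wiring in Java config (a sketch; bean names follow the XML above, and sessionCacheSize is the per-acknowledgement-mode upper bound of sessions kept in the cache):
import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.connection.CachingConnectionFactory;

@Configuration
public class JmsCachingConfig {

    @Bean
    public CachingConnectionFactory cacheFactory(ConnectionFactory ibmMQConnectionFactory) {
        CachingConnectionFactory cachingFactory = new CachingConnectionFactory(ibmMQConnectionFactory);
        // sessions are cached per JMS acknowledgement mode; sessions beyond this
        // limit are physically closed instead of being returned to the cache
        cachingFactory.setSessionCacheSize(100);
        return cachingFactory;
    }
}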

SPRING BATCH: How to configure remote chunking for multiple jobs running in a task executor

I am new to Spring Batch processing. I am using remote chunking, where there is a master, multiple slaves, and ActiveMQ for messaging.
The master has a job and a job launcher, and the job launcher has a task executor with the following configuration:
<task:executor id="batchJobExecutor" pool-size="2" queue-capacity="100" />
The chunk configuration is:
<bean id="chunkWriter"
class="org.springframework.batch.integration.chunk.ChunkMessageChannelItemWriter" scope="step">
<property name="messagingOperations" ref="messagingGateway" />
<property name="replyChannel" ref="replies" />
<property name="throttleLimit" value="50" />
<property name="maxWaitTimeouts" value="60000" />
</bean>
<bean id="chunkHandler"
class="org.springframework.batch.integration.chunk.RemoteChunkHandlerFactoryBean">
<property name="chunkWriter" ref="chunkWriter" />
<property name="step" ref="someJobId" />
</bean>
<integration:service-activator
input-channel="requests" output-channel="replies" ref="chunkHandler" />
So we are allowed to run two jobs at a time, and the remaining jobs wait in the queue.
When two jobs are submitted, the master creates the chunks and submits them to the queue, and the slaves process them.
But the acknowledgment from the slave to the master fails with:
java.lang.IllegalStateException: Message contained wrong job instance id [9331] should have been [9332].
at org.springframework.util.Assert.state(Assert.java:385) ~[Assert.class:4.1.6.RELEASE]
at org.springframework.batch.integration.chunk.ChunkMessageChannelItemWriter.getNextResult
Please help me with this.
The ChunkMessageChannelItemWriter is only designed for one concurrent step - you need to put it in step scope so each job gets its own instance - see this test case.
EDIT
Actually, no; that won't work - since the bean instances are using the same reply channel, they could get each other's replies. I opened a JIRA Issue.
This is a very old post, but I think the issue you see here might be related to the throttle limit being larger than the maxWaitTimeouts value.
What we have seen is that the implementation will not read more than maxWaitTimeouts entries from the reply queue after the job has finished. I think this is a bug.
See also the question I asked on Stack Overflow: Remote batch job does not read all responses in afterStep method.
I made a bug report for this as well: https://jira.spring.io/browse/BATCH-2651, and I am creating a PR to fix the issue.

Caching JMS Session with CachingConnectionFactory

I learned that CachingConnectionFactory has the ability to cache JMS sessions. However, I don't understand how I can retrieve a cached Session programmatically.
My Spring configuration looks like this:
<bean id="jmsConnectionFactory"
class="org.springframework.jms.connection.CachingConnectionFactory">
<constructor-arg index="0" ref="connectionFactory"/>
<property name="sessionCacheSize" value="50" />
</bean>
where connectionFactory is a connection factory obtained via JNDI.
The way I create a session is:
QueueConnection queueConnection = jmsConnectionFactory.createQueueConnection();
queueConnection.createQueueSession(false, Session.DUP_OK_ACKNOWLEDGE);
However, it looks like the session created by createQueueSession is always a new session rather than a cached one. It takes about 1.5 milliseconds to create a session, which doesn't sound like it retrieved a cached one.
Can someone please let me know how I can get the session cache working?
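For reference, a minimal sketch of the usage pattern the cache expects (my understanding of CachingConnectionFactory: a session only becomes available for reuse after close() has returned it to the cache):
QueueConnection queueConnection = jmsConnectionFactory.createQueueConnection();
QueueSession session = queueConnection.createQueueSession(false, Session.DUP_OK_ACKNOWLEDGE);
try {
    // ... use the session ...
} finally {
    // not a physical close: the caching proxy returns the session to the pool,
    // so a later createQueueSession(...) with the same ack mode can reuse it
    session.close();
}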

How to run DefaultMessageListenerContainer on the 'main' thread

I have a case where I want to run the DefaultMessageListenerContainer on the same 'main' thread. Right now it uses a SimpleAsyncTaskExecutor, which spawns a new thread every time it receives a message.
We have a test case which connects to a different distributed system, does the processing, and at the end asserts a few things. As the DefaultMessageListenerContainer runs in a separate thread, the main thread returns and starts executing the assertions before the DefaultMessageListenerContainer can complete. This makes the test case fail. As a workaround, we have made the main thread sleep for a few seconds.
Sample config
<int-jms:message-driven-channel-adapter
id="mq.txbus.publisher.channel.adapter"
container="defaultMessageListenerContainer"
channel="inbound.endpoint.publisher"
acknowledge="transacted"
extract-payload="true" />
<beans:bean id="defaultMessageListenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
<beans:property name="connectionFactory" ref="mockConnectionFactory"/>
<beans:property name="destination" ref="publisherToTxmQueue"/>
<beans:property name="taskExecutor" ref="taskExecutor"/>
<beans:property name="maxMessagesPerTask" value="10"/>
<beans:property name="sessionTransacted" value="true"/>
</beans:bean>
<beans:bean id="taskExecutor" class="org.springframework.scheduling.timer.TimerTaskExecutor" />
I am trying to use TimerTaskExecutor here because it creates a single thread, but that thread is separate from the main thread, so the problem remains unresolved. I tried using SyncTaskExecutor, but that does not work either (or maybe I didn't provide the correct property value?).
Answer:
We solved this problem by using SimpleMessageListenerContainer.
This is the new config
<int-jms:message-driven-channel-adapter
id="mq.txbus.publisher.channel.adapter"
container="messageListenerContainer"
channel="inbound.endpoint.publisher"
acknowledge="transacted"
extract-payload="true" />
<beans:bean id="messageListenerContainer" class="org.springframework.jms.listener.SimpleMessageListenerContainer">
<beans:property name="connectionFactory" ref="mockConnectionFactory"/>
<beans:property name="destination" ref="publisherToTxmQueue"/>
<beans:property name="sessionTransacted" value="true"/>
<beans:property name="exposeListenerSession" value="false"/>
</beans:bean>
First you must understand that JMS is inherently asynchronous and does not block. This means that once you send a message to a queue it will be processed by another thread, maybe on a different machine, possibly minutes or hours later if the consumer is down.
Reading the description of your test case, it looks like you are doing some system/integration testing. Unfortunately there is not much you can do except wait; however, you should not wait blindly, as that makes your tests not only slow but also unstable - no matter how long you wait, on a busy system or during a lengthy GC pause your test might still time out even though there is no error.
So instead of sleeping for a fixed number of seconds, sleep for e.g. ~100 milliseconds at a time and check some condition that is only met once the message has been processed, as in the sketch below. For example, if processing the message inserts some record into the database, check the database periodically.
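A minimal sketch of that polling loop (messageProcessed() is a hypothetical check, e.g. a database query, and the snippet assumes a test method that can throw InterruptedException):
long deadline = System.currentTimeMillis() + 10000; // overall timeout: 10 s
while (!messageProcessed()) {
    if (System.currentTimeMillis() > deadline) {
        throw new AssertionError("message was not processed within 10 seconds");
    }
    Thread.sleep(100); // poll roughly every 100 ms
}
// the condition holds - safe to run the assertions now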
A much more elegant way (without busy waiting) is to implement the request/reply pattern; see How should I implement request response with JMS? for implementation details. Basically, when sending a message you define a reply queue and block waiting for a message on that queue. When processing of the original message is done, the consumer should send a reply message to the defined queue. When you receive that message, perform all the assertions.
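A sketch of that approach using JmsTemplate.sendAndReceive (available since Spring 4.1; the queue name and payload are placeholders):
import javax.jms.Message;

// sends the request with a temporary reply queue set as JMSReplyTo, then
// blocks (up to the template's receive timeout) until the consumer replies
Message reply = jmsTemplate.sendAndReceive("request.queue",
        session -> session.createTextMessage("start work"));
// the consumer must send its reply to the message's JMSReplyTo destination;
// once the reply arrives here, run the assertions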
If it's for testing purposes, then why not use a CyclicBarrier that runs the assertions once the JMS activity has completed?
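A minimal sketch of that idea (two parties: the listener thread and the test's main thread):
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.TimeUnit;

final CyclicBarrier barrier = new CyclicBarrier(2);

// in the test message listener, as the last step of onMessage(...):
//     barrier.await();

// in the test's main thread:
barrier.await(10, TimeUnit.SECONDS); // returns once the listener has arrived
// run the assertions here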

Spring scoped-proxy transactions are fine via JPA but not committing via JDBC

I have a situation where I have to handle multiple clients in one app, and each client has a separate database. To support that I'm using a Spring custom scope, quite similar to the built-in request scope. A user authenticates in each request and can set the context client ID based on the passed credentials. The scoping itself seems to be working properly.
So I used my custom scope to create a scoped proxy for my DataSource to support a different database per client, and I get connections to the proper databases.
Then I created a scoped proxy for the EntityManagerFactory to use JPA. And this part also looks OK.
Then I added a scoped proxy for the PlatformTransactionManager for declarative transaction management. I use @Transactional on my service layer and it gets propagated nicely to my Spring Data-powered repository layer.
All is fine and works correctly as long as I use only JPA. I can even switch the context to a different client within the request (I use ThreadLocals under the hood) and transactions to both databases are handled correctly.
The problems start when I try to use JdbcTemplate in one of my custom repositories. At first glance all looks OK there too, as no exceptions are thrown. But when I check the database for the objects I thought I had inserted with my custom JDBC-based repository, they're not there!
I know for sure I can use JPA and JDBC together by declaring only a JpaTransactionManager and passing both the DataSource and EntityManagerFactory to it - I checked it without the scoped proxies and it works.
So the question is: how do I make JDBC work together with JPA using the JpaTransactionManager when I have scoped-proxied the DataSource, EntityManagerFactory and PlatformTransactionManager beans? I repeat that using only JPA works perfectly, but adding plain JDBC into the mix is not working.
UPDATE1: And one more thing: all read-only (SELECT) operations work fine with JDBC too - only writes (INSERT, UPDATE, DELETE) end up not committed or rolled back.
UPDATE2: As @Tomasz suggested, I've removed the scoped proxies from the EntityManagerFactory and PlatformTransactionManager, as those are indeed not needed and cause more confusion than anything else.
The real problem seems to be switching the scope context within a transaction. The TransactionSynchronizationManager binds transactional resources (i.e. the EMF or DS) to the thread at transaction start. It is able to unwrap the scoped proxy, so it binds the actual instance of the resource from the scope that is active at the time the transaction starts. Then, when I change the context within a transaction, it all gets messed up.
It seems like I need to suspend the active transaction and store aside the current transaction context, so that I can clear it upon entering another scope, make Spring think it's not inside a transaction any more, and force it to create a new one for the new scope when needed. Then, when leaving the scope, I'd have to restore the previously suspended transaction. Unfortunately I have been unable to come up with a working implementation yet. Any hints appreciated.
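The shape of what I have in mind is something like this (a rough sketch of the idea only, not a working implementation; ClientScopeHolder is my own scope-context holder):
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

public class ClientScopedWorkRunner {

    private final ClientScopeHolder clientScopeHolder; // my own context holder

    public ClientScopedWorkRunner(ClientScopeHolder clientScopeHolder) {
        this.clientScopeHolder = clientScopeHolder;
    }

    // NOT_SUPPORTED should make Spring suspend any active transaction before
    // entering this method and resume it afterwards
    @Transactional(propagation = Propagation.NOT_SUPPORTED)
    public void doInClientScope(String clientId, Runnable work) {
        clientScopeHolder.activate(clientId);   // switch the custom scope context
        try {
            // @Transactional methods invoked through proxies inside work.run()
            // should then start a fresh transaction bound to the new scope's DataSource
            work.run();
        } finally {
            clientScopeHolder.deactivate();     // restore the previous context
        }
    }
}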
And below is some code of mine, but it's pretty standard, except for the scoped-proxies.
The DataSource:
<!-- provides database name based on client context -->
<bean id="clientDatabaseNameProvider"
class="com.example.common.spring.scope.ClientScopedNameProviderImpl"
c:clientScopeHolder-ref="clientScopeHolder"
p:databaseName="${base.db.name}" />
<!-- an extension of org.apache.commons.dbcp.BasicDataSource that
uses proper database URL based on database name given by above provider -->
<bean id="jpaDataSource" scope="client"
class="com.example.common.spring.datasource.MysqlDbInitializingDataSource"
destroy-method="close"
p:driverClassName="${mysql.driver}"
p:url="${mysql.url}"
p:databaseNameProvider-ref="clientDatabaseNameProvider"
p:username="${mysql.username}"
p:password="${mysql.password}"
p:defaultAutoCommit="false"
p:connectionProperties="sessionVariables=storage_engine=InnoDB">
<aop:scoped-proxy proxy-target-class="false" />
</bean>
The EntityManagerFactory:
<bean id="jpaVendorAdapter"
class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"
p:database="MYSQL"
p:generateDdl="true"
p:showSql="true" />
<util:properties id="jpaProperties">
<!-- omitted for readability -->
</util:properties>
<bean id="jpaDialect"
class="org.springframework.orm.jpa.vendor.HibernateJpaDialect" />
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
p:packagesToScan="com.example.model.core"
p:jpaVendorAdapter-ref="jpaVendorAdapter"
p:dataSource-ref="jpaDataSource"
p:jpaDialect-ref="jpaDialect"
p:jpaProperties-ref="jpaProperties" />
The PlatformTransactionManager:
<bean id="transactionManager"
class="org.springframework.orm.jpa.JpaTransactionManager"
p:dataSource-ref="jpaDataSource"
p:entityManagerFactory-ref="entityManagerFactory" />
<tx:annotation-driven proxy-target-class="false" mode="proxy"
transaction-manager="transactionManager" />
