I have a standalone Spring application that reads messages from a WebLogic cluster. It is not an MDP; rather, it runs multiple threads that each use a JMSTemplate to browse the queue and retrieve messages based on specific criteria.
I would like to cache the JMS connections while also ensuring that I open enough connections that I am always retrieving messages from each server in the cluster. My issue is that the default ConnectionFactory does not cache at all, while the Spring wrappers SingleConnectionFactory and CachingConnectionFactory do not allow multiple connections to be open at once.
Should I implement my own ConnectionFactory that caches on a limited basis? Or what is the recommended approach?
This use case seemed quite clumsy to do in Spring, so I resolved the issue by managing the JMS resources directly.
I am using Java, Spring Boot and ActiveMQ.
I need to send a large batch of messages in the shortest possible time.
Right now it takes a lot of time to send the messages one by one using JMSTemplate.
Is there any way I can batch the messages and send them to ActiveMQ at once, with a guarantee that the order of the messages is maintained?
Thanks in advance.
The default ActiveMQ configuration can be slow for a large message flow. We use the following configuration to improve the message rates:
connection.setOptimizeAcknowledge(true);
consumerSession = connection.createSession(false, Session.DUPS_OK_ACKNOWLEDGE);
setOptimizeAcknowledge enables optimized acknowledgement of received messages, while Session.DUPS_OK_ACKNOWLEDGE allows batched acknowledgements.
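Acknowledgement tuning helps the consumer side; to batch the sends themselves while keeping ordering, a transacted session is one option: messages from a single producer to a single queue keep their order, and the broker treats everything up to commit() as one unit. A minimal sketch (class name and queue name are made up for illustration):

```java
import java.util.List;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;

public class BatchSender {

    // Sends all messages in one local transaction: the broker only makes the
    // batch visible at commit(), and a single producer to a single destination
    // preserves message order.
    public static void sendBatch(ConnectionFactory factory, String queueName,
                                 List<String> bodies) throws JMSException {
        Connection conn = factory.createConnection();
        try {
            Session session = conn.createSession(true, Session.SESSION_TRANSACTED);
            MessageProducer producer = session.createProducer(session.createQueue(queueName));
            for (String body : bodies) {
                producer.send(session.createTextMessage(body));
            }
            session.commit(); // one round trip for the whole batch
        } finally {
            conn.close();
        }
    }
}
```

Because the commit covers the whole loop, a failure mid-batch rolls everything back rather than leaving a partial, out-of-order prefix on the queue.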
Spring's JMSTemplate is notorious for bad performance outside a Java EE container (or some other environment which provides pooled connection resources). Read more on the Apache ActiveMQ website. Therefore, you need to use a connection pool or ditch the JMSTemplate for something else.
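As an example of the connection-pool route, the ActiveMQ factory can be wrapped in Spring's CachingConnectionFactory so JmsTemplate reuses connections and sessions instead of opening a fresh one per send. A sketch of the wiring; bean ids, the broker URL and the cache size are placeholders:

```xml
<bean id="amqConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
    <property name="brokerURL" value="tcp://localhost:61616"/>
</bean>

<!-- caches the underlying connection plus a pool of sessions/producers -->
<bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
    <property name="targetConnectionFactory" ref="amqConnectionFactory"/>
    <property name="sessionCacheSize" value="10"/>
</bean>

<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
    <property name="connectionFactory" ref="connectionFactory"/>
</bean>
```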
We have a requirement with one of the source applications that allows very few connections per user to its database.
Since we have multiple Spring Batch interfaces connected to the same source DB, we run out of connections. The Spring Batch jobs run as individual Java programs and we do not have a container.
Please suggest a way to have a datasource with multiple users, or a datasource configuration that can help us maintain a common pool with multiple users connected to that database.
One ugly workaround I can think of is to have a common service to fetch connections from the database; in that service we could use some container to maintain a pool for multiple users, by keeping a list/array of datasource connections per user.
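That workaround can be sketched with plain JDK classes: one bounded pool per user, each capped at the few connections the source application allows, so borrowers block instead of opening connection number N+1. All names below are hypothetical, and the connection type is left generic:

```java
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical "common service": a keyed pool that caps open connections per user.
public class PerUserPool<C> {

    private final int maxPerUser;
    private final Function<String, C> connectionFactory; // opens a connection for a given user
    private final Map<String, BlockingQueue<C>> pools = new ConcurrentHashMap<>();

    public PerUserPool(int maxPerUser, Function<String, C> connectionFactory) {
        this.maxPerUser = maxPerUser;
        this.connectionFactory = connectionFactory;
    }

    // Blocks until one of this user's connections is free; never exceeds the cap.
    public C borrow(String user) {
        BlockingQueue<C> pool = pools.computeIfAbsent(user, u -> {
            BlockingQueue<C> q = new ArrayBlockingQueue<>(maxPerUser);
            for (int i = 0; i < maxPerUser; i++) {
                q.add(connectionFactory.apply(u));
            }
            return q;
        });
        try {
            return pool.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted while waiting for a connection", e);
        }
    }

    public void release(String user, C connection) {
        pools.get(user).offer(connection);
    }
}
```

In a real deployment the factory function would open a java.sql.Connection for that user's credentials, and this service would have to run as a single shared process for the cap to hold across all the batch JVMs.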
To solve this we had to add a container, Tomcat to be precise. It managed all the connections. All applications were deployed on Tomcat, and the Tomcat manager was used to start and stop the applications when needed.
I'm actively using ActiveMQ in my project. Although production uses a standalone ActiveMQ instance, my tests require an embedded ActiveMQ instance. After execution of a particular test method, ActiveMQ holds unprocessed messages in its queues. I'd like to wipe out the ActiveMQ instance after each test. I tried to use JMX to connect to the local ActiveMQ instance and wipe out the queues, but it's a heavyweight solution. Could anyone suggest something more lightweight?
Just turn off broker persistence when you define the broker URL for your unit tests:
vm://localhost?broker.persistent=false
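With that URL the in-VM broker is created lazily on first connection, keeps everything in memory, and disappears when the last connection closes, so each test starts clean. A small self-contained sketch (queue name is arbitrary):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

public class EmbeddedBrokerExample {

    // Sends one message through a throwaway in-memory broker and reads it back.
    public static String roundTrip(String body) throws JMSException {
        ConnectionFactory factory =
            new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
        Connection conn = factory.createConnection();
        conn.start();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("TEST.Q");
            session.createProducer(queue).send(session.createTextMessage(body));
            return ((TextMessage) session.createConsumer(queue).receive(5000)).getText();
        } finally {
            conn.close(); // last connection down: the in-VM broker and its queues are gone
        }
    }
}
```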
ActiveMQ has an option to delete all messages on startup. If you use the XML way of configuring the ActiveMQ broker, you can set it on the <activemq:broker> tag:
<activemq:broker .... deleteAllMessagesOnStartup="true">
...
</activemq:broker>
Another approach could be to use a unique data directory per unit test, which is what we do when unit testing the camel-jms component with an embedded ActiveMQ broker. We have a helper class that sets up ActiveMQ for us, depending on whether we need persistent queues or not:
https://git-wip-us.apache.org/repos/asf?p=camel.git;a=blob;f=components/camel-jms/src/test/java/org/apache/camel/component/jms/CamelJmsTestHelper.java;h=8c81f3e2bed738a75841988fd1239f54a100cd89;hb=HEAD
I believe you want to purge the queue. There are several options for that.
https://activemq.apache.org/how-do-i-purge-a-queue.html
From the link:
"You can use the Web Console to view queues, add/remove queues, purge queues or delete/forward individual messages. Another option is to use JMX to browse the queues and call the purge() method on the QueueViewMBean. You could also delete the queue via removeQueue(String) or removeTopic(String) methods on the BrokerViewMBean. You can also do it programmatically"
The link describes each option in detail.
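The JMX option can be scripted. The sketch below purges one queue through the QueueViewMBean of an embedded broker; the object-name pattern is the one used by newer ActiveMQ 5.x releases, and the queue name is an assumption. The same proxy technique works against a remote broker via a JMXConnector:

```java
import javax.management.JMX;
import javax.management.MBeanServer;
import javax.management.ObjectName;

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.jmx.QueueViewMBean;

public class QueuePurger {

    // Purges one queue on an embedded broker through its JMX MBean and
    // returns the queue size afterwards (0 after a successful purge).
    public static long purge(BrokerService broker, String queueName) throws Exception {
        ObjectName name = new ObjectName(
            "org.apache.activemq:type=Broker,brokerName=" + broker.getBrokerName()
            + ",destinationType=Queue,destinationName=" + queueName);
        MBeanServer server = broker.getManagementContext().getMBeanServer();
        QueueViewMBean queue = JMX.newMBeanProxy(server, name, QueueViewMBean.class);
        queue.purge();
        return queue.getQueueSize();
    }
}
```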
Question: What failover strategy does Spring Batch support best? Resource usage and the failover mechanism are the focus. Any suggestions?
Use case: a Spring Batch job has to be run to read a file (which will be put on the server by another application) from the server and process it.
The environment is clustered, so there could be multiple server instances that trigger the batch jobs, all trying to read the same file on arrival.
My thoughts: polling can be done to check for the arrival of the file and then call the Spring Batch job. Since it is clustered, we could use an active/passive strategy to poll. Other types such as round-robin or time slicing could also be used.
Pardon me if I am not clear. I can explain if something is unclear.
As I understand from here
http://static.springsource.org/spring-batch/reference/html/scalability.html
the better approach would be to have just one poller and then distribute the job to the cluster through one of the mechanisms provided by Spring Batch (I think the one named Remote Chunking is the best choice here).
I don't think you should worry about the clustering strategy as this is handled either by Spring Batch or by other clustering distribution mechanisms.
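Since the input file already sits on a shared drive, the active/passive part (one poller at a time) can be as simple as a file lock, with no extra clustering machinery. A stdlib-only sketch; the class and lock-file names are hypothetical:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical active/passive guard: every node tries to lock the same file on
// the shared drive; only the node holding the lock polls for input files.
public class PollerLeaderGuard {

    private FileChannel channel;

    public boolean tryBecomeLeader(Path lockFile) throws IOException {
        channel = FileChannel.open(lockFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
        FileLock lock;
        try {
            lock = channel.tryLock();     // null if another process holds the lock
        } catch (OverlappingFileLockException e) {
            lock = null;                  // this JVM already holds it
        }
        if (lock == null) {
            channel.close();
            return false;                 // stay passive; retry on the next poll cycle
        }
        return true;                      // keep the channel open while leading
    }
}
```

A passive node keeps retrying on its poll schedule, so if the leader process dies its OS-level lock is released and another node takes over automatically.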
Reading the ActiveMQ documentation (we are using the 5.3 release), I find a section about the possibility of using a JDBC persistence adapter with ActiveMQ.
What are the benefits? Does it provide any gain in performance or reliability? When should I use it?
In my opinion, you would use JDBC persistence if you wanted to have a failover broker and you could not use the file system. The JDBC persistence was significantly slower (during our tests) than journaling to the file system. For a single broker, the journaled file system is best.
If you are running two brokers in an active/passive failover, the two brokers must have access to the same journal / data store so that the passive broker can detect and take over if the primary fails. If you are using the journaled file system, then the files must be on a shared network drive of some sort, using NFS, WinShare, iSCSI, etc. This usually requires a higher-end NAS device if you want to eliminate the file share as a single point of failure.
The other option is that you can point both brokers to the database, which most applications already have access to. The tradeoff is usually simplicity at the price of performance, as the journaled JDBC persistence was slower in our tests.
We run ActiveMQ in an active/passive broker pair with journaled persistence via an NFS mount to a dedicated NAS device, and it works very well for us. We are able to process over 600 msgs/sec through our system with no issues.
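For reference, pointing a broker at the database is a small change in the broker XML; the sketch below assumes a DataSource bean named postgres-ds is defined elsewhere in the same file:

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker1">
    <persistenceAdapter>
        <!-- dataSource references a separately defined JDBC DataSource bean -->
        <jdbcPersistenceAdapter dataSource="#postgres-ds"/>
    </persistenceAdapter>
</broker>
```

With both brokers of an active/passive pair configured this way, the database's own locking decides which broker is the master.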
Hey, using journaled JDBC seems better than using JDBC persistence only, since journaling is much faster than JDBC persistence. It is also better than journaled persistence alone because you have an additional backup of the messages in the DB. Journaled JDBC has the further advantage that the same data in the journal is later persisted to the DB, where it can be accessed by developers when needed!
However, when you are using a master/slave ActiveMQ topology with journaled JDBC, you might end up losing messages, since there might be messages in the journal that have not yet made it into the DB!
If you have a redelivery plugin policy in place and use a master/slave setup, the scheduler is used for the redelivery.
As of today, the scheduler can only be set up on a file-based store, not on JDBC. If you do not pay attention to that, you will take all messages that are in redelivery out of the HA scenario and make them local to the broker.
https://issues.apache.org/jira/browse/AMQ-5238 is an issue in Apache issue tracker that asks for a JDBC persistence adapter for schedulerdb. You can place a vote for it to make it happen.
Actually, even with the top AMQ HA solution, LevelDB+ZooKeeper, the scheduler is taken out of the game and documented to create issues (http://activemq.apache.org/replicated-leveldb-store.html, at the end of the page).
In a JDBC scenario, it is therefore unsafe and unsupported, or at least not clearly documented, how to set up the datastore for the redelivery policy.