HornetQ clustered queue and failing node: are messages lost? - java

I'm facing a design issue in which I would like to have only one JMS producer sending messages to two consumers. There are only two servers, and the producer will start generating messages that will be load balanced (with round robin) to both consumers.
In the hypothetical case of one server failing, I have a mechanism so that a new producer will be activated on the remaining server. But what will happen to the messages that were being processed on the server that went down?
Will they be reassigned to the remaining server, and thus processed by the remaining consumer, or will they be lost?
If they are lost, there is another problem. The producer creates messages based on files in a NAS, so when a server goes down the newly activated producer will start creating messages based on the contents of the NAS, which may duplicate messages (but that case is handled). The problem is that if the server that goes down is not the one with the active producer, then when it comes back up it will have no messages to consume, and no messages will replace the ones that were lost.
How can I achieve a design so that no messages are lost?
Note: When one server goes down, the journal and bindings are lost.

Once the message is transferred to a particular node it belongs to that node.
If a node goes down, you would have to reactivate that node with its journal, and the message state would be recovered from disk. You could also have messages redistributed to other nodes if the local queue has no more consumers (that will depend on your redistribution configuration, of course).
Or the best approach would be to have a backup node for each node.
We have been advising the use of collocated topologies, where each VM hosts an active instance and a backup instance for the other server. That way every live server also carries a backup configuration. This is being improved in 2.4.0 as we speak, since at the moment it requires a lot of manual configuration.
So, in summary, either:
Restart the node, or
Configure backup nodes
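Whichever option you pick, messages can only be recovered from the journal if they were sent as persistent in the first place. A minimal sketch of the producer side (plain JMS API; the JNDI names are placeholders, not from the question):

    import javax.jms.*;
    import javax.naming.InitialContext;

    public class PersistentProducer {
        public static void main(String[] args) throws Exception {
            // JNDI names below are placeholders; they depend on your setup
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
            Queue queue = (Queue) ctx.lookup("queue/tasks");

            Connection conn = cf.createConnection();
            try {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                // PERSISTENT messages are written to the server's journal,
                // so they survive a crash and are recovered when the node
                // (or a backup sharing the same journal) comes back up.
                producer.setDeliveryMode(DeliveryMode.PERSISTENT);
                producer.send(session.createTextMessage("task-1"));
            } finally {
                conn.close();
            }
        }
    }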

Related

JMS queue vs in-memory Java queue

I have a situation where I need to read (ongoing) messages from a topic and put them on another queue. I have doubts whether I need a JMS queue or whether an in-memory Java queue is enough. The reading from the queue will be done by other thread(s) in the same JVM, and I will client-acknowledge the message to the topic after reading it from the (in-memory) queue and processing it as necessary (sending it to a remote IBM MQ). So if my client crashes, the messages that were in the in-memory queue will be lost, but they will still exist on the topic and will be redelivered to me. Am I right?
Some of this depends on how you have set up the queue/topic and the connection string you are using to read from IBM's MQ, but if you are using the defaults you WILL lose messages if you're reading into an in-memory queue.
I'd use ActiveMQ embedded in the same JVM as a library, so it takes care of receipt, delivery and persistence for you.
Also, if you are listening to a topic, you're not going to be sent missed messages after a crash, even if you reconnect afterwards, unless you have:
configured your client as a durable subscriber (see the sketch below), and
reconnected in time (before expireMessagesPeriod is reached)
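A rough sketch of that durable-subscriber setup, using the plain JMS API with client acknowledgement as the question describes (the broker URL, client ID, subscription name, and topic are assumptions):

    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class DurableTopicReader {
        public static void main(String[] args) throws Exception {
            ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection conn = cf.createConnection();
            // A durable subscription requires a stable client ID
            conn.setClientID("reader-1");
            // CLIENT_ACKNOWLEDGE: the broker only discards a message
            // once we explicitly acknowledge it
            Session session = conn.createSession(false, Session.CLIENT_ACKNOWLEDGE);
            Topic topic = session.createTopic("events");
            TopicSubscriber sub = session.createDurableSubscriber(topic, "reader-1-sub");
            conn.start();

            Message msg = sub.receive(5000);
            if (msg != null) {
                // ... process, e.g. forward to the remote IBM MQ ...
                msg.acknowledge(); // acknowledge only after processing succeeds
            }
            conn.close();
        }
    }

If the client crashes before acknowledge(), the broker redelivers the message on reconnect, which is exactly the behaviour the question is after.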
The ActiveMQ library is not large and is worth using if guaranteed delivery of every message is important, especially in an asynchronous environment.
The main difference is that an in-memory queue loses data when the application goes down, while a JMS queue loses data when the server goes down only IF the topic/queue is not persistent. The former is much more likely than the latter, so I'd also say go with JMS.

JMS Message tracking across clusters

I have a JMS implementation based on JBoss (to be precise, JBossMQ on JBoss 4.2). There are 5 clusters, each with a few nodes. One node in each cluster acts as the master node. Out of the 5 clusters, one is supposed to publish messages to a persistent topic, and the other 4 clusters consume those messages. The publishing and consuming is done only by the master node of each cluster.
I want to devise a mechanism where the publisher knows that a message was consumed by all the subscribers, or a subscriber knows that it has consumed all the messages produced by the publisher. How can this be achieved?
In principle, you use a JMS system precisely so you don't have to care about that; you only configure it the way you need. You could save state information in a shared resource like a database, but I wouldn't do it. Better to use the monitoring features of the JMS system to track that. If your application really needs to know about the successful processing of each message, you could have a queue on which processing acknowledgements go back to the sender.
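A minimal sketch of that acknowledgement-queue idea (the ack queue and session wiring are assumptions, not from the question): each subscriber sends a small ack correlated to the original message ID, so the publisher can count acks per message against its 4 subscribing clusters.

    import javax.jms.*;

    // Consumer side: after successfully processing a message, send an
    // acknowledgement back on a dedicated ack queue, correlated by message ID.
    public class AckingConsumer implements MessageListener {
        private final Session session;
        private final MessageProducer ackProducer; // producer on the ack queue

        public AckingConsumer(Session session, Queue ackQueue) throws JMSException {
            this.session = session;
            this.ackProducer = session.createProducer(ackQueue);
        }

        @Override
        public void onMessage(Message msg) {
            try {
                // ... process the business message ...
                Message ack = session.createTextMessage("processed");
                // correlate the ack with the original message
                ack.setJMSCorrelationID(msg.getJMSMessageID());
                ackProducer.send(ack);
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        }
    }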
For HornetQ, which you might use with JBoss, you'll find an example of a clustered topic here.

JMS: local broker + HA

There is a cluster of Tomcats; each Tomcat node generates "tasks" which can be performed by any other node. I'd prefer a task to be performed by the node which created it.
I thought it would be a good idea to use an embedded broker for each Tomcat and configure them as a store-and-forward network. The problem is that a node can go down, and its tasks/messages should then be performed by another Tomcat instead of waiting for the downed one to come back up.
On the other hand, when using a master/slave cluster, how do I prioritize the node which sent the message?
How do I configure this in ActiveMQ?
Local consumers are given priority by default. From the ActiveMQ docs:
ActiveMQ uses Consumer Priority so that local JMS consumers are always
higher priority than remote brokers in a store and forward network.
However, you will not really achieve what you want. If one Tomcat node goes down, so does its embedded ActiveMQ (and any messages still attached to that instance). A message will not automatically get copied to all other brokers.
But you also ask about a master/slave cluster. Do you intend to have a network of brokers, a master/slave setup, or a combination of the two?
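For illustration, a sketch of one node's embedded broker wired into a static network of brokers (broker names, host names, and ports are placeholders); decreaseNetworkConsumerPriority is what makes local consumers win over remote ones:

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.network.NetworkConnector;

    // One embedded broker per Tomcat node, networked to its peer(s).
    public class EmbeddedBroker {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();
            broker.setBrokerName("node-A");            // unique per node
            broker.setPersistent(true);                // keep messages on disk
            broker.addConnector("tcp://0.0.0.0:61616");

            // Store-and-forward link to the other node(s)
            NetworkConnector nc =
                broker.addNetworkConnector("static:(tcp://node-b:61616)");
            // Remote consumers get lower priority than local ones,
            // so a task prefers the node that created it.
            nc.setDecreaseNetworkConsumerPriority(true);

            broker.start();
        }
    }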

JMS and Weblogic Clustering

The WebLogic application servers I am using are clustered. I have created a JMS queue, and it has a JNDI name. When a client looks up the JNDI name and publishes an event on the queue, will it be published in the queue created on both app servers? The same MDB will be running on both servers; which one will get the message posted to the queue? In case I need to delete a message put on the queue, should I iterate through all the nodes and delete the message?
Thanks.
Using a queue means the message is guaranteed to be consumed exactly once. That is, the message will be available to both nodes, but it will be processed only once globally, by one of the nodes. WebLogic handles the synchronization and coordination between the nodes in your cluster to guarantee delivery while assuring that the message is processed exactly once globally.
This is in contrast to a topic, where each subscriber gets a copy of the message and each message is processed once by each subscriber.
You don't need to iterate through the nodes to delete the message; just grab a JNDI reference to the queue and delete the message before any consumer consumes it.
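As a rough sketch of that (the JNDI names are placeholders, and the message ID would come from e.g. a QueueBrowser), receiving with a JMSMessageID selector effectively deletes one specific message:

    import javax.jms.*;
    import javax.naming.InitialContext;

    public class MessageRemover {
        // Removes one specific message by consuming it with a selector.
        public static void remove(String messageId) throws Exception {
            // JNDI names are placeholders for your WebLogic configuration
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/MyDistributedQueue");

            Connection conn = cf.createConnection();
            try {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                // The selector matches exactly one message; receiving it removes it
                String selector = "JMSMessageID = '" + messageId + "'";
                MessageConsumer consumer = session.createConsumer(queue, selector);
                conn.start();
                Message removed = consumer.receive(1000); // null if already consumed
            } finally {
                conn.close();
            }
        }
    }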
You don't say what type of queue you're creating within WebLogic for this. For a clustered environment it's better to use a distributed queue rather than a standard queue; I believe it allows WebLogic to better handle the messages on the queue when one of the nodes in the cluster goes down. There is also the option to view the contents of a queue and delete messages from the WebLogic Admin Console.

JMS queue is full

My Java EE application sends JMS messages to a queue continuously, but sometimes the consumer application stops receiving them. This causes the JMS queue to grow very large, even full, which brings down the server.
My server is JBoss or WebSphere. Do the application servers provide a strategy to remove "timed out" JMS messages?
What is a good strategy for handling a large JMS queue? Thanks!
With any asynchronous messaging you must deal with the "fast producer/slow consumer" problem. There are a number of ways to deal with this:
1. Add consumers. With WebSphere MQ you can trigger a queue based on depth. Some shops use this to add new consumer instances as the queue depth grows; then, as the queue depth begins to decline, the extra consumers die off. In this way, consumers can be made to scale automatically to accommodate changing loads. Other brokers generally have similar functionality.
2. Make the queue and underlying file system really large. This method attempts to absorb peaks in workload entirely in the queue, which is after all what queuing was designed to do in the first place. The problem is that it doesn't scale well, and you must allocate disk that 99% of the time will be almost empty.
3. Expire old messages. If the messages have an expiry set then you can cause them to be cleaned up. Some JMS brokers will do this automatically, while on others you may need to browse the queue in order to cause the expired messages to be deleted. The problem with this is that not all messages lose their business value and become eligible for expiry; mostly fire-and-forget messages (audit logs, etc.) fall into this category.
4. Throttle back the producer. When the queue fills, nothing can put new messages to it. In WebSphere MQ the producing application then receives a return code indicating that the queue is full; if the application distinguishes between fatal and transient errors, it can stop and retry (see the sketch below).
The key to successfully implementing any of these is that your system be allowed to produce "soft" errors that the application can respond to. For example, many shops will raise the MAXDEPTH parameter of a queue the first time they get a QFULL condition. If the queue depth exceeds the size of the underlying file system, the result is that instead of a "soft" error that impacts a single queue, the file system fills and the entire node is affected. You are MUCH better off tuning the system so that the queue hits MAXDEPTH well before the file system fills, but then also instrumenting the app or other processes to react to the full queue in some way.
But no matter what else you do, option #4 above is mandatory. No matter how much disk you allocate, how many consumer instances you deploy, or how quickly you expire messages, there is always a possibility that your consumer(s) won't keep up with message production. When this happens, your producer app should throttle back, or raise an alarm and stop, or do anything other than hang or die. Asynchronous messaging is only asynchronous up to the point that you run out of space to queue messages. After that, your apps are synchronous and must handle that situation gracefully, even if that means to (gracefully) shut down.
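A minimal sketch of option #4. ResourceAllocationException is the JMS API's exception for resource exhaustion; whether your broker raises it for a full queue, and the backoff numbers used here, are assumptions:

    import javax.jms.*;

    public class ThrottlingProducer {
        // Sends with retry and exponential backoff when the broker
        // signals a transient "queue full" condition.
        public static void sendWithBackoff(MessageProducer producer, Message msg)
                throws JMSException, InterruptedException {
            long backoffMs = 1000;
            while (true) {
                try {
                    producer.send(msg);
                    return;
                } catch (ResourceAllocationException e) {
                    // Transient resource problem: wait and retry
                    Thread.sleep(backoffMs);
                    backoffMs = Math.min(backoffMs * 2, 60_000);
                }
                // Any other JMSException propagates as fatal.
            }
        }
    }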
Sure!
http://download.oracle.com/docs/cd/E17802_01/products/products/jms/javadoc-102a/index.html
MessageProducer#setTimeToLive(long) does exactly what you want; the provider then stamps each message's JMSExpiration from it at send time (JMSExpiration itself is set by the provider, not by the client).
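For example (a sketch; the 60-second TTL is an arbitrary value):

    import javax.jms.*;

    public class ExpiringSender {
        // Any message sent by this producer is discarded by the broker
        // once it is older than 60 seconds.
        static void send(Session session, Queue queue, String text) throws JMSException {
            MessageProducer producer = session.createProducer(queue);
            producer.setTimeToLive(60_000);
            producer.send(session.createTextMessage(text));
        }
    }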
