I have an MDB deployed on JBoss that gets messages from a WebSphere MQ queue, looking in each message header for GroupId and Sequence information. Once it has all the sequences for a group, it puts together the payloads of the messages it received to form one big message and sends it to another system.
Now the MDB will be deployed in a WebSphere Application Server 7 clustered environment, and I don't know for sure whether there is any caching/configuration available to gather all the message sequences for a group on one instance of the cluster (otherwise, if one instance receives some message parts and another instance the rest, the MDB will never be able to put together one big message).
I read that the jms-ra resource adapter can be configured with com.sun.genericra.loadbalancing.selector= (e.g. JMSType = 'Instance1', and so on for the other instances).
The JMSType header should be present in the message and should be 'Instance1' for instance 1 to process that message.
But I'm not sure whether the system that puts the messages on the queue the MDB picks them up from will include such information in the message headers.
Is there a way to configure the cluster to achieve this?
When working in a clustered environment, MDBs work independently. There are several ways to achieve synchronization.
1) You can use selectors to divide message flows between cluster nodes (a small MDB sketch follows after this list). The docs are here: http://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/index.jsp?topic=%2Fcom.ibm.mq.csqzal.doc%2Ffg20180_.htm
The main problem is that selectors need some info in the message properties to do their job. Somebody must put it there.
2) You can synchronize on a "shared" data collector, such as a database: you put the received messages there, and further processing can be done asynchronously or triggered when the last message of a group arrives.
3) You can build a "proxy" yourself by adding an "internal" queue. Take messages from the external queue with several MDBs, analyze them, and set the properties needed for point 1. Then put the messages on the internal queue and read them, as in point 1, using selectors on the different nodes.
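For point 1, a minimal sketch of how an MDB could be bound with such a selector (the destination name is made up, and the selector value JMSType = 'Instance1' is taken from the question; each node would get its own value, usually via the deployment descriptor or an activation spec rather than a hard-coded annotation):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

// Sketch only: destination name and selector value are illustrative.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination",
                              propertyValue = "jms/InboundQueue"),
    @ActivationConfigProperty(propertyName = "messageSelector",
                              propertyValue = "JMSType = 'Instance1'")
})
public class GroupPartMdb implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // Only messages whose JMSType header is 'Instance1' reach this node's MDB.
        // Collect the group parts here and assemble the big message once complete.
    }
}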
I have a chat app that creates a queue for each user that is online, and I'm trying to get all the queued messages for a given user. The problem is that I only learn the name of the queue from a message that comes through, and therefore I can't use @RabbitListener to give it a queue name.
Is there any way I can get all the messages queued for a user other than using rabbitTemplate's receive-and-convert, since that only gives me one single message rather than all of them?
I would say it is better to look into the STOMP over WebSocket protocol, which is supported as a plugin on RabbitMQ. It indeed creates an individual queue for every user, and there is a mechanism to consume all the messages sent to that user.
See WebSocket support in Spring Framework: https://docs.spring.io/spring-framework/docs/current/reference/html/web.html#websocket
If you can't do that, you should probably look into a custom solution where you send the queue name to some static exchange so the consumer becomes aware of the new queue, which you can then add to the ListenerContainer at runtime (and remove again later); a rough sketch follows below. See more info in Spring AMQP: https://docs.spring.io/spring-amqp/docs/current/reference/html/#listener-queues
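A sketch of that idea, assuming Spring AMQP, a listener container registered elsewhere under a made-up id "perUserListener", and a made-up control queue "user-queue-announcements" that carries the names of newly created per-user queues:

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer;
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

// Sketch only: listener id and queue names are hypothetical.
@Component
public class DynamicQueueRegistrar {

    @Autowired
    private RabbitListenerEndpointRegistry registry;

    // Each message on the control queue carries the name of a new per-user queue
    // that the existing listener container (id = "perUserListener") should also consume.
    @RabbitListener(queues = "user-queue-announcements")
    public void onNewUserQueue(String queueName) {
        AbstractMessageListenerContainer container =
            (AbstractMessageListenerContainer) registry.getListenerContainer("perUserListener");
        container.addQueueNames(queueName);      // start consuming from the new queue
        // later: container.removeQueueNames(queueName) when the user goes offline
    }
}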
Hi all,
in the software I'm developing I have different Camel routes that work on data which is (in this case) loaded from an IMAP server using the camel-mail component.
Each of those routes does something with the data and then gives the data to the next route. They are dynamically configured at runtime.
In between those routes is an embedded ActiveMQ server which is used by each route to load the data from and save the data to (for the next route to pick it up).
Because of this structure I'm having a special case with the camel-mail consumer.
When a mail is loaded and sent to the first ActiveMQ queue, it is immediately deleted/marked as read (depending on the settings on the mail consumer), even though the actual processing of the mail has not concluded yet, as the next routes still have to process it.
This is a simplified view:
from("imaps://imap.server.com?...")
// Format mail in a way the other routes understand
.to("activemq:queue1"); // After this the mail is delete on the imap server
from("activemq:queue1")
// do some processing
.to("activemq:queue2");
from("activemq:queue2")
// Do some final processing
.to("..."); // NOW the mail should be delete on the imap server
This issue is even more of a problem with the error handling I do.
Every route in this "chain" sends failed exchanges to a dead-letter queue on the ActiveMQ server. This way there is one error-handling route which picks up the failed exchanges and deals with them, no matter where the failure happened.
In case there is a problem, I want the email on the IMAP server to be handled differently (maybe even do nothing and try again on the next poll).
As Camel's InOut MEP returns the exchange to the (mail) consumer when the route ends, i.e. when the exchange is handed to the queue, I can't use the consumer to delete the mails after the whole process has ended.
Unfortunately I also don't see a delete option on the mail producer (which makes sense, I guess, because that's not how IMAP works).
I could also use SMTP for this if that's necessary.
Does anybody have an idea how I could achieve this using no other connector than the Camel mail component to connect to the mail server?
Greets and thanks in advance
Chris
Edit:
Adding the parameter "exchangePattern=InOut" to the JMS queues (.to("activemq:queue1?exchangePattern=InOut")) makes the mail component wait for the whole process to finish.
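For illustration, the simplified routes from above with that parameter applied (a sketch; the endpoint URIs are the same placeholders as before):

from("imaps://imap.server.com?...")
    // Format mail in a way the other routes understand
    .to("activemq:queue1?exchangePattern=InOut"); // waits for a reply from the queue1 route

from("activemq:queue1")
    // Do some processing
    .to("activemq:queue2?exchangePattern=InOut"); // waits for a reply from the queue2 route

from("activemq:queue2")
    // Do some final processing
    .to("..."); // only when this route ends does the reply travel back and the mail get deleted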
The problem with that is that we lose the big advantage of ActiveMQ: all routes being independent of each other. This is important so we don't run into issues with consuming the mail when a later route takes a long time to process, which is very likely to happen.
So ideally we'd find a solution where the mail is deleted without any component having to wait for something to finish.
I have a JMS setup based on JBoss (to be precise, JBossMQ on JBoss 4.2). There are 5 clusters, each cluster having a few nodes. One node in each cluster acts as the master node. Out of the 5 clusters, one is supposed to publish messages to a persistent Topic, and the other 4 consume those messages. Publishing and consuming are done only by the master node of each cluster.
I want to devise a mechanism where the publisher knows that a message was consumed by all the subscribers, or a subscriber knows that it has consumed all the messages produced by the publisher. How can this be achieved?
In principle, you use a JMS system precisely so you don't have to care about that; you just configure it the way you need. You could save state information in a shared resource like a database, but I wouldn't do that. Better to use the monitoring features of the JMS system to track it. In case your application really needs to know about the successful processing of a message, you could have a queue through which processing acknowledgements go back to the sender.
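If you go for such an acknowledgement queue, a minimal sketch (the queue name and wiring are assumptions, not part of the original setup) could look like the following; the publisher would then count the acks per JMSCorrelationID until all four consuming clusters have reported in:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

// Sketch only: the ack queue and the connection factory wiring are placeholders.
public class ProcessingAcknowledger {

    private final ConnectionFactory connectionFactory;
    private final Queue ackQueue; // e.g. looked up from JNDI as "queue/AckQueue"

    public ProcessingAcknowledger(ConnectionFactory connectionFactory, Queue ackQueue) {
        this.connectionFactory = connectionFactory;
        this.ackQueue = ackQueue;
    }

    // Called by a subscriber's master node after it has successfully processed a topic message.
    public void sendAck(Message processed, String subscriberId) throws Exception {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(ackQueue);
            TextMessage ack = session.createTextMessage(subscriberId);
            // Correlate the ack with the original publication so the publisher
            // can count how many subscribers have confirmed this message.
            ack.setJMSCorrelationID(processed.getJMSMessageID());
            producer.send(ack);
        } finally {
            connection.close();
        }
    }
}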
For HornetQ, which you might use with JBoss, you'll find an example of a clustered topic here.
I'm facing a design issue in which I would like to have only one JMS producer sending messages to two consumers. There are only two servers, and the producer will start generating messages that will be load balanced (with round robin) to both consumers.
In the hypothetical case of one server failing, I do have a mechanism so a new producer will be activated in the remaining server. But what will happen to the messages that were being processed in the server that went down?
Will they be reassigned to the remaining server, and thus be processed by the remaining consumer, or will they be lost?
If the latter is true, there will be another problem. The producer creates messages based on files on a NAS, so when a server goes down, the newly activated producer will start creating messages based on the contents of the NAS, and that may replicate messages (but that case is handled). The problem is that if the server that goes down is not the server with the active producer, then when that server comes back up it will have no messages to consume, and no messages will replace the ones that were lost.
How can I achieve a design so that no messages are lost?
Note: When one server goes down, the journal and bindings are lost.
Once a message is transferred to a particular node, it belongs to that node.
If a node goes down, you would have to activate that node again with its journal, and the message state would be recovered from disk. You could eventually have messages redistributed if that node has no more consumers (that will depend on the redistribution configuration, of course).
Or the best approach would be to have a backup node for each node.
We have been advising the use of collocated topologies, where one VM has an active instance and a backup instance for the other server. That way each live server also carries a backup config. That's being improved in 2.4.0 as we speak, as it currently requires a lot of manual configuration.
So, in summary, either:
Restart the node, or
Configure backup nodes.
The WebLogic application servers I am using are clustered. I have created a JMS queue and it has a JNDI name. When a client looks up the JNDI name and publishes an event to the queue, will it be published to the queue on both app servers? The same MDB will be running on both servers; which one will get the message posted to the queue? In case I need to delete a message put on the queue, should I iterate through all the nodes and delete it?
Thanks.
Using a queue means the message is guaranteed to be consumed exactly once. That is, the message will be delivered to both nodes, but it will only be processed once globally, by one of the nodes. WebLogic handles the synchronization and coordination between the nodes in your cluster to guarantee delivery while ensuring the message is processed exactly once globally.
This is in contrast to a topic, where each subscriber gets a copy of the message and each message is processed once by each subscriber.
You don't need to iterate through the nodes to delete the message... just grab a JNDI reference to the queue and delete the message before any consumer consumes it.
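For completeness, plain JMS has no explicit delete operation, so one common way to "delete" a specific message along those lines is to consume it with a JMSMessageID selector. A sketch, with made-up JNDI names:

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.QueueConnectionFactory;
import javax.jms.Session;
import javax.naming.InitialContext;

// Sketch only: JNDI names are hypothetical; the WebLogic Admin Console
// (mentioned in the other answer) is often the more convenient option.
public class QueueMessageRemover {

    public static Message removeByMessageId(String queueJndiName, String messageId) throws Exception {
        InitialContext ctx = new InitialContext();
        QueueConnectionFactory cf =
            (QueueConnectionFactory) ctx.lookup("jms/MyConnectionFactory"); // hypothetical JNDI name
        Queue queue = (Queue) ctx.lookup(queueJndiName);

        Connection connection = cf.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // A selector on JMSMessageID pulls exactly this one message off the queue.
            MessageConsumer consumer =
                session.createConsumer(queue, "JMSMessageID = '" + messageId + "'");
            return consumer.receive(5000); // null if another consumer got there first
        } finally {
            connection.close();
        }
    }
}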
You don't say what type of queue you're creating within WebLogic for this. For a clustered environment it's better to use a distributed queue rather than a standard queue. I believe it allows WebLogic to better handle messages on the queue when one of the nodes in the cluster goes down. There is also the option to view the contents of a queue and delete messages from the WebLogic Admin Console.