I have a JBoss-6 server with HornetQ and a single queue:
<queue name="my.queue">
<entry name="/queue/test"/>
</queue>
There are different consumers (on different machines) connected to this queue, but only a single consumer is active at a time. If I shut down this consumer, the messages are immediately processed by one of the other consumers.
Since my messages require some time-consuming processing, I want multiple consumers to process their messages concurrently.
I remember a similar setup in earlier versions of JBoss where this worked without problems. Here in JBoss 6 the messaging system is working well -- except for the issue described above. This question is similar to Are multiple client consumers possible in hornetq?, but that scenario is not the same as mine.
Update 1: If I close (Ctrl+C) one consumer, there is a short delay (until the server recognizes the lost consumer) before the next consumer gets the messages.
Update 2: Code Snippet
// ctx is an InitialContext pointing at the JBoss JNDI service
VoidListener ml = new VoidListener();
QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("ConnectionFactory");
QueueConnection conn = qcf.createQueueConnection();
Queue queue = (Queue) ctx.lookup(queueName);
QueueSession session = conn.createQueueSession(false, QueueSession.AUTO_ACKNOWLEDGE);
QueueReceiver recv = session.createReceiver(queue, "");
recv.setMessageListener(ml);
conn.start();
And the MessageListener:
public class VoidListener implements MessageListener
{
    @Override
    public void onMessage(Message msg)
    {
        counter++;
        logger.debug("Message (" + counter + ") received");
        // simulate the time-consuming processing
        try { Thread.sleep(15 * 1000); } catch (InterruptedException e) {}
    }
}
With multiple consumers on a queue, messages are load balanced between the consumers.
As your message processing is time consuming, you should disable client-side buffering by setting consumer-window-size to 0.
The HornetQ distribution ships an example showing how to disable client buffering to better support slow consumers (a slow consumer is one that takes significant time to process each message).
Message systems pre-fetch/read-ahead messages into the client buffer to speed up processing and avoid network latency. This is not an issue if you have fast-processing queues and a single consumer.
JBoss Messaging offered the slow-consumer option at the connection factory, and HornetQ offers the consumer window size.
Most message systems provide a way to enable or disable client pre-fetching.
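For example, a minimal sketch of disabling pre-fetch programmatically, assuming the JNDI ConnectionFactory resolves to a HornetQConnectionFactory (in JBoss AS the same value can instead be set with consumer-window-size on the connection factory definition in hornetq-jms.xml):
// Assumption: the looked-up factory is a HornetQConnectionFactory.
HornetQConnectionFactory hqcf = (HornetQConnectionFactory) ctx.lookup("ConnectionFactory");
hqcf.setConsumerWindowSize(0); // 0 disables the client-side buffer, so no messages are pre-fetched
Connection conn = hqcf.createConnection();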
I am sorry, but I cannot understand what exactly the problem is. We've used HornetQ in versions 2.0.0.GA and 2.2.2.Final. In both cases, queue-based load balancing works fine. If you define multiple consumers for one queue and all of them are active, messages are distributed between them automatically: the first message to consumer A, the second to consumer B, the third to consumer C, and so on. This is how queues with multiple consumers work - it's free load balancing :) It's normal that when you shut down one consumer, the others receive more messages.
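For illustration, a minimal sketch of two consumers attached to the same queue (in your setup each consumer lives in its own process on its own machine; the JNDI names here are assumed). The broker distributes incoming messages between the two listeners:
QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("ConnectionFactory");
QueueConnection conn = qcf.createQueueConnection();
Queue queue = (Queue) ctx.lookup("/queue/test");
// one session (and hence one dispatch thread) per consumer
QueueSession sessionA = conn.createQueueSession(false, QueueSession.AUTO_ACKNOWLEDGE);
QueueSession sessionB = conn.createQueueSession(false, QueueSession.AUTO_ACKNOWLEDGE);
sessionA.createReceiver(queue).setMessageListener(new VoidListener());
sessionB.createReceiver(queue).setMessageListener(new VoidListener());
conn.start();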
Related
When using JmsTemplate to get the list of activemq queues, the number of queues reported changes
// jmsTemplate is a Spring JmsTemplate backed by an ActiveMQConnectionFactory
private Set<String> queues = new HashSet<>();

public Set<String> findAllQueues() throws JMSException {
    try (ActiveMQConnection connection = (ActiveMQConnection)
            jmsTemplate.getConnectionFactory().createConnection()) {
        connection.start();
        for (ActiveMQQueue queue : connection.getDestinationSource().getQueues()) {
            queues.add(queue.getQueueName());
        }
        queues.remove(defaultReplyToQueue);
        log.info("findAllQueues found {}", queues.size());
        return queues;
    }
}
This is hard to answer fully with the limited detail given, but I'd guess the issue comes down to the way the queues are populated in the destination source. They arrive asynchronously as the broker enumerates the existing queues, which means that just opening a connection and immediately asking for all the queues will likely report varying results, because not all of them have arrived from the broker yet.
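One crude way to see this (a sketch only, following the same pattern as the code in the question; the 2-second delay is an arbitrary assumption, not a guaranteed fix) is to give the destination source time to receive the broker's advisory messages before reading the list:
private Set<String> findAllQueuesAfterDelay() throws JMSException, InterruptedException {
    Set<String> queues = new HashSet<>();
    try (ActiveMQConnection connection = (ActiveMQConnection)
            jmsTemplate.getConnectionFactory().createConnection()) {
        connection.start();
        Thread.sleep(2000); // wait for the queue advisories to arrive from the broker
        for (ActiveMQQueue queue : connection.getDestinationSource().getQueues()) {
            queues.add(queue.getQueueName());
        }
        return queues;
    }
}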
I am using Amazon MQ as my MQTT broker, and when around 1000 requests are received simultaneously the broker breaks and disconnects. Can anyone tell me how to use Amazon MQ as my broker and solve this scaling problem at the same time?
I'm assuming that you have created ActiveMQ as a singleton class. Right?
- For producing a message, you create an instance of PooledConnectionFactory, like:
// ... some code here ...
ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(MQTT_END_POINT);
connectionFactory.setUserName(userName);   // credentials elided in the original post
connectionFactory.setPassword(password);
PooledConnectionFactory pooledConnectionFactory =
        getActiveMQInstance().configurePooledConnectionFactory(connectionFactory);
This pooledConnectionFactory is used to create a connection, then a session, and then the destination (as described in the Amazon MQ documentation). You send the message using a MessageProducer object and then close the MessageProducer, the session, and the connection.
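A minimal sketch of that producer flow (exception handling omitted; the queue name and payload are placeholders, not from the post):
Connection connection = pooledConnectionFactory.createConnection();
connection.start();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Destination destination = session.createQueue("MyQueue");     // placeholder queue name
MessageProducer producer = session.createProducer(destination);
producer.send(session.createTextMessage("payload"));          // placeholder payload
producer.close();
session.close();
connection.close();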
- For consumption, there is an always-alive listener that is ready for messages to arrive. The consumer part follows the same process: a consumer connection, then a session, and then the destination queue to listen on.
As far as I remember, this part is also covered in the Amazon MQ documentation.
There is one problem: the connection to the broker is sometimes lost on the consumer side (since the producer reopens a connection, produces, and closes it each time, this is not observed there). Remember that you will have to re-establish the connection for the consumer.
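A minimal sketch of such a consumer (exception handling and the actual reconnect logic are omitted; the queue name is a placeholder):
Connection consumerConnection = pooledConnectionFactory.createConnection();
consumerConnection.setExceptionListener(e -> {
    // connection to the broker was lost; re-establish it here
});
Session session = consumerConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageConsumer consumer = session.createConsumer(session.createQueue("MyQueue"));
consumer.setMessageListener(message -> {
    // process the incoming message
});
consumerConnection.start();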
If your approach differs from the above, please mention it. Also, add a screenshot of your Amazon MQ broker showing the connection, the queue, and the active consumers.
Just out of curiosity, what are the maximum connections you have set for the PooledConnectionFactory?
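That limit is the maxConnections property of the pool; a quick sketch (the value is an example only, not taken from your setup):
PooledConnectionFactory pooledConnectionFactory = new PooledConnectionFactory(connectionFactory);
pooledConnectionFactory.setMaxConnections(10); // example value; tune to your expected load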
I have a problem when reading messages from multiple JMS queues in a single transaction using the WebLogic JMS client (wlthin3client.jar) from WebLogic 11g (WebLogic Server 10.3.6.0). I am trying to first read one message from queue Q1 and then, if this message satisfies some requirements, read another message (if available at that time) from queue Q2.
I expect that after committing the transaction both messages should disappear from Q1 and Q2. In case of rollback - messages should remain in both Q1 and Q2.
My first approach was to use an asynchronous queue receiver to read from Q1 and then synchronously read from Q2 when it is needed:
void run() throws JMSException, NamingException {
    QueueConnectionFactory cf = (QueueConnectionFactory) ctx.lookup(connectionFactory);
    // create connection and transacted session
    conn = cf.createQueueConnection();
    session = conn.createQueueSession(true, Session.SESSION_TRANSACTED);
    Queue q1 = (Queue) ctx.lookup(queue1);
    // set up async receiver for Q1
    QueueReceiver q1Receiver = session.createReceiver(q1);
    q1Receiver.setMessageListener(this);
    conn.start();
    // ...
    // after messages are processed
    conn.close();
}

@Override
public void onMessage(Message q1msg) {
    QueueReceiver q2receiver = null;
    try {
        q2receiver = session.createReceiver(queue2);
        if (shouldReadFromQ2(q1msg)) {
            // synchronous receive from Q2 within the same transacted session
            Message q2msg = q2receiver.receiveNoWait();
            process(q2msg);
        }
        session.commit();
    } catch (JMSException e) {
        e.printStackTrace();
    } finally {
        if (q2receiver != null) {
            try {
                q2receiver.close();
            } catch (JMSException ignore) {
            }
        }
    }
}
Unfortunately, even though I issue a session.commit(), the message from Q1 remains uncommitted. It stays in the "receive" state until the connection or the receiver is closed; then it seems to be rolled back, as it moves to the "delayed" state.
Other observations:
Q1 message is correctly committed if Q2 is empty and there is nothing to read from it.
The problem does not occur when I use the synchronous API in a similar, nested way for both Q1 and Q2. So if I use q1Receiver.receiveNoWait(), everything is fine.
If I use the asynchronous API in a similar, nested way for Q1 and Q2, then only the Q1 message listener is called and the commit works on Q1. The Q2 message listener is not called at all and Q2 is not committed (the message is stuck in receive/delayed).
Am I misusing the API somehow? Or is this a WLS JMS bug? How to combine reading from multiple queues with asynchronous API?
It turns out this is WLS JMS bug 28637420.
The bug status says it is fixed, but I wouldn't rely on that - the WLS 11g patch containing the fix doesn't work (see bug 29177370).
Oracle says this happens because the two delivery mechanisms (synchronous and asynchronous message delivery) were not designed to work together on the same session.
The simplest way to work around the problem is to use the synchronous API (polling) whenever you need to work on multiple queues in a single session. I decided on this approach; see the sketch below.
Another option suggested by Oracle is to use UserTransactions with two different sessions, one for the async consumer and another for the synchronous consumer. I didn't test that, though.
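For reference, a minimal sketch of the synchronous polling approach, reusing the names from the question (exception handling omitted; running is a placeholder flag):
QueueConnectionFactory cf = (QueueConnectionFactory) ctx.lookup(connectionFactory);
QueueConnection conn = cf.createQueueConnection();
QueueSession session = conn.createQueueSession(true, Session.SESSION_TRANSACTED);
QueueReceiver q1Receiver = session.createReceiver((Queue) ctx.lookup(queue1));
QueueReceiver q2Receiver = session.createReceiver(queue2);
conn.start();
while (running) {
    Message q1msg = q1Receiver.receive(1000);        // poll Q1 with a short timeout
    if (q1msg == null) {
        continue;                                    // nothing arrived, poll again
    }
    if (shouldReadFromQ2(q1msg)) {
        Message q2msg = q2Receiver.receiveNoWait();  // read Q2 in the same transaction
        process(q2msg);
    }
    session.commit();                                // commits both receives atomically
}
conn.close();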
According to this config page on the ActiveMQ site, the connection.sendTimeout property is:
Time to wait on Message Sends for a Response, default value of zero indicates to wait forever. Waiting forever allows the broker to have flow control over messages coming from this client if it is a fast producer or there is no consumer such that the broker would run out of memory if it did not slow down the producer. Does not affect Stomp clients as the sends are ack'd by the broker. (Since ActiveMQ-CPP 2.2.1)
I'm having difficulty interpreting what this means (and what the sendTimeout property really is/what it does):
What is a "Message Sends" object?
Why would ActiveMQ be waiting for a response? Isn't it on the server-side of a JMS connection? Shouldn't it be waiting for a request?
What does it actually timeout? When should it be used?
Thanks in advance!
The timeout affects the send of a Message by the client to the Broker. In cases where a send is not async, the client waits for the Broker to return a response indicating that the Message was received and added to the message store. In some cases this can block for a long time if the Broker has engaged producer flow control because one of its preset memory limits has been reached. If the client application can't tolerate a long wait on send, it can configure this timeout so that MessageProducer::send doesn't block indefinitely.
Messages are sent in synchronous mode either because the Connection was configured with alwaysSyncSend=true or because the MessageProducer is sending with the delivery mode set to Persistent.
In general this setting shouldn't need to be used if you've configured your Broker with limits that match your use case.
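For example, on the Java client there is an equivalent setting on the connection factory (the quoted documentation is for ActiveMQ-CPP; the broker URL and values below are placeholders):
ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
cf.setSendTimeout(3000);     // give up on a blocked synchronous send after 3 seconds
cf.setAlwaysSyncSend(true);  // force synchronous sends so the timeout actually applies
Connection connection = cf.createConnection();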
TL;DR: I need to know if there's a library with a persistent blocking queue that performs well.
I have a classic producer/consumer program. The threads share a LinkedBlockingQueue, and I use the BlockingQueue#take method in the consumers, as I need them to live forever, waiting for new elements.
The problem is that I have LOTS of data and I can't lose any of it. Even after the consumers stop, the producer can keep generating data. I am thinking about implementing my own BlockingQueue that uses H2 behind the scenes to store/get the data after some threshold is reached. My main constraints are that I need performance and I need to consume the elements in the order they were created.
Is there an implementation of a persistent blocking queue that I can use for something like this? If not, any suggestions on how to achieve it?
I would use the ActiveMQ library and Spring JMS; here is a usage example.
start broker
BrokerService broker = new BrokerService();
broker.addConnector("tcp://localhost:61616");
broker.start();
read msg
ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
JmsTemplate t = new JmsTemplate(cf);
Message msg = t.receive("test"); // read from the "test" queue used in the send example below
send message
ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
JmsTemplate t = new JmsTemplate(cf);
t.send("test", new MessageCreator() {
public Message createMessage(Session session) throws JMSException {
return session.createTextMessage("test");
}
});
You can try ActiveMQ. ActiveMQ can persist messages to your file system; with a plain in-memory queue, if the producer generates many more elements than the consumer can take, you either get a lot of blocking (on whatever the upper bound of the queue is) or excessive data in memory (if there is no upper bound to the queue).
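A minimal sketch of an embedded broker with file-based persistence enabled (exception handling omitted; the data directory is a placeholder), along the lines of the example above:
BrokerService broker = new BrokerService();
broker.setPersistent(true);                   // keep messages in the on-disk store across restarts
broker.setDataDirectory("activemq-data");     // placeholder directory for the message store
broker.addConnector("tcp://localhost:61616");
broker.start();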
Have you come across Amazon SQS? It's an unbounded queue and very fast, and it guarantees order. How long do you want to persist the data for?
You can use any JMS implementation to handle excess incoming data. This is a producer/consumer problem, and JMS is designed for exactly that.