I have a queue with 50 consumers and a prefetch count of around 100. All the consumers run in a single JVM instance. So, when the application went down, the messages were left in the READY state. The number of messages in the READY state equals (prefetch_count * number of consumers).
Now the question is: what will happen to the messages in the READY state? Will they be redelivered, or will they be dead-lettered?
They will be redelivered.
Messages will only be dead-lettered if the consumer specifically rejects (nacks) the message with requeue=false (and the queue is configured for dead-lettering).
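For reference, dead-lettering in RabbitMQ is opt-in: the queue must be declared with a dead-letter exchange argument, and the consumer must reject without requeueing. A minimal sketch of the declaration arguments, assuming the RabbitMQ Java client (the queue and exchange names here are made up; the map would be passed to channel.queueDeclare):

```java
import java.util.HashMap;
import java.util.Map;

public class DeadLetterArgs {
    public static Map<String, Object> deadLetterArguments() {
        // Arguments for channel.queueDeclare("work", true, false, false, args)
        // in the RabbitMQ Java client; "dlx" and "work.dead" are hypothetical names.
        Map<String, Object> args = new HashMap<>();
        args.put("x-dead-letter-exchange", "dlx");
        args.put("x-dead-letter-routing-key", "work.dead");
        return args;
    }

    public static void main(String[] argv) {
        System.out.println(deadLetterArguments().get("x-dead-letter-exchange"));
    }
}
```

A message only travels to that exchange when the consumer rejects it with requeue=false (e.g. channel.basicNack(tag, false, false)) or the message expires or overflows; a crashed connection, as in the question, simply returns unacked messages to READY for redelivery.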
I have a consumer process that does heavy lifting; it takes more than 30 seconds to complete, and we acknowledge the message when processing completes successfully. However, it looks like the queue waits for the acknowledgement, and when it does not receive one within that time, it puts the message back on the queue and the same message is consumed by another consumer instance. Are there any configs I can tweak? I don't want to auto-acknowledge the message, because it is an important flow and scaling the cluster down could cause message loss.
I am looking for a config that can help with this, or is my understanding incorrect? I don't want the same message consumed by more than one consumer. We're using IBM MQ in this instance.
However, looks like queue is waiting for acknowledgement and as it does not receive the acknowledgement within time, it's putting the message back to queue and same message is consumed by other consumer instance.
Neither the queue nor the queue manager puts the message back on the queue by itself. There is one exception to that rule: if the client application crashes. If the queue manager determines that the application has crashed, it will roll the message back to the queue.
Or are you saying that if the sending application does not receive an acknowledgement within a specified amount of time, it resends the same message? If that is the case, have the sending application double or triple its wait time.
Consider a case where senders are sending messages to a queue; for example, message1 is sent by sender1. Now a consumer named consumer1 connects to the queue and reads message1.
There is another consumer named consumer2, but message1 has already been consumed by consumer1, so it is not available to consumer2.
When the next message arrives in the queue, consumer2 might receive it if it reads the queue before consumer1.
Does that mean whichever consumer reads the queue first gets the first available message?
This is the nature of a Queue in JMS, messages are sent to one consumer and once ack'd they are gone, the next consumer can get the next message and so on. This is often referred to as competing consumers or load balancing. The consumers can share the work as jobs or work items are enqueued which allows for higher throughput when the work associated with the items in the Queue can take significant time.
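The competing-consumers pattern can be illustrated without any broker at all, using a plain in-JVM queue: each item is taken by exactly one of the consumers, and which consumer wins is simply a race. A minimal sketch using only java.util.concurrent (no JMS):

```java
import java.util.Set;
import java.util.concurrent.*;

public class CompetingConsumers {
    public static void main(String[] args) throws Exception {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        for (int i = 0; i < 100; i++) queue.put(i);        // producer enqueues 100 jobs

        Set<Integer> seen = ConcurrentHashMap.newKeySet(); // records every job consumed
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int c = 0; c < 2; c++) {
            pool.submit(() -> {
                Integer job;
                // poll until the queue is drained; each poll() hands a job to ONE consumer
                while ((job = queue.poll()) != null) {
                    if (!seen.add(job)) throw new IllegalStateException("duplicate: " + job);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(seen.size()); // 100: every job consumed exactly once
    }
}
```

The duplicate check never fires: like a JMS queue, the shared queue dispatches each element to exactly one taker, which is what makes the consumers "compete".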
There are options depending on the messaging broker to make a consumer exclusive such that only that consumer can read messages from the queue while the other consumers sit and wait for the exclusive consumer to leave which makes them backups of a sort.
Other options are to use something like Apache Camel to route a given message to more than one queue, or to use ActiveMQ Virtual Topics to send messages to a Topic and have that message then enqueue onto specific consumer Queues.
The solution depends on the broker you are using and the problem you are trying to solve, none of which you've really made clear in the question.
A consumer is listening on a queue (FIFO or standard), and a producer puts messages on the queue.
Does an Amazon SQS queue delete a message automatically once it gets an acknowledgement from the consumer? Is there a way/configuration where the queue keeps the message instead of deleting it and ensures it is not delivered again?
The producer puts a message on the queue, and the consumer goes offline because of a network issue. After some time the consumer comes back online. Will the queue deliver the message to the consumer once it is online? I think yes, as the queue has not received an ACK from the consumer.
I believe you are asking from a RabbitMQ perspective. SQS works somewhat differently: there is no ack in SQS. Messages are not automatically deleted; they stay in the queue even after a consumer receives them. The consumer must explicitly delete a message after it has finished processing it.
SQS does not track the online/offline status of a consumer. The consumer periodically polls SQS for new items; if a message is available, it is handed out. Once the consumer is done, it calls SQS to delete that message, then polls for the next one.
In your scenario, once the consumer is done processing a message, it can make two requests: one to enqueue the message in a different queue, and a second to delete the message from the original queue.
If you have multiple consumers listening on the same queue, then the concept of a message-invisibility period comes into play. If you have such a setup, ask in the comments and I will update with more info.
Hope it helps.
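The receive/delete cycle (and the invisibility period mentioned above) can be modelled in a few lines: receiving a message hides it for a fixed visibility timeout rather than removing it, and only an explicit delete removes it for good. A toy in-memory model, not the AWS SDK (the message id and the 30-second timeout are illustrative):

```java
import java.util.*;

public class SqsModel {
    private final Map<String, String> messages = new LinkedHashMap<>(); // id -> body
    private final Map<String, Long> invisibleUntil = new HashMap<>();   // id -> deadline
    private final long visibilityTimeoutMillis;

    public SqsModel(long visibilityTimeoutMillis) {
        this.visibilityTimeoutMillis = visibilityTimeoutMillis;
    }

    public void send(String id, String body) { messages.put(id, body); }

    /** Like ReceiveMessage: returns a visible message id and hides it; null if none visible. */
    public String receive(long now) {
        for (String id : messages.keySet()) {
            Long deadline = invisibleUntil.get(id);
            if (deadline == null || deadline <= now) {
                invisibleUntil.put(id, now + visibilityTimeoutMillis);
                return id;
            }
        }
        return null;
    }

    /** Like DeleteMessage: only this removes the message permanently. */
    public void delete(String id) { messages.remove(id); invisibleUntil.remove(id); }

    public static void main(String[] args) {
        SqsModel q = new SqsModel(30_000);   // 30s visibility timeout
        q.send("m1", "payload");
        String first = q.receive(0);         // consumer 1 receives m1; it becomes invisible
        String during = q.receive(10_000);   // consumer 2 polls: nothing visible
        String again = q.receive(40_000);    // timeout elapsed, never deleted -> redelivered
        q.delete("m1");                      // explicit delete finally removes it
        System.out.println(first + " " + during + " " + again + " " + q.receive(50_000));
    }
}
```

The printed sequence shows the key behaviours: a received-but-undeleted message is hidden from other pollers, reappears after the visibility timeout, and is gone only after delete.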
We're using ActiveMQ as the message queue of our Java stand-alone application. My problem is that, based on the ActiveMQ web console, the queue has a certain number of messages enqueued and dequeued. However, based on sysout statements I added to the code, the application seems to be consuming fewer messages than the console shows. For example, the console reports around 1800 messages enqueued and dequeued, but the number of messages dequeued as counted in the application (I increment a counter per message received) is only around 1700.
I really don't know where the approximately 100 messages went, so I'm thinking I might get some idea if I can make ActiveMQ log each message enqueued by the producer and dequeued by the consumer. Is this possible? If yes, how can it be done?
enqueued == number of messages put into the queue since the last restart
dequeued == number of messages successfully processed by the consumers
the difference in the two numbers == number of messages in-flight, usually tracked by the "dispatched" counter. "in-flight" means sent to the consumer, but not yet ack'd.
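Applied to the numbers above, the apparent discrepancy is just this arithmetic: the roughly 100 "missing" messages were dispatched to a consumer but never acknowledged. A quick check:

```java
public class InFlight {
    public static void main(String[] args) {
        int enqueued = 1800;  // console counter since the last broker restart
        int dequeued = 1700;  // successfully processed (ack'd) by consumers
        int inFlight = enqueued - dequeued; // dispatched but not yet ack'd
        System.out.println(inFlight);
    }
}
```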
Is there a message queue implementation that allows breaking up work into 'batches' by inserting 'message barriers' into the message stream? Let me clarify. No messages after a message barrier should be delivered to any consumers of the queue, until all messages before the barrier are consumed. Sort of like a synchronization point. I'd also prefer if all consumers received notification when they reached a barrier.
Anything like this out there?
I am not aware of an existing, widely available implementation, but if you'll allow me, I'd propose a very simple, generic one using a proxy, where:
producers write to the proxy queue/topic
the proxy forwards to the original queue/topic until a barrier message is read by the proxy, at which point:
the proxy may notify topic subscribers of the barrier by forwarding the barrier message to the original topic, or
the proxy may notify queue subscribers of the barrier by:
periodically publishing barrier messages until the barrier has been cleared; this does not guarantee that all consumers will receive exactly one notification, although all will eventually clear the barrier (some may receive 0 notifications, others >1 notifications -- all depending on the type of scheduler used to distribute messages to consumers e.g. if non-roundrobin)
using a dedicated topic to notify each consumer exactly once per barrier
the proxy stops forwarding any messages from the proxy queue until the barrier has been cleared, that is, until the original queue has emptied and/or all consumers have acknowledged all queue/topic messages (if any) leading up to the barrier
the proxy resumes forwarding
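The proxy logic above can be sketched as a single loop that holds everything behind a barrier until every message forwarded before it has been acknowledged. This is a toy in-memory model, not tied to any broker (the BARRIER marker and the simulated acknowledgement step are both assumptions):

```java
import java.util.*;

public class BarrierProxy {
    static final String BARRIER = "#BARRIER#";

    /**
     * Drains the inbound queue into the outbound queue, pausing at each barrier
     * until every message forwarded before it has been acknowledged.
     */
    public static List<String> run(List<String> inbound) {
        List<String> outbound = new ArrayList<>();
        int forwarded = 0;
        int acked = 0;
        for (String msg : inbound) {
            if (BARRIER.equals(msg)) {
                // Barrier reached: stop forwarding until all prior messages are ack'd.
                // A real proxy would block here, reading its acknowledgement queue;
                // we simulate the consumers catching up.
                acked = forwarded;
                outbound.add(msg); // optionally notify subscribers of the barrier
            } else {
                outbound.add(msg);
                forwarded++;
            }
        }
        return outbound;
    }

    public static void main(String[] args) {
        // "c" is only forwarded once "a" and "b" have cleared the barrier
        System.out.println(run(Arrays.asList("a", "b", BARRIER, "c")));
    }
}
```

In a real deployment the `acked = forwarded` line is where the proxy would block on consumer acknowledgements (the "acknowledgement queue" discussed below), which is exactly the synchronization point the barrier provides.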
UPDATE
Thanks to Miklos for pointing out that under JMS the framework does not provide acknowledgements for asynchronous deliveries (what are referred to as "acknowledgements" in JMS are purely a consumer-side concept and are not proxiable as such).
So, under JMS, the existing implementation (to be adapted for barriers) may already provide application-level acknowledgements via an "acknowledgement queue" (as opposed to the original queue -- which would be a "request queue".) The consumers would have to acknowledge execution of requests by sending acknowledgement messages to the proxy acknowledgement queue; the proxy would use the acknowledgement messages to track when the barrier has been cleared, after having also forwarded the acknowledgement messages to the producer.
If the existing implementation (to be adapted for barriers) does not already provide application-level acknowledgements via an "acknowledgement queue", then you could either:
have the proxy use the QueueBrowser, provided that:
you are dealing with queues, not events, that
you want to synchronize on delivery, not acknowledgement of execution, and
it is OK to synchronize on first delivery, even if the request was actually aborted and has to be redelivered (even after the barrier has been cleared). I think Miklos already pointed this problem out, IIRC.
otherwise, add an acknowledgment queue consumed by the proxy, and adapt the consumers to write acknowledgements to it (essentially the JMS scenario above, except it is not necessary for the proxy to forward acknowledgement messages to the producer unless your producer needs the functionality.)
You could achieve this using a topic for the 'Barrier Message' and a queue for the 'batched items' which are consumed with selective receivers.
Publishing the Barrier Message to a topic ensures that all consumers receive their own copy of the Barrier Message.
Each consumer will need two subscriptions:
To the Barrier Topic
A selective receiver against the batch queue, using selection criteria defined by the Barrier Message.
The Barrier Message will need to contain a batch key that must be applied to the queue consumers selection criteria.
e.g. batchId = n
or JMSMessageID < 100
or JMSTimestamp < xxx
Whenever a barrier message is received,
the current queue consumer must be closed
the queue selection criteria must be modified using the content of the Barrier Message
a new selective consumer must be started using the modified selection criteria
If you are going to use a custom batch key such as 'batchId' above for the selection criteria, then the assumption is that all message producers are capable of setting that JMS property; otherwise a proxy will have to consume the messages, set the property, and republish them to the queue where the selective consumers are listening.
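The close-and-reopen cycle above ultimately boils down to rebuilding the JMS message selector from the barrier's batch key. A sketch of just the selector construction ('batchId' is the hypothetical custom property from the example; with a real javax.jms.Session you would pass the string to session.createConsumer(queue, selector)):

```java
public class BatchSelector {
    /** Builds a JMS message selector matching only messages tagged with the given batch. */
    public static String selectorFor(int batchId) {
        return "batchId = " + batchId;
    }

    public static void main(String[] args) {
        String selector = selectorFor(42);
        // In JMS, roughly: consumer.close();
        //                  consumer = session.createConsumer(queue, selector);
        System.out.println(selector);
    }
}
```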
For more info on selective receivers see these links:
http://java.sun.com/j2ee/1.4/docs/api/javax/jms/Message.html
http://java.sun.com/j2ee/sdk_1.3/techdocs/api/javax/jms/QueueSession.html#createReceiver(javax.jms.Queue,%20java.lang.String)