I am adding two JMS messages to the same destination sequentially. Will both messages be received in the same order in which I added them, or is there a chance of reordering, i.e. whichever message arrives first at the destination is retrieved first?
I am adding into a destination as:
producer.send(Msg1);
producer.send(Msg2);
Will Msg1 and Msg2 be added sequentially in all cases (network failures, latency, etc.)?
Message ordering is not guaranteed (and not mandated by the specification); Total JMS Message ordering explains the details of why. Also see the Stack Overflow post How to handle order of messages in JMS?
As per the JMS 2.0 spec:
JMS defines that messages sent by a session to a destination must be received
in the order in which they were sent. This defines a partial ordering
constraint on a session’s input message stream.
JMS does not define order of message receipt across destinations or across
a destination’s messages sent from multiple sessions. This aspect of a
session’s input message stream order is timing-dependent. It is not under
application control.
Also
Although clients loosely view the messages they produce within a session
as forming a serial stream of sent messages, the total ordering of this stream
is not significant. The only ordering that is visible to receiving clients is
the order of messages a session sends to a particular destination.
Several things can affect this order, such as message priority and persistent/non-persistent delivery.
So, to answer your question: given the information above, messages will be received in the same order in which they were sent. The order in which messages are delivered to the server, however, will be constrained by factors such as message priority and persistent/non-persistent delivery.
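As a minimal sketch of the single-session case (the connection factory and queue name here are assumptions for illustration): both sends happen on one session to one destination, so a consumer on that queue sees Msg1 before Msg2.

import javax.jms.*;

public class OrderedSendExample {
    public static void main(String[] args) throws JMSException {
        // Assumed: the ConnectionFactory comes from your provider or a JNDI lookup.
        ConnectionFactory factory = null; // replace with your provider's factory
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("ORDERS"); // assumed queue name

            // Both sends happen on the same session to the same destination,
            // so the provider must deliver them in send order (given equal
            // priority and the same delivery mode).
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("Msg1")); // received first
            producer.send(session.createTextMessage("Msg2")); // received second

            MessageConsumer consumer = session.createConsumer(queue);
            connection.start();
            TextMessage first = (TextMessage) consumer.receive(5000);  // "Msg1"
            TextMessage second = (TextMessage) consumer.receive(5000); // "Msg2"
            System.out.println(first.getText() + ", " + second.getText());
        } finally {
            connection.close();
        }
    }
}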
Related
The following is from the Akka documentation:
Delivery guarantees
Stream refs utilise normal actor messaging for their transport, and therefore provide the same level of basic delivery guarantees. Stream refs do extend the semantics somewhat, through demand re-delivery and sequence fault detection. In other words:
messages are sent over actor remoting
which relies on TCP (classic remoting or Artery TCP) or Aeron UDP for basic redelivery mechanisms
messages are guaranteed to be in-order
messages can be lost, however:
a dropped demand signal will be re-delivered automatically (similar to system messages)
a dropped element signal will cause the stream to fail
(link -> https://doc.akka.io/docs/akka/current/stream/stream-refs.html)
After reading this, I am curious: does an Akka stream provide guaranteed delivery, then?
For example: a bunch of actors store events in a journal that feeds a stream, which batches messages (say, at most 1000 messages per 1-second window) and sends them to another actor. Does this guarantee delivery?
Also, as a side question: if system messages re-deliver dropped messages automatically, does this mean that the event stream guarantees delivery?
StreamRefs do not currently (Akka 2.6.1) implement any reliability other than sequence numbering of elements and demand re-signalling:
If a gap or out-of-order delivery is detected from the element sequence numbers, the streams are failed. There is no redelivery of elements.
If a demand request from the receiving side is lost it is resent after a timeout.
The receiving side has a buffer, and in case of stream failure all elements in it will be lost, in addition to any elements in flight before the sending side sees the failure signal from the receiving side (which travels across the network, so it is not immediate).
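As a rough illustration of the sequence-fault detection idea (this is not Akka's internal code, just the principle): the receiving side tracks the last sequence number it saw and fails the stream on any gap or reordering, with no redelivery of lost elements.

// Rough illustration of sequence-fault detection as described above
// (not Akka's implementation): track the expected sequence number and
// fail the stream on any gap or out-of-order element.
public final class SequenceChecker<T> {
    private long expectedSeqNr = 0L;

    /** Accepts the element only if it carries the expected sequence number. */
    public T onElement(long seqNr, T element) {
        if (seqNr != expectedSeqNr) {
            // Gap or reordering detected: fail the stream instead of redelivering.
            throw new IllegalStateException(
                "Expected seqNr " + expectedSeqNr + " but got " + seqNr);
        }
        expectedSeqNr++;
        return element;
    }
}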
I have the below configuration for RabbitMQ:
prefetchCount: 1
ack-mode: auto
I have one exchange, one queue attached to that exchange, and one consumer attached to that queue. As per my understanding, the steps below will happen if the queue has multiple messages.
The queue writes data on a channel.
As ack-mode is auto, as soon as the queue writes a message on the channel, the message is removed from the queue.
The message reaches the consumer, and the consumer starts working on that data.
As the queue has received the acknowledgement for the previous message, it writes the next data on the channel.
Now, my doubt is: suppose the consumer has not finished with the previous data yet. What will happen with the next data the queue has written on the channel?
Also, suppose prefetchCount is 10 and I have just one consumer attached to the queue; where will these 10 messages reside?
The scenario you have described is one that is mentioned in the documentation for RabbitMQ, and elaborated in this blog post. Specifically, if you set a sufficiently large prefetch count, and have a relatively small publish rate, your RabbitMQ server turns into a fancy network switch. When acknowledgement mode is set to automatic, prefetch limiting is effectively disabled, as there are never unacknowledged messages. With automatic acknowledgement, the message is acknowledged as soon as it is delivered. This is the same as having an arbitrarily large prefetch count.
With prefetch >1, the messages are stored within a buffer in the client library. The exact data structure will depend upon the client library used, but to my knowledge, all implementations store the messages in RAM. Further, with automatic acknowledgements, you have no way of knowing when a specific consumer actually read and processed a message.
So, there are a few takeaways here:
Prefetch limit is irrelevant with automatic acknowledgements, as there are never any unacknowledged messages, thus
Automatic acknowledgements don't make much sense when using a consumer
A sufficiently large prefetch when auto-ack is off, or any use of auto-ack on, will result in the message broker not doing any queuing, and instead doing routing only.
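In contrast, with manual acknowledgements the prefetch limit really does throttle delivery: the broker stops pushing once the limit of unacknowledged messages is reached. A rough sketch with the RabbitMQ Java client (the queue name and localhost broker are assumptions):

import com.rabbitmq.client.*;

public class ManualAckConsumer {
    public static void main(String[] args) throws Exception {
        // Assumed: a broker on localhost and a queue named "work-queue".
        Connection connection = new ConnectionFactory().newConnection();
        Channel channel = connection.createChannel();

        channel.basicQos(1); // prefetch=1: at most one unacknowledged message in flight
        DeliverCallback onDeliver = (consumerTag, delivery) -> {
            process(delivery.getBody());
            // Ack only after processing, so the prefetch limit actually throttles delivery.
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };
        channel.basicConsume("work-queue", false /* manual ack */, onDeliver, consumerTag -> { });
    }

    private static void process(byte[] body) {
        System.out.println(new String(body)); // application-specific processing (assumed)
    }
}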
Now, here's a little bit of expert opinion. I find the whole notion of a message broker that "pushes" messages out to be a little backwards, for this very reason: it's difficult to configure properly, and it is unclear what the benefit is. A queue system is a natural fit for a pull-based approach: the processor asks the broker for the next message when it is done processing the current one. This ensures that load is balanced naturally and that messages don't get lost when processors disconnect or get knocked out.
Therefore, my recommendation is to drop the use of consumers altogether and switch over to using basic.get.
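A minimal sketch of that pull-based approach with the RabbitMQ Java client (the queue name is an assumption): basicGet with autoAck=false plus an explicit basicAck keeps each message in the queue until processing has finished.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.GetResponse;

public class PullingConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory(); // defaults to localhost
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            while (true) {
                // Pull one message; autoAck=false so the broker keeps it until we ack.
                GetResponse response = channel.basicGet("work-queue", false); // assumed queue name
                if (response == null) {
                    Thread.sleep(500); // queue empty, poll again
                    continue;
                }
                process(response.getBody()); // application-specific processing (assumed)
                channel.basicAck(response.getEnvelope().getDeliveryTag(), false);
            }
        }
    }

    private static void process(byte[] body) {
        System.out.println(new String(body));
    }
}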
I am trying to solve the following case:
I am consuming messages, but then take an outage in a system I depend on for proper message processing (say, a database).
I am using CLIENT_ACKNOWLEDGE, and only calling the .acknowledge() method when no exception is thrown.
This works fine when I throw an exception, messages are not acknowledged, and I can see the unacknowledged queue building up. However, these messages have all already been delivered to the consumer.
Suppose now the database comes back online and any new message is processed successfully, so I call .acknowledge() on them. I have read that calling .acknowledge() acknowledges not only that message, but also all previously received messages in the consumer.
This is not what I want! I need these previously unacknowledged messages to be redelivered/retried. I would like to keep them on the queue and let JMS handle the retry, since maintaining a collection of "messages to be retried" in the consumer risks losing those messages (since .acknowledge() already ack'ed all of them and, say, the hardware then failed).
Is there a way to explicitly acknowledge specific messages and not have this "acknowledge all prior messages" behavior?
Acknowledging a specific message is not defined by the JMS specification. Hence some JMS implementations provide per-message acknowledgement and some don't. You will need to check your JMS provider's documentation.
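For example, if your provider happens to be ActiveMQ, it offers a vendor-specific INDIVIDUAL_ACKNOWLEDGE session mode in which acknowledge() acks only the message it is called on; a rough sketch (the broker URL and queue name are assumptions):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ActiveMQSession;

public class IndividualAckExample {
    public static void main(String[] args) throws JMSException {
        // Assumed broker URL and queue name, purely for illustration.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, ActiveMQSession.INDIVIDUAL_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("ORDERS"));
        connection.start();

        Message message = consumer.receive(5000);
        try {
            // process(message) ...
            message.acknowledge(); // acks only this message, not all prior ones
        } catch (Exception e) {
            // Leave it unacknowledged; the broker will redeliver it later.
        }
        connection.close();
    }
}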
Message queues generally have an option for how messages are delivered to a client, either first in, first out (FIFO) or priority based. Choose the FIFO option so that all messages are delivered in the same order they came into the queue. When the database goes offline and comes back, call the recover method to redeliver all messages in the same order again.
You need to call recover() on your session after the failure to restart message delivery from the first unacknowledged message. From the JMS 1.1 spec, section 4.4.11:
When CLIENT_ACKNOWLEDGE mode is used, a client may build up a large
number of unacknowledged messages while attempting to process them. A
JMS provider should provide administrators with a way to limit client
over-run so that clients are not driven to resource exhaustion and
ensuing failure when some resource they are using is temporarily
blocked.
A session’s recover method is used to stop a session and restart it
with its first unacknowledged message. In effect, the session’s series
of delivered messages is reset to the point after its last
acknowledged message. The messages it now delivers may be different
from those that were originally delivered due to message expiration
and the arrival of higher-priority messages.
Are messages received on a WebSphere MQ topic that you are subscribed to strictly ordered?
In other words, in similar fashion to a queue, given that your connection is maintained are you guaranteed to receive the topic messages in the same order as they were sent?
As per the JMS spec:
JMS defines that messages sent by a session to a destination must be received
in the order in which they were sent. This defines a partial ordering
constraint on a session’s input message stream.
JMS does not define order of message receipt across destinations or across
a destination’s messages sent from multiple sessions. This aspect of a
session’s input message stream order is timing-dependent. It is not under
application control.
Also
Although clients loosely view the messages they produce within a session
as forming a serial stream of sent messages, the total ordering of this stream
is not significant. The only ordering that is visible to receiving clients is
the order of messages a session sends to a particular destination.
Several things can affect this order, such as message priority and persistent/non-persistent delivery.
So, to answer your question: the order in which messages are received is not really JMS provider specific. Given the information above, they will be received in the same order in which they were sent. The order in which messages are delivered to the server, however, will be constrained by factors such as message priority and persistent/non-persistent delivery.
Say I load messages in a queue from multiple nodes.
Then, one or many nodes are pulling messages from the queue.
Is it possible (or is this normal usage?) that the queue guarantees to not hand out a message to more than one server/node?
And does that server/node have to tell the queue it has completed the operation so that the queue can delete the message?
A message queuing system that did not guarantee to hand out a given message to just one recipient would not be worth using. Some message queue systems have transactional controls. In that case, if a message is collected by one receiver as part of a transaction, but the receiver does not then commit the transaction (and the message queue can identify that the original recipient is no longer available), then it would be reissued. However, the message would not be made available to two processes concurrently.
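In JMS terms, that transactional behaviour looks roughly like this (the queue name is an assumption): a received message is only removed from the queue when the transacted session commits, and a rollback, or a crashed receiver, makes it available again.

import javax.jms.*;

public class TransactedReceiveExample {
    public static void main(String[] args) throws JMSException {
        // Assumed: the factory and queue name come from your messaging provider.
        ConnectionFactory factory = null; // replace with your provider's factory
        Connection connection = factory.createConnection();
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageConsumer consumer = session.createConsumer(session.createQueue("WORK")); // assumed queue name
        connection.start();

        Message message = consumer.receive();
        try {
            // process(message) ...
            session.commit();   // only now is the message removed from the queue
        } catch (Exception e) {
            session.rollback(); // the message becomes available for redelivery
        }
        connection.close();
    }
}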
What messaging/queuing technology are you using? AMQP can certainly guarantee this behaviour (amongst many others, including pub/sub models).
If you want this in Java, then a JMS-compliant messaging system will do what you want, and most messaging systems have a JMS client. You can use Spring's JmsTemplate for real ease of use too.
With JMS, a message from a queue will be consumed by one and only one client, and once it is consumed (acknowledged) it will be removed from the messaging system. Also, when you publish a message using JMS, if it is persistent it will be sent synchronously, and the send() method won't return until the message is stored on the broker's disk; this is important if you don't want to run the risk of losing messages in the event of failure.
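A minimal sketch with Spring's JmsTemplate (the connection factory and queue name are assumptions): a persistent send, followed by a receive that consumes the message exactly once and removes it from the queue.

import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import org.springframework.jms.core.JmsTemplate;

public class JmsTemplateExample {
    public static void main(String[] args) {
        // Assumed: the ConnectionFactory comes from your broker's client library.
        ConnectionFactory connectionFactory = null; // replace with your provider's factory
        JmsTemplate jmsTemplate = new JmsTemplate(connectionFactory);
        jmsTemplate.setExplicitQosEnabled(true);
        jmsTemplate.setDeliveryMode(DeliveryMode.PERSISTENT); // send returns once the broker has stored it

        jmsTemplate.convertAndSend("WORK", "hello");             // produced once
        Object payload = jmsTemplate.receiveAndConvert("WORK");  // consumed (and removed) by exactly one client
        System.out.println(payload);
    }
}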