ActiveMQ Consumer OutOfMemoryException - java

Our ActiveMQ consuming process runs out of memory and dies.
We have an ActiveMQ topic with one sender and two receivers. Superficially, all works fine: messages are sent and picked up by both receivers, but eventually we exhaust all memory. A heap dump shows 1.362 million instances each of LinkedList$Node, AtomicReference, ActiveMQObjectMessage, MessageId, and MessageDispatch, while the client-side message queue stays empty or nearly empty throughout. I suspect the 1.362M objects sit on a list that tracks messages awaiting acknowledgment. The session is specified with AUTO_ACKNOWLEDGE, so we're trying to ack, but possibly failing. (jmsSession = jmsConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);)
The heap dump also shows that garbage associated with the client's incoming message buffering exists only in modest numbers (a few thousand). This is consistent with the counts of these objects that accumulate in a toy program we set up to send and consume similar objects: they accumulate for a while, then get GC'd, and that memory never grows significantly in either the toy or the failing program.
Is it correct to surmise that the five object types are associated with ACKs, and if so, what can cause these objects to remain on this structure despite the messages apparently being fully consumed by both consumers? Is there some way we could be canceling the AUTO_ACKNOWLEDGE we think we have set? BTW, one consumer is synchronous, using receive(), and the other is asynchronous, using onMessage().
One possibly misleading symptom is that the ActiveMQ GUI shows each message being dequeued only once, despite the presence of two consumers. The toy shows two dequeues for each enqueue. However, the program itself reports that messages are read the expected number of times.
// creating the async consumer.
connAmq = createActiveMqConnection();
connAmq.start();
session = connAmq.createSession(true, Session.AUTO_ACKNOWLEDGE);
Destination topic = session.createTopic(appProperties.getActiveMqTopicQuotesName());
MessageConsumer consumer = session.createConsumer(topic);
consumer.setMessageListener(this);
public void onMessage(Message message) {
    ...
    try {
        if (message instanceof ObjectMessage) {
            ObjectMessage msg = (ObjectMessage) message;
            if (msg.getObject() instanceof Foo) {
                Foo quote = (Foo) msg.getObject();
                ...
            }
        }
    }
    ...
}
// creating the sync consumer
jmsConnection = mActiveMQConnectionFactory.createTopicConnection();
jmsConnection.start();
jmsSession = jmsConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
jmsDestination = jmsSession.createTopic(name);
jmsMessageConsumer = jmsSession.createConsumer(jmsDestination);
// The code for consuming looks like this for the synchronous consumer.
while (true) {
    ObjectMessage m = (ObjectMessage) jmsMessageConsumer.receive();
    if (m != null) {
        Process(m.getObject());
    }
}

At least in the code snippet given, you have created a transacted session for the async consumer, but I see no commit call on that session. The uncommitted transaction state stays in memory on the broker, and you will eventually exhaust the broker's memory.
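A minimal sketch of the two ways out, assuming the session and listener wiring from the snippets above (the class name here is illustrative): either create the session non-transacted so AUTO_ACKNOWLEDGE actually applies, or keep the transacted session and commit it in the listener.

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Session;

// Option 1: non-transacted session. When the first argument to createSession
// is true, the acknowledge-mode argument is ignored and every delivery waits
// for a commit that the original code never issues.
//   session = connAmq.createSession(false, Session.AUTO_ACKNOWLEDGE);

// Option 2: keep the transacted session and commit per message (or batch).
public class CommittingListener implements MessageListener {
    private final Session session;

    public CommittingListener(Session session) {
        this.session = session;
    }

    @Override
    public void onMessage(Message message) {
        try {
            // ... process the message ...
            session.commit();   // lets the broker discard the delivered messages
        } catch (JMSException e) {
            try {
                session.rollback(); // ask the broker to redeliver
            } catch (JMSException ignored) {
            }
        }
    }
}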

Related

Ack Pub/Sub message outside of the MessageReceiver

I am using async pull to pull messages from a Pub/Sub topic, do some processing, and send messages to an ActiveMQ topic.
With the current configuration of Pub/Sub I have to ack() the messages upon receipt. This, however, does not suit my use case, as I need to ack() messages ONLY after they are successfully processed and sent to the other topic. This means (per my understanding) ack()ing the messages outside the MessageReceiver.
I tried to save each message and its AckReplyConsumer so I could call ack() later, but this does not work as expected, and not all messages are correctly ack()ed.
So I want to know if this is possible at all, and if yes, how.
My subscriber config:
public Subscriber getSubscriber(CompositeConfigurationElement compositeConfigurationElement, Queue<CustomPupSubMessage> messages) throws IOException {
    ProjectSubscriptionName subscriptionName = ProjectSubscriptionName.of(
            compositeConfigurationElement.getPubsub().getProjectid(),
            compositeConfigurationElement.getSubscriber().getSubscriptionId());
    ExecutorProvider executorProvider =
            InstantiatingExecutorProvider.newBuilder().setExecutorThreadCount(2).build();
    // Instantiate an asynchronous message receiver: stash the message and its
    // AckReplyConsumer so the ack can happen later, outside the callback.
    MessageReceiver receiver =
            (PubsubMessage message, AckReplyConsumer consumer) -> {
                messages.add(CustomPupSubMessage.builder().message(message).consumer(consumer).build());
            };
    // The subscriber will pause the message stream and stop receiving more messages from the
    // server if any one of the conditions is met.
    FlowControlSettings flowControlSettings =
            FlowControlSettings.newBuilder()
                    // Must be > 0. Controls the maximum number of outstanding messages
                    // the subscriber receives before pausing the message stream.
                    .setMaxOutstandingElementCount(compositeConfigurationElement.getSubscriber().getOutstandingElementCount())
                    // 100 MiB. Must be > 0. Controls the maximum total size of messages
                    // the subscriber receives before pausing the message stream.
                    .setMaxOutstandingRequestBytes(100L * 1024L * 1024L)
                    .build();
    // Read credentials.
    InputStream input = new FileInputStream(compositeConfigurationElement.getPubsub().getSecret());
    CredentialsProvider credentialsProvider = FixedCredentialsProvider.create(ServiceAccountCredentials.fromStream(input));
    Subscriber subscriber = Subscriber.newBuilder(subscriptionName, receiver)
            .setParallelPullCount(compositeConfigurationElement.getSubscriber().getSubscriptionParallelThreads())
            .setFlowControlSettings(flowControlSettings)
            .setCredentialsProvider(credentialsProvider)
            .setExecutorProvider(executorProvider)
            .build();
    return subscriber;
}
My processing part:
jmsConnection.start();
for (int i = 0; i < patchSize; i++) {
    var message = messages.poll();
    if (message != null) {
        byte[] payload = message.getMessage().getData().toByteArray();
        jmsMessage = jmsSession.createBytesMessage();
        jmsMessage.writeBytes(payload);
        jmsMessage.setJMSMessageID(message.getMessage().getMessageId());
        producer.send(jmsMessage);
        list.add(message.getConsumer());
    } else {
        break;
    }
}
jmsSession.commit();
jmsSession.close();
jmsConnection.close();
// If the upload is successful, then ack the messages.
log.info("sent " + list.size() + " in direction " + dest);
list.forEach(consumer -> consumer.ack());
There is nothing that requires messages to be acked within the MessageReceiver callback, and you should be able to acknowledge messages asynchronously. There are a few things to keep in mind and look for (a configuration sketch follows this list):
Check to ensure that you are calling ack before the ack deadline expires. By default, the Java client library extends the ack deadline for up to 1 hour, so if you take less time than that to process, you should be okay.
If your subscriber is often flow controlled, consider reducing the value you pass into setParallelPullCount to 1. The flow control settings you pass in are applied to each stream, not divided among them, so if each stream can receive the full value passed in and your processing is slow enough, you could exceed the 1-hour deadline in the client library without having even received the message yet, causing duplicate delivery. You really only need a larger setParallelPullCount value if you can process messages much faster than a single stream can deliver them.
Ensure that your client library version is at least 1.109.0. There were some improvements made to the way flow control was done in that version.
Note that Pub/Sub has at-least-once delivery semantics, meaning messages can be redelivered even if ack is called properly. Also note that not acknowledging or nacking a single message could result in the redelivery of all messages that were published together in a single batch. See the "Message Redelivery & Duplication Rate" section of "Fine-tuning Pub/Sub performance with batch and flow control settings."
If all of that still doesn't fix the issue, then it would be best to try to create a small, self-contained example that reproduces the issue and open up a bug in the GitHub repo.
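To make the first two points concrete, a minimal sketch of the relevant builder calls, reusing the subscriptionName, receiver, and flowControlSettings from the question; the extension period is an assumed value you would tune:

import org.threeten.bp.Duration;

// Sketch only: the same builder as in the question, with the two knobs the
// answer mentions made explicit.
Subscriber subscriber = Subscriber.newBuilder(subscriptionName, receiver)
        // One stream is usually enough unless processing outpaces delivery;
        // flow control settings apply per stream, not in total.
        .setParallelPullCount(1)
        // Upper bound on how long the client library keeps extending the ack
        // deadline while a message sits unacked in your in-memory queue.
        .setMaxAckExtensionPeriod(Duration.ofMinutes(30))
        .setFlowControlSettings(flowControlSettings)
        .build();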

Making sure a message published on a topic exchange is received by at least one consumer

TLDR; In the context of a topic exchange and queues created on the fly by the consumers, how to have a message redelivered / the producer notified when no consumer consumes the message?
I have the following components:
a main service, producing files. Each file has a certain category (e.g. pictures.profile, pictures.gallery)
a set of workers, consuming files and producing a textual output from them (e.g. the size of the file)
I currently have a single RabbitMQ topic exchange.
The producer sends messages to the exchange with routing_key = file_category.
Each consumer creates a queue and binds the exchange to this queue for a set of routing keys (e.g. pictures.* and videos.trending).
When a consumer has processed a file, it pushes the result in a processing_results queue.
Now - this works properly, but it still has a major issue. Currently, if the publisher sends a message with a routing key that no consumer is bound to, the message will be lost. This is because even if the queue created by the consumers is durable, it is destroyed as soon as the consumer disconnects since it is unique to this consumer.
Consumer code (python):
channel.exchange_declare(exchange=exchange_name, type='topic', durable=True)
result = channel.queue_declare(exclusive = True, durable=True)
queue_name = result.method.queue
topics = [ "pictures.*", "videos.trending" ]
for topic in topics:
    channel.queue_bind(exchange=exchange_name, queue=queue_name, routing_key=topic)
channel.basic_consume(my_handler, queue=queue_name)
channel.start_consuming()
Losing a message in this condition is not acceptable in my use case.
Attempted solution
However, "loosing" a message becomes acceptable if the producer is notified that no consumer received the message (in this case it can just resend it later). I figured out the mandatory field could help, since the specification of AMQP states:
This flag tells the server how to react if the message cannot be routed to a queue. If this flag is set, the server will return an unroutable message with a Return method.
This is indeed working - in the producer, I am able to register a ReturnListener :
rabbitMq.confirmSelect();
rabbitMq.addReturnListener((int replyCode, String replyText, String exchange, String routingKey, AMQP.BasicProperties properties, byte[] body) -> {
    log.info("A message was returned by the broker");
});
rabbitMq.basicPublish(exchangeName, "pictures.profile", true /* mandatory */, MessageProperties.PERSISTENT_TEXT_PLAIN, messageBytes);
This will as expected print A message was returned by the broker if a message is sent with a routing key no consumer is bound to.
Now, I also want to know when the message was correctly received by a consumer. So I tried registering a ConfirmListener as well:
rabbitMq.addConfirmListener(new ConfirmListener() {
    @Override
    public void handleAck(long deliveryTag, boolean multiple) throws IOException {
        log.info("ACK message {}, multiple = {}", deliveryTag, multiple);
    }

    @Override
    public void handleNack(long deliveryTag, boolean multiple) throws IOException {
        log.info("NACK message {}, multiple = {}", deliveryTag, multiple);
    }
});
The issue here is that the ACK is sent by the broker, not by the consumer itself. So when the producer sends a message with a routing key K:
If a consumer is bound to this routing key, the broker just sends an ACK
Otherwise, the broker sends a basic.return followed by an ACK
Cf the docs:
For unroutable messages, the broker will issue a confirm once the exchange verifies a message won't route to any queue (returns an empty list of queues). If the message is also published as mandatory, the basic.return is sent to the client before basic.ack. The same is true for negative acknowledgements (basic.nack).
So while my problem is theoretically solvable using this, it would make the logic of knowing whether a message was correctly consumed quite complicated (especially in the context of multithreading, persistence in a database, etc.). The outline below sketches it, and a code sketch follows the outline:
send a message
on receive ACK:
if no basic.return was received for this message
the message was correctly consumed
else
the message wasn't correctly consumed
on receive basic.return
the message wasn't correctly consumed
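For concreteness, a hedged sketch of that bookkeeping with the Java client, reusing the rabbitMq channel from above and correlating returns to confirms via the message id; it ignores multiple = true batch confirms for brevity, and the names are illustrative:

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.ConfirmListener;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// seqNo -> messageId for publishes still awaiting a confirm.
ConcurrentMap<Long, String> pending = new ConcurrentHashMap<>();
// messageIds the broker returned as unroutable.
Set<String> returned = ConcurrentHashMap.newKeySet();

rabbitMq.confirmSelect();
rabbitMq.addReturnListener((replyCode, replyText, exchange, routingKey, properties, body) ->
        returned.add(properties.getMessageId()));
rabbitMq.addConfirmListener(new ConfirmListener() {
    @Override
    public void handleAck(long deliveryTag, boolean multiple) {
        // Per the docs quoted above, basic.return (if any) arrives before the
        // ack, so the 'returned' set is already settled for this message.
        String messageId = pending.remove(deliveryTag);
        if (messageId != null && returned.remove(messageId)) {
            // unroutable: no consumer was bound, resend later
        } else {
            // routed to at least one queue
        }
    }

    @Override
    public void handleNack(long deliveryTag, boolean multiple) {
        pending.remove(deliveryTag); // broker refused the message: resend later
    }
});

String messageId = "msg-1"; // illustrative
AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
        .messageId(messageId)
        .deliveryMode(2) // persistent
        .build();
pending.put(rabbitMq.getNextPublishSeqNo(), messageId);
rabbitMq.basicPublish(exchangeName, "pictures.profile", true /* mandatory */, props, messageBytes);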
Possible other solutions
Have a queue for each file category, i.e. the queues pictures_profile, pictures_gallery, etc. Not good since it removes a lot of flexibility for the consumers
Have a "response timeout" logic in the producer. The producer sends a message. It expects an "answer" for this message in the processing_results queue. A solution would be to resend the message if it hasn't been answered to after X seconds. I don't like it though, it would create some additional tricky logic in the producer.
Produce the messages with a TTL of 0, and have the producer listen on a dead-letter exchange. This is the official suggested solution to replace the 'immediate' flag removed in RabbitMQ 3.0 (see paragraph Removal of "immediate" flag). According to the docs of the dead letter exchanges, a dead letter exchange can only be configured per-queue. So it wouldn't work here
[edit] A last solution I see is to have every consumer create a durable queue that isn't destroyed when it disconnects, and listen on it. Example: consumer1 creates queue-consumer-1, bound to the messages of myExchange having the routing key abcd. The issue I foresee is that this implies finding a unique identifier for every consumer application instance (e.g. the hostname of the machine it runs on).
I would love to have some inputs on that - thanks!
Related to:
RabbitMQ: persistent message with Topic exchange (not applicable here since queues are created "on the fly")
Make sure the broker holds messages until at least one consumer gets it
RabbitMQ Topic Exchange with persisted queue
[edit] Solution
I ended up implementing something that uses basic.return, as mentioned earlier. It is actually not so tricky to implement; you just have to make sure that the method producing the messages and the method handling the basic returns are synchronized (or share a lock if not in the same class), otherwise you can end up with interleaved execution flows that will mess up your business logic.
I believe that an alternate exchange would be the best fit for your use case for the part regarding the identification of not routed messages.
Whenever an exchange with a configured AE cannot route a message to any queue, it publishes the message to the specified AE instead.
Basically upon creation of the "main" exchange, you configure an alternate exchange for it.
For the referenced alternate exchange, I tend to go with a fanout, then create a queue (notroutedq) bound to it.
This means any message that is not routed to at least one of the queues bound to your "main" exchange will end up in the notroutedq.
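A minimal sketch of that setup with the Java client, assuming the channel and exchangeName from the snippets above; the AE and queue names are illustrative:

import java.util.HashMap;
import java.util.Map;

// Declare the alternate exchange and its catch-all queue first.
channel.exchangeDeclare("my-ae", "fanout", true);
channel.queueDeclare("notroutedq", true, false, false, null);
channel.queueBind("notroutedq", "my-ae", "");

// Then declare the "main" topic exchange with the AE attached.
Map<String, Object> args = new HashMap<>();
args.put("alternate-exchange", "my-ae");
channel.exchangeDeclare(exchangeName, "topic", true, false, args);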
Now regarding your statement:
because even if the queue created by the consumers is durable, it is destroyed as soon as the consumer disconnects since it is unique to this consumer.
It seems that you have configured your queues with auto-delete set to true.
If so, in case of a disconnect, as you stated, the queue is destroyed and the messages still present on it are lost; that case is not covered by the alternate exchange configuration.
It's not clear from your use case description whether you'd expect a message to end up in more than one queue in some cases; it seems more like one queue per type of processing (while keeping the grouping flexible). If the queue split is indeed related to the type of processing, I do not see the benefit of setting the queue to auto-delete, except maybe not having to do any cleanup maintenance when you want to change the bindings.
Assuming you can go with durable queues, a dead letter exchange (I would again go with a fanout) with a binding to a DLQ would cover the missing cases (a declaration sketch follows the list below):
not routed covered by alternate exchange
correct processing already handled by your processing_result queue
problematic processing or too long to be processed covered by the dead letter exchange, in which case the additional headers added upon dead lettering the message can even help to identify the type of actions to take
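A hedged sketch of that dead-letter wiring with the Java client; exchange and queue names are illustrative:

import java.util.HashMap;
import java.util.Map;

// Fanout DLX with a single DLQ catching everything dead-lettered.
channel.exchangeDeclare("my-dlx", "fanout", true);
channel.queueDeclare("dlq", true, false, false, null);
channel.queueBind("dlq", "my-dlx", "");

// Work queues declared durable (not auto-delete) and pointed at the DLX;
// rejected or expired messages move to the DLQ carrying x-death headers.
Map<String, Object> args = new HashMap<>();
args.put("x-dead-letter-exchange", "my-dlx");
channel.queueDeclare("pictures-workq", true, false, false, args);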

Sending large number of messages using spring jmsTemplate

Is there a best practice or guidance for sending persistent messages with asyncSend set to true?
We don't have a transaction manager configured.
We have ~40k-50k messages which are sent using a jmsTemplate configured with
org.apache.activemq.pool.PooledConnectionFactory
We have a for loop which iterates over the message list and sends each one using
jmsTemplate.convertAndSend(destination, msg)
We see a lot of message loss on a frequent basis; when we turn off asyncSend we get the reliability back, but producer performance drops by 95%.
A bit of speculation, as the question is not very detailed, but anyway.
Depending on configuration, ActiveMQ might have memory limits on queues (and these may differ between persistent and non-persistent messages). So when memory runs out, your asyncSend calls will ignore the warnings and keep delivering messages into the "black hole" until memory is freed by the consumer.
There is no silver bullet that allows both max performance and max reliability, unfortunately.
However, I would try setting a producerWindowSize on the connection factory to allow only a specified amount of data in flight before a broker ack is received. The exact value is something you need to try out; it depends on the scenario as well as broker config/resources. A configuration sketch follows.
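A minimal sketch of that setting, assuming the pooled factory wraps a plain ActiveMQConnectionFactory; the broker URL and window size are assumptions to tune:

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;

ActiveMQConnectionFactory amqFactory = new ActiveMQConnectionFactory("tcp://broker-host:61616");
amqFactory.setUseAsyncSend(true);
// Bound the amount of in-flight, un-acked producer data (in bytes); sends
// block once the window is full instead of silently piling up messages.
amqFactory.setProducerWindowSize(1024 * 1024);

PooledConnectionFactory pooled = new PooledConnectionFactory(amqFactory);
// hand 'pooled' to the JmsTemplate as before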
I solved this using a ProducerCallback
List<String> messageTexts = prepareListOfMessageTexts();
ProducerCallback producerCallback = (session, producer) -> {
    Topic destination = session.createTopic(myTopicName);
    for (String messageText : messageTexts) {
        producer.send(destination, session.createTextMessage(messageText));
    }
    return null;
};
jmsTemplate.execute(producerCallback);
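For what it's worth, the gain here is presumably that the whole loop reuses the single session and producer handed to the callback, instead of convertAndSend checking a connection, session, and producer out of the pool for every single message.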

Is it a good practice to use JMS Temporary Queue for synchronous use?

If we use JMS request/reply mechanism using "Temporary Queue", will that code be scalable?
As of now, we don't know if we will supporting 100 requests per second, or 1000s of requests per second.
The code below is what I am thinking of implementing. It makes use of JMS in a 'Synchronous' fashion. The key part is where the 'Consumer' gets created to point to a 'Temporary Queue' that was created for this session. I just can't figure out whether using such temporary queues is a scalable design.
destination = session.createQueue("queue:///Q1");
producer = session.createProducer(destination);
tempDestination = session.createTemporaryQueue();
consumer = session.createConsumer(tempDestination);
long uniqueNumber = System.currentTimeMillis() % 1000;
TextMessage message = session.createTextMessage("SimpleRequestor: Your lucky number today is " + uniqueNumber);
// Set the JMSReplyTo
message.setJMSReplyTo(tempDestination);
// Start the connection
connection.start();
// And, send the request
producer.send(message);
System.out.println("Sent message:\n" + message);
// Now, receive the reply
Message receivedMessage = consumer.receive(15000); // in ms or 15 seconds
System.out.println("\nReceived message:\n" + receivedMessage);
Update:
I came across another pattern; see this blog.
The idea is to use 'regular' queues for both send and receive. However, for 'synchronous' calls, in order to get the desired response (i.e. one matching the request), you create a consumer that listens to the receive queue using a 'selector'.
Steps:
// 1. Create Send and Receive Queue.
// 2. Create a msg with a specific ID.
final String correlationId = UUID.randomUUID().toString();
final TextMessage textMessage = session.createTextMessage(msg);
textMessage.setJMSCorrelationID(correlationId);
// 3. Start a consumer that receives using a 'Selector'.
consumer = session.createConsumer(replyQueue, "JMSCorrelationID = '" + correlationId + "'");
// 4. Send the request, then block only for the matching reply.
producer.send(textMessage);
final Message reply = consumer.receive(15000);
So the difference in this pattern is that we don't create a new temp queue for each new request.
Instead, all responses come to a single queue, but each request thread uses a 'selector' to make sure it receives only the response it cares about.
I think the downside here is that you have to use a 'selector'. I don't know yet whether that is less or more preferred than the earlier mentioned pattern. Thoughts?
Regarding the update in your post - selectors are very efficient if performed on the message headers, like you are doing with the Correlation ID. Spring Integration also internally does this for implementing a JMS Outbound gateway.
Interestingly, the scalability of this may actually be the opposite of what the other responses have described.
WebSphere MQ saves and reuses dynamic queue objects where possible. So, although use of a dynamic queue is not free, it does scale well because as queues are freed up, all that WMQ needs to do is pass the handle to the next thread that requests a new queue instance. In a busy QMgr, the number of dynamic queues will remain relatively static while the handles get passed from thread to thread. Strictly speaking it isn't quite as fast as reusing a single queue, but it isn't bad.
On the other hand, even though indexing on CORRELID is fast, performance is inversely related to the number of messages in the index. It also makes a difference if the queue depth begins to build. When the app does a GET with WAIT on an empty queue there is no delay. But on a deep queue, the QMgr has to search the index of existing messages to determine that the reply message isn't among them. In your example, that's the difference between searching an empty index versus a large index 1,000s of times per second.
The result is that 1000 dynamic queues with one message each may actually be faster than a single queue with 1000 threads getting by CORRELID, depending on the characteristics of the app and of the load. I would recommend testing this at scale before committing to a particular design.
Using a selector on the correlation ID on a shared queue will scale very well with multiple consumers.
1000 requests/s will, however, be a lot. You may want to divide the load a bit between different instances if performance turns out to be a problem.
You might want to elaborate on the requests vs. clients numbers. If the number of clients is < 10 and will stay rather static, and the request rate is very high, the most resilient and fast solution might be a static reply queue for each client.
Creating temporary queues isn't free. After all, it allocates resources on the broker(s). Having said that, if you have an unknown (beforehand), potentially unbounded number of clients (multiple JVMs, multiple concurrent threads per JVM, etc.), you may not have a choice. Pre-allocating client queues and assigning them to clients would get out of hand fast.
Certainly what you've sketched is the simplest possible solution, and if you can get real numbers for transaction volume and it scales enough, fine.
Before I'd look at avoiding temporary queues, I'd look more at limiting the number of clients and making the clients long-lived. That is to say, create a client pool on the client side, and have the clients in the pool create the temporary queue, session, connection, etc. on startup, reuse them on subsequent requests, and tear them down on shutdown. Then the tuning problem becomes one of max/min size on the pool, what the idle time is to prune the pool, and what the behavior is (fail vs. block) when the pool is maxed (a sketch of this pool idea follows below). Unless you're creating an arbitrarily large number of transient JVMs (in which case you've got bigger scaling issues just from JVM startup overhead), that ought to scale as well as anything. After all, at that point the resources you are allocating reflect the actual usage of the system. There really is no opportunity to use less than that.
The thing to avoid is creating and destroying a gratuitously large number of queues, sessions, connections, etc. Design the server side to allow streaming from the get-go. Then pool if/when you need to. Like as not, for anything non-trivial, you will need to.
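A minimal sketch of that pool idea, assuming javax.jms; the names are illustrative, and max/min sizing and idle pruning are left out:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TemporaryQueue;

// One long-lived client: its own session, temp reply queue, producer and
// consumer, created once at startup and reused across requests.
class PooledRequestor {
    final Session session;
    final TemporaryQueue replyQueue;
    final MessageProducer producer;
    final MessageConsumer consumer;

    PooledRequestor(Connection connection, Queue requestQueue) throws JMSException {
        session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        replyQueue = session.createTemporaryQueue();
        producer = session.createProducer(requestQueue);
        consumer = session.createConsumer(replyQueue);
    }
}

class RequestorPool {
    private final BlockingQueue<PooledRequestor> idle;

    RequestorPool(ConnectionFactory cf, Queue requestQueue, int size) throws JMSException {
        idle = new ArrayBlockingQueue<>(size);
        Connection connection = cf.createConnection(); // sessions share one connection
        connection.start();
        for (int i = 0; i < size; i++) {
            idle.add(new PooledRequestor(connection, requestQueue));
        }
    }

    // Blocking borrow bounds broker-side resources to the pool size.
    PooledRequestor borrow() throws InterruptedException { return idle.take(); }

    void release(PooledRequestor requestor) { idle.offer(requestor); }
}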
Using a temporary queue costs creating a replyTo producer every time. Compared with using cached producers against a static replyTo queue, each createProducer call adds cost and impacts performance in a highly concurrent invocation environment.
I've been facing the same problem and decided to pool connections myself inside a stateless bean. One client connection has one tempQueue and lives inside a JMSMessageExchanger object (which contains the connectionFactory, Queue and tempQueue), which is bound to one bean instance. I've tested it in JSE/EE environments, but I'm not really sure about Glassfish JMS pool behaviour.
Will it actually close JMS connections obtained "by hand" after the bean method ends? Am I doing something terribly wrong?
Also, I've turned off transactions in the client bean (TransactionAttributeType.NOT_SUPPORTED) to send request messages immediately to the request queue.
package net.sf.selibs.utils.amq;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TemporaryQueue;
import lombok.Getter;
import lombok.Setter;
import net.sf.selibs.utils.misc.UHelper;
public class JMSMessageExchanger {

    @Setter
    @Getter
    protected long timeout = 60 * 1000;

    public JMSMessageExchanger(ConnectionFactory cf) {
        this.cf = cf;
    }

    public JMSMessageExchanger(ConnectionFactory cf, Queue queue) {
        this.cf = cf;
        this.queue = queue;
    }

    // work
    protected ConnectionFactory cf;
    protected Queue queue;
    protected TemporaryQueue tempQueue;
    protected Connection connection;
    protected Session session;
    protected MessageProducer producer;
    protected MessageConsumer consumer;

    // status
    protected boolean started = false;
    protected int mid = 0;

    public Message makeRequest(RequestProducer producer) throws Exception {
        try {
            if (!this.started) {
                this.init();
                this.tempQueue = this.session.createTemporaryQueue();
                this.consumer = this.session.createConsumer(tempQueue);
            }
            // send request
            Message requestM = producer.produce(this.session);
            mid++;
            requestM.setJMSCorrelationID(String.valueOf(mid));
            requestM.setJMSReplyTo(this.tempQueue);
            this.producer.send(this.queue, requestM);
            // get response
            while (true) {
                Message responseM = this.consumer.receive(this.timeout);
                if (responseM == null) {
                    return null;
                }
                int midResp = Integer.parseInt(responseM.getJMSCorrelationID());
                if (mid == midResp) {
                    return responseM;
                } else {
                    // just get other message
                }
            }
        } catch (Exception ex) {
            this.close();
            throw ex;
        }
    }

    public void makeResponse(ResponseProducer producer) throws Exception {
        try {
            if (!this.started) {
                this.init();
            }
            Message response = producer.produce(this.session);
            response.setJMSCorrelationID(producer.getRequest().getJMSCorrelationID());
            this.producer.send(producer.getRequest().getJMSReplyTo(), response);
        } catch (Exception ex) {
            this.close();
            throw ex;
        }
    }

    protected void init() throws Exception {
        this.connection = cf.createConnection();
        this.session = this.connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        this.producer = this.session.createProducer(null);
        this.producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        this.connection.start();
        this.started = true;
    }

    public void close() {
        UHelper.close(producer);
        UHelper.close(consumer);
        UHelper.close(session);
        UHelper.close(connection);
        this.started = false;
    }
}
The same class is used in the client (stateless bean) and the server (@MessageDriven).
RequestProducer and ResponseProducer are interfaces:
package net.sf.selibs.utils.amq;

import javax.jms.Message;
import javax.jms.Session;

public interface RequestProducer {
    Message produce(Session session) throws Exception;
}

package net.sf.selibs.utils.amq;

import javax.jms.Message;

public interface ResponseProducer extends RequestProducer {
    void setRequest(Message request);
    Message getRequest();
}
Also, I've read the ActiveMQ article about implementing request-response over AMQ:
http://activemq.apache.org/how-should-i-implement-request-response-with-jms.html
Maybe I'm too late, but I spent some hours this week getting synchronous request/reply to work within JMS. What about extending the QueueRequestor with a timeout? I did, and at least testing on a single machine (running broker, requestor and replier) showed that this solution outperforms the ones discussed. On the other hand, it depends on using a QueueConnection, which means you may be forced to open multiple connections. A sketch of the idea follows.
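A minimal sketch of such a requestor, assuming javax.jms; the class name is illustrative. Unlike javax.jms.QueueRequestor, the receive here is bounded by a timeout:

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.TemporaryQueue;

public class TimedQueueRequestor implements AutoCloseable {
    private final TemporaryQueue replyQueue;
    private final QueueSender sender;
    private final MessageConsumer receiver;

    public TimedQueueRequestor(QueueSession session, Queue queue) throws JMSException {
        this.replyQueue = session.createTemporaryQueue();
        this.sender = session.createSender(queue);
        this.receiver = session.createConsumer(replyQueue);
    }

    // Returns null on timeout instead of blocking forever like QueueRequestor.
    public Message request(Message message, long timeoutMs) throws JMSException {
        message.setJMSReplyTo(replyQueue);
        sender.send(message);
        return receiver.receive(timeoutMs);
    }

    @Override
    public void close() throws JMSException {
        receiver.close();
        sender.close();
        replyQueue.delete();
    }
}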

RabbitMQ basic.get and acknowledgement

I'm invoking:
GetResponse response = channel.basicGet("some.queue", false); // no auto-ack
....
channel.basicAck(deliveryTag, ...);
However, when I invoke basicGet, the messages in the queue stay in "Ready" rather than moving to "Unacknowledged". I want them to be in "Unacknowledged", so that I can either basic.ack them (thus discarding them from the queue) or basic.nack them.
I'm doing the following to mimic delaying the ack:
At consumption time
Get (consume) the message from the initial queue.
Create a "PendingAck_123456" queue, where 123456 is a unique id of the message.
Set the following properties:
x-message-ttl (to requeue after a timeout)
x-expires (to make sure the temp queue will be deleted)
x-dead-letter-exchange and x-dead-letter-routing-key to requeue to the initial queue upon TTL expiration
Publish the message pending ack to this "PendingAck_123456" queue.
Ack the message to delete it from the initial queue.
At acknowledge time
Calculate the queue name from the message id and get from the "PendingAck_123456" queue.
Acknowledge it (no need to call .getBody()).
That'll delete it from this pending queue, preventing the TTL from requeueing it.
Remarks
A queue for only 1 message. Is that an issue if there are a lot of such queues?
A requeued message will arrive at the queue's input side, not at the queue's output (as a real requeue would). There is an impact on message order.
The message is copied by the application to the pending queue. This is an additional step that may impact overall performance.
To mimic a nack/reject, you may want to copy the message back to the initial queue and ack it from the PendingAck queue. By default, the TTL would do it (later).
When doing the ack immediately after the get, it works fine. However, in my case they were separated by a request, and Spring's template closes the channel and connection on each execution. So there are three options:
keep one channel and connection open throughout the whole lifetime of the application
have some kind of conversation scope (or, worst case, use the session) to store the same channel and reuse it
use one channel per request, acknowledge receipt immediately, and store the messages in memory
In the former two cases you can't do it with Spring's RabbitTemplate. A sketch of the underlying delivery-tag handling follows.
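For reference, a minimal sketch of get-then-ack on the plain Java client; the delivery tag is only valid on the channel that performed the get, which is exactly why a template that closes the channel between the two calls breaks this:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.GetResponse;

GetResponse response = channel.basicGet("some.queue", false); // no auto-ack
if (response != null) {
    long deliveryTag = response.getEnvelope().getDeliveryTag();
    // ... process; the message now sits in "Unacknowledged" ...
    channel.basicAck(deliveryTag, false);              // done: remove from the queue
    // or: channel.basicNack(deliveryTag, false, true); // give it back, requeued
}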
