I have a route reading from a queue with from("jms:xx"). On the route I call an external web service. I also configure a CircuitBreaker in case the web service is temporarily unavailable.
When the CircuitBreaker opens, all new incoming requests are blocked (status OPEN).
Something that puzzles me: in case of an error, the message needs to be put back on the queue, preferably with a delay (thanks to redelivery policies).
I assume the CircuitBreaker is not in touch with the message listener, so even when the CircuitBreaker is OPEN, the route will still keep receiving all messages from the input queue.
To reduce this overhead I would like to move all incoming messages to the DLQ when a RejectedExecutionException is thrown. Once the connection problem is resolved (status becomes CLOSED again), all these messages can be moved back to the processing queue (selected by a header filter).
I understand there is a risk here: it will only retry when new messages arrive.
My question: is it possible to get notified when the status changes from HALF_OPEN to CLOSED (e.g. an onClosed callback)?
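If the route uses a Resilience4j-backed circuit breaker, the underlying CircuitBreaker exposes an event publisher with exactly this kind of hook. A minimal sketch, assuming you can obtain the CircuitBreaker instance from its registry (the breaker name is illustrative and moveDlqMessagesBack is a hypothetical helper):

import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerRegistry;

CircuitBreakerRegistry registry = CircuitBreakerRegistry.ofDefaults();
CircuitBreaker cb = registry.circuitBreaker("webserviceCB"); // illustrative name

cb.getEventPublisher().onStateTransition(event -> {
    if (event.getStateTransition() == CircuitBreaker.StateTransition.HALF_OPEN_TO_CLOSED) {
        // The web service is reachable again: move the parked messages
        // from the DLQ back to the processing queue.
        moveDlqMessagesBack(); // hypothetical helper
    }
});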
I am using:
spring-boot 2.2.10
spring-cloud-gcp-pubsub 1.2.5
google-cloud-pubsub 1.108.0
google-cloud-core 1.93.7
gax 1.57.1
grpc-core 1.30.2
I am consuming messages of different sizes from a GCP subscription. When a "big" message is sent to my client library:
1. It never reaches my listener code (I put a dumb logger there)
2. I can see "Received data on closed stream"
3. The message is never acked, never dequeued, never DLQ-ed
4. The message is sent to my service over and over (the sent count metric keeps growing)
I know the gRPC max message size problem was solved some time ago, likewise gRPC keepAlive ... so I am out of leads to investigate.
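For reference, the subscription is consumed essentially like this (a minimal sketch of the Subscriber wiring described above; the project and subscription names are placeholders):

import com.google.cloud.pubsub.v1.AckReplyConsumer;
import com.google.cloud.pubsub.v1.MessageReceiver;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.PubsubMessage;

MessageReceiver receiver = (PubsubMessage message, AckReplyConsumer consumer) -> {
    // The "dumb logger": for the big messages this line never executes
    System.out.println("received " + message.getData().size() + " bytes");
    consumer.ack();
};

Subscriber subscriber = Subscriber.newBuilder(
        ProjectSubscriptionName.of("my-project", "my-subscription"), receiver).build();
subscriber.startAsync().awaitRunning();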
I am reading the documentation about the Channel.basicCancel operation in RabbitMQ: https://www.rabbitmq.com/consumer-cancel.html . The docs say that one possible cancellation case is when the consumer sends a cancel signal on the same channel on which it is listening.
Is this the only possibility? Can you cancel remote consumer running on different channel/connection/process?
I am trying to send the cancel request from another process. When I do, it ends with an exception, java.io.IOException: Unknown consumerTag, just as if the operation were restricted to cancelling local consumers (on one's own channel or connection).
UPDATE:
I noticed that this "Unknown consumerTag" exception is a result of initial validation inside com.rabbitmq.client.impl.ChannelN.basicCancel(String):
Consumer originalConsumer = (Consumer) this._consumers.get(consumerTag);
if (originalConsumer == null) {
    throw new IOException("Unknown consumerTag");
}
...
But there still might be some RPC call which does the trick...
The RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
The documentation is correct, you must cancel a consumer from its own channel/connection.
Other options include making your consumers aware of "cancellation messages" that will cause them to stop themselves, or using the API to close an entire connection, which will close all channels associated with it.
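A minimal sketch of the first option with the Java client, where the consumer cancels itself on its own channel when it sees a control message (the type-property convention is illustrative):

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;
import java.io.IOException;

channel.basicConsume("work-queue", false, new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag, Envelope envelope,
            AMQP.BasicProperties properties, byte[] body) throws IOException {
        if ("cancel".equals(properties.getType())) { // illustrative convention
            getChannel().basicAck(envelope.getDeliveryTag(), false);
            getChannel().basicCancel(consumerTag); // cancel ourselves, on our own channel
            return;
        }
        // normal processing ...
        getChannel().basicAck(envelope.getDeliveryTag(), false);
    }
});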
I'm debugging some Java code that uses Apache POI to pull data out of Microsoft Office documents. Occasionally it encounters a large document, and POI crashes when it runs out of memory. At that point, it tries to publish the error to RabbitMQ, so that other components can know that this step failed and take the appropriate actions. However, when it tries to publish to the queue, it gets a com.rabbitmq.client.AlreadyClosedException (clean connection shutdown; reason: Attempt to use closed channel).
Here's the error handler code:
try {
    // Extraction and indexing code
}
catch (Throwable t) {
    // Something went wrong! We'll publish the error and then move on with
    // our lives
    System.out.println("Error received when indexing message: ");
    t.printStackTrace();
    System.out.println();
    String error = PrintExc.format(t);
    message.put("error", error);
    if (mime == null) {
        mime = "application/vnd.unknown";
    }
    message.put("mime", mime);
    publish("IndexFailure", "", MessageProperties.PERSISTENT_BASIC, message);
}
For completeness, here's the publish method:
private void publish(String exch, String route,
        AMQP.BasicProperties props, Map<String, Object> message) throws Exception {
    chan.basicPublish(exch, route, props,
            JSONValue.toJSONString(message).getBytes());
}
I can't find any code within the try block that appears to close the RabbitMQ channel. Are there any circumstances in which the channel could be closed implicitly?
EDIT: I should note that the AlreadyClosedException is thrown by the basicPublish call inside publish.
An AMQP channel is closed on a channel error. Two common things that can cause a channel error:
Trying to publish a message to an exchange that doesn't exist
Trying to publish a message with the immediate flag set when the target queue has no active consumer
I would look into setting up a ShutdownListener on the channel you're trying to publish on, using addShutdownListener(), to catch the shutdown event and look at what caused it.
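For example (Java client; the listener receives the ShutdownSignalException describing the cause):

channel.addShutdownListener(cause ->
    System.err.println("Channel shut down, hard error: " + cause.isHardError()
            + ", reason: " + cause.getReason()));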
Another reason in my case was that by mistake I acknowledged a message twice. This led to RabbitMQ errors in the log like this after the second acknowledgment:
=ERROR REPORT==== 11-Dec-2012::09:48:29 ===
connection <0.6792.0>, channel 1 - error:
{amqp_error,precondition_failed,"unknown delivery tag 1",'basic.ack'}
After I removed the duplicate acknowledgement, the errors went away, the channel did not close anymore, and the AlreadyClosedException was gone.
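The offending pattern boils down to something like this (sketch):

channel.basicAck(envelope.getDeliveryTag(), false); // first ack: fine
// ... later, on another code path ...
channel.basicAck(envelope.getDeliveryTag(), false); // second ack on the same delivery tag:
                                                    // PRECONDITION_FAILED, channel is closed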
I'd like to add this information for other users who will be searching for this topic.
Another possible reason for receiving a channel closed exception is when publishers and consumers access the channel/queue with different queue declarations/settings:
Publisher
channel.queueDeclare("task_queue", durable, false, false, null);
Worker
channel.queueDeclare("task_queue", false, false, false, null);
From RabbitMQ Site
RabbitMQ doesn't allow you to redefine an existing queue with different parameters and will return an error to any program that tries to do that
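One way to avoid the conflict on the side that doesn't own the queue definition is a passive declare, which only asserts that the queue exists and never redefines its parameters (a sketch with the Java client):

// Throws an exception if "task_queue" does not exist, but never redefines it
AMQP.Queue.DeclareOk ok = channel.queueDeclarePassive("task_queue");
System.out.println("messages ready: " + ok.getMessageCount());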
Apparently, there are many reasons for the AMQP connection and/or channels to close abruptly. In my case, there were too many unacknowledged messages on the queue because the consumer didn't specify the prefetch_count, so the connection was getting terminated every ~1 min. Limiting the number of unacknowledged messages by setting the consumer's prefetch count to a non-zero value fixed the problem.
channel.basicQos(100);
For those who wonder why their consuming channels are closing, check whether you ack or nack a delivery more than once.
In the RabbitMQ log you would see messages like:
operation basic.ack caused a channel exception precondition_failed:
unknown delivery tag ...
I also had this problem. In my case the reason was that I first built the queue with durable = false, and when I later switched durable to true, the log file showed this error:
"inequivalent arg 'durable' for queue 'logsQueue' in vhost '/':
received 'true' but current is 'false'"
Then I changed the name of the queue and it worked for me. I assumed that the RabbitMQ server keeps a record of the built queues somewhere and cannot change a queue from durable to non-durable or vice versa.
Again, I set durable=false for the new queue, and this time I got this error:
"inequivalent arg 'durable' for queue 'logsQueue1' in vhost '/':
received 'false' but current is 'true'"
My assumption was true. When I listed the queues on the RabbitMQ server with:
rabbitmqctl list_queues
I saw both queues in the server.
To summarize, there are 2 solutions:
1. Rename the queue, which is not a good solution
2. Reset RabbitMQ with:
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl start_app
I'm invoking:
GetResponse response = channel.basicGet("some.queue", false); // no auto-ack
....
long deliveryTag = response.getEnvelope().getDeliveryTag();
channel.basicAck(deliveryTag, ...);
However, when I invoke basicGet, the messages in the queue stay "Ready" rather than moving to "Unacknowledged". I want them to be unacknowledged, so that I can either basic.ack them (thus discarding them from the queue) or basic.nack them.
I'm doing the following to mimic delaying the ack:
At consumption time
Get (consume) the message from the initial queue.
Create a "PendingAck_123456" queue, where 123456 is a unique id of the message.
Set the following properties:
x-message-ttl (to requeue after a timeout)
x-expires (to make sure the temp queue will be deleted)
x-dead-letter-exchange and x-dead-letter-routing-key (to requeue to the initial queue upon TTL expiration)
Publish the message pending ack to this "PendingAck_123456" queue.
Ack the message to delete it from the initial queue.
At acknowledge time
Calculate the queue name from the message id and get the message from the "PendingAck_123456" queue.
Acknowledge it (no need to call .getBody()).
That deletes it from the pending queue, preventing the TTL from requeueing it.
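A sketch of both steps with the Java client (queue names, TTL values, and the messageId/deliveryTag variables are illustrative):

import com.rabbitmq.client.GetResponse;
import java.util.HashMap;
import java.util.Map;

// At consumption time: park the message in a per-message pending-ack queue
String pendingQueue = "PendingAck_" + messageId;
Map<String, Object> args = new HashMap<>();
args.put("x-message-ttl", 60000);                      // requeue if no final ack within 60 s
args.put("x-expires", 120000);                         // delete the temp queue once unused
args.put("x-dead-letter-exchange", "");                // default exchange routes by queue name...
args.put("x-dead-letter-routing-key", "initialQueue"); // ...back to the initial queue on expiry
channel.queueDeclare(pendingQueue, true, false, false, args);
channel.basicPublish("", pendingQueue, null, body);
channel.basicAck(deliveryTag, false);                  // remove it from the initial queue

// At acknowledge time: consume the parked copy so the TTL can no longer requeue it
GetResponse parked = channel.basicGet("PendingAck_" + messageId, false);
if (parked != null) {
    channel.basicAck(parked.getEnvelope().getDeliveryTag(), false);
}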
Remarks
A queue for only one message... Is that an issue if there are a lot of such queues?
A requeued message will be delivered at the queue's input side, not at its output (as a real nack would do). There is an impact on message order.
The message is copied by the application to the pending queue. This is an additional step that may impact overall performance.
To mimic a nack/reject, you may want to copy the message back to the initial queue and ack it from the PendingAck queue. By default, the TTL would do it (later).
When the ack is done immediately after the get, it works fine. However, in my case they were separated by a request, and Spring's template closes the channel and connection on each execution. So there are three options:
keep one channel and connection open throughout the whole lifetime of the application
have some kind of conversation scope (or, worst case, use the session) to store the same channel and reuse it
use one channel per request, acknowledge receipt immediately, and store the messages in memory
In the former two cases you can't do it with Spring's RabbitTemplate.
What would be a good way to temporarily disable a message listener? The problem I want to solve is:
A JMS message is received by a message listener
I get an error when trying to process the message.
I wait for my system to get ready again to be able to process the message.
Until my system is ready, I don't want any more messages, so...
...I want to disable the message listener.
My system is ready for processing again.
The failed message gets processed, and the JMS message gets acknowledged.
Enable the message listener again.
Right now I'm using Sun App Server. I disabled the message listener by setting it to null on the MessageConsumer, and enabled it again with setMessageListener(myOldMessageListener), but after this I don't get any more messages.
How about not returning from the onMessage() listener method until your system is ready to process messages again? That will prevent JMS from delivering another message on that consumer.
That's the async equivalent of not calling receive() in a synchronous case.
There's no multi-threading for a given JMS session, so the pipeline of messages is held up until the onMessage() method returns.
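In code, that amounts to parking the session's delivery thread inside onMessage() until the system is ready again (a minimal sketch; systemReady() and process() are hypothetical placeholders):

import javax.jms.Message;
import javax.jms.MessageListener;

public class BlockingListener implements MessageListener {
    @Override
    public void onMessage(Message message) {
        while (!systemReady()) {              // hypothetical readiness check
            try {
                Thread.sleep(5000);           // hold the session's delivery thread
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
        process(message);                     // hypothetical processing
    }

    private boolean systemReady() { return true; } // placeholder
    private void process(Message m) { }            // placeholder
}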
I'm not familiar with the implications of dynamically calling setMessageListener(). The javadoc says there's undefined behavior if called "when messages are being consumed by an existing listener or sync consumer". If you're calling from within onMessage(), it sounds like you're hitting that undefined case.
There are start/stop methods at the Connection level, if that's not too coarse-grained for you.
Problem solved by a workaround: replacing the message listener with a receive() loop. But I'm still interested in how to disable a message listener and enable it again shortly after.
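The receive() loop in that workaround could look roughly like this (a sketch; running, systemReady() and process() are illustrative):

import javax.jms.Message;
import javax.jms.MessageConsumer;

void pollLoop(MessageConsumer consumer) throws Exception {
    while (running) {                              // volatile stop flag, illustrative
        if (!systemReady()) {                      // hypothetical readiness check
            Thread.sleep(5000);                    // back off while the system recovers
            continue;
        }
        Message message = consumer.receive(1000);  // wait up to 1 s for a message
        if (message != null) {
            process(message);                      // hypothetical processing
            message.acknowledge();                 // assuming CLIENT_ACKNOWLEDGE mode
        }
    }
}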
That looks to me like the messages are being delivered but nothing is happening with them because you have no listener attached. It's been a while since I've done anything with JMS, but don't you want to have the messages sent to a dead letter queue or something while you fix the system, and then move them back onto the original queue once you're ready for processing again?
On WebLogic you can set up max retries, an error queue to handle messages that exceed the max retry limit, and other parameters. I'm not certain off the top of my head, but you also might be able to specify a wait period. All this is available to you in the admin console. I'd look at the admin for the JMS provider you've got and see if it can do something similar.
In JBoss the following code will do the trick:
MBeanServer mbeanServer = MBeanServerLocator.locateJBoss();
ObjectName objName = new ObjectName("jboss.j2ee:ear=MessageGateway.ear,jar=MessageGateway-EJB.jar,name=MessageSenderMDB,service=EJB3");
JMSContainerInvokerMBean invoker = (JMSContainerInvokerMBean) MBeanProxy.get(JMSContainerInvokerMBean.class, objName, mbeanServer);
invoker.stop(); //Stop MDB
invoker.start(); //Start MDB
I think you can call
messageConsumer.setMessageListener(null);
inside your MessageListener implementation and schedule the reestablishment task (for example with a ScheduledExecutorService). This task should call
connection.stop();
messageConsumer.setMessageListener(YOUR_NEW_LISTENER);
connection.start();
and it will work. The start() and stop() methods restart the delivery structures (not the TCP connection).
Read the Javadoc https://docs.oracle.com/javaee/7/api/javax/jms/Connection.html#stop--
Temporarily stops a connection's delivery of incoming messages. Delivery can be restarted using the connection's start method. When the connection is stopped, delivery to all the connection's message consumers is inhibited: synchronous receives block, and messages are not delivered to message listeners.
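Putting that together, the reestablishment task could be scheduled like this (a sketch; connection, messageConsumer, myOldMessageListener, and the 30-second delay are assumptions):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

// inside onMessage(), after detecting the failure:
messageConsumer.setMessageListener(null); // stop receiving on this consumer
scheduler.schedule(() -> {
    try {
        connection.stop();                // runs on the scheduler thread, not the listener thread
        messageConsumer.setMessageListener(myOldMessageListener);
        connection.start();               // resume delivery
    } catch (javax.jms.JMSException e) {
        e.printStackTrace();
    }
}, 30, TimeUnit.SECONDS);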
To temporarily stop a connection's delivery of incoming messages, use the stop() method of the Connection interface: https://docs.oracle.com/javaee/7/api/javax/jms/Connection.html#stop--
Just don't call connection.stop() from the MessageListener, because according to the JMS spec you will get a deadlock or an exception. Instead, call connection.stop() from a different thread; you just need to synchronize the MessageListener with the thread that is going to suspend the connection via connection.stop().