Analysing messages residing in a queue RabbitMQ - java

I have several Java Clients sending messages to a queue using RabbitMQ default exchange.
I want to be able to analyse the messages inside the queue (provided they have not been consumed yet), identify the sender of each message, and, if a client is sending too many messages, bind a special queue so that this client sends its messages to that queue instead.
I am new to RabbitMQ, so what I would like to know is:
Is it possible to append a client ID as a header in RabbitMQ?
How to create a special queue for clients with much workload?
Is it possible to analyse messages residing in a queue?
Any help would be much appreciated.

Is it possible to append a client ID as a header in RabbitMQ?
You can use the headers field on BasicProperties:
Map<String, Object> headers = new HashMap<String, Object>();
headers.put("myclientid", 22222);
channel.basicPublish(exchangeName, routingKey,
        new AMQP.BasicProperties.Builder()
                .headers(headers)
                .build(),
        messageBodyBytes);
How to create a special queue for clients with much workload?
There are no "special" queues; you can add more consumers to the same queue in case you have a high workload. See
https://www.rabbitmq.com/tutorials/tutorial-two-java.html
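As an illustration, a minimal worker sketch with the Java client is shown below; the queue name work_queue and the prefetch value are placeholder assumptions. Every client instance running this code becomes one more consumer on the same queue, and basicQos(1) gives fair dispatch so a busy worker is not flooded.
import com.rabbitmq.client.*;

public class Worker {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Channel channel = factory.newConnection().createChannel();

        channel.queueDeclare("work_queue", true, false, false, null); // assumed queue name
        channel.basicQos(1); // at most one unacknowledged message per worker (fair dispatch)

        channel.basicConsume("work_queue", false, (consumerTag, delivery) -> {
            // ... process the message ...
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        }, consumerTag -> { });
    }
}
Scaling the number of such workers is the usual way to deal with a client that produces a lot of messages, rather than a dedicated "special" queue.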
Is it possible to analyse messages residing in a queue?
You can use basic.get without acknowledging the messages; this way you can read messages without removing them from the queue.
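A rough sketch of that approach with the Java client, assuming a queue named my_queue and the myclientid header from the example above; because the fetched messages are never acknowledged, they go back to the queue as soon as the channel/connection is closed.
import com.rabbitmq.client.*;
import java.nio.charset.StandardCharsets;
import java.util.Map;

public class PeekQueue {
    public static void main(String[] args) throws Exception {
        try (Connection conn = new ConnectionFactory().newConnection()) {
            Channel channel = conn.createChannel();
            GetResponse response;
            while ((response = channel.basicGet("my_queue", false)) != null) { // autoAck = false
                Map<String, Object> headers = response.getProps().getHeaders();
                Object clientId = (headers == null) ? null : headers.get("myclientid");
                System.out.println("sender=" + clientId
                        + " body=" + new String(response.getBody(), StandardCharsets.UTF_8));
            }
            // nothing was acked, so closing the connection returns every message to the queue
        }
    }
}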

Related

Set ReplyToQMgr and ReplyToQ in Apache Camel route to receive COA using IBM MQ

I am using Apache Camel and IBM MQ to send messages. I need to receive COA when a message gets delivered to a remote queue. The general picture looks like this:
When the message reaches msg_q2 queue, I should receive the COA back. So, the problem is that I am not able to set the QMGR_REM as reply-to queue manager, which is supposed to produce COA.
https://www.ibm.com/docs/en/ibm-mq/8.0?topic=messages-reply-queue-queue-manager
I tried setting JMS_IBM_MQMD_xxx headers, but for some reason those headers either get omitted or ignored (by Camel?), and the message fails to be put on the queue with the reason that the reply-to queue is not specified. Also, I tried setting JMSReplyTo header as queue://reply-to-qmgr/reply-to-q. In this case the queue:// part gets removed, and the rest is simply set as a reply-to queue name.
I am relatively new to Apache Camel and IBM MQ, so any input would be much appreciated. Thank you in advance!
In your application, just provide the name of your ReplyToQ as replyToQ1 and leave the ReplyToQMgr field blank. The queue manager will fill it in with the local queue manager name QMGR_LOC for you.
And on QMGR_REM do one of the following:
If your transmission queue for the channel from QMGR_REM to QMGR_LOC is named exactly QMGR_LOC, you have nothing further to do. When QMGR_REM comes to put the COA onto queue replyToQ1 on queue manager QMGR_LOC, it will resolve it to the transmission queue that has the name QMGR_LOC and the channel will deliver it.
If your transmission queue for the channel from QMGR_REM to QMGR_LOC is not named exactly QMGR_LOC, then make the following definition on QMGR_REM:
DEFINE QREMOTE(QMGR_LOC) RNAME(' ') RQMNAME(QMGR_LOC) +
XMITQ(your-transmission-queue-going-to-QMGR_LOC)
So, basically by trial and error I figured out that adding the mdWriteEnabled=true property to the CamelJmsDestinationName Camel header made it work as I need.
The code is something like this:
route.setHeader("CamelJmsDestinationName", "queue:///msg_q1?targetClient=1&mdWriteEnabled=true")
Then I set the reply-to queue manager via the MQMD property
route.setHeader("JMS_IBM_MQMD_ReplyToQMgr", "QMGR_REM")
and the reply-to queue
route.setHeader("JMSReplyTo", "replyToQ2")

How to delete a message from Queue in RabbitMQ

I am using Rabbit MQ to replicate what Jenkins does.
The only issue I am facing is this: let's say there are 10 messages in the queue, and some of them are duplicates sitting in an unacknowledged state.
I need to delete those messages from the queue; how do I achieve this?
My RabbitMQ configuration is as follows, where each queue has only one consumer. So if I have 10 messages, all of them get processed through the same consumer's thread.
Queue queue = new Queue(sfdcConnectionDetails.getGitRepoId() + "_" + sfdcConnectionDetails.getBranchConnectedTo(), true);
rabbitMqSenderConfig.amqpAdmin().declareQueue(queue);
rabbitMqSenderConfig.amqpAdmin().declareBinding(BindingBuilder.bind(queue).to(new DirectExchange(byRepositoryRepositoryId.getRepository().getRepositoryId())).withQueueName());
RabbitMqConsumer container = new RabbitMqConsumer();
container.setConnectionFactory(rabbitMqSenderConfig.connectionFactory());
container.setQueueNames(queue.getName());
container.setConcurrentConsumers(1);
container.setMessageListener(new MessageListenerAdapter(new ConsumerHandler(****, ***), new Jackson2JsonMessageConverter()));
container.startConsumers();
You can use a plugin (e.g. this one) for deduplicating messages on the RabbitMQ side.
Use a cache on your consumer to detect whether the same message was processed recently.
As already suggested by @ekiryuhin, one approach you could take is to assign a request_id, tag the payload with it before producing the message to RabbitMQ, and cache the request_id on your consumer's end. If the request_id is already present, ignore the payload and delete it.
This request_id might work as a deduplication id for your payloads.
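A rough sketch of that consumer-side cache with the RabbitMQ Java client is below; it assumes the producer stamps the request_id into the standard messageId property and that the queue is named my_queue (both assumptions). With a single consumer thread, as in your configuration, a plain bounded map is enough.
import com.rabbitmq.client.*;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class DedupConsumer {
    public static void main(String[] args) throws Exception {
        // bounded "recently seen" cache: oldest ids are evicted beyond 10 000 entries
        Set<String> seen = Collections.newSetFromMap(new LinkedHashMap<String, Boolean>() {
            protected boolean removeEldestEntry(Map.Entry<String, Boolean> eldest) {
                return size() > 10_000;
            }
        });

        Channel channel = new ConnectionFactory().newConnection().createChannel();
        channel.basicConsume("my_queue", false, (consumerTag, delivery) -> {
            String requestId = delivery.getProperties().getMessageId();
            if (requestId != null && !seen.add(requestId)) {
                // duplicate: ack it so it is removed from the queue, but skip processing
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
                return;
            }
            // ... process the payload ...
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        }, consumerTag -> { });
    }
}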

Making sure a message published on a topic exchange is received by at least one consumer

TLDR; In the context of a topic exchange and queues created on the fly by the consumers, how to have a message redelivered / the producer notified when no consumer consumes the message?
I have the following components:
a main service, producing files. Each file has a certain category (e.g. pictures.profile, pictures.gallery)
a set of workers, consuming files and producing a textual output from them (e.g. the size of the file)
I currently have a single RabbitMQ topic exchange.
The producer sends messages to the exchange with routing_key = file_category.
Each consumer creates a queue and binds the exchange to this queue for a set of routing keys (e.g. pictures.* and videos.trending).
When a consumer has processed a file, it pushes the result in a processing_results queue.
Now - this works properly, but it still has a major issue. Currently, if the publisher sends a message with a routing key that no consumer is bound to, the message will be lost. This is because even if the queue created by the consumers is durable, it is destroyed as soon as the consumer disconnects since it is unique to this consumer.
Consumer code (python):
channel.exchange_declare(exchange=exchange_name, type='topic', durable=True)
result = channel.queue_declare(exclusive=True, durable=True)
queue_name = result.method.queue
topics = ["pictures.*", "videos.trending"]
for topic in topics:
    channel.queue_bind(exchange=exchange_name, queue=queue_name, routing_key=topic)
channel.basic_consume(my_handler, queue=queue_name)
channel.start_consuming()
Losing a message in this condition is not acceptable in my use case.
Attempted solution
However, "loosing" a message becomes acceptable if the producer is notified that no consumer received the message (in this case it can just resend it later). I figured out the mandatory field could help, since the specification of AMQP states:
This flag tells the server how to react if the message cannot be routed to a queue. If this flag is set, the server will return an unroutable message with a Return method.
This is indeed working - in the producer, I am able to register a ReturnListener :
rabbitMq.confirmSelect();
rabbitMq.addReturnListener((int replyCode, String replyText, String exchange, String routingKey, AMQP.BasicProperties properties, byte[] body) -> {
    log.info("A message was returned by the broker");
});
rabbitMq.basicPublish(exchangeName, "pictures.profile", true /* mandatory */, MessageProperties.PERSISTENT_TEXT_PLAIN, messageBytes);
This will as expected print A message was returned by the broker if a message is sent with a routing key no consumer is bound to.
Now, I also want to know when the message was correctly received by a consumer. So I tried registering a ConfirmListener as well:
rabbitMq.addConfirmListener(new ConfirmListener() {
    public void handleAck(long deliveryTag, boolean multiple) throws IOException {
        log.info("ACK message {}, multiple = {}", deliveryTag, multiple);
    }
    public void handleNack(long deliveryTag, boolean multiple) throws IOException {
        log.info("NACK message {}, multiple = {}", deliveryTag, multiple);
    }
});
The issue here is that the ACK is sent by the broker, not by the consumer itself. So when the producer sends a message with a routing key K:
If a consumer is bound to this routing key, the broker just sends an ACK
Otherwise, the broker sends a basic.return followed by an ACK
Cf the docs:
For unroutable messages, the broker will issue a confirm once the exchange verifies a message won't route to any queue (returns an empty list of queues). If the message is also published as mandatory, the basic.return is sent to the client before basic.ack. The same is true for negative acknowledgements (basic.nack).
So while my problem is theoretically solvable using this, it would make the logic of knowing if a message was correctly consumed very complicated (especially in the context of multi threading, persistence in a database, etc.):
send a message
on receive ACK:
    if no basic.return was received for this message
        the message was correctly consumed
    else
        the message wasn't correctly consumed
on receive basic.return:
    the message wasn't correctly consumed
Possible other solutions
Have a queue for each file category, i.e. the queues pictures_profile, pictures_gallery, etc. Not good since it removes a lot of flexibility for the consumers
Have a "response timeout" logic in the producer. The producer sends a message. It expects an "answer" for this message in the processing_results queue. A solution would be to resend the message if it hasn't been answered to after X seconds. I don't like it though, it would create some additional tricky logic in the producer.
Produce the messages with a TTL of 0, and have the producer listen on a dead-letter exchange. This is the official suggested solution to replace the 'immediate' flag removed in RabbitMQ 3.0 (see paragraph Removal of "immediate" flag). According to the docs of the dead letter exchanges, a dead letter exchange can only be configured per-queue. So it wouldn't work here
[edit] A last solution I see is to have every consumer create a durable queue that isn't destroyed when it disconnects, and have it listen on that queue. Example: consumer1 creates queue-consumer-1, which is bound to the messages of myExchange having a routing key abcd. The issue I foresee is that it implies finding a unique identifier for every consumer application instance (e.g. the hostname of the machine it runs on).
I would love to have some inputs on that - thanks!
Related to:
RabbitMQ: persistent message with Topic exchange (not applicable here since queues are created "on the fly")
Make sure the broker holds messages until at least one consumer gets it
RabbitMQ Topic Exchange with persisted queue
[edit] Solution
I ended up implementing something that uses a basic.return, as mentioned earlier. It is actually not so tricky to implement: you just have to make sure that the method producing the messages and the method handling the basic returns are synchronized (or share a lock if they are not in the same class), otherwise you can end up with interleaved execution flows that will mess up your business logic.
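As a rough illustration of that synchronization (not the actual code from my project): correlate each basic.return with a published message through a messageId property, and guard both the publish and the return handler with the same lock. All names below are illustrative.
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

public class TrackedPublisher {
    private final Channel channel;
    private final Set<String> returnedIds = new HashSet<>();

    public TrackedPublisher(Channel channel) throws Exception {
        this.channel = channel;
        channel.addReturnListener((replyCode, replyText, exchange, routingKey, properties, body) -> {
            synchronized (TrackedPublisher.this) {
                returnedIds.add(properties.getMessageId());
            }
        });
    }

    // publishing and return handling share the same lock, so a basic.return
    // cannot interleave with the bookkeeping done around basicPublish
    public synchronized String publish(String exchange, String routingKey, byte[] body) throws Exception {
        String messageId = UUID.randomUUID().toString();
        AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                .messageId(messageId)
                .build();
        channel.basicPublish(exchange, routingKey, true /* mandatory */, props, body);
        return messageId;
    }

    public synchronized boolean wasReturned(String messageId) {
        return returnedIds.remove(messageId);
    }
}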
I believe that an alternate exchange would be the best fit for the part of your use case regarding the identification of unrouted messages.
Whenever an exchange with a configured AE cannot route a message to any queue, it publishes the message to the specified AE instead.
Basically upon creation of the "main" exchange, you configure an alternate exchange for it.
For the referenced alternate exchange, I tend to go with a fanout, then create a queue (notroutedq) bound to it.
This means any message that cannot be routed to at least one of the queues bound to your "main" exchange will end up in the notroutedq.
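A rough sketch of this setup with the Java client (the exchange and queue names below are just placeholders matching the description):
import com.rabbitmq.client.*;
import java.util.HashMap;
import java.util.Map;

public class AlternateExchangeSetup {
    public static void main(String[] args) throws Exception {
        Channel ch = new ConnectionFactory().newConnection().createChannel();

        // fanout AE plus a queue that collects everything the main exchange cannot route
        ch.exchangeDeclare("myExchange.ae", "fanout", true);
        ch.queueDeclare("notroutedq", true, false, false, null);
        ch.queueBind("notroutedq", "myExchange.ae", "");

        // main topic exchange pointing at the AE via the alternate-exchange argument
        Map<String, Object> exchangeArgs = new HashMap<>();
        exchangeArgs.put("alternate-exchange", "myExchange.ae");
        ch.exchangeDeclare("myExchange", "topic", true, false, exchangeArgs);
    }
}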
Now regarding your statement:
because even if the queue created by the consumers is durable, it is destroyed as soon as the consumer disconnects since it is unique to this consumer.
Seems that you have configured your queues with auto-delete set to true.
If so, in case of a disconnect, as you stated, the queue is destroyed and the messages still present on it are lost; that case is not covered by the alternate exchange configuration.
It's not clear from your use case description whether you'd expect a message to end up in more than one queue in some cases; it seemed more like one queue per type of processing expected (while keeping the grouping flexible). If the queue split is indeed related to the type of processing, I do not see the benefit of setting the queue to auto-delete, except maybe not having to do any cleanup maintenance when you want to change the bindings.
Assuming you can go with durable queues, a dead letter exchange (I would again go with a fanout) with a binding to a dlq would cover the missing cases:
not routed: covered by the alternate exchange
correct processing: already handled by your processing_results queue
problematic processing, or processing that takes too long: covered by the dead letter exchange, in which case the additional headers added upon dead-lettering the message can even help identify the type of action to take

RabbitMQ RPC tutorial query

I was going through the tutorial shared by RabbitMQ here
I am assuming that the client code below
while (true)
{
    var ea = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
    if (ea.BasicProperties.CorrelationId == corrId)
    {
        return Encoding.UTF8.GetString(ea.Body);
    }
}
would receive all messages on the queue and will unnecessarily iterate through messages not intended for it. Is there any way to avoid that, i.e. can we modify the client so it only receives the messages intended for it?
The basic flow I intend to achieve through RabbitMQ is a request-response pattern: a request is received by a web service, which puts the data on a queue; the data object has a unique reference number. The data is then received by an asynchronous TCP client, which sends it over a TCP/IP connection based on the message it received.
On receiving a reply from the asynchronous TCP/IP channel, the channel parses the data and responds back on the queue with the corresponding request reference number.
The RPC approach is well suited for this, but the client code shared has the shortcoming above; I would appreciate feedback on it.
Actually I didn't fully understand your aim, but when you create an RPC model you have to create a "reply queue"; this queue is bound only to the client.
This means that you will receive back only that client's messages, not all messages.
Since the RabbitMQ RPC model is asynchronous, you can execute more than one request without waiting for the responses, and the replies may not arrive in the same order they were published.
The correlation id is necessary to map your client requests to the replies, so there are no "unnecessary" messages.
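To illustrate in Java (the snippet in the question is C#, but the idea is identical): the client declares a server-named exclusive queue as its reply queue, so it only ever sees its own replies, and uses the correlation id to match a reply to a request. The rpc_queue name and the request body below are placeholders, and the server is assumed to copy the correlationId into its reply.
import com.rabbitmq.client.*;
import java.nio.charset.StandardCharsets;
import java.util.UUID;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class RpcClientSketch {
    public static void main(String[] args) throws Exception {
        Channel ch = new ConnectionFactory().newConnection().createChannel();

        String replyQueue = ch.queueDeclare().getQueue(); // server-named, exclusive, auto-delete
        String corrId = UUID.randomUUID().toString();

        AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                .correlationId(corrId)
                .replyTo(replyQueue)
                .build();
        ch.basicPublish("", "rpc_queue", props, "request".getBytes(StandardCharsets.UTF_8));

        BlockingQueue<String> response = new ArrayBlockingQueue<>(1);
        ch.basicConsume(replyQueue, true, (consumerTag, delivery) -> {
            if (corrId.equals(delivery.getProperties().getCorrelationId())) {
                response.offer(new String(delivery.getBody(), StandardCharsets.UTF_8));
            }
        }, consumerTag -> { });

        System.out.println("Reply: " + response.take());
    }
}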
hope it helps

RabbitMQ - Get total count of messages enqueued

I have a Java client which monitors a RabbitMQ queue. I am able to get the count of messages currently in the queue with this code:
@Resource
RabbitAdmin rabbitAdmin;
..........
DeclareOk declareOk = rabbitAdmin.getRabbitTemplate().execute(new ChannelCallback<DeclareOk>() {
    public DeclareOk doInRabbit(Channel channel) throws Exception {
        return channel.queueDeclarePassive("test.pending");
    }
});
return declareOk.getMessageCount();
I want to get some additional details, such as:
The message bodies of the currently enqueued items.
The total number of messages that have been enqueued since the queue was created.
Is there any way to retrieve this data in the Java client?
With the AMQP protocol (including RabbitMQ's implementation) you can't get such info with a 100% guarantee.
The closest thing to a message count is the count returned with queue.declare-ok (AMQP.Queue.DeclareOk in the Java AMQP client library).
While the message count you receive with queue.declare-ok may match the exact number of messages enqueued, you can't rely on it, as it doesn't count messages that are awaiting acknowledgement or messages published within a transaction that hasn't been committed yet.
It really depends on what kind of precision you need.
As for the bodies of the enqueued messages, you may want to manually extract all the messages in the queue, view their bodies and put them back on the queue. This is the only way to do what you want.
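A rough sketch of that manual inspection with the Java client, reusing the test.pending queue from the question: fetch each message with basic.get without acking, look at the body, then requeue everything with a single multiple basic.nack (the requeued messages will be flagged as redelivered).
import com.rabbitmq.client.*;
import java.nio.charset.StandardCharsets;

public class InspectQueue {
    public static void main(String[] args) throws Exception {
        try (Connection conn = new ConnectionFactory().newConnection()) {
            Channel channel = conn.createChannel();
            long lastTag = -1;
            GetResponse response;
            // unacked messages are not handed out again on this channel,
            // so the loop ends once the queue has been read through
            while ((response = channel.basicGet("test.pending", false)) != null) {
                lastTag = response.getEnvelope().getDeliveryTag();
                System.out.println(new String(response.getBody(), StandardCharsets.UTF_8));
            }
            if (lastTag >= 0) {
                channel.basicNack(lastTag, true, true); // requeue everything we fetched
            }
        }
    }
}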
You can get some information about message counts with the Management Plugin, the RabbitMQ Management HTTP API and the rabbitmqctl util (see list_queues, list_channels).
You can't get the total count of messages published since the queue was created, and I don't think anybody has implemented such a statistic, since it's of little use (FYI, with a message flow averaging 10k per second you would not even overflow a uint64 in a few thousand years).
AMQP.Queue.DeclareOk dok = channel.queueDeclare(QUEUE_NAME, true, false, false, queueArgs);
dok.getMessageCount();
To access queue details via the HTTP API:
http://public-domain-name:15672/api/queues/%2f/queue_name
To access queue details via a command from a localhost CLI prompt:
curl -i -u guest_uname:guest_password http://localhost:15672/api/queues/%2f/queue_name
where %2f is the default vhost "/".
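A rough sketch of calling that HTTP API endpoint from Java (using the Java 11+ HttpClient); host, credentials and queue name are placeholders, and the returned JSON contains fields such as "messages" that you can parse with whatever JSON library you already use.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class QueueStats {
    public static void main(String[] args) throws Exception {
        String auth = Base64.getEncoder().encodeToString("guest:guest".getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:15672/api/queues/%2f/queue_name"))
                .header("Authorization", "Basic " + auth)
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // raw JSON with the queue details
    }
}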
