How to consume multiple messages in a RabbitMQ consumer - Java

How can I consume messages in bulk (something like prefetchCount = 10) in one shot with the Spring framework for RabbitMQ?
Note - I am implementing a Consumer, not a Listener.
As of now I'm using
Message message = amqpTemplate.receive("Queue_Name");
but the problem with this approach is that it fetches only one message at a time.

I'm curious why you want to process 10 at a time. Typically messages are discrete and processed individually; that's why RabbitMQ will only pass a single message to a given consumer instance at a time. A prefetchCount of 10 will invoke the consumer 10 times, with one message each. If you have to process 10 messages at once for some reason, you would need to receive the messages individually, acknowledge each one, and store them in a collection as they arrive. Then, when your count reaches 10, start processing them.
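For illustration, a minimal sketch of that receive-and-collect approach, assuming a Spring AmqpTemplate bean, a queue named "Queue_Name", and a hypothetical processBatch method:

import java.util.ArrayList;
import java.util.List;
import org.springframework.amqp.core.Message;

List<Message> batch = new ArrayList<>(10);
while (batch.size() < 10) {
    Message message = amqpTemplate.receive("Queue_Name"); // returns null if the queue is empty
    if (message == null) {
        break; // nothing left right now; process the partial batch
    }
    batch.add(message);
}
if (!batch.isEmpty()) {
    processBatch(batch); // hypothetical handler for the collected batch
}

Note that with default settings amqpTemplate.receive acknowledges each message as it is fetched, so a crash mid-batch could lose the messages already collected.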

Related

2 Spring @JmsListeners on 1 queue

I have 2 @JmsListener instances on 1 queue, and I want to take a fixed number of messages from the queue and then hold the rest as pending for some time, for bulk processing. I have added a condition to check the number of pending messages, but it fails because of the 2 listeners. Also, I have to add this condition only inside the @JmsListener.
Please suggest how to add the logic of taking a fixed number of messages from the queue and holding the rest as pending, to achieve throttling.
I don't believe you will be able to use Spring's @JmsListener to do what you want, because you simply don't have the control of the consumer which you need in order to fetch multiple messages and then process them all at once. A listener only gets one message at a time, and it is invoked as messages arrive, so you have no control over when and how the messages are fetched. Contrast this with a normal JMS MessageConsumer, which you can use to manually invoke receive() as many times as you like.
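For illustration, a rough sketch of that manual approach with a plain JMS MessageConsumer; the connection factory, queue name, and batch size here are assumptions:

import java.util.ArrayList;
import java.util.List;
import javax.jms.*;

try (Connection connection = connectionFactory.createConnection()) {
    connection.start();
    Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
    MessageConsumer consumer = session.createConsumer(session.createQueue("my.queue"));

    List<Message> batch = new ArrayList<>();
    while (batch.size() < 100) {
        Message m = consumer.receive(1000); // wait up to 1s for the next message
        if (m == null) break;               // queue drained for now
        batch.add(m);
    }
    // process the whole batch, then acknowledge; in CLIENT_ACKNOWLEDGE mode,
    // acknowledging the last message acknowledges everything before it
    if (!batch.isEmpty()) {
        batch.get(batch.size() - 1).acknowledge();
    }
}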
Also, ActiveMQ will do its best to treat each consumer fairly and therefore distribute the same number of messages to each. Generally speaking, it is bad for one consumer to get all (or most) of the messages, as it can starve the other consumers and waste resources. That said, you could potentially use consumer priority if you really need some consumers to get more messages than others.

How to delay retry by 4 hours on SQS?

TL;DR: how to mimic RabbitMQ's scheduling functionality while keeping the consumer:
stateless
free from managing scheduled messages
free from useless retries of scheduled messages between receiving the message and finally consuming it at the correct scheduled time
I have a single SQS queue with default properties on creation. The average time a consumer takes to process a message is 1~2s. But a few messages need to be processed twice, with a 4h window between attempts. These messages are called B, and the others are called A.
Suppose my queue holds the following messages: A1, A2, B1, A3, B2 (5 messages, max 10s to consume them all) at the start of this table:
time     | what should happen
---------|-------------------
now      | consumer connected to the queue
now+10s  | all As consumed successfully and deleted from the queue;
         | Bs failed their first try and are now waiting 4h for their retry
between  | nothing happens, since no new messages arrive and the old ones are waiting
now+4h4s | Bs consumed successfully on the second try and, therefore, deleted from the queue
I have a Spring application where I can throw exceptions when I find a type B message. For simplicity and scalability, I want to have one single thread consuming messages, taking 1~2s to consume each message.
This means I cannot hang the message processing as this answer suggested. SQS' Delivery delay also doesn't help, since it postpones only messages arriving at the queue, not retries. If possible, I would like to keep using a long-polling @JmsListener and avoid keeping any state in my application's memory.
I would write a small AWS Lambda function that gets invoked every ~minute. That function would get a message off the (hopefully FIFO-type) SQS queue and check the time it was added. If it was added >= 4 hours ago, the function would delete it from the incoming queue and add it to the 4-hour-delayed queue, which your application could listen to. If it moved a message, it would continue to do so until the next message is not yet 4 hours old. Increase/decrease the frequency of the Lambda to increase the granularity of how 'tight' to 4 hours you are, at the added expense of running the Lambda more often.
Here is a quick link to an example of an AWS Lambda function using SQS: https://docs.aws.amazon.com/lambda/latest/dg/with-sqs-example.html
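A hedged Java sketch of that mover logic, assuming AWS SDK for Java v1, an AmazonSQS client named sqs, and that the two queue URLs are resolved elsewhere; SentTimestamp is a standard SQS attribute:

import java.util.concurrent.TimeUnit;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

ReceiveMessageRequest request = new ReceiveMessageRequest(incomingQueueUrl)
        .withAttributeNames("SentTimestamp")
        .withMaxNumberOfMessages(10);
for (Message m : sqs.receiveMessage(request).getMessages()) {
    long sentAt = Long.parseLong(m.getAttributes().get("SentTimestamp"));
    if (System.currentTimeMillis() - sentAt >= TimeUnit.HOURS.toMillis(4)) {
        // old enough: move it to the queue the application listens to
        sqs.sendMessage(delayedQueueUrl, m.getBody());
        sqs.deleteMessage(incomingQueueUrl, m.getReceiptHandle());
    }
}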
You could send message B to a Step Functions state machine and put a wait state in to wait for 4 hours before sending it to the queue. The state machine would keep the state for you, and you can send messages directly to SQS from Step Functions so you don't need to write any code.
Since I was using a JmsListener with setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE), I decided to run this at the end of the consumer for re-processable messages:
import com.amazonaws.services.sqs.model.SendMessageRequest;

myAmazonSqsInstance.sendMessage(
        new SendMessageRequest()
                .withQueueUrl(queueUrl) // the full SQS queue URL, not just the queue name
                .withMessageBody(myMessageWithText)
                .withDelaySeconds(900) // 900s = 15min, the maximum SQS delay
);
This way the original message is consumed successfully, but a new message with the same body is produced on the queue. That message will be consumed in 15min and, due to my business logic, fail again. There will be 16 failures (16 * 15min = 4h) until it is finally consumed without producing a new message.
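To make the 16-step countdown explicit, one hedged option (not part of the original answer) is to carry the attempt number in an SQS message attribute. The attribute name, the queueUrl variable, and the processTypeB method are assumptions, and reading the attribute this way assumes the message was also received through the SDK with message attributes requested:

import com.amazonaws.services.sqs.model.MessageAttributeValue;
import com.amazonaws.services.sqs.model.SendMessageRequest;

int attempt = message.getMessageAttributes().containsKey("retryCount")
        ? Integer.parseInt(message.getMessageAttributes().get("retryCount").getStringValue())
        : 0;
if (attempt < 16) { // 16 * 15min = 4h has not elapsed yet: re-enqueue with another delay
    myAmazonSqsInstance.sendMessage(new SendMessageRequest()
            .withQueueUrl(queueUrl)
            .withMessageBody(message.getBody())
            .withDelaySeconds(900)
            .addMessageAttributesEntry("retryCount", new MessageAttributeValue()
                    .withDataType("Number")
                    .withStringValue(String.valueOf(attempt + 1))));
} else {
    processTypeB(message); // 4 hours have passed; handle the message for real
}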
Although this is not exactly what I asked for, and it is similar to the other answers (only the tech stack is different), I decided to write it down here to make a Java solution available.

How to control the number of messages emitted by Apache Kafka per specific time interval?

I am new to Apache Kafka and I am trying to configure it so that it receives messages from the producer as fast as possible, but only sends a configured number of messages to the consumer per specific time interval.
In other words: how do I configure Apache Kafka to send only, for example, 50 messages per 30 seconds
to the consumer regardless of the number of messages, and in the next 30 seconds take another 50 of the cached messages, and so on?
If you have control over the consumer
You could use the max.poll.records property to limit the maximum number of records per poll() call, as shown below. Then you only need to ensure that poll() is called once every 30 seconds.
In general you can take a look at all available configuration properties here.
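For example, a minimal configuration sketch for such a consumer; the broker address, group id, and topic name are assumptions:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "throttled-consumer");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50); // at most 50 records per poll()

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("my-topic"));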
If you cannot control the consumer
Then the only option for you is to produce messages at the desired rate - at most 50 messages per 30 seconds. There are no configuration options available for this; only your application logic can achieve it.
updated - how to ensure poll() is called once per interval
The simplest way is to:
import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecords;

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
    // ... do your stuff with the records
    Thread.sleep(30000); // wait 30 seconds before the next poll
}
You can make things more precise by measuring the processing time (i.e., from right after the poll() call up to the Thread.sleep()) and subtracting it from the sleep, so that the whole cycle never waits more than 30 seconds.
The problem is that the producer doesn't really send messages to the consumer. There is a persistent Kafka topic in between, where the producer places its messages, and it really doesn't care whether there is any consumer on the other side. The same holds from the consumer's perspective: it just subscribes to data from the topic and doesn't care whether there is some producer on the other side. So, thinking about back-pressure from the consumer down to the producer, where there is messaging middleware in between, is the wrong direction.
On the other hand, it is not clear how those consumed messages impact your third-party service. The point is that a Kafka consumer is single-threaded per partition, so all the messages from one partition are going to be (indeed, must be) processed one by one in the same thread. This way you cannot send more than one message to your service at a time: the next one can be sent only when the previous one has been replied to. So think about it: how is it even possible for your consumer application to exceed the rate limit?
However, if you have enough partitions and high concurrency on the consumer side, you really may end up with several parallel requests to your service from different threads. For this purpose I would suggest taking a look at the Rate Limiter pattern. This library provides a good implementation: https://resilience4j.readme.io/docs/ratelimiter. It is much better to keep messages in the topic than to try to limit the producer somehow.
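For illustration, a minimal sketch of funneling the downstream call through that library's RateLimiter; the limits chosen here and the callThirdPartyService method are assumptions:

import java.time.Duration;
import io.github.resilience4j.ratelimiter.RateLimiter;
import io.github.resilience4j.ratelimiter.RateLimiterConfig;

RateLimiterConfig config = RateLimiterConfig.custom()
        .limitForPeriod(50)                         // at most 50 calls...
        .limitRefreshPeriod(Duration.ofSeconds(30)) // ...per 30-second window
        .timeoutDuration(Duration.ofSeconds(30))    // callers block up to 30s for a permit
        .build();
RateLimiter rateLimiter = RateLimiter.of("third-party-service", config);

// Every consumer thread wraps its downstream call in the shared limiter.
Runnable limitedCall = RateLimiter.decorateRunnable(rateLimiter,
        () -> callThirdPartyService(message));
limitedCall.run();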
To conclude: even if the consumer side is not your project, it is better to discuss with that team how to improve their consumer. You did your part well: the producer sends messages to the Kafka topic. What else can you do here?
Interesting use case, and I am not sure why you need it, but there are two possible solutions:
1. To protect the cluster, you could use quotas, not for the number of messages but for bandwidth throughput: https://kafka.apache.org/documentation/#design_quotas
2. If you need an exact number of messages per time frame, you could put a buffering service (rate limiter) in between, where you consume, pause, and publish messages to the consumed topic. The rate limiter would consume the next 50, then pause until the minute passes. This increases the space used on your cluster because of duplicated messages. You also need to be careful about how you pause the consumer: heartbeats need to be sent, or you will rebalance your consumer group continuously, i.e. you can't just sleep until the next minute. This is, obviously, if you can't control the end consumer.

Can multiple producers send messages to a queue at the same time from different Java applications?

I have 2 applications, A and B, trying to send messages from both to one queue.
I placed a while loop in both, which sends messages to the queue.
If I start application A and its while loop, it starts sending messages to the queue and the consumer consumes the messages sent from A. Now, if at the same time I start the while loop in application B, it doesn't publish messages to the queue, as the consumer doesn't consume any message sent from B.
So can someone clear up the doubt: can messages be sent at the same time from multiple producers to a single queue or not?
PS - using an IBM queue and a single consumer.
Yes, we can have multiple producers for a single queue.
Multiple producers can also publish messages at the same time.
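For what it's worth, a minimal JMS sketch of the producing side; each application runs the same kind of code against the same queue (the connection factory and queue name here are assumptions):

import javax.jms.*;

try (Connection connection = connectionFactory.createConnection()) {
    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    MessageProducer producer = session.createProducer(session.createQueue("DEV.QUEUE.1"));
    for (int i = 0; i < 100; i++) {
        // Both A and B can run this loop concurrently; the broker serializes the deliveries.
        producer.send(session.createTextMessage("message " + i + " from application A"));
    }
}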

Reading n messages from JMS Queue at a time

I am looking for a solution to the problem described below.
There is a JMS queue which stores messages; let's assume the messages are m1, m2, ... m10k.
When a single consumer starts consuming messages from the queue, it needs to:
read 1000 messages at a time (we are using a QueueBrowser)
pass m1 to m1k
pass m1001 to m2k, and so on.
Looking for suggestions on how this can be achieved.
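One possible direction, sketched with a QueueBrowser (note that a browser only inspects messages; actually consuming them still requires a MessageConsumer). The connection setup is assumed, and the page size follows the question:

import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import javax.jms.*;

Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
Queue queue = session.createQueue("my.queue");
QueueBrowser browser = session.createBrowser(queue);

Enumeration<?> messages = browser.getEnumeration();
List<Message> page = new ArrayList<>(1000);
while (messages.hasMoreElements() && page.size() < 1000) {
    page.add((Message) messages.nextElement());
}
// hand the page (m1..m1000) to the processing step, then browse again for the next page
browser.close();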
