2 Spring @JmsListeners on 1 queue - java

I have 2 @JmsListener instances on 1 queue, and I want to take a fixed number of messages from the queue and then hold the rest in pending for some time for bulk processing. I have added a condition to check the number of pending messages, but it fails because there are 2 listeners. Also, I have to add this condition inside the @JmsListener itself.
Please suggest how to take a fixed number of messages from the queue and hold the rest in pending, in order to achieve throttling.

I don't believe you will be able to use Spring's @JmsListener to do what you want, because you simply don't have the control of the consumer which you need in order to fetch multiple messages and process them all at once. A listener only gets one message at a time, and it is invoked as messages arrive, so you have no control over when and how the messages are fetched. This is in contrast to a normal JMS MessageConsumer, which you can use to manually invoke receive() as many times as you like.
Also, ActiveMQ will do its best to treat each consumer fairly and therefore distribute the same number of messages to each. Generally speaking, it is bad for one consumer to get all (or most) of the messages, as it can starve the other consumers and waste resources. That said, you could potentially use consumer priority if you really needed some consumers to receive more messages than others.
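For reference, here is a minimal sketch of the MessageConsumer approach, assuming ActiveMQ; the broker URL, queue name, batch size, timeout, and the processBatch() method are all placeholders:

import java.util.ArrayList;
import java.util.List;
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class BatchReceiver {
    public static void main(String[] args) throws Exception {
        Connection connection = new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("my.queue"));
        List<Message> batch = new ArrayList<>();
        while (batch.size() < 10) {             // take a fixed number of messages
            Message m = consumer.receive(1000); // wait up to 1 second for the next one
            if (m == null) break;               // queue is drained for now
            batch.add(m);
        }
        processBatch(batch); // bulk processing; unfetched messages stay pending on the queue
        connection.close();
    }

    private static void processBatch(List<Message> batch) { /* ... */ }
}

Note that with two application instances running this against the same queue, ActiveMQ will still split the messages between them, which is exactly the fairness behaviour described above.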

Related

How to control the number of messages emitted by Apache Kafka per specific time period?

I am new to Apache Kafka and I am trying to configure it so that it receives messages from the producer as fast as possible, but only delivers a configured number of messages to the consumer per specific time period.
In other words: how do I configure Apache Kafka to send only, say, 50 messages per 30 seconds to the consumer, regardless of the total number of messages, and then in the next 30 seconds deliver another 50 of the cached messages, and so on?
If you have control over the consumer
You could use the max.poll.records property to limit the maximum number of records per poll() call. Then you only need to ensure that poll() is called once every 30 seconds.
In general, you can take a look at all the available consumer configuration properties in the Kafka documentation.
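A hedged sketch of such a consumer configuration (the bootstrap server, group id, and deserializers are placeholder values):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "throttled-group");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50); // at most 50 records per poll()
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);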
If you cannot control the consumer
Then the only option is to write messages at the desired rate yourself - at most 50 messages per 30 seconds. There are no configuration options available for this; only your application logic can achieve it.
Update: how to ensure poll() is called once per interval
The simplest way is to:
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    // ... do your processing on the records
    Thread.sleep(30000); // wait 30 seconds before the next poll
}
You can make this more precise by measuring the time spent on processing (i.e., from just after the poll() call up to the Thread.sleep() call) and subtracting it from the sleep, so that each cycle takes no more than 30 seconds in total.
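A minimal sketch of that refinement, reusing the consumer from above:

long intervalMs = 30_000;
while (true) {
    long start = System.currentTimeMillis();
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    // ... process the records ...
    long elapsed = System.currentTimeMillis() - start;
    if (elapsed < intervalMs) {
        Thread.sleep(intervalMs - elapsed); // sleep only for the rest of the 30-second window
    }
}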
The problem is that the producer doesn't really send messages to the consumer. There is a persistent Kafka topic in between where the producer places its messages, and it really doesn't care whether there is any consumer on the other side. The same holds from the consumer's perspective: it just subscribes to data from the topic and doesn't care whether there is a producer on the other side. So thinking about back-pressure from the consumer down to the producer, with messaging middleware in between, is the wrong direction.
On the other hand, it is not clear how the consumed messages would impact your third-party service. The point is that a Kafka consumer is single-threaded per partition, so all the messages from one partition are (and must be) processed one by one in the same thread. This way you cannot send more than one message at a time to your service: the next one can only be sent once the previous one has been replied to. So think about it: how could your consumer application even exceed the rate limit?
However, if you have enough partitions and high concurrency on the consumer side, you really may end up with several parallel requests to your service from different threads. For this purpose I would suggest taking a look at the Rate Limiter pattern. This library provides a good implementation: https://resilience4j.readme.io/docs/ratelimiter. It is much better to keep messages in the topic than to try to limit the producer somehow.
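As a sketch of how that library could be used (the limits and the callThirdPartyService() method are hypothetical), one shared Resilience4j RateLimiter can guard the service calls from all consumer threads:

import io.github.resilience4j.ratelimiter.RateLimiter;
import io.github.resilience4j.ratelimiter.RateLimiterConfig;
import java.time.Duration;

RateLimiterConfig config = RateLimiterConfig.custom()
        .limitForPeriod(50)                         // 50 calls...
        .limitRefreshPeriod(Duration.ofSeconds(30)) // ...per 30-second window
        .timeoutDuration(Duration.ofSeconds(30))    // how long a thread may wait for a permit
        .build();
RateLimiter rateLimiter = RateLimiter.of("thirdPartyService", config);

// each consumer thread wraps its service call with the same limiter
Runnable guardedCall = RateLimiter.decorateRunnable(rateLimiter, () -> callThirdPartyService(record));
guardedCall.run();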
To conclude: even if the consumer side is not your project, it is better to discuss with that team how to improve their consumer. You did your part well: the producer sends messages to the Kafka topic. What else can you do on your side?
Interesting use case, and I am not sure why you need it, but there are two possible solutions:
1. To protect the cluster, you could use quotas, not for the number of messages but for bandwidth throughput: https://kafka.apache.org/documentation/#design_quotas
2. If you need an exact number of messages per time frame, you could put a buffering service (a rate limiter) in between, consuming from the original topic and republishing to the topic the end consumer reads. The rate limiter would consume the next 50 messages, then pause until the minute passes. This increases the space used on your cluster because of the duplicated messages. You also need to be careful about how you pause the consumer: heartbeats need to be sent, otherwise your consumer will rebalance continuously, i.e. you can't just sleep until the next minute. This is obviously for the case where you can't control the end consumer.
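On the "how to pause" point: a common approach (sketched below) is to pause() the assigned partitions and keep calling poll(), which returns no records while paused but keeps the consumer alive in the group:

consumer.pause(consumer.assignment()); // stop fetching, but stay in the consumer group
long resumeAt = System.currentTimeMillis() + 60_000;
while (System.currentTimeMillis() < resumeAt) {
    consumer.poll(Duration.ofMillis(500)); // empty results while paused; avoids a rebalance
}
consumer.resume(consumer.paused()); // start fetching again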

What is the best way to parallelize Pulsar consumer workload?

I want to use Pulsar as a message queue using shared consumers and the Java client. For the time being there are no strict ordering requirements, and also no partitions. The tasks triggered by the messages usually take up to 2 seconds. Is there a clear preference for which of the following two ways of splitting the work between threads in a single application instance should be picked:
1 consumer with receive queue size 100 and 10 threads in a threadpool calling consumer.receive() in a loop.
10 consumers with a receive queue size of 10 each, using the MessageListener interface and running the task inside the original MessageListener.received() call.
The best answer should be: just measure it :) That said, the first approach should be more efficient, since no additional broker communication overhead is involved.
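A minimal sketch of the first option with the Pulsar Java client (the service URL, topic, and subscription names are placeholders):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.PulsarClientException;
import org.apache.pulsar.client.api.SubscriptionType;

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();
Consumer<byte[]> consumer = client.newConsumer()
        .topic("my-topic")
        .subscriptionName("my-subscription")
        .subscriptionType(SubscriptionType.Shared)
        .receiverQueueSize(100)
        .subscribe();

ExecutorService pool = Executors.newFixedThreadPool(10);
for (int i = 0; i < 10; i++) {
    pool.submit(() -> {
        while (true) {
            try {
                Message<byte[]> msg = consumer.receive(); // blocks until a message is available
                try {
                    // ... run the task (up to ~2 seconds) ...
                    consumer.acknowledge(msg);
                } catch (Exception e) {
                    consumer.negativeAcknowledge(msg);    // let the message be redelivered
                }
            } catch (PulsarClientException e) {
                break; // consumer/client was closed
            }
        }
    });
}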

How to limit the active JMS listeners for many queues?

In my case, let's say that there are 50 JMS queues receiving different types of messages.
If I implement 50 JMS listeners (one for each queue), it works pretty well.
However, when all 50 queues had many pending messages, all 50 of my JMS listeners were working at the same time (i.e. 50 Java threads were running). This overloaded my server, which has very limited RAM and easily ran out of memory.
So I am wondering whether I can limit the number of active listeners, say to a maximum of 10 at a time. Sometimes listeners 01-10 work on queues 01-10, and sometimes listeners 11-20 work on queues 11-20, and so on.
And even if new messages keep coming into queues 01-10, listeners 01-10 should be able to sleep for a while and let the other listeners work.
How can I achieve this case?
Usually it's one listener per queue, so unless you're going to manage listeners being active/inactive, you'll get a running thread each time a message is delivered to the queue.
What you need is a way to manage the scaling, regardless of where the messages are delivered. A few ideas come to mind:
1) Does the message processing require some memory-intensive resource that could be shared? For example, database connections are often shared/pooled to avoid creating too many (though too-many-connections is often a server-side issue; maybe there is another resource you need to share).
2) Using a semaphore, limit the number of concurrent threads by having each thread acquire a permit from the semaphore before starting and return it at the end (very important!). Then, if a lot of concurrent messages come in, only n messages are processed concurrently and the others queue up in the listener handler (see the sketch after this list).
3) You could aggregate messages into new queues whose listeners do the processing. Listeners for queues 1-10 post their messages to newQueue1, queues 11-20 post to newQueue2, etc., and then you have listeners working on newQueue1, newQueue2, etc.
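A minimal sketch of the semaphore idea (the permit count and the process() method are placeholders), sharing one Semaphore across all 50 listeners:

import java.util.concurrent.Semaphore;
import javax.jms.Message;
import javax.jms.MessageListener;

public class ThrottledListener implements MessageListener {
    // shared across all listeners, so at most 10 messages are processed at once
    private static final Semaphore PERMITS = new Semaphore(10);

    @Override
    public void onMessage(Message message) {
        try {
            PERMITS.acquire(); // blocks until one of the 10 slots is free
            try {
                process(message);
            } finally {
                PERMITS.release(); // always return the permit
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void process(Message message) { /* ... */ }
}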

How to balance publishers' requests with RabbitMQ?

Suppose you have multiple producers and one consumer which wants to receive persistent messages from all available publishers.
Producers work at different speeds. Let's say that system A produces 10 requests/sec and system B 1 request/sec. So if you use a single queue you will process 10 messages from A, then 1 message from B.
But what if you want to balance the load and process one message from A, then one message from B, and so on? Consuming from multiple queues is not a good option, because we can't use wildcard binding in this case.
Update:
A queue per producer seems like the best approach. Producers don't know their speed, which changes constantly. With one queue per consumer I can subscribe to one topic and receive messages from all available publishers. But with a queue per producer I need to code the logic myself:
Get all available queues through the management plugin (AMQP doesn't allow listing queues).
Filter by queue name.
Implement round robin strategy.
Implement a notification mechanism to subscribe to new publishers, which can appear at any moment.
Remove a queue when its publisher has disappeared and the client has read all of its messages.
Well, it seems pretty easy, but I thought the broker could provide all of this functionality without any coding. In the case of one queue I just create one persistent queue, bind it to a topic exchange, and then start any number of publishers that send messages to the topic. This option works almost out of the box.
I know I'm late for the party, but still.
In Azure Service Bus terms this is called "partitioning", and it's based on a partition key. The best part is that in Azure SB the receiving client is not aware of the partitioning; it simply subscribes to the single queue.
In RabbitMQ there is a consistent-hash exchange plugin ("rabbitmq_consistent_hash_exchange"), but unfortunately it's not that convenient. The consumers must be explicitly configured to consume from specific queues. If you have ten queues, then you need to set up your consumers so that all ten are covered.
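For reference, a sketch of setting that up with the RabbitMQ Java client (the exchange and queue names and the weights are placeholders; the plugin must be enabled on the broker):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

Connection conn = new ConnectionFactory().newConnection();
Channel channel = conn.createChannel();
channel.exchangeDeclare("balanced", "x-consistent-hash");
channel.queueDeclare("shard-1", true, false, false, null);
channel.queueDeclare("shard-2", true, false, false, null);
// for this exchange type the binding key is a weight, not a pattern
channel.queueBind("shard-1", "balanced", "1");
channel.queueBind("shard-2", "balanced", "1");
// messages are spread over the shards by hashing the routing key;
// each shard still needs its own explicitly configured consumer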
Another two options:
Random Exchange Type
Sharding Plugin
Bear in mind that with the Sharding Plugin, even though it creates "one logical queue to consume from", you'll have to have as many subscribers as there are virtual queues, otherwise some of the queues will be left unconsumed.
You can use the Priority Queue Support and assign a priority according to the producer's speed, with the caveat that the priority must be set with caution (for example, if the consumer is slower than system B's producing rate, the consumer will only ever consume messages from B) and producers must be aware of their producing speed.
Another option to consider is creating three classes of queues according to producing speed: HIGH, MEDIUM, LOW. The three queues are bound to the exchange with a binding key set according to the producing speed.
The consumer then consumes messages from these 3 queues using a round-robin strategy, again with the caveat that producers must be aware of their producing speed.
But the best option may be a queue per producer, especially if producers' speeds are not stable and cannot be categorized. That way, producers do not need to know their producing speed at all.

jms call to message.acknowledge() after exiting onMessage()

I want to concurrently consume JMS messages from multiple queues. All the messages should go to the DB after long-running processing, and I cannot afford to lose them.
Question: Is it possible to save messages for future acknowledgement and call oldMessage.acknowledge() when another message is being processed?
My first guess is that this is impossible, since acknowledgement sits deep in the JMS processing machinery and I have to handle both the message and its acknowledgement within the onMessage(...) method.
My second guess is to split onMessage() so it runs concurrently and allows long-running processing of many messages at once. But this is not a good option, since I have to ensure that all messages are processed in order!
2nd question: is there any way to ensure the incoming order while processing concurrently?
1: JMS has a flag on Session for this: Session.CLIENT_ACKNOWLEDGE (see the javax.jms.Session documentation). I have never used it, but it seems to do what you want.
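A minimal sketch of client-acknowledge mode (the connection factory and queue name are placeholders). One caveat worth knowing: per the JMS spec, message.acknowledge() acknowledges all messages consumed so far in that session, not just the one message:

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

Connection connection = connectionFactory.createConnection(); // your provider's ConnectionFactory
Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
MessageConsumer consumer = session.createConsumer(session.createQueue("my.queue"));
connection.start();

Message message = consumer.receive();
// ... long-running processing, persist to the DB ...
message.acknowledge(); // acks this and every previously consumed message in this session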
2:
2.1: You have N consumers on the same queue: you can explore the Exclusive Consumer feature that some implementations support (ActiveMQ, for example).
2.2: You have 1 consumer per queue but you want to order all messages across all queues.
You can use the concept of an ordered SlackBuffer.
You can explore other possibilities, such as redirecting all messages to an output queue that maintains the order of messages and consuming only from that single output queue. The ordering and the redirection are handled by the MQ server. This is only a valid idea if you can control the MQ server.
I hope this helps.
