I have an application which consumes a message from RabbitMQ using Spring AMQP.
I have to implement threads in the consumer to handle the requests: if a thread in my pool is available, I consume the message and process it on that thread.
My question is what happens when all the threads are busy and I don't have a thread to process a message. Will the message still be consumed from RabbitMQ, or will it wait until my thread pool has capacity? How do I handle this using Spring AMQP?
Is there any threading logic that needs to be implemented on the Spring AMQP side as well?
Please suggest.
You should not add your own threading in your listener; it will cause messages to be ack'd early and potentially lost. Instead, use the container's concurrentConsumers property to determine how many threads to use.
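A minimal sketch of that configuration, assuming Spring Boot with spring-boot-starter-amqp; the thread counts are illustrative:

```java
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConfig {

    @Bean
    public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(
            ConnectionFactory connectionFactory) {
        SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setConcurrentConsumers(5);      // start with 5 consumer threads
        factory.setMaxConcurrentConsumers(10);  // scale up to 10 under load
        factory.setPrefetchCount(1);            // unacked messages held per consumer
        return factory;
    }
}
```

Each consumer runs on its own container-managed thread and the message is only ack'd after the listener method returns, so while all threads are busy the remaining messages simply stay on the queue.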
I suggest you read about all the configuration options before asking questions like this here.
I am running a Spring Cloud Stream application, consuming messages from RabbitMQ. I want to implement a behaviour where a given queue, with three consumer instances, delivers exactly one message at a time to any one of them, and waits to deliver the next until the current one is acked (some sort of synchronization across all consumers).
I think this can be done using consumer prefetch (https://www.rabbitmq.com/consumer-prefetch.html) with the global option enabled, but I can't find a way of achieving this using Spring Cloud Stream. Any help will be appreciated.
Spring uses a separate channel for each consumer so global channel prefetch won't have any effect.
Bear in mind that, even if it was supported, it would only work if the consumers were all in the same instance.
I am trying to use Spring Integration for a batch process. There are certain steps that are time consuming and hence would benefit from a QueueChannel with multiple consumers, each running on a separate thread.
The problem with this approach is that there is no clean way to shut down the application after all the messages have been consumed. I have tried using a control bus and shutting down the task executor, but that only works if you can guess when all messages will have been consumed and none are in flight, which is impossible.
Is there a clean way to do this for a batch process, or is this just the wrong use case for Spring Integration?
EDIT:
Essentially, it would be nice if there were a way for me to send a special message signifying a lifecycle event like start or stop that is automatically carried through all the Spring Integration components. That way the stop message is guaranteed to arrive last, and shutdown() could be triggered on the lifecycle-aware beans when it reaches them.
shut-down the application after all the messages have been consumed
If you know the number of messages at the beginning, you can have an AtomicInteger bean and increment it every time a message is processed. Separately, you can have an inbound channel adapter that polls the current state of that AtomicInteger and decides when to shut down your app.
Alternatively you can use an aggregator to gather results for all the messages and shutdown the app in the downstream flow after that aggregator.
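A minimal sketch of the counting approach with the Java DSL, assuming the expected total is known up front; the EXPECTED_TOTAL constant and the one-second poll interval are illustrative, and the main processing flow is assumed to call processedCounter.incrementAndGet() after handling each message:

```java
import java.util.concurrent.atomic.AtomicInteger;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;

@Configuration
public class ShutdownFlowConfig {

    private static final int EXPECTED_TOTAL = 1000; // known number of messages

    @Bean
    public AtomicInteger processedCounter() {
        return new AtomicInteger();
    }

    // Poll the counter once a second; once everything has been processed,
    // close the application context from the downstream handler.
    @Bean
    public IntegrationFlow shutdownFlow(AtomicInteger processedCounter,
                                        ConfigurableApplicationContext context) {
        return IntegrationFlows
                .fromSupplier(processedCounter::get,
                        e -> e.poller(Pollers.fixedDelay(1000)))
                .filter(Integer.class, count -> count >= EXPECTED_TOTAL)
                .handle(m -> context.close())
                .get();
    }
}
```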
I am using ActiveMQ Classic as the queue manager. My message consumer (a Spring @JmsListener) writes to MongoDB. If MongoDB is unavailable, it sends the message to a different queue, let's call it a redelivery queue.
So, imagine that after MongoDB has been down for many hours, it's finally back up. What is the best way to now read the messages from this redelivery queue?
I am wondering whether this could be done by creating a batch job that runs once a day. If so, what are the options for creating a job like that, or are there better options available?
There is no "batch" mode for JMS. A JMS consumer can only receive one message at a time. Typically the best way boost message throughput to deal with lots of messages is by increasing the number of consumers. This should be fairly simple to do with a Spring JmsListener using the concurrency setting.
You can, of course, use something like cron to schedule a job to deal with these messages, or you can use something like the Quartz Job Scheduler instead.
It's really impossible to give you the "best" way to deal with your situation on Stack Overflow. There are simply too many unknown variables.
I am planning to write code similar to a producer/consumer setup using an ExecutorService with a fixed thread pool and IBM MQ messaging.
Suppose, as the consumer, I create 10 fixed threads. How will it be handled if I place 10 messages on the consumer queue? How would the 10 consumer worker threads cover the scenarios below?
Does each worker thread take a single message synchronously and process it?
Or do the worker threads take all 10 messages, one worker thread per message?
Assuming the second scenario above, how does each thread call the executor service? Is that done concurrently or synchronously?
If there are 20 messages in the queue, how do the worker threads take them? Does each thread take 2 messages? If each thread takes one message, what happens to the other 10 messages?
While processing the above scenarios there are web service calls and internal API method calls, but those are synchronous methods. So is there any benefit to implementing this class to process the messages concurrently?
If you're running in an application server, like WebSphere for example, then you can simply deploy a Message Driven Bean (MDB) onto the JMS queue, and it will do pretty much exactly what you're describing.
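A minimal sketch of the MDB approach, assuming a Java EE application server such as WebSphere; the queue name is illustrative, and the exact activation-config properties (including how the instance pool size is set) vary by server and resource adapter:

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "WORK.QUEUE")
})
public class WorkMessageBean implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // The container pools MDB instances and dispatches one message to each
        // instance at a time, which gives you the worker-thread behaviour described.
    }
}
```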
If you're just building a Java application, then using the ExecutorService would work. Start by setting a MessageListener on the MessageConsumer, and have the onMessage() of that listener submit() a processor (e.g. a Runnable) for the message to the ExecutorService. Once that processor has done its work, it should acknowledge() the message.
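A minimal sketch of that pattern with plain JMS; the queue name "WORK.QUEUE", the pool size, and the process() body are illustrative:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class ExecutorBackedConsumer {

    private final ExecutorService pool = Executors.newFixedThreadPool(10);

    public void start(ConnectionFactory connectionFactory) throws JMSException {
        Connection connection = connectionFactory.createConnection();
        // CLIENT_ACKNOWLEDGE so the worker can ack only after processing succeeds
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        Queue queue = session.createQueue("WORK.QUEUE");
        MessageConsumer consumer = session.createConsumer(queue);

        // onMessage() hands the message to the pool and returns immediately
        consumer.setMessageListener(message -> pool.submit(() -> process(message)));
        connection.start();
    }

    private void process(Message message) {
        try {
            // ... the real work: web service call, internal API calls ...
            message.acknowledge(); // ack only once processing has finished
        } catch (JMSException e) {
            // not acknowledging leaves the message eligible for redelivery
            e.printStackTrace();
        }
    }
}
```

With a fixed pool of 10, each worker processes one message at a time, and any further submitted messages wait in the executor's internal queue until a thread frees up.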
I don't think it's a good idea to implement a producer/consumer scheme yourself when you have good tools like JMS that are already implemented, debugged, feature-rich, and supported by frameworks (like Spring, for instance).
Don't reinvent the wheel!
I need to lock one of the instances to coordinate scheduling in a multi-instance web application.
Right now we have two running instances of the application, each with its own work scheduler. I need to avoid running the same process twice, because both instances send a message for the same processing.
Don't try to do distributed locking; it's a really hard problem to solve.
Instead, just set up your two applications to consume from the same queue and have RabbitMQ round-robin messages between them, and then neither will conflict with the work the other one is doing.
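A minimal sketch with Spring AMQP; the queue name "work.queue" is illustrative, and the same listener is deployed in both instances:

```java
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

@Component
public class WorkListener {

    // Both instances listen on the same queue; RabbitMQ delivers each message
    // to exactly one of them, so the same job is never processed twice.
    @RabbitListener(queues = "work.queue")
    public void handle(String job) {
        System.out.println("Processing job: " + job);
    }
}
```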