Consuming multiple messages from RabbitMQ in Java with Akka actors - java

I am pretty new to RabbitMQ. I want to consume multiple messages from RabbitMQ so that work can be done in parallel, sending an acknowledgement only once an actor has finished its task, so as not to lose messages. How should I proceed? I want to use Spring support for Akka.
Can I use an actor as a consumer, or should it be a plain consumer that can consume multiple messages without acknowledging any of them? Or should I have multiple consumer classes/threads, each instantiated to listen for a single message at a time and then call an actor (though that would be as if there were no actors or parallelism via the Akka model)?

I haven't worked with RabbitMQ per se, but I would probably designate one actor as a dispatcher that would:
Handle the RabbitMQ connection.
Receive messages (it doesn't matter whether one by one or in batches for efficiency).
Distribute work between worker actors, either by creating a new worker for each message or by sending messages to a pre-created pool of workers.
Receive confirmation from a worker once its task is completed and the results are committed, and send the acknowledgement back to RabbitMQ. The acknowledgement token (delivery tag) can be included as part of the worker's task, so there is no need to track the mapping inside the dispatcher.
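As a rough stdlib-only sketch of that flow (no actual Akka or RabbitMQ dependencies; `Dispatcher`, `dispatch`, and the numeric delivery tags are hypothetical stand-ins for the dispatcher actor, the worker pool, and RabbitMQ's delivery tags):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Stand-in for the dispatcher actor: it hands work to a pool of workers and
// acknowledges each message only after its worker has finished.
public class Dispatcher {
    private final ExecutorService workers = Executors.newFixedThreadPool(4);
    // Delivery tags acknowledged so far (stand-in for channel.basicAck calls).
    final Set<Long> acked = ConcurrentHashMap.newKeySet();

    // The ack token (delivery tag) travels with the task, so the dispatcher
    // never needs to track a tag-to-worker mapping itself.
    void dispatch(long deliveryTag, Runnable task) {
        workers.submit(() -> {
            task.run();              // the worker's actual job
            acked.add(deliveryTag);  // "ack" only after the work is committed
        });
    }

    void shutdown() throws InterruptedException {
        workers.shutdown();
        workers.awaitTermination(5, TimeUnit.SECONDS);
    }

    static int runDemo() throws InterruptedException {
        Dispatcher d = new Dispatcher();
        for (long tag = 1; tag <= 10; tag++) {
            final long t = tag;
            d.dispatch(t, () -> { /* process message t here */ });
        }
        d.shutdown();
        return d.acked.size();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("acked " + runDemo() + " messages");
    }
}
```

In the actor version the worker would send a completion message (carrying the tag) back to the dispatcher instead of mutating shared state, but the ack-after-completion shape is the same.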
Some additional things to consider are:
Supervision strategy: who supervises the dispatcher? Who supervises the workers? What happens if they crash?
Message re-sends when running in a distributed environment.

Related

Reordering AWS Standard SQS messages with Spring Integration

I am currently working with SNS and SQS to integrate disparate remote systems. The producer sends messages to an AWS SNS topic with an SQS queue subscribed. The consumer is a Spring Boot application with Spring Integration enabled that polls the SQS queue with an @SqsListener (default configuration with no tweaking). All this works fine.
The requirement is to process those messages in the right order, mostly driven by the chronological creation time from the producer's perspective. And as some of them could be dependent, I have to process them one by one, taking into account the original order.
The problem is that I am aware that SQS does not guarantee that those messages arrive in order when the listener polls the queue. I have verified this by programmatically sending a couple of messages to the SNS topic in the order I want them processed and receiving them in a slightly different order within the @SqsListener.
To try to deal with this unwanted effect, I put a priority channel in place right after the SqsListener to buffer those messages and let the channel reorder them.
Would this be the right approach to processing standard SQS messages in order? Should I tweak the listener config, for example changing it to long polling?
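For what it's worth, the buffering step described above can be sketched with a plain PriorityQueue ordered by the producer-side creation timestamp (all names here are hypothetical; Spring Integration's priority channel does essentially this with a comparator):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// A message carrying the producer-side creation time used for ordering.
record Msg(long createdAt, String body) {}

public class Reorder {
    // Buffer out-of-order deliveries, then drain them in chronological order.
    static List<String> drainInOrder(List<Msg> received) {
        PriorityQueue<Msg> buffer =
            new PriorityQueue<>(Comparator.comparingLong(Msg::createdAt));
        buffer.addAll(received);
        List<String> ordered = new ArrayList<>();
        while (!buffer.isEmpty()) ordered.add(buffer.poll().body);
        return ordered;
    }

    public static void main(String[] args) {
        // Simulate a slightly shuffled delivery, as observed from SQS.
        List<Msg> delivered = List.of(
            new Msg(2, "b"), new Msg(1, "a"), new Msg(3, "c"));
        System.out.println(drainInOrder(delivered)); // prints [a, b, c]
    }
}
```

One caveat: like a priority channel, this only reorders messages that are sitting in the buffer at the same time. A message that arrives after its successors have already been drained and processed is still out of order, so some delay or watermark before draining is needed for a hard ordering guarantee.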

RabbitMq java client: subscribe to queue with rate-limiting

I'm using the RabbitMQ Java client (from Scala). I subscribe to a queue via DefaultConsumer and consume the messages from a few instances concurrently.
The problem is that when the first consumer starts, it immediately takes all existing messages from the queue, so other nodes will only consume newer messages. I'd like to configure the consumers to take, say, no more than 10 messages at a time. It's certainly possible to rewrite this using the pull-based API and manage back pressure manually, but I'd like to avoid that.
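In the real client this is what the per-channel prefetch count is for: calling channel.basicQos(10) before basicConsume with manual acks makes the broker stop delivering once a consumer has 10 unacknowledged messages. The back-pressure idea behind it can be sketched with stdlib primitives alone (the broker and ack are simulated here; nothing below is RabbitMQ API):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Simulates a prefetch limit with a Semaphore: at most `limit` messages may
// be in flight (delivered but not yet acked) at any one moment.
public class PrefetchSketch {
    final Semaphore slots;
    final AtomicInteger inFlight = new AtomicInteger();
    final AtomicInteger maxInFlight = new AtomicInteger();

    PrefetchSketch(int limit) { slots = new Semaphore(limit); }

    void handle(int messageId) throws InterruptedException {
        slots.acquire();                        // broker withholds delivery when full
        int now = inFlight.incrementAndGet();
        maxInFlight.accumulateAndGet(now, Math::max);
        Thread.sleep(5);                        // pretend to do some work
        inFlight.decrementAndGet();
        slots.release();                        // the "ack" frees a prefetch slot
    }

    static int run() throws Exception {
        PrefetchSketch s = new PrefetchSketch(10);
        ExecutorService pool = Executors.newFixedThreadPool(50);
        for (int i = 0; i < 100; i++) {
            final int id = i;
            pool.submit(() -> {
                try { s.handle(id); } catch (InterruptedException ignored) {}
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        return s.maxInFlight.get();  // never exceeds the limit of 10
    }

    public static void main(String[] args) throws Exception {
        System.out.println("max in flight: " + run());
    }
}
```

With basicQos in place, the greedy first consumer simply stops being offered work once its 10 slots are full, and the broker round-robins the rest to the other nodes.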

Pentaho JMS consumer - multiple producers to a single consumer

I have 2 JMS queues, and my application can publish a message to either one, based on the node that receives the request. Pentaho should actively watch both queues and should be able to process a message as soon as it arrives on either of them.
Currently, I have implemented a job that actively listens to one queue, processes the message, and posts a response for it.
How do we configure Pentaho to actively listen to two queues at the same time and perform the same action when a message is posted to either queue?
EDIT: I am not aware of any direct feature available in Pentaho for this kind of intra-service communication.
Will Clustering help this cause?
Finally cracked it
Job1
Start (Run next entries in Parallel) -> Transformation1 (JMS Consumer1)
|-> Transformation2 (JMS Consumer2)
With the task and message prioritisation logic put on the application side.
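Outside Pentaho, the same shape is two listener threads fanning in to one processing stream; a stdlib sketch of that arrangement (all names hypothetical; the two source BlockingQueues stand in for the two JMS queues):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Two sources fan in to one consumer: each listener thread drains its own
// queue and forwards every message into a single shared processing queue.
public class FanIn {
    static final String POISON = "<eof>";  // sentinel marking end of a source

    static Thread listener(BlockingQueue<String> source, BlockingQueue<String> sink) {
        Thread t = new Thread(() -> {
            try {
                String msg;
                while (!(msg = source.take()).equals(POISON)) sink.put(msg);
            } catch (InterruptedException ignored) {}
        });
        t.start();
        return t;
    }

    static List<String> run() throws InterruptedException {
        BlockingQueue<String> q1 = new LinkedBlockingQueue<>(List.of("a1", "a2", POISON));
        BlockingQueue<String> q2 = new LinkedBlockingQueue<>(List.of("b1", "b2", POISON));
        BlockingQueue<String> shared = new LinkedBlockingQueue<>();
        Thread t1 = listener(q1, shared), t2 = listener(q2, shared);
        t1.join();
        t2.join();
        List<String> processed = new ArrayList<>();
        shared.drainTo(processed);
        return processed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // 4 messages; interleaving order not guaranteed
    }
}
```

Each source keeps its own order, but the interleaving between the two sources is arbitrary, which is why the answer above pushes prioritisation logic to the application side.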

ActiveMQ messages not dequeued until accessing the Web admin list

Our architecture is based on an ActiveMQ 5.10.0 backbone that consists of about 70 queues. Different applications send messages to queues and different applications consume messages from them.
In detail, only 5 queues have more than one consumer, while the rest have a single consumer per queue.
Everything works fine except for the queues with multiple consumers. For these queues, messages are correctly queued but are not dequeued until we access the ActiveMQ web portal and click on the queue name, thus listing all the messages. When we do this, the pending messages are suddenly dequeued.
Some additional notes:
the queue only contains TEXT messages
we have 10 consumers registered to that queue. Every consumer defines a proper selector in order to consume only some of the published messages.
every message sets a timeout, since some messages don't match any selector rule and we don't want to keep them in the queue indefinitely.
every consumer defines a connection pool via the Bitronix pool. Following what was suggested in another thread, we set the prefetch to 0 for every consumer.
Can someone give us any advice? Why does accessing the ActiveMQ web message list unblock the pending messages?

Is there a Java pattern for a process that constantly runs to poll or listen for messages off a queue and process them?

Planning on moving a lot of our single-threaded, synchronous batch processing jobs to a more distributed architecture with workers. The thought is to have a master process read records off the database and send them off to a queue, then have multiple workers read off the queue to process the records in parallel.
Is there any well-known Java pattern for a simple CLI/batch job that constantly runs to poll/listen for messages on queues? I would like to use it for all the workers. Or is there a better way to do this? Should the listener/worker be deployed in an app container, or can it just be a standalone program?
thanks
edit: also note that I'm not looking to use Java EE/JMS, but rather hosted solutions like SQS, a hosted RabbitMQ, or IronMQ
If you're using a Java EE application server (and if not, you should be), you don't have to program that logic by hand, since the application server does it for you.
You then implement and deploy a message-driven bean that listens to a queue and processes the messages it receives. The application server manages a connection pool to listen for queue messages and creates a thread with an instance of your message-driven bean, which receives the message and processes it.
The messages are processed concurrently, since the application server keeps both a connection pool and a thread pool for listening to the queue.
All Java EE application servers, such as IBM WebSphere or JBoss, have configuration available in their admin consoles to create message queue listeners for the particular message queue implementation and to bind those listeners to your message-driven bean.
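For the standalone, hosted-queue route the asker prefers, the usual shape is a long-running poll loop with a stop flag, polling with a timeout so shutdown requests are noticed. A minimal stdlib sketch, where the BlockingQueue is a hypothetical stand-in for whatever SQS/RabbitMQ/IronMQ client poll call you'd actually make:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// The classic worker loop: poll with a timeout so the stop flag is
// re-checked regularly, handle each message, repeat until told to stop.
public class PollingWorker implements Runnable {
    private final BlockingQueue<String> queue;  // stand-in for the hosted queue
    private final AtomicBoolean running = new AtomicBoolean(true);
    final AtomicInteger processed = new AtomicInteger();

    PollingWorker(BlockingQueue<String> queue) { this.queue = queue; }

    public void run() {
        while (running.get()) {
            try {
                String msg = queue.poll(100, TimeUnit.MILLISECONDS);
                if (msg != null) processed.incrementAndGet();  // handle the record
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    void stop() { running.set(false); }

    static int run(int messages) throws InterruptedException {
        BlockingQueue<String> q = new LinkedBlockingQueue<>();
        PollingWorker worker = new PollingWorker(q);
        Thread t = new Thread(worker);
        t.start();
        for (int i = 0; i < messages; i++) q.put("record-" + i);
        while (worker.processed.get() < messages) Thread.sleep(10);
        worker.stop();
        t.join();
        return worker.processed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("processed " + run(5) + " messages");
    }
}
```

Scaling out is then just running several of these loops, either as threads in one process or as separate standalone processes, which is exactly what an application server automates for MDBs.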
I don't know a lot about this, and I may not really be answering your question, but I tried something a few months ago that might interest you for dealing with message queues.
You can have a look at this: http://www.rabbitmq.com/getstarted.html
It seems the Work Queues tutorial could fit your requirements.
