I'm using rabbitmqclient for RabbitMQ (from Scala). I subscribe to a queue via DefaultConsumer and consume the messages from a few instances concurrently.
The problem is that when the first consumer starts, it immediately takes all existing messages from the queue, so other nodes will only consume newer messages. I'd like to configure the consumers to take, say, no more than 10 messages at a time. It's certainly possible to rewrite this using the pull-based API and manage back pressure manually, but I'd like to avoid that.
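For what it's worth, the push-based API can be throttled without switching to pull: the Java client's channel-level basicQos sets a prefetch limit. A minimal sketch, assuming a broker on localhost and a hypothetical queue name:

```java
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

import java.io.IOException;

public class PrefetchConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker address
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // At most 10 unacknowledged messages are pushed to this channel;
        // the broker holds the rest for other consumers until we ack.
        channel.basicQos(10);

        channel.basicConsume("work-queue", false, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties,
                                       byte[] body) throws IOException {
                // ... process the message ...
                getChannel().basicAck(envelope.getDeliveryTag(), false);
            }
        });
    }
}
```

With manual acks and a prefetch of 10, each node holds at most 10 in-flight messages, so the remaining backlog stays available to the other instances.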
I "monitor" the number of consecutive failures in my Camel processing pipeline with a Camel RoutePolicy.
When a threshold of failures is reached, I want to pause the processing for a configured amount of time because it probably means that the data from another system is not yet ready and therefore every message fails.
Since the source of my pipeline is a Kafka topic, I should not just stop the whole route because the broker would assume my consumer died and rebalance.
The best way to "pause" topic consumption seems to be to pause the native KafkaConsumer (not Camel's wrapper). That way, the consumer continues to poll the broker but does not fetch any messages. Exactly what I need.
But can I access the native KafkaConsumer from the RoutePolicy context to call its pause and resume methods?
The spring-kafka listener containers expose these methods; it would be nice to use them from Camel too.
This is not yet supported; the two methods would first have to be added to the camel-kafka consumer.
There is also an existing issue for it: https://issues.apache.org/jira/browse/CAMEL-15106
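For reference, this is roughly what the spring-kafka comparison in the question points at: listener containers can be paused and resumed through the KafkaListenerEndpointRegistry. A hedged sketch, assuming a @KafkaListener registered with a hypothetical id "myListener":

```java
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.stereotype.Component;

@Component
public class ConsumptionToggle {

    private final KafkaListenerEndpointRegistry registry;

    public ConsumptionToggle(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    // "myListener" is a hypothetical id set on the corresponding @KafkaListener.
    public void pause() {
        MessageListenerContainer container = registry.getListenerContainer("myListener");
        container.pause();  // keeps polling the broker but stops fetching records
    }

    public void resume() {
        registry.getListenerContainer("myListener").resume();
    }
}
```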
I am currently working with SNS and SQS to integrate disparate remote systems. The producer sends messages to an AWS SNS topic with an SQS queue subscribed to it. The consumer is a Spring Boot application with Spring Integration enabled that polls the SQS queue with an @SqsListener (default configuration with no tweaking). All this works fine.
The requirement is to process those messages in the right order, mostly driven by the chronological creation time from the producer's perspective. And since some of them may depend on each other, I have to process them one by one, respecting the original order.
The problem is that I am aware that SQS does not guarantee that messages arrive in order when the listener polls the queue. I have verified this by programmatically sending a couple of messages to the SNS topic in the order I want them processed and receiving them in a slightly different order in the @SqsListener.
To deal with this unwanted effect, I put a priority channel right after the @SqsListener to buffer the messages and let the channel reorder them.
Would this be the right approach to process standard SQS messages in order? Should I tweak the listener configuration, for example by switching to long polling?
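For what it's worth, the buffering idea above can be expressed with Spring Integration's PriorityChannel; a minimal sketch, assuming the producer's creation time travels in a hypothetical "creationTime" header:

```java
import java.util.Comparator;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.PriorityChannel;
import org.springframework.messaging.Message;

@Configuration
public class ReorderingConfig {

    // Buffers up to 100 messages and releases them ordered by the producer's
    // creation time, carried in a (hypothetical) "creationTime" header.
    @Bean
    public PriorityChannel reorderingChannel() {
        Comparator<Message<?>> byCreationTime =
                Comparator.comparing((Message<?> m) ->
                        m.getHeaders().get("creationTime", Long.class));
        return new PriorityChannel(100, byCreationTime);
    }
}
```

Note that a priority channel can only reorder messages that sit in its buffer at the same time; if a message arrives after its logical successor has already been consumed, no buffer size restores the original order.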
I am pretty new to RabbitMQ. I want to consume multiple messages from RabbitMQ so that work can be done in parallel, sending an acknowledgement only when an actor has finished its task, so as not to lose messages. How should I proceed? I want to use the Spring support for Akka.
Can I use an actor as a consumer, or should it be a plain consumer that consumes multiple messages without acknowledging any of them? Or should I have multiple consumer classes/threads, each listening for a single message at a time and then calling an actor (but that would be as if there were no actors or parallelism via the Akka model at all)?
I haven't worked with RabbitMQ per se, but I would probably designate one actor as a dispatcher that would:
Handle RabbitMQ connection.
Receive messages (doesn't matter if one-by-one or in a batch for efficiency).
Distribute work between worker actors, either by creating a new worker for each message, or by sending messages to a pre-created pool of workers.
Receive a confirmation from the worker once its task is completed and the results are committed, and send the acknowledgement back to RabbitMQ. The acknowledgement token can be included as part of the worker's task, so there is no need to track the mapping inside the dispatcher (see the sketch below).
Some additional things to consider are:
Supervision strategy: who supervises the dispatcher? Who supervises the workers? What happens if they crash?
Message re-sends when running in a distributed environment.
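To make the dispatcher idea concrete, here is a minimal Java sketch (the Task/Done message types and the wiring are hypothetical; the RabbitMQ delivery tag rides along with the task, so the ack needs no extra bookkeeping):

```java
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import com.rabbitmq.client.Channel;

// Hypothetical message types: the RabbitMQ delivery tag travels inside the
// task, so the worker's confirmation carries everything needed for the ack.
final class Task {
    final long deliveryTag;
    final byte[] body;
    Task(long deliveryTag, byte[] body) { this.deliveryTag = deliveryTag; this.body = body; }
}

final class Done {
    final long deliveryTag;
    Done(long deliveryTag) { this.deliveryTag = deliveryTag; }
}

class Worker extends AbstractActor {
    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(Task.class, task -> {
                // ... process the message, commit the results ...
                getSender().tell(new Done(task.deliveryTag), getSelf());
            })
            .build();
    }
}

class Dispatcher extends AbstractActor {
    private final Channel channel;   // the RabbitMQ channel owned by the dispatcher
    private final ActorRef workers;  // e.g. a router over a pre-created worker pool

    Dispatcher(Channel channel, ActorRef workers) {
        this.channel = channel;
        this.workers = workers;
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            // Deliveries forwarded from the RabbitMQ callback are handed out...
            .match(Task.class, task -> workers.tell(task, getSelf()))
            // ...and acknowledged only after a worker reports completion.
            .match(Done.class, done -> channel.basicAck(done.deliveryTag, false))
            .build();
    }
}
```

Keeping the channel inside a single dispatcher actor also serializes access to it, which sidesteps the channel's thread-safety concerns.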
I have a simple test case where I start a HornetQ server (V2.4.7.Final) as part of a Spring context. This works quite well and I have access to a queue via JMS, the HornetQ API and/or JMX.
Test case
The test case is supposed to empty the queue at start, check that it is empty and then add 10 messages to the queue. As long as there are no consumers on this queue, this works using either the management queue or JMSQueueControl. Even doing some operation on the queue via JMX is working well.
Problem description
As soon as I add a message listener to this queue via the Spring configuration (and the listener consumes the messages as expected), I can no longer remove all messages from the queue. Neither method invocation via JMX, nor the management queue, nor JMSQueueControl works, i.e. the methods complete without exception but have no effect.
I thought that maybe I have to pause the queue before modifying its content, but pausing does not work either. JMX shows the queue as paused, and the API reports the same, yet the consumer keeps consuming messages from that very queue. So I suspect it has not actually been paused at all.
I know that it is difficult without the source code, but from my point of view this is all a pretty basic setup, as found in many, many tutorials. Could anyone advise what I am doing wrong? In case any source code is needed, please leave a comment and I will add the relevant parts.
HornetQ only supports removing messages that are still in the queue on the broker side. Once messages have been dispatched to a consumer and sit in the consumer's buffer, it is not possible to remove them from that buffer via any management API.
One way to solve this (if you must) is to disable consumer buffering by setting the consumer-window-size to 0, but be aware of the potential performance degradation.
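For illustration, a minimal sketch of that setting via the HornetQ JMS API (the transport details are assumptions; the same property can be set in your Spring/JMS configuration):

```java
import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.jms.HornetQJMSClient;
import org.hornetq.api.jms.JMSFactoryType;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;
import org.hornetq.jms.client.HornetQConnectionFactory;

public class NoBufferConnectionFactory {

    public static HornetQConnectionFactory create() {
        HornetQConnectionFactory cf = HornetQJMSClient.createConnectionFactoryWithoutHA(
                JMSFactoryType.CF,
                new TransportConfiguration(NettyConnectorFactory.class.getName()));
        // 0 disables the client-side consumer buffer: messages stay on the
        // broker until consumed, so management operations still see them.
        cf.setConsumerWindowSize(0);
        return cf;
    }
}
```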
Otherwise, you need to handle it programmatically, e.g. by adding some validity checks before processing each message.
You can read more about HornetQ flow control here: https://docs.jboss.org/hornetq/2.2.5.Final/user-manual/en/html/flow-control.html
Our architecture is based on an ActiveMQ 5.10.0 backbone that consists of about 70 queues. Different applications send messages to the queues, and different applications consume messages from them.
In detail, only 5 queues have more than one consumer, while the remaining ones have a single consumer per queue.
Everything works fine except for the queues with multiple consumers. For these queues, messages are correctly enqueued, but they are not dequeued until we access the ActiveMQ Web console and click on the queue name, thereby listing all of its messages. When we do this, the pending messages are suddenly dequeued.
Some additional notes:
the queue only contains TEXT messages
we have 10 consumers registered to that queue. Every consumer defines a proper selector in order to consume only some of the published messages.
every message has an expiration set, since some messages don't match any selector rule and we don't want to keep messages in the queue indefinitely.
every consumer obtains connections via a Bitronix connection pool. Following a suggestion in another thread, we set the prefetch to 0 for every consumer (a sketch of this configuration follows below)
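For reference, the prefetch setting from the last note looks roughly like this when set on the broker URL (host, queue name, and selector are hypothetical; the Bitronix pooling is omitted):

```java
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class SelectorConsumer {
    public static void main(String[] args) throws Exception {
        // queuePrefetch=0 turns the broker's push into a pull: the consumer
        // asks for one message at a time instead of receiving batches ahead.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=0");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("SHARED.QUEUE");
        // Each consumer uses a selector to receive only a subset of messages.
        MessageConsumer consumer = session.createConsumer(queue, "type = 'ORDER'");
        consumer.setMessageListener(message -> System.out.println("received " + message));
    }
}
```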
Can someone give us any advice? Why does accessing the message list in the ActiveMQ Web console unlock the pending messages?