I need to build a scalable system made up of several Java applications (Spring RabbitMQ producers) that consume messages from other applications over HTTP, calculate a priority for each message, and send it to the 'priority queue' that matches its parameters.
So far, under a load of hundreds of messages per second, one application works just fine, but we need to scale out to more instances.
The problem is that I don't really understand how RabbitMQ producers work with 'priority queues'. I've been searching the RabbitMQ documentation and found that every producer needs to receive acks to make sure its messages have been processed successfully.
So the questions are:
1. The docs say that message priority is handled under the hood of the AMQP protocol, so will RabbitMQ send the ack to the producer after the message's position in the queue has been selected, or before?
2. How will messages be treated if 2 producers publish 2 different messages with the same priority to the same 'priority queue'?
I would appreciate any hint that helps me with this!
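For reference, here is a minimal sketch of the publish path as I understand it, using the plain Java client rather than Spring for brevity (the broker address, queue name, and priority value are placeholders):

```java
import java.util.Map;

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class PriorityProducer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // placeholder broker address
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // A queue only honours message priority when declared with x-max-priority.
            channel.queueDeclare("priority-queue", true, false, false,
                    Map.of("x-max-priority", 10));
            // Publisher confirms: the broker acks each publish back to us.
            channel.confirmSelect();
            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .priority(5) // the priority we calculated for this message
                    .build();
            channel.basicPublish("", "priority-queue", props, "payload".getBytes());
            // Blocks until the broker confirms it has accepted the message.
            channel.waitForConfirmsOrDie(5_000);
        }
    }
}
```

As far as I can tell, waitForConfirmsOrDie only tells me the broker accepted the message, not where it was placed in the queue.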
Related
I am running a simple setup for Apache Kafka using the Producer and Consumer APIs.
In order to simulate heavy load, I am
running multiple instances of the Producer (say 2),
all of which send the same message (the message content doesn't matter)
multiple times (say 1000 for each topic)
to a large number of topics (say 5)
I am running a single Consumer to read messages from all the topics and keep a count of the number of messages processed.
I would expect the Consumer end to receive (2 x 5 x 1000) = 10000 messages.
But the number of messages received is less than expected.
This behavior does not occur with a smaller set of messages (say 50 messages sent to each topic), so I know it cannot be something wrong with my setup.
Are there some configurations that I am missing here? Or perhaps Kafka did not receive some messages from the Producer instances and the API is not notifying me?
FYI: This is being run on a single VM hosted on my personal machine. Both Kafka and ZooKeeper are on the same machine. I'm not really interested in the performance of the setup as of now; performance metrics are meaningless if some messages go missing.
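For what it's worth, here is a minimal sketch of a producer that surfaces failed sends via a callback (broker address and topic name are placeholders); a fire-and-forget send() without a callback or a final flush/close is a common way to silently lose the tail of a batch:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class CountingProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "all"); // wait for full broker acknowledgement

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 1000; i++) {
                producer.send(new ProducerRecord<>("topic-1", "message"),
                        (metadata, exception) -> {
                            // A silently dropped message shows up here as an exception.
                            if (exception != null) {
                                System.err.println("send failed: " + exception);
                            }
                        });
            }
            // Closing the producer (here via try-with-resources) flushes
            // buffered records; skipping this can lose the tail of the run.
        }
    }
}
```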
Is it possible to validate / filter messages that are being sent to a Kafka topic?
Like, I want to ensure that only valid clients / producers send messages to my topic. I can certainly perform validation on the consumer side by discarding invalid messages based on certain parameters / criteria. But what if I want to do it before the messages are written into the topic?
Say Kafka receives a message, performs some validation, and accordingly decides whether to discard the message or write it into a topic. Is this possible?
A short answer: current versions of Kafka have no support for such functionality out of the box. And since Kafka producers are designed to communicate with multiple brokers during a single session, there is no easy way to implement such ad-hoc filtering.
Still, a couple of reasonable options exist:
Use 2 topics: one "public" topic open to everyone, which will accept all messages, and another non-public "filtered" topic, which your own application populates with data from "public" after applying your filtering rules (see the sketch after this list).
If you absolutely need to validate incoming messages before writing them down, you could hide the actual Kafka brokers behind some form of proxy application, which will do the validation before writing messages into Kafka.
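A minimal sketch of the first option, assuming string messages; the topic names ("public" and "filtered") and the validation rule are placeholders:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FilteringBridge {
    // Placeholder validation rule; substitute your real criteria.
    static boolean isValid(String value) {
        return value != null && !value.isBlank();
    }

    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092"); // placeholder
        consumerProps.put("group.id", "filter-bridge");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092"); // placeholder
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(List.of("public"));
            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                    if (isValid(rec.value())) {
                        producer.send(new ProducerRecord<>("filtered", rec.key(), rec.value()));
                    } // invalid messages are simply dropped
                }
            }
        }
    }
}
```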
I am working on an ActiveMQ application where I am using a consumer that uses Session.CLIENT_ACKNOWLEDGE.
In the consumer I send the messages received from the queue to a web service. If I don't call message.acknowledge(), all the messages sent to the web service end up back on the queue in the enqueued state.
My question is how to retrieve the messages from the queue again and use them. I used retroactive=true and tried redelivery as well, but all of them are failing.
How do I avoid this?
If you use message.acknowledge(), the consumed messages are not available again in the same queue because they are considered delivered!
Can you explain why you need to consume messages that have already been consumed?
retroactive is for consumers that were offline: when they start a connection, they receive messages sent before the connection was established.
You need to set the prefetch policy for the consumer to 400 in this case.
You can read this to understand the concept: http://activemq.apache.org/what-is-the-prefetch-limit-for.html
If you want to process messages one by one with a counter, set the prefetch to 1 and acknowledge each message; when you reach 200, you stop acknowledging.
Our architecture is based on an ActiveMQ 5.10.0 backbone that consists of about 70 queues. Different applications send messages to the queues and different applications consume messages from them.
In detail, only 5 queues have more than one consumer, while the remaining queues have a single consumer each.
Everything works fine except for the queues with multiple consumers. For these queues, messages are correctly enqueued but they are not dequeued until we access the ActiveMQ Web portal and click on the queue name, thus listing the full message list. When we do this, the pending messages are suddenly dequeued.
Some additional notes:
the queue only contains TEXT messages
we have 10 consumers registered to that queue. Every consumer defines a proper selector in order to consume only some of the published messages.
every message sets a timeout, since there are messages that don't match any selector rule and we don't want to keep messages in the queue indefinitely.
every consumer defines a connection pool via the Bitronix pool. Following what was suggested in another thread, we set the prefetch to 0 for every consumer (a sketch of one such consumer follows these notes).
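A stripped-down sketch of one such consumer (names and the selector expression are placeholders; the Bitronix pooling is omitted):

```java
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class SelectorConsumer {
    public static void main(String[] args) throws Exception {
        // prefetch 0 makes the consumer pull each message from the broker
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=0");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("SHARED.QUEUE"); // placeholder name
        // only messages whose 'type' property matches are delivered here
        MessageConsumer consumer = session.createConsumer(queue, "type = 'ORDER'");
        System.out.println(consumer.receive(5000));
        connection.close();
    }
}
```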
Can someone give us any advice? Why does accessing the ActiveMQ Web message list unlock the pending messages?
We have a use case wherein we create just one consumer to process messages from the queue. The message processor accumulates a certain number of messages before acknowledging. We receive messages asynchronously and use a transacted session. The size of each message is very small.
ActiveMQ stops sending further messages to the sole consumer after a certain number of messages and waits for acknowledgement. We have tried solutions like consumer.prefetchSize and consumer.maximumPendingMessageLimit, but nothing is working.
We tried a similar use case with a durable topic and just one subscriber, and it works fine.
Has anyone encountered similar ActiveMQ behavior? We have tried many things mentioned on different forums, but none of them helped.
ActiveMQ version: 5.6.0
Queue configuration: durable queue
Consumer: asynchronous, with a transacted session as the acknowledgement mode
Any help or suggestion will be greatly appreciated. Thanks.
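For illustration, here is a stripped-down version of the batching consumer described above (queue name and batch size are placeholders; error handling and shutdown are omitted):

```java
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class BatchingConsumer implements MessageListener {
    private static final int BATCH_SIZE = 200; // placeholder batch size
    private final Session session;
    private int count = 0;

    BatchingConsumer(Session session) {
        this.session = session;
    }

    @Override
    public void onMessage(Message message) {
        try {
            count++;
            if (count % BATCH_SIZE == 0) {
                session.commit(); // acknowledges the whole batch at once
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        // transacted session: messages are acknowledged on commit()
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        session.createConsumer(session.createQueue("BATCH.QUEUE"))
               .setMessageListener(new BatchingConsumer(session));
    }
}
```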
I had tried out a lot of different configurations to resolve this issue by setting various ActiveMQ attributes like the prefetch policy, maxPageSize, etc., but none of them really helped. Thanks to @Jake's comment I learned about monitoring ActiveMQ over JMX via JConsole. This is a very handy tool to monitor and manage your ActiveMQ instance.
Here are a few articles which you may find useful:
1. Monitoring activemq
2. Connecting activemq JMX using JConsole
By monitoring the queue attributes I figured out that the memoryLimit attribute had a very low value assigned to it (just 1 MB). Increasing the value of this attribute solved my issue: ActiveMQ started sending messages without waiting for acknowledgement.
For testing purposes I changed the value of memoryLimit in the conf/activemq.xml configuration file.
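For reference, the per-destination limit lives in the destinationPolicy section of conf/activemq.xml; a sketch, with an illustrative value:

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- the memoryLimit below is illustrative; tune it to your load -->
      <policyEntry queue=">" memoryLimit="64mb"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```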