I have 4 queues in ActiveMQ, and messages from each queue should be sent out to an external service. I am using Apache Camel to pick the messages up from the queues, and I am throttling the messages.
But my problem is that the queues have different allowed sending hours ("social hours"). For example:
Queue 1 messages should be sent only between 6 AM to 5 PM,
Queue 2 messages should be sent only between 10 AM to 10 PM like that.
So I want to know how we can handle this with Apache Camel throttling, or please suggest another solution.
Let me know if anything about my problem is unclear. Thanks in advance.
Camel allows you to associate route(s) with route policies, and there is an out-of-the-box policy, based on camel-quartz, that is schedule-driven. This allows you to set up policies for the opening hours of your routes.
The documentation starts here: http://camel.apache.org/routepolicy. That page links to the scheduler-based policies.
Mind that there is a ticket - http://issues.apache.org/jira/browse/CAMEL-5929 - about app server restarts: if you restart within the opening hours, the route is not started. E.g. your window is 12pm-6pm and you restart the app at 3pm (in between); the route is then only started the next day. The ticket is about allowing you to configure a forced start if the app is started within the opening window.
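As a sketch of what the scheduled policy looks like for Queue 1's 6 AM-5 PM window (queue and endpoint names are placeholders, and camel-quartz plus the ActiveMQ component are assumed to be on the classpath):

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.routepolicy.quartz.CronScheduledRoutePolicy;

public class Queue1Route extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        CronScheduledRoutePolicy policy = new CronScheduledRoutePolicy();
        policy.setRouteStartTime("0 0 6 * * ?");   // start the route at 6 AM
        policy.setRouteStopTime("0 0 17 * * ?");   // stop the route at 5 PM

        from("activemq:queue:queue1")
            .routePolicy(policy)
            .autoStartup(false)                    // let the policy control startup
            .throttle(10).timePeriodMillis(1000)   // your existing throttling
            .to("http://external-service/endpoint");
    }
}
```

One such route (with its own start/stop cron expressions) per queue gives each queue its own window.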
Set up one route per queue/interval.
Use Quartz timers, triggered at those hours, to start/stop the routes.
You can let the Quartz routes use the control bus pattern to start/stop the queue routes.
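A minimal sketch of that control bus setup for Queue 1 (again with placeholder queue and endpoint names, assuming camel-quartz and the ActiveMQ component are available):

```java
import org.apache.camel.builder.RouteBuilder;

public class Queue1Routes extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // The queue route itself; not started until the control bus says so
        from("activemq:queue:queue1").routeId("queue1Route")
            .autoStartup(false)
            .to("http://external-service/endpoint");

        // Quartz-triggered control routes that open and close the window
        from("quartz://openQueue1?cron=0+0+6+*+*+?")
            .to("controlbus:route?routeId=queue1Route&action=start");
        from("quartz://closeQueue1?cron=0+0+17+*+*+?")
            .to("controlbus:route?routeId=queue1Route&action=stop");
    }
}
```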
I am using a managed RabbitMQ cluster through AWS Amazon MQ. If the consumers finish their work quickly, everything works fine. However, in a few scenarios consumers take more than 30 minutes to complete their processing.
In those scenarios, RabbitMQ removes the consumer and makes the same messages visible in the queue again. Because of this, another consumer picks them up and starts processing, and this happens in a loop. The same transaction therefore gets executed again and again, and I am losing the consumer as well.
I am not setting any AcknowledgeMode, so I believe it is AUTO by default, and that has a 30-minute limit.
Is there any way to increase the Delivery Acknowledgement Timeout for AUTO mode?
Or please let me know if anyone has any other solutions for this.
Reply From AWS Support:
Consumer timeout is now configurable, but the change can only be made by the service team. The change is permanent, irrespective of version.
So you may update RabbitMQ to the latest version; there is no need to stick with 3.8.11. Provide your broker details and the desired timeout, and they should be able to set it for you.
This is the response from AWS support.
From my understanding, I see that your workload is currently affected by the consumer_timeout parameter that was introduced in v3.8.15.
We have had a number of reach-outs due to this. Unfortunately, the service team has confirmed that while they can manually edit rabbitmq.conf, the edit will be overwritten on the next reboot or failover, so this is not a recommended solution. It would also mean that all security patching on brokers with a manual change applied would have to be paused. Currently the service does not support custom user configurations for RabbitMQ from this configuration file, but the team has confirmed they are looking to address this in the future; however, they are not able to give an ETA on when this will be available.
From the RabbitMQ GitHub, it seems this was added for quorum queues in v3.8.15 (https://github.com/rabbitmq/rabbitmq-server/releases/tag/v3.8.15), but it appears to apply to all consumers (https://github.com/rabbitmq/rabbitmq-server/pull/2990).
Unfortunately, RabbitMQ itself does not support downgrades (https://www.rabbitmq.com/upgrade.html).
Thus the recommended workaround, and the safest action from the service team as of now, is to create a new broker on an older version (3.8.11) and set auto minor version upgrade to false, so that it won't be upgraded.
Then export the configuration from the existing RabbitMQ instance, import it into the new instance, and use that instance going forward.
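For reference, on a self-managed broker (not on Amazon MQ, where only the service team can change it) the timeout would be raised in rabbitmq.conf; the 2-hour value below is just an example:

```ini
# consumer_timeout is in milliseconds; the default since 3.8.15 is 30 minutes
consumer_timeout = 7200000
```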
Application 1: the source system, sending 10 requests per second to Apache Camel (via ActiveMQ).
Application 2: Apache Camel, which receives requests from Application 1 and sends them to the downstream system, Application 3 (10 requests/sec).
Application 3: the downstream system (a POST API), which gets 10 requests per second from Apache Camel and processes them.
Problem statement: Application 3 performs DB updates and processing tasks; because 10 requests hit Application 3 at a time, duplicates are generated during processing and the DB updates.
Please suggest a way to add a delay of 1 second between requests, either in Apache Camel or in the downstream system.
Thanks in advance.
There is a Delay EIP in Camel that you can use to delay every message (probably in Application 2) by a fixed, or even calculated, amount of time. Make sure that you have only one consumer in Application 2.
There is also a Throttle EIP in Camel, but it keeps all held-back messages in memory. In your case it is better to slow down the consumption in Application 2 to avoid overloading Application 3.
I guess that Application 3 gets duplicates because it can't keep up with the requests; Application 2 then gets timeouts, and its error handling sends the requests again.
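A minimal sketch of the Delay EIP in Application 2, assuming the ActiveMQ component is configured; the queue name and target URL are placeholders:

```java
import org.apache.camel.builder.RouteBuilder;

public class DelayedForwardRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // A single consumer pulls from the queue and waits 1 second per
        // message, so Application 3 sees at most ~1 request per second.
        from("activemq:queue:requests?concurrentConsumers=1")
            .delay(1000)
            .to("http://application3/api");
    }
}
```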
However, when you use messaging, you have to prepare for duplicates. In error cases you can easily receive duplicates and therefore you have to make Application 3 idempotent.
Of course, Camel can also help you with this thanks to the Idempotent Consumer.
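A sketch of the Idempotent Consumer, assuming each request carries a unique identifier in a header (the header name and repository size here are illustrative):

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.processor.idempotent.MemoryIdempotentRepository;

public class IdempotentRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Skip any message whose "requestId" header has already been seen;
        // use whatever header uniquely identifies a request in your system.
        from("activemq:queue:requests")
            .idempotentConsumer(header("requestId"),
                MemoryIdempotentRepository.memoryIdempotentRepository(10000))
            .to("http://application3/api");
    }
}
```

For real duplicate protection across restarts you would back this with a persistent repository (e.g. a JDBC one) rather than the in-memory one shown here.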
I'm developing an application which processes asynchronous requests that take, on average, 10 minutes to finish. The server is written with Spring Boot, has 4 replicas, and sits behind a load balancer. In case one of these servers crashes while processing a number of requests, I want those failed requests to be restarted on the remaining servers in a load-balanced way.
Note: There's a common database in which we create a unique entry for every incoming request, and delete that entry when that request is processed successfully.
Constraints:
We can't wait for the server to restart.
There's no extra server to keep watch of these servers.
There's no leader/slave architecture among the servers.
Can someone please help me with this problem?
One solution would be to use a message queue to handle the requests. I would recommend using Apache Kafka (Spring for Apache Kafka) and propose the following solution:
Create 4 Kafka topics.
Whenever one of the 4 replicas receives a request, it publishes the request on one of the 4 topics (chosen randomly) instead of simply handling it.
Each replica will connect to Kafka and consume from one topic. If you let Kafka manage the assignment (i.e. all replicas in one consumer group), then whenever one replica crashes, one of the other 3 will pick up its topic and start consuming requests in its place.
When the crashed replica restarts and reconnects to Kafka, it can start consuming from its topic again (this rebalancing is already implemented in Kafka).
Another advantage of this solution is that you can, if you want to, stop using the database to store requests, as Kafka can act as your database in this case.
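A sketch of the consuming side with Spring for Apache Kafka. The topic names, group id, and payload type are assumptions; the point is that all replicas share one consumer group, so Kafka reassigns a crashed replica's topic to a surviving member:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class RequestWorker {

    // All replicas use the same group id, so Kafka spreads the four topics
    // across the live replicas and rebalances when one crashes or returns.
    @KafkaListener(
        topics = {"requests-1", "requests-2", "requests-3", "requests-4"},
        groupId = "request-workers")
    public void handle(String request) {
        // process the request; delete its database entry on success
    }
}
```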
I need to create a job that will run every 5 minutes (5 minutes from its last run), receive some messages from a topic and process them. This has to be a standalone Java application.
I have considered two options. I am stuck with both of them -
Use Spring's JmsTemplate. However, I am not sure how to create a durable subscriber with JmsTemplate.
Use DefaultMessageListenerContainer, which provides a facility for creating a durable subscriber. But I am not sure how to gracefully shut such an application down after a given period of time, say 2 minutes.
Any ideas on how to do this?
You need two pieces:
The scheduled job that runs every X minutes: connects to the queue and sends a message.
The listener, running on a JMS host of some kind, that takes messages off the queue/topic.
What JMS host do you plan to use? JBOSS? OpenJMS? RabbitMQ? Something else?
Will the client job be a Java main that runs a scheduled executor task in a while loop?
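For the client-job half, a pure-JDK sketch of "run a poll on a schedule, then shut down gracefully" using ScheduledExecutorService; the intervals are shortened here so the sketch finishes quickly, where the real job would use 5 minutes and 2 minutes:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PollingJob {
    // Poll every periodMs, then shut down gracefully after totalMs.
    // Returns how many polls completed before shutdown.
    public static int runFor(long totalMs, long periodMs) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger polls = new AtomicInteger();
        ScheduledFuture<?> task = scheduler.scheduleAtFixedRate(
            () -> {
                // in the real job: receive messages from the durable subscription
                polls.incrementAndGet();
            },
            0, periodMs, TimeUnit.MILLISECONDS);

        Thread.sleep(totalMs);                          // the "run for 2 minutes" window
        task.cancel(false);                             // stop scheduling new polls
        scheduler.shutdown();                           // let any in-flight poll finish
        scheduler.awaitTermination(5, TimeUnit.SECONDS);
        return polls.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("polls completed: " + runFor(350, 100));
    }
}
```

The body of the scheduled task is where a DefaultMessageListenerContainer (or plain JMS receive calls) would go; the shutdown sequence above is what gives you the graceful stop after a fixed window.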
We are planning to move a lot of our single-threaded, synchronous batch processing jobs to a more distributed architecture with workers. The idea is to have a master process read records off the database and send them to a queue, then have multiple workers read off the queue and process the records in parallel.
Is there any well-known Java pattern for a simple CLI/batch job that runs constantly to poll/listen for messages on queues? I would like to use that for all the workers. Or is there a better way to do this? Should the listener/worker be deployed in an app container, or can it just be a standalone program?
Thanks
Edit: also note that I'm not looking to use JavaEE/JMS, but rather hosted solutions like SQS, a hosted RabbitMQ, or IronMQ.
If you're using a JavaEE application server (and if not, you should be), you don't have to program that logic by hand, since the application server does it for you.
You then implement and deploy a message driven bean that listens to a queue and processes the message received. The application server will manage a connection pool to listen to queue messages and create a thread with an instance of your message driven bean which will receive the message and be able to process it.
The messages will be processed concurrently since the application server will have a connection pool and a thread pool available to listen to the queue.
All JavaEE-featured application servers, like IBM WebSphere or JBoss, have configuration available in their admin consoles to create message queue listeners, depending on the message queue implementation, and to bind those listeners to your message-driven bean.
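A sketch of such a message-driven bean, assuming a container-managed queue registered under the JNDI name jms/workQueue (the name and the text-payload handling are illustrative):

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destinationLookup",
                              propertyValue = "jms/workQueue")
})
public class WorkMessageBean implements MessageListener {
    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String body = ((TextMessage) message).getText();
                // process the record; the container handles the threading,
                // connection pooling, and redelivery on failure
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}
```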
I don't know a lot about this, and maybe I am not really answering your question, but I tried something a few months ago that might interest you for dealing with message queues.
You can have a look at this: http://www.rabbitmq.com/getstarted.html
It seems the Work Queues tutorial could fit your requirements.