Spring JMS - standalone durable topic subscriber - java

I need to create a job that will run every 5 minutes (5 minutes from its last run), receive some messages from a topic and process them. This has to be a standalone Java application.
I have considered two options, and I am stuck with both of them:
Use Spring's JmsTemplate. However, I am not sure how to create a durable subscriber with JmsTemplate.
Use DefaultMessageListenerContainer, which provides a facility to create a durable subscriber. But I am not sure how to gracefully shut down such an application after a given period of time, say 2 minutes.
Any ideas on how to do this?

You need two pieces:
The scheduled job that runs every X minutes: connects to the queue and sends a message.
The listener, running on a JMS host of some kind, that takes messages off the queue/topic.
What JMS host do you plan to use? JBoss? OpenJMS? RabbitMQ? Something else?
Will the client job be a Java main that executes a scheduled ExecutorService task in a while loop?
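For the original question's second option, a DefaultMessageListenerContainer can be combined with a simple timed shutdown. A minimal sketch, assuming ActiveMQ as the broker; the broker URL, topic name, client ID, and subscription name are all placeholders:

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

import javax.jms.Message;
import javax.jms.MessageListener;
import java.util.concurrent.TimeUnit;

public class DurableTopicJob {

    // How long the job stays connected before shutting down (assumption: 2 minutes).
    static final long RUN_MILLIS = TimeUnit.MINUTES.toMillis(2);

    public static void main(String[] args) throws InterruptedException {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(new ActiveMQConnectionFactory("tcp://localhost:61616")); // assumed broker URL
        container.setDestinationName("MY.TOPIC");      // hypothetical topic name
        container.setPubSubDomain(true);               // topic, not queue
        container.setSubscriptionDurable(true);
        container.setClientId("my-batch-job");         // must stay stable across runs
        container.setDurableSubscriptionName("my-durable-sub");
        container.setMessageListener((MessageListener) DurableTopicJob::process);

        container.afterPropertiesSet();                // initializes and starts the container

        Thread.sleep(RUN_MILLIS);                      // let the job drain messages for the window

        // Graceful shutdown: stop accepting new messages, let in-flight
        // messages finish, then release all JMS resources.
        container.stop();
        container.shutdown();
    }

    static void process(Message m) {
        System.out.println("got " + m);
    }
}
```

Because the subscription is durable and the client ID/subscription name are stable, messages published between runs are held by the broker and delivered on the next run.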

Listening to new ActiveMQ topic as they created in runtime

I am working on an application (a Spring Boot application) which is supposed to listen to topics created at runtime by some other application.
How can I know that a new topic has been created on the ActiveMQ broker at runtime?
And how can I start listening to the newly created topic at runtime?
Please note that I want to listen to topics created at runtime, not at launch time (application startup, when the Spring application context is built).
I don't know how many topics I may have to listen to when my application starts up. Topics are created at runtime with no specific naming pattern.
Without 'durability' in place, this presents a race condition. If the producer sends a message before the consumer is online, the broker will not hold on to the message for the consumer. 'durability' is the broker holding on to messages for a consumer, even when the consumer is not connected.
However! ActiveMQ has this solved by using wildcard names for consumers. It is recommended that you at least agree on a prefix to provide first-level filtering. Note: you'll also want this if you move to a multi-broker architecture. Something like topic://ORDER.ABCD, topic://ORDER.XYZ, where all the topics are created with the same format, i.e. 'ORDER.$randomStuff'.
Option 1:
Have the consumer register a durable subscription using ActiveMQ's wildcard character ">", i.e. topic://ORDER.>
Option 2:
You can listen for events on the ActiveMQ.Advisory topics for when the destination is created, and then register a consumer. However, this has the race condition where messages could be missed.
Option 1 is probably your best bet, but it can run into scaling problems if your message load gets up into the hundreds of millions of messages per day, or the total number of topics into the thousands.
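Option 1 in plain JMS might look like the following sketch. The broker URL, client ID, and subscription name are assumptions; note the client ID must be set before the connection is started for the durable subscription to work:

```java
import org.apache.activemq.ActiveMQConnectionFactory;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;

public class WildcardSubscriber {
    // ActiveMQ's ">" wildcard matches any suffix, so one durable subscription
    // covers every topic created under the agreed ORDER. prefix.
    static final String WILDCARD = "ORDER.>";

    public static void main(String[] args) throws JMSException {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed broker URL
        Connection conn = cf.createConnection();
        conn.setClientID("order-consumer");     // required before start() for durable subscriptions
        conn.start();

        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic(WILDCARD);
        MessageConsumer consumer = session.createDurableSubscriber(topic, "order-durable-sub");
        consumer.setMessageListener(msg -> System.out.println("received: " + msg));
    }
}
```

Topics created later under the same prefix (e.g. ORDER.NEW123) are matched automatically, with no re-subscription needed.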
Subscribe to the advisory topic ActiveMQ.Advisory.Topic (advisories need to be enabled -- see [1]). A message will be published to this topic every time a topic is created. The message will have as its body an object of class org.apache.activemq.command.DestinationInfo, from which the name of the new topic can be extracted.
The application can subscribe using this name, as it would to any other topic.
[1] http://activemq.apache.org/advisory-message.html
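The advisory-based approach described above can be sketched as follows; it assumes advisories are enabled on the broker and the URL is a placeholder. The ActiveMQ client delivers the advisory as an ActiveMQMessage whose data structure is the DestinationInfo:

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.command.ActiveMQMessage;
import org.apache.activemq.command.DestinationInfo;

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.Session;
import javax.jms.Topic;

public class TopicCreationListener {
    // Broker-managed topic that receives an event whenever a topic is created/deleted.
    static final String ADVISORY = "ActiveMQ.Advisory.Topic";

    public static void main(String[] args) throws JMSException {
        Connection conn = new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        conn.start();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);

        Topic advisory = session.createTopic(ADVISORY);
        session.createConsumer(advisory).setMessageListener(message -> {
            Object body = ((ActiveMQMessage) message).getDataStructure();
            if (body instanceof DestinationInfo) {
                DestinationInfo info = (DestinationInfo) body;
                if (info.isAddOperation()) {   // topic created (as opposed to removed)
                    String newTopic = info.getDestination().getPhysicalName();
                    System.out.println("new topic created: " + newTopic);
                    // subscribe to newTopic here, as you would to any other topic
                }
            }
        });
    }
}
```

Keep in mind the race condition mentioned in the other answer: messages published between topic creation and your subscription going live are lost unless durability or the wildcard approach is also in place.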

Pentaho JMS consumer - multiple producer to a single consumer

I have 2 JMS queues, and my application can publish a message to either of them based on the node at which the request is received. Pentaho should actively watch both queues and be able to process a message as soon as it arrives in either one.
Currently, I have implemented a job that actively listens to one queue, processes the message, and posts a response for it.
How do we configure Pentaho to actively listen to two queues at the same time and perform the same action when a message is posted to either queue?
EDIT: I am not aware of any direct feature available in Pentaho for this kind of intra-service communication.
Will Clustering help this cause?
Finally cracked it
Job1
Start (Run next entries in Parallel) -> Transformation1 (JMS Consumer1)
|-> Transformation2 (JMS Consumer2)
With the task and message prioritisation logic put on the application side.

Consuming multiple messages from RabbitMQ in java with AKKA actors

I am pretty new to RabbitMQ. I want to consume multiple messages from RabbitMQ so that work can be done in parallel, sending an acknowledgement only when an actor has finished its task, so as not to lose messages. How should I proceed? I want to use Spring support for Akka.
Can I use an actor as a consumer, or should it be a plain consumer that consumes multiple messages without acknowledging any of them? Or should I have multiple consumer classes/threads, each listening for a single message at a time and then calling an actor (but that would be as if there were no actors or parallelism via the Akka model)?
I haven't worked with RabbitMQ per se, but I would probably designate one actor as a dispatcher, which would:
Handle the RabbitMQ connection.
Receive messages (it doesn't matter whether one by one or in batches for efficiency).
Distribute work between worker actors, either by creating a new worker for each message or by sending messages to a pre-created pool of workers.
Receive confirmation from a worker once its task is completed and the results are committed, and send the acknowledgement back to RabbitMQ. The acknowledgement token may be included as part of the worker's task, so there is no need to track the mappings inside the dispatcher.
Some additional things to consider are:
Supervision strategy: who supervises the dispatcher? Who supervises the workers? What happens if they crash?
Message re-sends when running in a distributed environment.
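The dispatcher/worker split described above might be sketched with classic Akka actors in Java. All class and message names here are illustrative, and the RabbitMQ ack call is shown only as a comment, since wiring in a real channel is deployment-specific:

```java
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Props;
import akka.routing.RoundRobinPool;

public class DispatcherSketch {
    // Message protocol (names are assumptions). The RabbitMQ delivery tag travels
    // with the task, so the dispatcher never has to track a tag-to-worker mapping.
    static class Work {
        final long deliveryTag; final String payload;
        Work(long tag, String payload) { this.deliveryTag = tag; this.payload = payload; }
    }
    static class Done {
        final long deliveryTag;
        Done(long tag) { this.deliveryTag = tag; }
    }

    // Worker: processes one task, then reports back with the ack token it was given.
    static class Worker extends AbstractActor {
        @Override public Receive createReceive() {
            return receiveBuilder()
                .match(Work.class, w -> {
                    // ... real processing and result commit would go here ...
                    getSender().tell(new Done(w.deliveryTag), getSelf());
                }).build();
        }
    }

    // Dispatcher: hands deliveries to a worker pool, and acks RabbitMQ
    // only after a worker confirms completion.
    static class Dispatcher extends AbstractActor {
        private final ActorRef workers =
            getContext().actorOf(new RoundRobinPool(4).props(Props.create(Worker.class)));

        @Override public Receive createReceive() {
            return receiveBuilder()
                .match(Work.class, w -> workers.tell(w, getSelf()))
                .match(Done.class, d -> {
                    // channel.basicAck(d.deliveryTag, false);  // RabbitMQ ack would go here
                    System.out.println("acked " + d.deliveryTag);
                }).build();
        }
    }
}
```

Because only the dispatcher touches the channel, acknowledgements stay on a single thread, which matters since RabbitMQ channels are not thread-safe.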

is there a java pattern for a process to constantly run to poll or listen for messages off a queue and process them?

Planning on moving a lot of our single-threaded synchronous batch processing jobs to a more distributed architecture with workers. The thought is to have a master process read records off the database and send them off to a queue, then have multiple workers read off the queue to process the records in parallel.
Is there any well-known Java pattern for a simple CLI/batch job that constantly runs to poll/listen for messages on queues? I would like to use that for all the workers. Or is there a better way to do this? Should the listener/worker be deployed in an app container, or can it just be a standalone program?
Thanks.
Edit: also to note, I'm not looking to use JavaEE/JMS, but rather hosted solutions like SQS, a hosted RabbitMQ, or IronMQ.
If you're using a JavaEE application server (and if not, you should), you don't have to program that logic by hand, since the application server does it for you.
You then implement and deploy a message-driven bean that listens to a queue and processes the messages received. The application server will manage a connection pool to listen for queue messages and create a thread with an instance of your message-driven bean, which will receive the message and be able to process it.
The messages will be processed concurrently, since the application server has a connection pool and a thread pool available to listen to the queue.
All JavaEE-featured application servers, like IBM WebSphere or JBoss, have configurations available in their admin consoles to create message queue listeners depending on the message queue implementation, and then bind these listeners to your message-driven bean.
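A message-driven bean of the kind described is mostly annotations; a minimal sketch, where the JNDI queue name is a placeholder and the concurrency comes entirely from the container's pooling:

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destinationLookup",
                              propertyValue = WorkerBean.QUEUE_LOOKUP)
})
public class WorkerBean implements MessageListener {
    // Hypothetical JNDI name; in practice this is configured in the app server console.
    static final String QUEUE_LOOKUP = "jms/WorkQueue";

    @Override
    public void onMessage(Message message) {
        // The container delivers messages concurrently across pooled bean
        // instances, so this method only needs to handle a single message.
    }
}
```

No polling loop is written anywhere; the container owns the connection, the threads, and the redelivery behavior.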
I don't know a lot about this, and I may not really be answering your question, but I tried something a few months ago that might interest you for dealing with message queues.
You can have a look at this: http://www.rabbitmq.com/getstarted.html
It seems the Work Queues tutorial could fit your requirements.
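Independent of any particular broker, the worker pattern itself is just a small blocking poll loop. A self-contained sketch, using a java.util.concurrent.BlockingQueue as a stand-in for the hosted queue client (with SQS, RabbitMQ, or IronMQ you would replace take() with the client's receive/poll call):

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PollingWorkers {
    static final String POISON = "__STOP__"; // sentinel used to shut a worker down

    // N workers each block on the queue in a loop and process whatever arrives.
    // Returns how many real (non-poison) messages were processed.
    static int consumeAll(BlockingQueue<String> queue, int workerCount) throws InterruptedException {
        AtomicInteger processed = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(workerCount);
        for (int i = 0; i < workerCount; i++) {
            pool.submit(() -> {
                try {
                    while (true) {
                        String msg = queue.take();        // blocks until a message arrives
                        if (POISON.equals(msg)) return;   // shutdown signal, one per worker
                        processed.incrementAndGet();      // ... real processing goes here ...
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return processed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue =
            new LinkedBlockingQueue<>(List.of("a", "b", "c", POISON, POISON));
        System.out.println(consumeAll(queue, 2)); // prints 3
    }
}
```

This runs fine as a plain standalone main; an app container adds nothing essential here beyond lifecycle management and monitoring.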

Throttling based on time interval

I have 4 queues in ActiveMQ, and messages from each queue should be sent out to an external service. For picking up the messages from the queues I am using Apache Camel, and I am throttling the messages.
But my problem is that different queues have different social hours. For example:
Queue 1 messages should be sent only between 6 AM and 5 PM,
Queue 2 messages should be sent only between 10 AM and 10 PM, and so on.
So I want to know how we can handle this using Apache Camel throttling, or please suggest another solution.
Let me know if anything about my problem is unclear. Thanks in advance.
Camel allows you to associate routes with route policies, and there is an out-of-the-box policy based on camel-quartz that is schedule-based. This allows you to set up policies for the opening hours of your routes.
The doc starts here: http://camel.apache.org/routepolicy. There are links from that page to the scheduler-based policies.
Mind there is a ticket - http://issues.apache.org/jira/browse/CAMEL-5929 - about restarting the app server: if you restart within the opening hours, the route is not started. E.g. your window is 12pm-6pm and you restart the app at 3pm (in between); then the route is only started on the next day. The ticket is there to allow you to configure a forced start if the app is started within the opening window.
Set up one route per queue/interval.
Use Quartz timers triggered on those hours that should start/stop the routes.
You can let the Quartz routes use the control bus pattern to start/stop the queue routes.
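Putting the pieces together, each queue can get its own route carrying both the throttle and a CronScheduledRoutePolicy from camel-quartz that defines its opening hours. A sketch, where the endpoint URIs, throttle rate, and cron expressions are assumptions to adapt:

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.routepolicy.quartz.CronScheduledRoutePolicy;

public class SocialHoursRoutes extends RouteBuilder {
    // Quartz cron expressions: second minute hour day-of-month month day-of-week
    static final String Q1_START = "0 0 6 * * ?";   // open at 6 AM
    static final String Q1_STOP  = "0 0 17 * * ?";  // close at 5 PM
    static final String Q2_START = "0 0 10 * * ?";  // open at 10 AM
    static final String Q2_STOP  = "0 0 22 * * ?";  // close at 10 PM

    @Override
    public void configure() {
        CronScheduledRoutePolicy q1Hours = new CronScheduledRoutePolicy();
        q1Hours.setRouteStartTime(Q1_START);
        q1Hours.setRouteStopTime(Q1_STOP);

        from("activemq:queue:queue1")
            .routePolicy(q1Hours)
            .throttle(10)                              // at most 10 messages per period (default 1s)
            .to("http://external-service/endpoint");   // hypothetical external service

        CronScheduledRoutePolicy q2Hours = new CronScheduledRoutePolicy();
        q2Hours.setRouteStartTime(Q2_START);
        q2Hours.setRouteStopTime(Q2_STOP);

        from("activemq:queue:queue2")
            .routePolicy(q2Hours)
            .throttle(10)
            .to("http://external-service/endpoint");
    }
}
```

While a route is stopped, messages simply wait on the ActiveMQ queue, so nothing is lost outside the window; the route drains the backlog when it reopens.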
