Multithreaded JMS receiving in Spring

I'm trying to write a multithreaded implementation for JMS message processing from a queue.
I've tried with DefaultMessageListenerContainer and SimpleMessageListenerContainer classes.
The problem I have is that it seems like just a single instance of the MessageListener class ever gets instantiated, no matter how I configure it. This forces me to unnecessarily write stateless or thread-safe MessageListener implementations, since I have the ListenerContainer configured to use multiple threads (concurrentConsumers=8).
Is there an obvious solution to this that I'm overlooking?

This is by design. The MessageListener is a dependency that you inject into Spring - it has no way of instantiating new ones.
This forces me to unnecessarily write stateless or thread-safe messageListener implementations
You make that sound like a bad thing. Making your MessageListener thread-safe is a very good idea; Spring just removes the temptation to do otherwise.

Maybe this answer is too late, but it may benefit others who are searching for it. In short, the answer is to use CommonsPoolTargetSource and ProxyFactoryBean.
Check out this link for details: http://forum.springsource.org/showthread.php?34595-MDB-vs-MDP-concurrency
If you want to do something similar for topic, check this: https://stackoverflow.com/a/12668538/266103
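For reference, the pooled-listener approach from that forum thread looks roughly like this in XML; the bean names, listener class, and pool size here are illustrative, not taken from the thread:

```xml
<!-- Prototype listener: the pool can create as many instances as it needs. -->
<bean id="listenerTarget" class="com.example.MyMessageListener" scope="prototype"/>

<!-- Pool of listener instances, one handed out per borrowing thread. -->
<bean id="poolTargetSource"
      class="org.springframework.aop.target.CommonsPoolTargetSource">
    <property name="targetBeanName" value="listenerTarget"/>
    <property name="maxSize" value="8"/>
</bean>

<!-- Proxy that delegates each call to an instance borrowed from the pool;
     wire this proxy into the listener container as the messageListener. -->
<bean id="pooledListener" class="org.springframework.aop.framework.ProxyFactoryBean">
    <property name="targetSource" ref="poolTargetSource"/>
</bean>
```

With this in place, each consumer thread effectively works against its own listener instance, so the listener class no longer has to be stateless.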

Configuring "concurrentConsumers" is enough to process messages concurrently. This doesn't mean you will have "n" instances of MessageListenerContainer; the MessageListenerContainer may spawn "tasks" internally to process the messages. Optionally, you may have to configure your logging accordingly to see the information associated with the underlying tasks/threads.
See "Tuning JMS message consumption in Spring" for more details.
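For illustration, a typical DefaultMessageListenerContainer configuration with concurrent consumers might look like this (bean ids and queue name are placeholders). Note that it is still one container and one listener instance, shared by up to eight consumer threads:

```xml
<bean id="listenerContainer"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="connectionFactory"/>
    <property name="destinationName" value="my.queue"/>
    <!-- One listener bean, invoked concurrently from up to 8 threads. -->
    <property name="messageListener" ref="myListener"/>
    <property name="concurrentConsumers" value="8"/>
    <property name="maxConcurrentConsumers" value="8"/>
</bean>
```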

Related

elegant way of halting consumption of rabbitmq in java spring boot

What is the elegant way of halting consumption of messages when an exception happens in the consumer or listener, so the messages can be re-queued? The listener process consumes messages from the queue and calls a different API. If the API is not available, we don't want to consume messages from the queue. Is there any way to stop consuming messages from the queue for a finite time and come back up again when the API is available?
Any sample code snippet of how it can be done would also help.
When asking questions like this, it's generally best to show a snippet of your configuration so we can answer appropriately, depending on how you are using the framework.
You can simply call stop() (and start()) on the listener container bean.
If you are using @RabbitListener, the containers are not beans, but they are available via the RabbitListenerEndpointRegistry bean.
Calling stop() on the registry stops all the containers.
Or you can call registry.getListenerContainer(id).stop(), where id is the value of the @RabbitListener's id property.
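The Spring calls themselves are just registry.getListenerContainer(id).stop() and start(); the pause/resume pattern around them can be sketched in plain Java. The registry and container classes below are simplified stand-ins for the spring-rabbit types, not the real API:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for a listener container's lifecycle (stop/start) contract.
interface ListenerContainer {
    void start();
    void stop();
    boolean isRunning();
}

class SimpleContainer implements ListenerContainer {
    private volatile boolean running = true;
    public void start() { running = true; }
    public void stop()  { running = false; }
    public boolean isRunning() { return running; }
}

// Models the registry's id -> container lookup; stopAll mirrors
// "calling stop() on the registry stops all the containers".
class ContainerRegistry {
    private final Map<String, ListenerContainer> containers = new HashMap<>();
    void register(String id, ListenerContainer c) { containers.put(id, c); }
    ListenerContainer getListenerContainer(String id) { return containers.get(id); }
    void stopAll() { containers.values().forEach(ListenerContainer::stop); }
}

public class PauseConsumerDemo {
    public static void main(String[] args) {
        ContainerRegistry registry = new ContainerRegistry();
        registry.register("orders", new SimpleContainer());

        // Downstream API unavailable: pause this consumer; messages stay queued.
        registry.getListenerContainer("orders").stop();
        System.out.println(registry.getListenerContainer("orders").isRunning());

        // API is back: resume consumption.
        registry.getListenerContainer("orders").start();
        System.out.println(registry.getListenerContainer("orders").isRunning());
    }
}
```

In real spring-rabbit code, the "API available again" check would typically run on a schedule or be triggered by a health probe before calling start().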

Circular dependency with constructor injection

Say I have the following components:
Producer produces numbers and sends messages to Consumer
Both Producer and Consumer send messages to Monitor
Monitor, say randomly, decides when the produce / consume process should stop and sends a message to Stopper
Stopper then stops both Producer and Consumer cleanly
I know this is easy to accomplish in a mutable language such as Java. I know also this can be resolved by allowing partial mutability with interfaces, such as described here.
However, it's not a good practice to have cyclic dependencies even if possible. So, let's assume all references are constructor-injected and final:
Producer has final Consumer and final Monitor
Consumer has final Monitor
Monitor has final Stopper
Stopper has final Producer and final Consumer
I found references such as this, but they don't seem to apply.
How would one go about un-cycling this case and cases such as this in general? In other words, I'm mostly interested in how to accomplish not forming the cycles from a design standpoint. Any hints?
You're right, this won't work if all dependencies are final and injected via the constructor.
But may I ask, why do they have to be injected via the constructor? At the end of the day, there is nothing wrong with using setters to wire up beans.
In fact, in Spring, beans are usually instantiated first and injected afterwards. So you could look at that approach.
Other than that, you could look at a different way to model your problem (that does not have circular dependencies).
For example, since you are using queues already to send messages between the producer and consumer, why not also send messages on queues to the monitor? The stopper could also send messages to the producer and consumer.
Or, as Taylor suggests, an ESB.
There are probably many other ways to design it, have a read about (for example) Apache Camel Enterprise Integration Patterns for some ideas.
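A minimal plain-Java sketch of that setter-based wiring (class bodies here are hypothetical): the cycle Producer → Monitor → Stopper → Producer is closed via setters after construction, so every other reference can stay final:

```java
// Stopper's targets cannot be final: they are wired after construction,
// which is exactly what breaks the constructor-injection cycle.
class Stopper {
    private Producer producer;
    private Consumer consumer;
    void setProducer(Producer p) { this.producer = p; }
    void setConsumer(Consumer c) { this.consumer = c; }
    void stopAll() { producer.stop(); consumer.stop(); }
}

class Monitor {
    private final Stopper stopper;
    Monitor(Stopper stopper) { this.stopper = stopper; }
    void triggerStop() { stopper.stopAll(); }
}

class Consumer {
    private final Monitor monitor;
    private boolean running = true;
    Consumer(Monitor monitor) { this.monitor = monitor; }
    void stop() { running = false; }
    boolean isRunning() { return running; }
}

class Producer {
    private final Consumer consumer;
    private final Monitor monitor;
    private boolean running = true;
    Producer(Consumer consumer, Monitor monitor) {
        this.consumer = consumer;
        this.monitor = monitor;
    }
    void stop() { running = false; }
    boolean isRunning() { return running; }
}

public class WiringDemo {
    public static void main(String[] args) {
        Stopper stopper = new Stopper();           // built first, targets unset
        Monitor monitor = new Monitor(stopper);
        Consumer consumer = new Consumer(monitor);
        Producer producer = new Producer(consumer, monitor);
        stopper.setProducer(producer);             // close the loop via setters
        stopper.setConsumer(consumer);

        monitor.triggerStop();
        System.out.println(producer.isRunning() + " " + consumer.isRunning());
    }
}
```

This is essentially what Spring does when it instantiates beans first and injects afterwards; the queue-based redesign mentioned above avoids the cycle altogether.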

javax.jms.MessageListener: What additional threading concerns to take care of?

In this nice article on JMS, Bruce Snyder (Author of "ActiveMQ in Action") mentions:
[one] of the options for implementing a message listener to be used with the Spring DMLC is using javax.jms.MessageListener - It is a standardized interface from the JMS spec but handling threading is up to you.
He doesn't talk about threading in the other two options: Spring's SessionAwareMessageListener and MessageListenerAdapter.
My question is: What additional threading concerns are to be addressed with the use of the plain javax.jms.MessageListener, compared to the other two approaches ?
I am thinking that regardless of what option I choose from the above 3, if my listener will be receiving messages on multiple threads, my listener implementation has to be thread-safe.
I went through the examples Bruce created on GitHub for all three options.
I didn't see any specific handling for threads in any case.
The XMLs of the simple and session-aware consumers are almost the same.
As long as you're not keeping any state in your MessageListener implementations (through, say, an instance variable), you don't have to worry about thread-safety with any of the three approaches. If you are keeping state, then, as in any multi-threading scenario, you will have to take care of how you synchronize access to that state.
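To make that concrete, here is a plain-Java sketch of the difference (JMS types elided, listener classes hypothetical); only the second class is a problem when the container uses multiple consumer threads:

```java
// Safe: every piece of per-message data lives on the calling thread's stack.
class StatelessListener {
    public String onMessage(String text) {
        String normalized = text.trim().toLowerCase(); // local variable only
        return normalized;
    }
}

// Unsafe with concurrentConsumers > 1: the field is shared across threads.
class StatefulListener {
    private String lastText; // data race: threads overwrite each other's value
    public String onMessage(String text) {
        lastText = text;               // thread A's write may be seen by thread B
        return lastText.toUpperCase(); // may not be the message this thread got
    }
}
```

The stateful version would need synchronization (or per-thread instances, as in the pooled-listener approach) to be correct.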

Implementing Spring Integration InboundChannelAdapter for Kafka

I am trying to implement a custom inbound channel adapter in Spring Integration to consume messages from Apache Kafka. Based on the Spring Integration examples, I found that I need to create a class that implements the MessageSource interface and implement a receive() method that returns the consumed Message from Kafka. But based on the consumer example in Kafka, the message iterator in KafkaStream is backed by a BlockingQueue. So if no messages are in the queue, the thread will be blocked.
So what is the best way to implement a receive() method when it can potentially block until there is something to consume?
More generally, how do we implement a custom inbound channel for streaming message sources that block until there is something ready to consume?
The receive() method can block (as long as the underlying operation responds properly to an interrupted thread), and from an inbound-channel-adapter perspective, depending on the expectations of the underlying source, it might be preferable to use a fixed-delay trigger. For example, "long polling" can simulate event-driven behavior when a very small delay value is provided.
We have a similar situation in our JMS polling MessageSource implementation. There, the underlying behavior is handled by one of the JmsTemplate's receive() methods. The JmsTemplate itself allows configuration of a timeout value. That means, as an example, you may choose to block for 5 seconds max but then have a very short delay trigger between each blocking receive call. Alternatively, you can specify an indefinite receive timeout. The decision ultimately depends on the expectations of the underlying resource, message throughput, etc.
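A self-contained sketch of that idea, using a BlockingQueue in place of the broker and a poll timeout in place of JmsTemplate's receive timeout (class and method names are illustrative):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// A polled message source whose receive() blocks with a bounded timeout.
// The queue stands in for the broker connection.
class BlockingMessageSource {
    private final BlockingQueue<String> backing = new LinkedBlockingQueue<>();
    private final long timeoutMillis;

    BlockingMessageSource(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    void deliver(String msg) { backing.add(msg); }

    // Returns null on timeout, like a receive(timeout) that found nothing;
    // the poller's (short-delay) trigger then fires the next cycle.
    String receive() {
        try {
            return backing.poll(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // respond properly to interruption
            return null;
        }
    }
}
```

With a long timeout and a near-zero trigger delay, this behaves like the "long polling" simulation of event-driven consumption described above.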
Also, I wanted to let you know that we are exploring Kafka adapters ourselves. Perhaps you would like to collaborate on this within the spring-integration-extensions repository?
Regards,
Mark

With default bean scope as singleton, isn't it going to be bad when concurrent calls occur?

I have declared a Spring bean, which polls my email server every so and so seconds. If there is mail, it fetches it, and tries to extract any attached files in it. These files are then submitted to an Uploader which stores them safely. The uploader is also declared as a Spring bean. A third bean associates the email's sender with the file's filename and stores that in a DB.
It turned out that when a few people tried to send emails at the same time, a bunch of messy stuff happened. Records in the DB got wrong filenames. Some did not get filenames at all, etc.
I attributed the problem to the fact that beans are scoped to singleton by default. This means that a bunch of threads are probably messing up with one and the same instance at the same time. The question is how to solve this.
If I synchronize all the sensitive methods, then all threads will stack up and wait for each other, which is kind of against the whole idea of multithreading.
On the other hand, scoping the beans to "request" is going to create new instances of each of them, which is not really good either if we speak about memory consumption and thread scheduling.
I am confused. What should I do?
Singleton-scoped beans should not hold any state - that solves the problem usually. If you only pass data as method parameters, and don't assign it to fields, you will be safe.
I agree with both @Bozho's and @stivio's answers.
The preferred options are either to store no state in a singleton-scoped bean and pass a context object into its methods, or to use prototype/request-scoped beans that get created for every processing cycle. Synchronization can usually be avoided by choosing one of these approaches, and you gain much more performance while avoiding deadlocks. Just make sure you're not modifying any shared state, like static members.
There are pros and cons for each approach:
Singleton beans act as service-like classes, which some would say is not good object-oriented design.
Passing a context object through a long chain of methods may make your code messy if you're not careful.
Prototype beans may hold a lot of memory for longer than you intended and can cause memory exhaustion; you need to be careful with the life cycle of these beans.
Prototype beans may make your design neater. Make sure they aren't reused by multiple threads, though.
I tend to go with the service approach in most simple cases. You can also let these singleton beans create a processing object that holds its state for the computation; that solution may serve you best in the more complex scenarios.
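A small sketch of that last pattern, with illustrative names: the singleton holds no mutable fields and creates a fresh, stateful worker per computation, so sharing the singleton across threads stays safe:

```java
// Shared singleton service: no mutable fields, safe under concurrency.
class AttachmentService {
    public String process(String sender, String filename) {
        return new Processing(sender, filename).run(); // fresh state per call
    }

    // Holds intermediate state for exactly one computation; never shared.
    private static final class Processing {
        private final String sender;
        private final String filename;
        private final StringBuilder log = new StringBuilder();

        Processing(String sender, String filename) {
            this.sender = sender;
            this.filename = filename;
        }

        String run() {
            log.append(sender).append("->").append(filename);
            return log.toString();
        }
    }
}
```

Applied to the original question, the sender/filename association would be computed inside such a per-email worker instead of in fields on the polling or uploader beans.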
Edit:
There are cases when you have a singleton bean depending on prototype scoped bean, and you want a new instance of the prototype bean for each method invocation. Spring supplies several solutions for that:
The first is using Method Injection, as described in the Spring reference documentation. I don't really like this approach, as it forces your class to be abstract.
The second is to use a ServiceLocatorFactoryBean, or your own factory class (which needs to be injected with the dependencies, and invoke a constructor). This approach works really well in most cases, and does not couple you to Spring.
There are cases when you also want the prototype beans to have runtime dependencies. A good friend of mine wrote a good post about this here: http://techo-ecco.com/blog/spring-prototype-scoped-beans-and-dependency-injection/.
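A plain-Java sketch of the factory option, using a Supplier where Spring would inject an ObjectFactory or a ServiceLocatorFactoryBean-backed interface (class names are illustrative):

```java
import java.util.function.Supplier;

// The "prototype": stateful and short-lived, one per invocation.
class ReportGenerator {
    private int lines;
    void addLine() { lines++; }
    int lineCount() { return lines; }
}

// The singleton: holds only the factory, never the stateful instance itself.
class ReportService {
    private final Supplier<ReportGenerator> factory;

    ReportService(Supplier<ReportGenerator> factory) {
        this.factory = factory;
    }

    int generate(int lines) {
        ReportGenerator g = factory.get(); // new instance for every call
        for (int i = 0; i < lines; i++) {
            g.addLine();
        }
        return g.lineCount();
    }
}
```

Because the singleton never stores the generator in a field, concurrent calls each work on their own instance, and the code stays decoupled from Spring.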
Otherwise, just declare your beans as request-scoped and don't worry about the memory consumption; the garbage collector will clean them up, and as long as there is enough memory it won't be a performance problem either.
Speaking abstractly: if you're using Spring Integration, then you should build your code in terms of the messages themselves, e.g., all important state should be propagated with the messages. This makes it trivial to scale out by adding more Spring Integration instances to handle the load. The only state (really) in Spring Integration is for components like the aggregator, which waits for and collects messages that share a correlation. In that case, you can delegate to a backing store like MongoDB to handle the storage of these messages, and that is of course thread-safe.
More generally, this is an example of a staged event-driven architecture: components must statelessly (O(1) state, no matter how many messages) handle messages and then forward them on a channel for consumption by another component that does not know about the previous component from which the message came.
If you are encountering thread-safety issues using Spring Integration, you might be doing something a bit differently than intended and it might be worth revisiting your approach...
Singletons should be stateful and thread-safe.
If a singleton is stateless, it's a degenerate case of being stateful, and thread-safety is trivially true. But then what's the point of being a singleton? Just create a new one every time someone requests it.
If an instance is stateful and not thread-safe, then it must not be singleton; each thread should exclusively have a different instance.
