Can I make concurrent calls using Spring JMSTemplate?
I want to make 4 external service calls in parallel and am exploring using Spring's JMSTemplate to perform these calls in parallel and wait for the execution to complete.
The other option that I am looking at is to use ExecutorService.
Is there any advantage using one over the other?
JmsTemplate is thread-safe, so making parallel calls to it is not a problem.
Messaging services are usually fast enough for most tasks and can receive your messages with minimal latency, so adding an ExecutorService isn't usually the first thing you need. What you really need is to configure your JMS connection pool correctly and give it enough open connections (four in your case) so it can handle your parallel requests without blocking.
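For example (a minimal sketch, assuming ActiveMQ as the provider; the broker URL, configuration class, and cache size of four are illustrative, not part of the original answer), you could size the session cache of Spring's CachingConnectionFactory so that four parallel sends don't block each other:
import javax.jms.ConnectionFactory;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.jms.core.JmsTemplate;

@Configuration
public class JmsConfig {

    // Hypothetical broker URL; ActiveMQ is only an example provider
    @Bean
    public ConnectionFactory connectionFactory() {
        CachingConnectionFactory cachingFactory =
                new CachingConnectionFactory(new ActiveMQConnectionFactory("tcp://localhost:61616"));
        cachingFactory.setSessionCacheSize(4); // one cached session per parallel call
        return cachingFactory;
    }

    @Bean
    public JmsTemplate jmsTemplate(ConnectionFactory connectionFactory) {
        return new JmsTemplate(connectionFactory);
    }
}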
You only need an ExecutorService if you don't care about guaranteed delivery and your program needs more throughput than your messaging service can provide, which is highly unlikely.
As for receiving replies from your external service, you need to use the JMS request/reply pattern (you can find examples in this article). Fortunately, since you're using Spring, you can let Spring Integration do a lot of the work for you. You need to configure an outbound-gateway to send messages and an inbound-gateway to receive responses. Since version 2.2 you can also use a reply-listener to simplify things on the client side. All these components are covered in the official documentation (with examples as well).
So you need to talk to two or more JMS queues (send and/or receive) in parallel using asynchronous methods. The best option is to use @Async at the method level.
The linked example uses RestTemplate, but in your case you would create JmsTemplate beans instead.
Prerequisites: create the appropriate JMS beans to connect to the queues. Used properly, this lets you invoke two queues in parallel. It works for sure, because I have already implemented it; I am only giving a skeleton here due to copyright issues.
More details: Spring Boot + Spring Async
https://spring.io/guides/gs/async-method/
Step 1: Create the class that kicks off the asynchronous JMS calls.
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.annotation.EnableAsync;

@EnableAsync
public class JMSApplication {

    @Autowired
    JmsService jmsService;

    public void invokeMe() throws InterruptedException, ExecutionException {
        // Start the clock
        long start = System.currentTimeMillis();
        // Kick off multiple asynchronous lookups
        Future<Object> queue1 = jmsService.findqueue1();
        Future<Object> queue2 = jmsService.findqueue2();
        // Wait until they are all done
        while (!(queue1.isDone() && queue2.isDone())) {
            Thread.sleep(10); // 10-millisecond pause between each check
        }
        // Print results, including elapsed time
        System.out.println("Elapsed time: " + (System.currentTimeMillis() - start));
        System.out.println(queue1.get());
        System.out.println(queue2.get());
    }
}
Step 2: Write the service class that contains the JMS business logic.
import java.util.concurrent.Future;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.AsyncResult;
import org.springframework.stereotype.Service;

@Service
public class JmsService {

    @Async
    public Future<Object> findqueue1() {
        Object result = null; // placeholder: invoke the first JMS queue here (e.g. via a JmsTemplate)
        return new AsyncResult<>(result);
    }

    @Async
    public Future<Object> findqueue2() {
        Object result = null; // placeholder: invoke the second JMS queue here
        return new AsyncResult<>(result);
    }
}
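Note that @Async only takes effect when asynchronous processing is enabled and the service is a Spring-managed bean. A minimal sketch of the supporting configuration (the configuration class, executor bean, and pool size of four are illustrative assumptions):
import java.util.concurrent.Executor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
@EnableAsync
public class AsyncConfig {

    // Four threads to match the four parallel external calls in the question
    @Bean
    public Executor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(4);
        executor.setThreadNamePrefix("jms-call-");
        executor.initialize();
        return executor;
    }
}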
Related
What is the elegant way of halting consumption of messages when an exception happens in the consumer or listener, so the messages can be re-queued? The listener process is consuming messages from the queue and calling a different API. Now if the API is not available, we don't want to consume messages from the queue. Is there any way to stop consuming messages from the queue for a finite time and come back up again when the API is available?
Any sample code snippet of how this can be done would also help.
When asking questions like this, it's generally best to show a snippet of your configuration so we can answer appropriately, depending on how you are using the framework.
You can simply call stop() (and start()) on the listener container bean.
If you are using @RabbitListener, the containers are not beans, but are available via the RabbitListenerEndpointRegistry bean.
Calling stop() on the registry stops all the containers.
Or, you can call registry.getListenerContainer(id).stop(), where id is the value of the @RabbitListener's id property.
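A minimal sketch of such a pause/resume component (the listener id "orderListener" and the class name are hypothetical; use whatever id you set on your @RabbitListener):
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class ConsumerPauser {

    @Autowired
    private RabbitListenerEndpointRegistry registry;

    // Stop the container behind @RabbitListener(id = "orderListener") while the API is down
    public void pause() {
        registry.getListenerContainer("orderListener").stop();
    }

    // Start it again, e.g. from a scheduled health check, once the API is back
    public void resume() {
        registry.getListenerContainer("orderListener").start();
    }
}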
I come from a Perl background and am writing my first Java MVC web application using Spring.
My webapp allows users to submit orders which the app processes synchronously by calling a third-party SOAP service. The next phase of the project is to allow users to submit bulk orders (e.g. a CSV containing 500 rows) and process them asynchronously. Here is a snippet of my existing controller:
@Controller
@Service
@RequestMapping(value = "/orders")
public class OrderController {

    @Autowired
    OrderService orderService;

    @RequestMapping(value = "/new", method = RequestMethod.POST)
    public String processNewOrder(@ModelAttribute("order") Order order, Map<String, Object> map) {
        OrderStatus orderStatus = orderService.processNewOrder(order);
        map.put("orderStatus", orderStatus);
        return "new";
    }
}
I plan to create a new #RequestMapping to deal with the incoming CSV and modify the OrderService to be able to break the CSV apart and persist the individual orders to the database.
My question is: what is the best approach to creating background workers in a Spring MVC app? Ideally I would have 5 threads processing these orders, most likely from a queue. I have read about @Async and about submitting a Runnable to a SimpleAsyncTaskExecutor bean, and am not sure which way to go. Some examples would really help me.
I think Spring Batch is overkill and not really what you are looking for. It's more for batch processing like writing all the orders to a file then processing all at once, whereas this seems to be more like asynchronous processing where you just want to have a 'queue' of work and process it that way.
If this is indeed the case, I would look into using a pub/sub model using JMS. There are several JMS providers, for instance Apache ActiveMQ or RabbitMQ. In essence your OrderService would break the CSV into units of work, push them into a JMS queue, and you would have multiple consumers set up to read from the queue and perform the work (a producer-side sketch appears after the pros and cons below). There are lots of ways to configure this, but I would simply make a class to hold your worker threads and make the number of threads configurable. The other added benefits here are:
You can externalize the Consumer code, and even make it run on totally different hardware if you like.
MQ is a pretty well-known technology, and there are a LOT of commercial offerings. This means you could easily write your order processing system in C# using MQ to move the messages over, or even use Spring Batch if you like. Heck, there is even MQ Series for the mainframe, so you could have your order processing occur on a mainframe in COBOL if it suited your fancy.
It's stupidly simple to add more consumers or producers. They simply subscribe to the queue and away they go!
Depending on the product used, the Queue maintains state so messages are not "lost". If all the consumers go offline, the Queue will simply backup and store the messages until the consumers come back.
The queues are also usually more robust. The producer can go down and the consumers won't even flinch. The consumers can go down and the producer doesn't even need to know.
There are some downsides, though. You now have an additional point of failure. You will probably want to monitor the queue depths, and will need to provision enough space to store the messages when you are caching messages. Also, if timing of the processing could be an issue, you may need to monitor how quick things are getting processed in the queue to make sure it's not backing up too much or breaking any SLA that might be in place.
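To make the producer side above concrete, here is a hedged sketch (the queue name "order.queue", the BulkOrderService class, and the use of JmsTemplate are illustrative assumptions, not part of the original answer); it assumes Order is serializable or a suitable MessageConverter is configured:
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Service;

@Service
public class BulkOrderService {

    @Autowired
    private JmsTemplate jmsTemplate;

    // Break the parsed CSV into individual orders and queue each one as a unit of work
    public void submitBulkOrder(List<Order> orders) {
        for (Order order : orders) {
            jmsTemplate.convertAndSend("order.queue", order); // hypothetical destination name
        }
    }
}
The consumer threads described in the example below would then read from the same queue and process one order at a time.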
Edit: Adding example...
If I had a threaded class, for example this:
public class MyWorkerThread implements Runnable {

    // volatile so the flag change made by the manager thread is visible to this worker
    private volatile boolean run = true;

    public void run() {
        while (run) {
            // Do work here, e.g. pull the next unit of work from the queue and process it...
        }
        // Do any thread cooldown procedures here, like stop listening to the Queue.
    }

    public void setRunning(boolean runState) {
        run = runState;
    }
}
Then I would start the threads using a class like this:
#Service("MyThreadManagerService")
public class MyThreadManagerServiceImpl implements MyThreadManagerService {
private Thread[] workers;
private int workerPoolSize = 5;
/**
* This gets ran after any constructors and setters, but before anything else
*/
#PostConstruct
private void init() {
workers = new Thread[workerPoolSize];
for (int i=0; i < workerPoolSize; i++) {
workers[i] = new Thread(new MyWorkerThread()); // however you build your worker threads
workers[i].start();
}
}
/**
* This gets ran just before the class is destroyed. You could use this to
* shut down the threads
*/
#PreDestroy
public void dismantle() {
// Tell each worker to stop
for (Thread worker : workers) {
worker.setRunning(false);
}
// Now join with each thread to make sure we give them time to stop gracefully
for (Thread worker : workers) {
worker.join(); // May want to use the one that allows a millis for a timeout
}
}
/**
* Sets the size of the worker pool.
*/
public void setWorkerPoolSize(int newSize) {
workerPoolSize = newSize;
}
}
Now you have a nice service class you can add methods to in order to monitor, restart, stop, etc., all your worker threads. I made it a @Service because it felt more right than a simple @Component, but technically it can be anything as long as Spring knows to pick it up when autowiring. The init() method on the service class is what starts up the threads, and dismantle() is used to stop them gracefully and wait for them to finish. They use the @PostConstruct and @PreDestroy annotations, so you can name them whatever you want. You would probably have a constructor on your MyWorkerThread to set up the queues and such. Also, as a disclaimer, this was all written from memory, so there may be some mild compiling issues or method names may be slightly off.
There may be classes already available to do this sort of thing, but I have never seen one myself. If someone knows of a better way using off-the-shelf parts, I would love to get better educated.
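As one possible off-the-shelf alternative (a sketch only, not part of the original answer; the bean name and pool size are assumptions), Spring's ThreadPoolTaskExecutor can manage the pool instead of hand-rolled Thread arrays:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
public class WorkerPoolConfig {

    // Pool of 5 mirrors the workerPoolSize used above; adjust to taste
    @Bean
    public ThreadPoolTaskExecutor orderWorkerExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);
        executor.setMaxPoolSize(5);
        executor.setThreadNamePrefix("order-worker-");
        executor.setWaitForTasksToCompleteOnShutdown(true);
        return executor;
    }
}
You would then submit each Runnable worker to orderWorkerExecutor rather than starting Threads by hand, and Spring shuts the pool down with the application context.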
If your order volume can grow in the future and you want a scalable solution, I would suggest you go with the Spring Batch framework. I find it very easy to integrate with Spring MVC, and with minimal configuration you can achieve a very robust parallel-processing architecture. Use Spring Batch partitioning. Hope this helps! Feel free to ask if you need help with the Spring MVC integration.
I have a problem of efficiency in my project which uses Camel with the Esper component.
I have several external datasources feeding information to camel endpoints. Each Camel endpoint that receives data transfers it to a route that processes it and then delivers it at an Esper endpoint.
The image below illustrates this behavior:
The efficiency problem is that all of this is done by a single Java thread. Thus if I have many sources, there is a huge bottleneck.
The following code accurately illustrates what is going on with the image:
public final void configure() throws OperationNotSupportedException{
RouteDefinition route = from("xmpp://localhost:5222/?blablabla...");
// apply some filter
FilterDefinition filterDefinition = route.filter().method(...);
// apply main processor
ExpressionNode expressionNode = filterDefinition.process(...);
// set destination
expressionNode = filterDefinition.to("esper://session_X");
}
To fix this problem, I need to handle the situation with a pool of threads or some sort of parallel processing. I cannot use patterns like multicast, recipient list, etc., because all of those send the same message to multiple endpoints/clients, which is not the case in my example.
A possible solution would be having one thread per "Datasource endpoint -> Route -> Esper endpoint" combination, like the image below:
Another possible solution is to have 1 thread receive everything from the datasources, and then dispatch it to multiple threads handling the route processing together with the other endpoint:
PS: I am open to any other possible suggestions you may have.
To achieve one of these I have considered using the Camel SEDA component; however, it does not seem to allow me to have dynamic thread pools, because the concurrentConsumers property is static. Furthermore, I am not sure I can use a SEDA endpoint at all, because I believe (although I am not completely sure) that the syntax for an endpoint like .to("seda:esper://session_X?concurrentConsumers=10") is invalid in Camel.
So, at this point I am quite lost and I don't know what to do:
- Is SEDA the solution I am looking for?
- If yes, how do I integrate it with the Esper endpoint given the syntax problem?
- Are there any other solutions / Camel components that could fix my problem?
You must define a separate seda route that distributes your messages to the Esper engine, such as (using the fluent style):
public final void configure() throws OperationNotSupportedException{
from("xmpp://localhost:5222/?blablabla...")
.filter().method(...)
.process(...)
.to("seda:sub");
from("seda:sub?concurrentConsumers=10)
.to("esper://session_X");
}
That said, seda should only be used if losing messages is not a problem. Otherwise you should use a more robust transport such as jms, which allows messages to be persisted.
EDIT:
Besides seda, you could use threads(), where you can customize the threading behaviour by defining an ExecutorService:
public final void configure() throws OperationNotSupportedException{
from("xmpp://localhost:5222/?blablabla...")
.filter().method(...)
.process(...)
.threads()
.executorService(Executors.newFixedThreadPool(2))
.to("esper://session_X");
}
If you use seda or threads(), you may lose transaction safety in case of failures. In that case, or if you need to balance the workload across several remote hosts, you may use jms. More information about this solution is found here.
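A hedged sketch of that jms variant (the queue name "esperWork" is a hypothetical choice, and myFilterBean and myProcessor stand in for the filter and processor already used in the route above):
public final void configure() throws OperationNotSupportedException {
    from("xmpp://localhost:5222/?blablabla...")
        .filter().method(myFilterBean)      // same filter as in the seda example
        .process(myProcessor)               // same main processor
        .to("jms:queue:esperWork");         // persistent hand-off instead of seda

    from("jms:queue:esperWork?concurrentConsumers=10")
        .to("esper://session_X");
}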
If I have a REST service, then I know for sure that each request is handled by a separate thread and the threads can run in parallel.
What happens if I have a REST (HTTP) service as an inbound channel in Spring Integration? Will each request still be handled in parallel, or will the requests be placed in queues... making it more like single-threaded?
Normal channels (DirectChannel) use the same execution thread as the object that put something into the channel (they are basically a way of abstracting a method call), so they are multi-threaded.
From the docs:
In addition to being the simplest point-to-point channel option, one of its most important features is that it enables a single thread to perform the operations on "both sides" of the channel. For example, if a handler is subscribed to a DirectChannel, then sending a Message to that channel will trigger invocation of that handler's handleMessage(Message) method directly in the sender's thread, before the send() method invocation can return.
Edit
You have a very good point in your question. When you set a queue element on a channel, Spring automatically converts it to a QueueChannel (documentation), and as far as I can remember only one thread will be able to consume from the queue at a time. If you want "real" queue semantics (several producer and consumer threads) you can use an ExecutorChannel.
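A minimal Java-config sketch of an ExecutorChannel (the channel name, configuration class, and pool size of four are illustrative assumptions):
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.channel.ExecutorChannel;

@Configuration
public class ChannelConfig {

    // Each message sent to the channel is dispatched onto one of these threads
    @Bean
    public Executor channelExecutor() {
        return Executors.newFixedThreadPool(4);
    }

    @Bean
    public ExecutorChannel requestChannel(Executor channelExecutor) {
        return new ExecutorChannel(channelExecutor);
    }
}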
When using Rest (http), the threading is managed by the servlet container; containers support multiple concurrent requests but setting the concurrency is done in the container, not Spring Integration.
With the default Direct channels, the container threads will invoke the Spring Integration flow concurrently.
I am trying to implement a custom inbound channel adapter in Spring Integration to consume messages from Apache Kafka. Based on the Spring Integration examples, I found that I need to create a class that implements the MessageSource interface and implement a receive() method that returns the consumed Message from Kafka. But based on the consumer example in Kafka, the message iterator in KafkaStream is backed by a BlockingQueue. So if there are no messages in the queue, the thread will be blocked.
So what is the best way to implement a receive() method when it can potentially block until there is something to consume?
More generally, how do we implement a custom inbound channel adapter for streaming message sources that block until there is something ready to consume?
The receive() method can block (as long as the underlying operation responds properly to an interrupted thread), and from an inbound-channel-adapter perspective, depending on the expectations of the underlying source, it might be preferable to use a fixed-delay trigger. For example, "long polling" can simulate event-driven behavior when a very small delay value is provided.
We have a similar situation in our JMS polling MessageSource implementation. There, the underlying behavior is handled by one of the JmsTemplate's receive() methods. The JmsTemplate itself allows configuration of a timeout value. That means, as an example, you may choose to block for 5-seconds max but then have a very short delay trigger between each blocking receive call. Alternatively, you can specify an indefinite receive timeout. The decision ultimately depends on the expectations of the underlying resource, message throughput, etc.
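By way of illustration only (this is not the Kafka adapter itself, just a hedged sketch of the pattern; the class name and the BlockingQueue stand-in are assumptions, and package names assume a reasonably recent Spring Integration version), a custom MessageSource can bound its blocking with a timeout so the poller thread is never stuck indefinitely:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import org.springframework.integration.core.MessageSource;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.Message;

public class BlockingQueueMessageSource implements MessageSource<String> {

    // Stands in for whatever blocking structure backs the consumer (e.g. the stream's queue)
    private final BlockingQueue<String> backingQueue;

    public BlockingQueueMessageSource(BlockingQueue<String> backingQueue) {
        this.backingQueue = backingQueue;
    }

    @Override
    public Message<String> receive() {
        try {
            // Block for at most 5 seconds (an arbitrary choice), then return null so the poller re-invokes us
            String payload = backingQueue.poll(5, TimeUnit.SECONDS);
            return (payload == null) ? null : MessageBuilder.withPayload(payload).build();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }
}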
Also, I wanted to let you know that we are exploring Kafka adapters ourselves. Perhaps you would like to collaborate on this within the spring-integration-extensions repository?
Regards,
Mark