I have a Spring Boot / Spring Integration application that uses @Poller in Spring Integration and also @Scheduled on a method in a mostly-unrelated class. The @Poller polls an FTP server for new files. However, I've found that the @Poller somehow seems to interfere with my @Scheduled method.
The @Poller has maxMessagesPerPoll = -1 so that it processes as many files as it can get. When I first start my application there are over 100 files on the FTP server, so it processes them all. What I have found is that, while these files are being processed, the @Scheduled method stops triggering at all.
For example, if I set my @Scheduled method to fixedDelay = 1 so that it triggers every millisecond and then start my application, the @Scheduled method triggers a few times, until the @Poller fires and begins processing messages, at which point my @Scheduled method completely stops triggering. I assumed there was some task queue being filled by the @Poller and that I simply needed to wait for all of the messages to be processed, but even after the @Poller is completely done and has processed all of the files, the @Scheduled method still does not trigger at all.
My thought is that there is some shared task queue being filled by the @Poller, which is starving my @Scheduled method, but if so, I still don't see any way to use a separate task queue for the different methods, or any other option for customizing around this issue.
Does anyone have any idea what might be happening to my @Scheduled method, and how I can fix it?
@Poller:
@Bean
@InboundChannelAdapter(channel = "ftpChannel", poller = @Poller(cron = "0/5 * * ? * *", maxMessagesPerPoll = "-1"))
public MessageSource<InputStream> myMessageSource() {
    // Build my message source
    return messageSource;
}
@Scheduled:
@Scheduled(fixedDelay = 6000)
public void myScheduledMethod() {
    // Do stuff
}
They both use the same scheduler bean, named taskScheduler.
It should only be a problem if you have 10 or more pollers (the default scheduler bean configured by Spring Integration has a pool size of 10 by default). A common mistake is having many queue channels, which hold on to scheduler threads (for a second at a time, by default).
If you have only one poller, and not a lot of queue channels, I can't explain why you would get thread starvation.
You can increase the pool size - see Configuring the Task Scheduler.
Or you can use a different scheduler in the ScheduledAnnotationBeanPostProcessor.
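A minimal sketch of the second option: a SchedulingConfigurer can hand @Scheduled methods their own executor so they no longer compete with the pollers for threads. The pool size of 2 is an arbitrary illustration, and @EnableScheduling is assumed to be active elsewhere:

```java
import java.util.concurrent.Executors;

import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.SchedulingConfigurer;
import org.springframework.scheduling.config.ScheduledTaskRegistrar;

@Configuration
public class ScheduledConfig implements SchedulingConfigurer {

    @Override
    public void configureTasks(ScheduledTaskRegistrar taskRegistrar) {
        // Dedicated executor for @Scheduled methods only; Spring
        // Integration pollers keep using the taskScheduler bean.
        taskRegistrar.setScheduler(Executors.newScheduledThreadPool(2));
    }
}
```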
As already pointed out, the problem is linked to both features using a task scheduler with the same bean name, although it may occur even with fewer than 10 pollers. Spring Boot auto-configuration provides a scheduler with a default pool size of 1, and registration of this scheduler may happen before registration of the taskScheduler provided by Spring Integration.
Configuring the task scheduler via Spring Integration properties doesn't help, because that bean never gets registered. But providing your own TaskScheduler instance with an adjusted pool size, changing the pool size of the auto-configured scheduler via the spring.task.scheduling.pool.size property, or excluding TaskSchedulingAutoConfiguration should solve the issue.
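A sketch of the bean-based fix, assuming Spring Boot picks this bean up in place of its auto-configured single-thread scheduler (the pool size of 10 simply mirrors Spring Integration's own default):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;

@Configuration
public class TaskSchedulerConfig {

    // Named taskScheduler so that Spring Integration pollers and
    // @Scheduled methods share a pool large enough for both.
    @Bean
    public ThreadPoolTaskScheduler taskScheduler() {
        ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
        scheduler.setPoolSize(10);
        scheduler.setThreadNamePrefix("task-scheduler-");
        return scheduler;
    }
}
```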
In our case, the Poller was used by an inbound-channel-adapter to fetch mail from an IMAP server - but when it polled for an email with large attachments, it blocked the thread used by @Scheduled, since only a single thread was being used for scheduling tasks.
So we set the Spring property spring.task.scheduling.pool.size=2 - which now allows the @Scheduled method to run on a different thread even if the poller gets blocked (on a different thread) while trying to fetch mail from the IMAP server.
Related
We have a larger multi-service Java Spring app that declares about 100 exchanges and queues in RabbitMQ on startup. Some are declared explicitly via beans, but most of them are declared implicitly via @RabbitListener annotations.
@Component
@RabbitListener(
    bindings = @QueueBinding(key = {"example.routingkey"},
        exchange = @Exchange(value = "example.exchange", type = ExchangeTypes.TOPIC),
        value = @Queue(name = "example_queue", autoDelete = "true", exclusive = "true")))
public class ExampleListener {

    @RabbitHandler
    public void handleRequest(final ExampleRequest request) {
        System.out.println("got request!");
    }
}
There are quite a lot of these listeners in the whole application.
The services of the application sometimes talk to each other via RabbitMQ. Take, for example, a publisher that publishes a message to the example exchange the above ExampleListener is bound to.
If that publish happens too early in the application lifecycle (but AFTER all the Spring lifecycle events are through, so after ApplicationReadyEvent and ContextStartedEvent), the binding of the example queue to the example exchange has not yet happened, and the very first publish-and-reply chain will fail. In other words, the above ExampleListener would not print "got request!".
We "fixed" this problem by simply waiting 3 seconds before sending any RabbitMQ messages, to give it time to declare all queues, exchanges and bindings, but this seems like a very suboptimal solution.
Does anyone have advice on how to fix this properly? It is quite hard to reproduce, as I would guess it only occurs with a large number of queues/exchanges/bindings that RabbitMQ cannot create fast enough. Forcing Spring to synchronize this creation process and wait for confirmation from RabbitMQ would probably fix it, but as far as I can see there is no built-in way to do that.
Are you using multiple connection factories?
Or are you setting usePublisherConnection on the RabbitTemplate (which is recommended, especially for a complex application like yours)?
Normally, a single connection is used and all users of it will block until the admin has declared all the elements (it is run as a connection listener).
If the template is using a different connection factory, it will not block because a different connection is used.
If that is the case, and you are using the CachingConnectionFactory, you can call createConnection().close() on the consumer connection factory during initialization, before sending any messages. That call will block until all the declarations are done.
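With Spring Boot, that call could be made from an ApplicationRunner during startup; the connection factory bean name consumerConnectionFactory below is an assumption for illustration:

```java
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.boot.ApplicationRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AmqpWarmup {

    // Opening (and immediately closing) a connection on the consumer
    // factory blocks until RabbitAdmin has declared all queues,
    // exchanges and bindings, so later publishes can't race the setup.
    @Bean
    public ApplicationRunner declareBeforePublishing(
            CachingConnectionFactory consumerConnectionFactory) {
        return args -> consumerConnectionFactory.createConnection().close();
    }
}
```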
I have been working on an old project where the Spring scheduler is enabled (@Scheduled is actively used) and some native JDK thread pool instances are active too. In the project's XML configuration I see the following:
<task:scheduler id="taskScheduler" pool-size="${task-scheduler.pool-size}"/>
<task:executor id="taskExecutor" pool-size="${task-executor.pool-size}" queue-capacity="${task-executor.queue-capacity}"/>
<task:annotation-driven executor="taskExecutor" scheduler="taskScheduler"/>
And then a Quartz implementation comes into play with its own job and trigger definitions, where the jobs create their own ThreadPoolExecutors as below:
BlockingQueue<Runnable> workerTaskQueue = new ArrayBlockingQueue<Runnable>(poolSize * 3);
threadPoolExecutor = new ThreadPoolExecutor(poolSize, poolSize, 1000L, TimeUnit.MILLISECONDS, workerTaskQueue);
Then it starts to submit tasks (Runnables) into the pool:
threadPoolExecutor.execute(new ImcpWorker(task, this, workerTaskUtil));
But what I see is that at some point a Spring task rejection exception is thrown for these tasks. This makes no sense to me (unless Spring intercepts natively created thread pool executors via AOP), because there is no Spring-managed executor here.
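The rejection itself is standard java.util.concurrent behavior: once a bounded ThreadPoolExecutor has all threads busy and its queue full, execute() throws RejectedExecutionException. A tiny standalone demo, with the sizes shrunk to 1 for illustration:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    public static void main(String[] args) throws InterruptedException {
        // Pool of 1 thread, queue of 1 slot: the third task is rejected.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 1000L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(1));
        CountDownLatch latch = new CountDownLatch(1);
        Runnable blocker = () -> {
            try {
                latch.await();
            } catch (InterruptedException ignored) {
            }
        };
        pool.execute(blocker); // occupies the single worker thread
        pool.execute(blocker); // sits in the queue
        boolean rejected = false;
        try {
            pool.execute(blocker); // no room left -> rejected
        } catch (RejectedExecutionException e) {
            rejected = true;
        }
        System.out.println("rejected=" + rejected);
        latch.countDown();
        pool.shutdown();
    }
}
```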
2021-06-21 11:51:58,679 ERROR [pool-151-thread-81] LisJobHandler - Exception occured: Executor [java.util.concurrent.ThreadPoolExecutor@5532827b[Running, pool size = 1000, active threads = 1000, queued tasks = 100000, completed tasks = 135592411]] did not accept task: org.springframework.aop.interceptor.AsyncExecutionInterceptor$1@5a237108 msisdn:5363443640
org.springframework.core.task.TaskRejectedException: Executor [java.util.concurrent.ThreadPoolExecutor@5532827b[Running, pool size = 1000, active threads = 1000, queued tasks = 100000, completed tasks = 135592411]] did not accept task: org.springframework.aop.interceptor.AsyncExecutionInterceptor$1@5a237108
So again the question: do Spring schedulers and executors (if configured) intercept ThreadPoolExecutors in an application?
Well, the issue turned out to have nothing to do with my initial assumption. As I went deeper into debugging Spring, I saw that the task submissions came from one of my other beans. It has async task registrations, and each error in the app calls an async method on it to trigger some custom actions. So when an endpoint fails badly and cannot recover, this issue occurs, because the current design keeps calling the async method and each call occupies a slot in the executor's queue.
I have a Spring RESTful service using a Tomcat web servlet that processes 2 different types of data and therefore has 2 rest controllers, one for each type of data. Controller #1 has the potential to perform an intensive task using lots of memory, so I would like to allow up to, for instance, 10 connections on this controller. But if all 10 connections are busy processing on controller #1, I would also like controller #2 to have its own thread pool so it can continue processing while controller #1 is full.
The proper way to configure Tomcat is to set its properties in application.yml, as described in the Spring docs.
To set the total number of max connection one would use:
server.tomcat.max-connections: 10
server.tomcat.max-threads: 10
But this configures the maximum number of connections/threads for the entire application - both controllers combined. I would need each controller to have its own thread pool and its own maximum number of connections. Is this possible?
You can't *. Spring Boot sets up an embedded Tomcat servlet container and registers a DispatcherServlet. The entire Tomcat pool of threads is used to handle all requests going through the DispatcherServlet (or any other servlets/filters registered).
* You should create a ThreadPoolTaskExecutor or ExecutorService bean for each type of data, then inject them into your @Controller beans appropriately and dispatch all the work to them.
@Controller
class FirstController {

    private final ExecutorService threadPool;

    public FirstController(@Qualifier("first-type-data") ExecutorService threadPool) {
        this.threadPool = threadPool;
    }

    @RequestMapping("/endpoint1")
    public CompletableFuture<Foo> handleEndpoint() {
        CompletableFuture<Foo> foo = new CompletableFuture<>();
        threadPool.submit(() -> {
            // handle all your business logic
            foo.complete(...);
        });
        return foo;
    }
}
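The matching executor beans might be declared like this; the bean names mirror the @Qualifier in the controller, and the pool sizes are illustrative:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ControllerThreadPools {

    // Bounded pool for the memory-intensive first data type.
    @Bean("first-type-data")
    public ExecutorService firstTypePool() {
        return Executors.newFixedThreadPool(10);
    }

    // Separate pool so the second controller keeps serving requests
    // even when the first pool is saturated.
    @Bean("second-type-data")
    public ExecutorService secondTypePool() {
        return Executors.newFixedThreadPool(10);
    }
}
```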
The Spring MVC "user space stack" doesn't really know about connections. You can pass around the HttpServletRequest and maintain your own count. Once you hit a threshold, you could send back an appropriate response directly without starting any of the business logic.
I'm having a strange issue.
In a class I have:
private final ScheduledExecutorService executor =
        Executors.newSingleThreadScheduledExecutor();

public MyClass(final MyService service) {
    executor.scheduleAtFixedRate(new Runnable() {
        @Override
        public void run() {
            service.foo();
        }
    }, 0, 30, TimeUnit.SECONDS);
}
MyService is a Spring bean that has @Transactional on its foo method. MyClass is instantiated only once (effectively a singleton in the application).
After the first invocation of service.foo() (which works fine), on subsequent requests to the application I am randomly getting:
java.lang.IllegalStateException: Already value [SessionImpl(PersistenceContext[entityKeys=[],collectionKeys=[]];ActionQueue[insertions=[] updates=[] deletions=[] collectionCreations=[] collectionRemovals=[] collectionUpdates=[]])] for key [org.hibernate.impl.SessionFactoryImpl@2cd91000] bound to thread [http-bio-8080-exec-10]
A few observations:
when the exception is thrown, the session stored in the TransactionSynchronizationManager is closed
the transaction synchronization manager's resource map for the manually scheduled thread is empty
the exception occurs in http-bio-8080-exec threads, but the manually scheduled one is a pool- thread, so there is no 'thread pollution'
MyClass is instantiated on startup, in a thread named "Thread-5", i.e. it is not in any way related to the http-bio threads
if I comment out the invocation of service.foo(), or get rid of the @Transactional annotation, everything works (except, of course, that data is not inserted in the db)
Any clues what the issue might be?
(Note: I'd prefer not to use @Scheduled - I don't want MyClass to be a Spring bean, and the Runnable has to operate on some of its internal state before invoking the service.)
Update: After a while I was able to reproduce it even without the scheduling stuff, so it's probably a general Spring problem with the latest snapshot I'm using.
I assume that exception comes from an invocation of the TransactionInterceptor or the like (some Spring infrastructure bean), or are you using the TransactionSynchronizationManager from your own code somewhere? It appears to me that something is binding sessions to a thread being managed by your container (is that Tomcat 7?) and failing to unbind them before they're returned to the container's thread pool. Thus when the same thread is used for another transactional request later, Spring can't bind the new Session to it because the old one wasn't cleaned up.
I don't actually see anything to make me think it's directly related to your custom scheduling with MyClass. Are you sure it's not just a coincidence that you didn't see the exception when you remove the service.foo() call?
If you could catch one of those threads in a debugger when it's being returned to the pool with a Session still bound to it, you might be able to backtrack to what it was used for. An omniscient debugger would theoretically be perfect for this, though I've never used one myself: ODB and TOD are the two I know of.
Edit: An easier way to find the offending threads: add a Filter (servlet filter, that is) to your app that runs "around" everything else. After chain.doFilter(), as the last act of handling a request before it leaves your application, check the value of TransactionSynchronizationManager.getResourceMap(). It should be an empty map when you're done handling a request. When you find one that isn't, that's where you need to backtrack from to see what happened.
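Such a filter might look like the following sketch (javax.servlet APIs, matching the Tomcat 7 era; the System.err line is just a placeholder for whatever logging or breakpoint you use to backtrack):

```java
import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

import org.springframework.transaction.support.TransactionSynchronizationManager;

public class SessionLeakDetectionFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
            FilterChain chain) throws IOException, ServletException {
        try {
            chain.doFilter(request, response);
        } finally {
            // Last act of request handling: nothing should still be
            // bound to this thread when it returns to Tomcat's pool.
            if (!TransactionSynchronizationManager.getResourceMap().isEmpty()) {
                System.err.println("Resources still bound to "
                        + Thread.currentThread().getName() + ": "
                        + TransactionSynchronizationManager.getResourceMap());
            }
        }
    }

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void destroy() {
    }
}
```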
One of the functionalities of the app I'm developing is that an email is sent every time a user gets an invoice registered in our system. Sending an email from a Java app is easy, especially when using the Spring framework. I use JavaMailSenderImpl and SimpleMailMessage from Spring and it works okay.
But I need to send the email on a new thread so that communication with the SMTP server does not slow down the rest of the app's processes. The problem is that when I call the
MailSender.send()
method from a new thread, the email message is not sent, as opposed to when sending on the same thread.
I tried Spring's @Async annotation, a Spring Executor, and plain old java.lang.Thread, but it doesn't work.
Can email be sent asynchronously in Java with Spring? Has anyone had a similar issue?
I can post some code samples if needed.
Tnx
It should work.
You need to tell Spring to pay attention to your @Async annotation with:
<task:annotation-driven />
And there are some limitations you need to respect:
the annotated method must belong to a Spring bean
the invocation of the annotated method must come from a different Spring bean (if you are using standard Spring AOP)
1) Add the task namespace to your Spring context. The following XSD is for the Spring 3.0 release.
xmlns:task="http://www.springframework.org/schema/task"
http://www.springframework.org/schema/task
http://www.springframework.org/schema/task/spring-task-3.0.xsd
2) Declare the executor in your Spring context file.
<!-- Executor for the methods marked with @Async annotations -->
<task:executor id="asyncExecutor" pool-size="25" />
3) Configure this for Spring task support.
<!-- Configuration for the runtime -->
<task:annotation-driven executor="asyncExecutor" />
That is all the configuration you need in the Spring context file.
Annotate the method you need to perform asynchronously with @Async.
Now all methods annotated with @Async will be handled asynchronously by the Spring task executor.
One of the known issues with executing code on an asynchronous thread is that exceptions thrown by that code are lost, unless you provide a specific handler to catch them. The effect you see (namely, the @Async method failing both to execute properly and to leave any clue about the failure in the form of a log or stack trace) is typically produced by such an exception: thrown, but swallowed by the asynchronous thread.
One of the many possible reasons why your @Async method works when synchronous is that you are doing a database operation in it. It works synchronously because you are probably calling it from a @Transactional method of another @Service, so a Session or EntityManager is available on that thread; but it does not work asynchronously, because there you are on a new thread, and if the @Async method is not @Transactional itself there is no Session or EntityManager to perform the operation.
TL;DR Provide an exception handler to catch exceptions that would otherwise be swallowed by the asynchronous thread, or, for the sake of debugging, wrap the body of the @Async method in a big try/catch. You will probably see some exception popping up; then you can take the proper action to avoid it.
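The swallowing behavior is easy to reproduce with a plain ExecutorService, no Spring involved; the exception becomes visible only when something inspects the returned Future:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SwallowedExceptionDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        // The exception thrown here is captured inside the Future;
        // no log line or stack trace appears on its own.
        Runnable failingTask = () -> {
            throw new IllegalStateException("mail server unreachable");
        };
        Future<?> future = executor.submit(failingTask);

        try {
            future.get(); // only now does the failure surface
        } catch (ExecutionException e) {
            System.out.println("caught: " + e.getCause().getMessage());
        }
        executor.shutdown();
    }
}
```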
You need to enable the feature in Spring (adding @SpringBootApplication here, since the class bootstraps the application):
@SpringBootApplication
@EnableAsync
public class MyApp {

    public static void main(String[] args) {
        SpringApplication.run(MyApp.class, args);
    }
}