Spring ThreadPoolTaskExecutor shutdown with Async task - java

I'm executing an async task using the Spring task execution framework.
To do so, I annotated my method with the @Async annotation and added the following to my XML-based application context:
<!-- async support -->
<task:annotation-driven executor="myAsyncExecutor" />
<task:executor id="myAsyncExecutor" pool-size="5-10" queue-capacity="100" />
I'm wondering, in this case, how does the shutdown method of this executor get invoked? I would like to make sure my app doesn't wait forever on this thread pool.
I could (instead of using the task namespace) define my executor as a bean and set its destroy-method to "shutdown", but I'm asking about the task namespace definition style.
Any ideas?

Internally, Spring uses org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor for the task:executor namespace element. If you look at the relevant source code (the behavior is inherited from ExecutorConfigurationSupport), shutdown is invoked on the executor when the bean is destroyed, so there is no need to worry.
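For reference, here is a minimal Java-config sketch of the same executor (the class name AsyncConfig is illustrative; the bean name matches the XML above). Because ThreadPoolTaskExecutor implements DisposableBean via ExecutorConfigurationSupport, Spring invokes its shutdown logic automatically when the application context is closed:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
@EnableAsync
public class AsyncConfig {

    // Equivalent of <task:executor id="myAsyncExecutor" pool-size="5-10" queue-capacity="100"/>.
    // Spring calls the inherited destroy()/shutdown() callback on context close.
    @Bean(name = "myAsyncExecutor")
    public ThreadPoolTaskExecutor myAsyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);
        executor.setMaxPoolSize(10);
        executor.setQueueCapacity(100);
        return executor;
    }
}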

Related

UserTransaction JNDI lookup failed when using CompletableFuture

I have code that does a context lookup to get the UserTransaction JNDI resource via ctx.lookup("java:comp/UserTransaction").
When I run this code without using CompletableFuture, it works as expected.
When it runs in an async thread via CompletableFuture, it throws an exception saying the JNDI lookup failed.
I tried to check whether I could get the required JNDI resource from the global scope, but no luck.
CompletableFutures often run on the JDK's ForkJoinPool rather than application server managed threads, and so lack access to services provided by the application server. MicroProfile Context Propagation (available in Liberty) solves this problem by giving you a way to create CompletableFutures that run on the Liberty thread pool and with access to application component context.
In server.xml,
<featureManager>
<feature>mpContextPropagation-1.2</feature> <!-- 1.0 is also valid -->
<feature>jndi-1.0</feature>
<feature>jdbc-4.2</feature> <!-- or some other feature that participates in transactions -->
... other features
</featureManager>
In your application,
import org.eclipse.microprofile.context.ManagedExecutor;
import org.eclipse.microprofile.context.ThreadContext;
...
ManagedExecutor executor = ManagedExecutor.builder()
.propagated(ThreadContext.APPLICATION)
.build();
CompletableFuture<?> f = executor.supplyAsync(() -> {
UserTransaction tx = InitialContext.doLookup("java:comp/UserTransaction");
...
});
...
executor.shutdown();
If you don't want to construct a new ManagedExecutor, Liberty will also let you cast an EE Concurrency ManagedExecutorService to ManagedExecutor and use that. For example,
ManagedExecutor executor = InitialContext.doLookup("java:comp/DefaultManagedExecutorService");
It should also be noted that with a ManagedExecutor, the application context is made available to dependent stages as well as the initial stage, allowing you to perform the lookup in a dependent stage such as the following if you prefer:
executor.supplyAsync(supplier).thenApplyAsync(v -> {
UserTransaction tx = InitialContext.doLookup("java:comp/UserTransaction");
...
});
The problem seems to be that the JNDI context is not propagated to the async thread. When the CompletionStage attempts the JNDI lookup, it has no component context, so it doesn't know which component it belongs to and the lookup fails.
There is a very detailed explanation of context propagation and how to do it effectively in Open Liberty (which is the underlying product for WebSphere Liberty) at https://openliberty.io/docs/21.0.0.8/microprofile-context-propagation.html - I'd highly suggest reading it.
Certain Java/Jakarta/MicroProfile APIs will allow you to specify the async service (ExecutorService) to use for the async operation. If possible, you can pass it an instance of ManagedExecutorService which should propagate contexts (like JNDI, security, classloading, etc.) to the async thread. Otherwise, you may need to specify the managed executor service when constructing your CompletionStage.
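To make that last point concrete, here is a hedged sketch of passing the container's default ManagedExecutorService as the executor argument of CompletableFuture.supplyAsync so the initial stage runs on a managed thread. The PricingTask class is illustrative, not from the question, and it assumes the class is a container-managed component so @Resource injection works:
import java.util.concurrent.CompletableFuture;
import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedExecutorService;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.transaction.UserTransaction;

public class PricingTask {

    // The container injects its default managed executor (java:comp/DefaultManagedExecutorService).
    @Resource
    private ManagedExecutorService managedExecutor;

    public CompletableFuture<UserTransaction> lookupTransactionAsync() {
        // Passing the managed executor as the second argument keeps the supplier on
        // container-managed threads, so the java:comp lookup still resolves.
        return CompletableFuture.supplyAsync(() -> {
            try {
                return InitialContext.<UserTransaction>doLookup("java:comp/UserTransaction");
            } catch (NamingException e) {
                throw new IllegalStateException("JNDI lookup failed", e);
            }
        }, managedExecutor);
    }
}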

Why is the Spring task scheduler not executing tasks simultaneously?

I have the following configuration to run the tasks:
<bean id="trendDataJob" class="com.ge.og.realtrack.scheduler.TrendDataJob"> </bean>
<task:scheduled-tasks>
<task:scheduled ref="trendDataJob" method="trendJob" cron="#{trendDataJob.configMap['corn_exp']}"></task:scheduled>
<task:scheduled ref="trendDataJob" method="metaDataTrendJob" cron="#{trendDataJob.configMap['metadata_corn_exp']}"></task:scheduled>
</task:scheduled-tasks>
The cron expression for this is corn_exp=0 0/1 * * * ?, i.e. run every minute.
Here is the problem: both methods of trendDataJob are scheduled to run every minute, but they execute one after another. First trendJob runs, and only once it has completed does metaDataTrendJob execute. I am not able to understand this behavior.
Another problem: if a method takes more than one minute to finish, the next call is not triggered until the current call finishes and returns.
By default the scheduler uses a ConcurrentTaskScheduler backed by a single thread. If you want a different one, configure it and pass it to the scheduler attribute of scheduled-tasks.
The easiest way, in XML, is to use the scheduler element. (See this section in the reference guide).
<task:scheduler id="scheduler" pool-size="10"/>
Then simply register it on the other element.
<task:scheduled-tasks scheduler="scheduler"> ...
Have you used @EnableScheduling in your Java code?
@EnableScheduling ensures that a background task executor is created. Without it, nothing gets scheduled.
For more, you can go through
Spring 3 #Scheduled – 4 Ways to Schedule Tasks
Spring Batch + Spring TaskScheduler example
Scheduling Tasks
Enable scheduling annotations
To enable support for the @Scheduled and @Async annotations, add @EnableScheduling and @EnableAsync to one of your @Configuration classes:
@Configuration
@EnableAsync
@EnableScheduling
public class AppConfig {
}
You are free to pick and choose the relevant annotations for your application. For example, if you only need support for @Scheduled, simply omit @EnableAsync. For more fine-grained control you can additionally implement the SchedulingConfigurer and/or AsyncConfigurer interfaces. See the javadocs for full details.
If you prefer XML configuration use the <task:annotation-driven> element.
<task:annotation-driven executor="myExecutor" scheduler="myScheduler"/>
<task:executor id="myExecutor" pool-size="5"/>
<task:scheduler id="myScheduler" pool-size="10"/>
Notice with the above XML that an executor reference is provided for handling those tasks that correspond to methods with the @Async annotation, and the scheduler reference is provided for managing those methods annotated with @Scheduled.
If you're using the default task scheduler in Spring, I'm pretty sure it only runs on a single thread, hence why you cannot make the tasks run in parallel.
You need to configure a scheduler with a pool size to make them run in parallel.
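Putting the above together, here is a minimal Java-config sketch (an assumed equivalent of the XML fix, not code from the question). With @EnableScheduling and a TaskScheduler bean named taskScheduler backed by more than one thread, independently scheduled methods such as trendJob and metaDataTrendJob can run concurrently:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;

@Configuration
@EnableScheduling
public class SchedulingConfig {

    // Spring picks up a TaskScheduler bean named "taskScheduler" for scheduled tasks;
    // a pool size greater than one lets separately scheduled methods overlap in time.
    @Bean
    public ThreadPoolTaskScheduler taskScheduler() {
        ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
        scheduler.setPoolSize(10);
        scheduler.setThreadNamePrefix("scheduler-");
        return scheduler;
    }
}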

How can I use a custom executor for an @Async method?

I am using a ThreadPoolExecutor as my custom executor with the @Async annotation.
From searching Google, I found that the configuration below needs to be added to the XML, but I'm not sure how myExecutor is mapped to my custom executor.
<task:annotation-driven executor="myExecutor" />
I also found that its path is not given in the bean properties.
How is it resolved, then?
Four options:
Declare a single bean of type TaskExecutor
Declare a single bean with the name AsyncExecutionAspectSupport.DEFAULT_TASK_EXECUTOR_BEAN_NAME ("taskExecutor")
Implement AsyncConfigurer#getAsyncExecutor
For individual classes/methods, provide the name of an executor bean as a qualifier in @Async's value attribute.
I am not sure I understand your question, but your configuration snippet is correct provided that you have defined an Executor bean with myExecutor as its id.
The javadoc of @EnableAsync covers how this works in good detail. For instance, to create a ThreadPoolTaskExecutor with 5 core threads and 10 max threads:
<task:annotation-driven executor="myExecutor"/>
<task:executor id="myExecutor" pool-size="5-10"/>

Spring Integration + @Async - Gateway vs ServiceActivator

I have a remote service that I'm calling to load pricing data for a product, when a specific event occurs. Once loaded, the product pricing is then broadcast for another consumer to process elsewhere.
The calling code doesn't care about the response - it's fire-and-forget, responding to an application event, and triggering a new workflow.
In order to keep the calling code as quick as possible, I'd like to use @Async here, but I'm having mixed results.
The basic flow is:
CallingCode -> ProductPricingGateway -> Aggregator -> BatchedFetchPricingTask
Here's the Async setup:
<task:annotation-driven executor="executor" scheduler="scheduler"/>
<task:scheduler id="scheduler" pool-size="1" />
<task:executor id="executor" keep-alive="30" pool-size="10-20" queue-capacity="500" rejection-policy="CALLER_RUNS" />
The other two components used are a @Gateway, which the initiating code calls, and a downstream @ServiceActivator that sits behind an aggregator. (Calls are batched into small groups.)
public interface ProductPricingGateway {
@Gateway(requestChannel="product.pricing.outbound.requests")
public void broadcastPricing(ProductIdentifer productIdentifier);
}
// ...elsewhere...
@Component
public class BatchedFetchPricingTask {
@ServiceActivator(inputChannel="product.pricing.outbound.requests.batch")
public void fetchPricing(List<ProductIdentifer> identifiers)
{
// omitted
}
}
And the other relevant integration config:
<int:gateway service-interface="ProductPricingGateway"
default-request-channel="product.pricing.outbound.requests" />
<int:channel id="product.pricing.outbound.requests" />
<int:channel id="product.pricing.outbound.requests.batch" />
I find that if I declare @Async on the @ServiceActivator method, it works fine.
However, if I declare it on the @Gateway method (which seems like a more appropriate place), the aggregator is never invoked.
Why?
I'm struggling to see how @Async would work anywhere here, because the starting point is when your code calls the ProductPricingGateway.broadcastPricing() method.
With @Async on the gateway, what would the scheduler send?
Similarly, with @Async on the service, what would the scheduler pass in as identifiers?
The correct way to go async as soon as possible would be to make product.pricing.outbound.requests an ExecutorChannel...
http://static.springsource.org/spring-integration/reference/html/messaging-channels-section.html#executor-channel
http://static.springsource.org/spring-integration/reference/html/messaging-channels-section.html#channel-configuration-executorchannel
...where the calling thread hands off the message to a task executor.
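A minimal Java-config sketch of that hand-off (the configuration class and bean method names are illustrative, and the pool settings are borrowed from the executor in the question): declaring the request channel as an ExecutorChannel means the gateway call returns as soon as the message has been handed to the task executor:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.TaskExecutor;
import org.springframework.integration.channel.ExecutorChannel;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
public class PricingChannelConfig {

    @Bean
    public TaskExecutor pricingExecutor() {
        // Mirrors the <task:executor> settings from the question.
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(10);
        executor.setMaxPoolSize(20);
        executor.setQueueCapacity(500);
        return executor;
    }

    // Sends to this channel return immediately; the message is dispatched on the executor's threads.
    @Bean(name = "product.pricing.outbound.requests")
    public ExecutorChannel productPricingRequests() {
        return new ExecutorChannel(pricingExecutor());
    }
}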

@Scheduled tasks keep the JVM hanging when a standalone program wants to exit; the JVM needs to be killed

I opened this bug in the Spring bug tracker; it would be cool if some clever people here could already help me:
https://jira.springsource.org/browse/SPR-9341
Set "true" on daemon property for the scheduler - eg
<!-- task scheduling for #Scheduled annotation -->
<task:annotation-driven executor="myExecutor" scheduler="myScheduler"/>
<task:executor id="myExecutor" pool-size="1" />
<bean id="myScheduler" class="org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler">
<property name="poolSize" value="2" />
<property name="threadNamePrefix" value="myScheduler-"/>
<property name="waitForTasksToCompleteOnShutdown" value="false" />
<property name="daemon" value="true" />
</bean>
Have you tried having your @Scheduled bean implement DisposableBean (so it can be informed when the Spring context is shutting down) and explicitly closing the context in your main() method?
Conceptually, I don't see how the code posted can work like you expect. Spring needs to launch new threads to run your @Scheduled task at the time/rate you configure, which means that when the code in your main() method exits, there are non-daemon threads still running in the JVM. If you don't tell Spring to shut these threads down, then how will they be terminated?
edit: to be clear, I think the solution is to explicitly call close() on your ApplicationContext. Otherwise Spring does not have a way to tell the executor service running your scheduled tasks to shut itself down. A JVM shutdown hook will not be invoked when main() exits since non-daemon threads are still running.
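A hedged sketch of that suggestion (the Main class and the applicationContext.xml file name are assumptions, not taken from the bug report): closing the application context when the work is done lets Spring shut down the scheduler's thread pool so the JVM can exit:
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Main {
    public static void main(String[] args) {
        ClassPathXmlApplicationContext context =
                new ClassPathXmlApplicationContext("applicationContext.xml");
        try {
            // ... do the standalone program's work while scheduled tasks run ...
        } finally {
            // Closing the context destroys the scheduler bean, which shuts down its
            // non-daemon threads; otherwise they keep the JVM alive after main() returns.
            context.close();
        }
    }
}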
This is the solution using Java config:
@Bean
public TaskScheduler daemonTaskScheduler() {
    ThreadPoolTaskScheduler taskScheduler = new ThreadPoolTaskScheduler();
    // daemon threads do not prevent the JVM from exiting when main() completes
    taskScheduler.setDaemon(true);
    taskScheduler.setThreadNamePrefix("daemon");
    taskScheduler.setPoolSize(5);
    return taskScheduler;
}
Or, if you want to really get into the details, the config class can look like this:
@Configuration
public class SchedulerConfig implements SchedulingConfigurer {
    @Override
    public void configureTasks(ScheduledTaskRegistrar taskRegistrar) {
        // assumed completion: the original snippet was cut off here
        taskRegistrar.setTaskScheduler(daemonTaskScheduler());
    }
}
One thing that is not supported, though, is using multiple TaskSchedulers within a single application. I opened a JIRA issue for that.
