Pass request scope data to async methods in CDI

A Java EE 7 application is running on WildFly 9.0.2.Final. There is a problem accessing request scoped data from within @Asynchronous methods.
In a web filter, data (e.g. a token) is set on a @RequestScoped CDI bean. Later we want to access this data. Everything works fine as long as we stay on one thread. But as soon as code needs to run asynchronously, the problem appears: CDI injects an empty bean and the request data is lost.
Here is the example:
@RequestScoped
public class CurrentUserService implements Serializable {
    public String token;
}

@Stateless
public class Service {

    @Inject
    private RestClient client;

    @Resource
    private ManagedExecutorService executorService;

    @Resource
    private ContextService contextService;

    @Asynchronous
    private <T> Future<T> getFuture(Supplier<T> supplier) {
        Callable<T> task = supplier::get;
        Callable<T> callable = contextService.createContextualProxy(task, Callable.class);
        return executorService.submit(callable);
    }

    public String getToken() throws Exception {
        return getFuture(client::getToken).get();
    }
}

@ApplicationScoped
public class RestClient {

    @Inject
    private CurrentUserService currentUserBean;

    public String getToken() {
        return currentUserBean.token;
    }
}
In the given example we want to access the current user's token (CurrentUserService#token) from the asynchronous Service.getToken method. As a result we receive null.
It is expected that request scoped data should be accessible from tasks executed within the request scope. Something like InheritableThreadLocal should be used to allow access to the original thread's data from new threads.
Is it a bug? Maybe I'm doing something wrong? If so, what is the correct way to propagate such data into async calls?
Thanks in advance.

According to §2.3.2.1 of the Java EE Concurrency Utilities specification, you should not attempt to do this:
Tasks that are submitted to a managed instance of ExecutorService may still be running after the lifecycle of the submitting component. Therefore, CDI beans with a scope of @RequestScoped, @SessionScoped, or @ConversationScoped are not recommended to use as tasks as it cannot be guaranteed that the tasks will complete before the CDI context is destroyed.
You need to collect your request scoped data on the calling thread and pass it to your asynchronous task when you create it, whether you use the concurrency utilities or @Asynchronous methods.
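Applied to the example above, that means capturing the token while the request context is still active and handing it to the task as a plain value. A minimal sketch (RestClient.getTokenFor(String) is a hypothetical variant that takes the value as a parameter instead of injecting the request scoped bean):

@Stateless
public class Service {

    @Inject
    private CurrentUserService currentUserBean; // request scoped, only read on the request thread

    @Inject
    private RestClient client;

    @Resource
    private ManagedExecutorService executorService;

    public String getToken() throws Exception {
        // capture the request scoped value before leaving the request thread
        final String token = currentUserBean.token;
        // the task closes over the plain String, not over the request scoped bean
        return executorService.submit(() -> client.getTokenFor(token)).get();
    }
}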

Related

CDI PostConstruct and volatile fields

When using a post construct approach to conditionally initialise some of the bean's fields, do we need to care about making the field volatile, since it is a multithreaded environment?
Say, we have something like this:
@ApplicationScoped
public class FooService {

    private final ConfigurationService configurationService;

    private FooBean fooBean;

    @Inject
    FooService(ConfigurationService configurationService) {
        this.configurationService = configurationService;
    }

    void init(@Observes @Initialized(ApplicationScoped.class) Object ignored) {
        if (configurationService.isFooBeanInitialisationEnabled()) {
            fooBean = initialiseFooBean(configurationService); // some initialisation
        }
    }

    void cleanup(@Observes @Destroyed(ApplicationScoped.class) Object ignored) {
        if (fooBean != null) {
            fooBean.cleanup();
        }
    }
}
So should fooBean be wrapped in, let's say, an AtomicReference, or be volatile, or would that be redundant extra protection?
P.S. In this particular case the question can be reformulated as: are the post construct and post destroy events performed by the same thread or not? However, I would like an answer for the more general case.
I would say it depends on which thread actually initiates and destroys the contexts.
If you use regular events, they are synchronous (asynchronous events were added in CDI 2.0 with @ObservesAsync, see
Java EE 8: Sending asynchronous CDI 2.0 events with ManagedExecutorService), so they are called on the same thread as the caller.
In general, I don't think the same thread is used (in application servers or standalone applications), so I would recommend using volatile to ensure the right value is seen (basically, that the value constructed is seen on the destroying thread). However, initiating and destroying your application concurrently is not a use case that happens very often...
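A minimal sketch of that recommendation (only the field declaration changes):

    private volatile FooBean fooBean; // the value written by the initializing thread becomes visible to the destroying thread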
FooService is a singleton shared between all managed beans in the application (see the Javadoc for Annotation Type ApplicationScoped).
private FooBean fooBean is state of that singleton object.
By default, CDI does not manage concurrency, so it is the responsibility of the developer.
In this particular case it can be reformulated as: are post construct and post destroy events performed by the same thread or not?
The CDI specification does not require containers to use the same thread for initialization and destruction of the application context. This behavior is implementation specific. In the general case those threads will be different, because initialization happens on the thread handling the first request to the application, while destruction happens on the thread handling a request from the management console.
You may delegate concurrency management to the EJB container, if your runtime environment includes one.
Neither volatile nor AtomicReference is needed in this case!
The following definition will do the job:
@javax.ejb.Startup // initialize on application start
@javax.ejb.Singleton // EJB singleton
public class FooService {

    private final ConfigurationService configurationService;

    private FooBean fooBean;

    @javax.inject.Inject
    FooService(ConfigurationService configurationService) {
        this.configurationService = configurationService;
    }

    @javax.annotation.PostConstruct
    void init() {
        if (configurationService.isFooBeanInitialisationEnabled()) {
            fooBean = initialiseFooBean(configurationService); // some initialisation
        }
    }

    @javax.annotation.PreDestroy
    void cleanup() {
        if (fooBean != null) {
            fooBean.cleanup();
        }
    }
}
According to the specification:
An event with qualifier @Initialized(ApplicationScoped.class) is synchronously fired when the application context is initialized.
An event with qualifier @BeforeDestroyed(ApplicationScoped.class) is synchronously fired when the application context is about to be destroyed, i.e. before the actual destruction.
An event with qualifier @Destroyed(ApplicationScoped.class) is synchronously fired when the application context is destroyed, i.e. after the actual destruction.
And according to this presentation on the bean manager lifecycle, the lifecycle of the bean manager is synchronous between the different states of the process and the sequence is kept: destroy never happens before init.
(JBoss is the specification lead of CDI 2.0.)
I do not see any scenario here that would require volatile or other protection. Even if T1 inits and T2 destroys, it will be T1 then T2, not T1 and T2 concurrently.
And even if it were concurrent, an actual issue would require a weird edge scenario outside the CDI runtime:
T2 calls destroy (fooBean is null and now 'cached' in a register),
then T1 calls init (destroy before init; at this point we are in the 4th dimension of CDI),
then T2 calls destroy again (fooBean is already cached in a register, so its value is null).
Or:
T2 calls a method that accesses fooBean (fooBean is null and now 'cached' in a register),
then T1 calls init (T1 is initialized although fooBean has already been used by T2; at this point we are in the 4th dimension of CDI),
then T2 calls destroy (fooBean is already cached in a register, so its value is null).

Starting a CDI conversation and injecting @ConversationScoped bean into stateless session bean

Similar questions have been asked, but don't quite address what I'm trying to do. We have an older Seam 2.x-based application with a batch job framework that we are converting to CDI. The job framework uses the Seam Contexts object to initiate a conversation. The job framework also loads a job-specific data holder (basically a Map) that can then be accessed, via the Seam Contexts object, by any service down the chain, including from SLSBs. Some of these services can update the Map, so that job state can change and be detected from record to record.
It looks like in CDI, the job will @Inject a CDI Conversation object and manually begin/end the conversation. We would also define a new @ConversationScoped bean that holds the Map (MapBean). What's not clear to me are two things:
First, the job needs to also @Inject the MapBean so that it can be loaded with job-specific data before the Conversation.begin() method is called. Would the container know to pass this instance to services down the call chain?
Related to that, according to this question, Is it possible to @Inject a @RequestScoped bean into a @Stateless EJB?, it should be possible to inject a @ConversationScoped bean into a SLSB, but it seems a bit magical. If the SLSB is used by a different process (job, UI call, etc.), does it get a separate instance for each call?
Edits for clarification and a simplified class structure:
MapBean would need to be a @ConversationScoped object, containing data for a specific instance/run of a job.

@ConversationScoped
public class MapBean implements Serializable {

    private Map<String, Object> data;

    // accessors

    public Object getData(String key) {
        return data.get(key);
    }

    public void setData(String key, Object value) {
        data.put(key, value);
    }
}
The job would be @ConversationScoped:

@ConversationScoped
public class BatchJob {

    @Inject private MapBean mapBean;
    @Inject private Conversation conversation;
    @Inject private JobProcessingBean jobProcessingBean;

    public void runJob() {
        try {
            conversation.begin();
            mapBean.setData("key", "value"); // is this MapBean instance now bound to the conversation?
            jobProcessingBean.doWork();
        } catch (Exception e) {
            // catch something
        } finally {
            conversation.end();
        }
    }
}
The job might call a SLSB, and the current conversation-scoped instance of MapBean needs to be available:

@Stateless
public class JobProcessingBean {

    @Inject private MapBean mapBean;

    public void doWork() {
        // when this is called, is "mapBean" the current conversation instance?
        Object value = mapBean.getData("key");
    }
}
Our job and SLSB framework is quite complex; the SLSB can call numerous other services or locally instantiated business logic classes, and each of these would need access to the conversation-scoped MapBean.
First, the job needs to also @Inject the MapBean so that it can be loaded with job-specific data before the Conversation.begin() method is called. Would the container know to pass this instance to services down the call chain?
Yes: since MapBean is @ConversationScoped, it is tied to the call chain for the duration starting from conversation.begin() until conversation.end(). You can think of @ConversationScoped (and @RequestScoped and @SessionScoped) beans as instances in a ThreadLocal: while there exists an instance of them for every thread, each instance is tied to that single thread.
Related to that, according to this question, Is it possible to @Inject a @RequestScoped bean into a @Stateless EJB?, it should be possible to inject a @ConversationScoped bean into a SLSB, but it seems a bit magical. If the SLSB is used by a different process (job, UI call, etc.), does it get a separate instance for each call?
It's not as magical as you think if you see that this pattern is the same as the one I explained above. The SLSB indeed gets a separate instance, but not just any instance: the one which belongs to the scope from which the SLSB was called.
In addition to the link you posted, see also this answer.
I've tested code similar to what you posted and it works as expected: the MapBean is the same one injected throughout the call. Just be careful with two things (see the corrected sketch below):
BatchJob is also @ConversationScoped but does not implement Serializable, which will not allow the bean to passivate.
data is never initialized, so you will get an NPE in runJob().
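Addressing the second point, MapBean's map can be initialized eagerly (a minimal sketch; for the first point, BatchJob would additionally declare implements Serializable):

@ConversationScoped
public class MapBean implements Serializable {

    // initialized eagerly so getData/setData never hit a null map
    private final Map<String, Object> data = new HashMap<>();

    public Object getData(String key) {
        return data.get(key);
    }

    public void setData(String key, Object value) {
        data.put(key, value);
    }
}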
Without any code samples, I'll have to do some guessing, so let's see if I got you right.
Would the container know to pass this instance to services down the call chain?
If you mean to use the same instance elsewhere in the call, then this can easily be achieved by making MapBean an @ApplicationScoped bean (or, alternatively, an EJB @Singleton).
it should be possible to inject a ConversationScoped bean into a SLSB, but it seems a bit magical.
Here I suppose the reason it seems magical is that a SLSB is, in CDI terms, a @Dependent bean. And as you probably know, CDI always creates a new instance of a dependent bean per injection point. So yes, you get a different SLSB/@Dependent bean instance for each call.
Perhaps some other scope would fit you better here, like @RequestScoped or @SessionScoped? Hard to tell without more details.

Customize/Extend Spring's @Async support for Shiro

I'm using Spring's @EnableAsync feature to execute methods asynchronously. For security I'm using Apache Shiro. In the code that is executed asynchronously I need access to the Shiro subject that was attached to the thread that triggered the async call.
Shiro supports using an existing subject on a different thread by associating the subject with the Callable that is to be executed on that thread (see here):
Subject.associateWith(Callable)
Unfortunately I don't have direct access to the Callable, as this is encapsulated by Spring. I found that I would need to extend Spring's AnnotationAsyncExecutionInterceptor to associate my subject with the created Callable (that's the easy part).
My problem is now how to make Spring use my custom AnnotationAsyncExecutionInterceptor instead of the default one. The default one is created in AsyncAnnotationAdvisor and AsyncAnnotationBeanPostProcessor. I can of course extend these classes as well, but this only shifts the problem, as I then need to make Spring use my extended classes again.
Is there any way to achieve what I want?
I would be fine with adding a new custom async annotation as well, but I don't think this would be much help.
UPDATE:
Actually my finding that AnnotationAsyncExecutionInterceptor would need to be customized was wrong. By chance I stumbled across org.apache.shiro.concurrent.SubjectAwareExecutorService, which does pretty much exactly what I want and made me think I could simply provide a custom executor instead of customizing the interceptor. See my answer for details.
I managed to achieve what I want - the Shiro subject is automatically bound to and unbound from tasks executed by Spring's async support - by providing an extended version of ThreadPoolTaskExecutor:
public class SubjectAwareTaskExecutor extends ThreadPoolTaskExecutor {

    @Override
    public void execute(final Runnable aTask) {
        final Subject currentSubject = ThreadContext.getSubject();
        if (currentSubject != null) {
            super.execute(currentSubject.associateWith(aTask));
        } else {
            super.execute(aTask);
        }
    }

    ... // override the submit and submitListenable methods accordingly
}
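The overridden submit methods follow the same wrapping pattern, for example (a sketch; only the Callable variant is shown):

@Override
public <T> Future<T> submit(final Callable<T> aTask) {
    final Subject currentSubject = ThreadContext.getSubject();
    if (currentSubject != null) {
        // run the task with the calling thread's subject attached
        return super.submit(currentSubject.associateWith(aTask));
    }
    return super.submit(aTask);
}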
To make Spring use this executor I had to implement an AsyncConfigurer that returns my custom executor:

@Configuration // assumption: the configurer must be registered as a configuration class to be picked up
@EnableAsync
public class AsyncConfiguration implements AsyncConfigurer {

    @Override
    public Executor getAsyncExecutor() {
        final SubjectAwareTaskExecutor executor = new SubjectAwareTaskExecutor();
        executor.setBeanName("async-executor");
        executor.setCorePoolSize(10);
        executor.setMaxPoolSize(10);
        executor.initialize();
        return executor;
    }

    @Override
    public AsyncUncaughtExceptionHandler getAsyncUncaughtExceptionHandler() {
        return new SimpleAsyncUncaughtExceptionHandler();
    }
}
With this change, the parent thread's subject is automatically available in methods annotated with @Async and - probably even more important - the subject is detached from the thread after execution of the asynchronous method.

Spring thread safety static utils

I was wondering if the following scenario is thread-safe.
I have a Spring controller with this method:
@Autowired
private JobService jobService;

public String launch(@ModelAttribute("profile") Profile profile) {
    JobParameters jobParams = MyUtils.transform(profile);
    jobService.launch(profile.getJobName(), jobParams);
    return "job";
}
and I have a MyUtils class with a static method that transforms one kind of object into another, like so:
public class MyUtils {

    public static JobParameters transform(Profile profile) {
        JobParametersBuilder jpb = new JobParametersBuilder();
        jpb.addString("profile.name", profile.getProfileName());
        jpb.addString("profile.number", String.valueOf(profile.getNumber()));
        return jpb.toJobParameters();
    }
}
The classes JobParametersBuilder, JobParameters and JobService are from the Spring Batch core project. Profile is a simple POJO.
The question really is: is this static transform method thread-safe, given that it deals with object instances, although all of those instances are created locally within the method?
This concrete code IS thread safe if some conditions are met. Here is the explanation:
The launch method is called from a Spring Boot controller. Every call that reaches a controller runs on a different thread, and that thread carries the execution to the end of the call stack (unless you make an asynchronous call inside it). Tomcat can handle 200 threads at the same time by default, which means you can have 200 simultaneous calls to your controllers, each on its own thread. That makes launch thread safe.
Profile is passed to the transform method, and if it is a simple POJO it is thread safe, because every call gets a new instance of Profile.
Inside the transform method you instantiate a new JobParametersBuilder every time, which is thread safe as long as the code inside toJobParameters is thread safe and does not keep any state in the JobParametersBuilder class or elsewhere.
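For contrast, a hypothetical version that would not be thread safe: if the builder were shared in a static field, concurrent launch calls would mutate the same instance.

public class MyUtils {

    // NOT thread safe (hypothetical counter-example): one builder shared by all request threads
    private static final JobParametersBuilder SHARED_BUILDER = new JobParametersBuilder();

    public static JobParameters transform(Profile profile) {
        SHARED_BUILDER.addString("profile.name", profile.getProfileName());
        SHARED_BUILDER.addString("profile.number", String.valueOf(profile.getNumber()));
        return SHARED_BUILDER.toJobParameters();
    }
}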

Long-running service in Spring?

I want to have a service in my Spring application that watches a directory for changes using the Java 7 WatchService. The idea is that when a file in the directory is changed, clients connected via WebSockets are notified.
How do I get a bean to run in its own thread as a service?
What you are looking for is asynchronous execution. With a correctly configured context (see the link), you declare a class like so:

@Component
public class AsyncWatchServiceExecutor {

    @Autowired
    private WatchService watchService; // or create a new one here instead of injecting one

    @Async
    public void someAsyncMethod() {
        for (;;) {
            // use WatchService
        }
    }
}
Everything you do in someAsyncMethod() will happen in a separate thread. All you have to do is call it once:

ApplicationContext context = ...; // get the ApplicationContext
context.getBean(AsyncWatchServiceExecutor.class).someAsyncMethod();
Use the WatchService as described in the Oracle documentation.
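For reference, the body of someAsyncMethod() could look roughly like this (a minimal sketch following the standard JDK WatchService pattern; the watched directory and the notification hook are placeholders):

@Async
public void someAsyncMethod() {
    try {
        Path watchedDir = Paths.get("/some/directory"); // placeholder path
        WatchService watchService = watchedDir.getFileSystem().newWatchService();
        watchedDir.register(watchService, StandardWatchEventKinds.ENTRY_MODIFY);
        for (;;) {
            WatchKey key = watchService.take(); // blocks until the directory changes
            for (WatchEvent<?> event : key.pollEvents()) {
                // notify the connected WebSocket clients about event.context() here
            }
            if (!key.reset()) {
                break; // the directory is no longer accessible
            }
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // stop watching when interrupted
    } catch (IOException e) {
        throw new UncheckedIOException(e);
    }
}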
If you don't have direct access to your ApplicationContext, you can inject the bean into some other bean and call it in a @PostConstruct method:
@Component
public class AsyncInitializer {

    @Autowired
    private AsyncWatchServiceExecutor exec;

    @PostConstruct
    public void init() {
        exec.someAsyncMethod();
    }
}
Be careful which proxying strategy (JDK dynamic proxies or CGLIB) you use: with JDK proxies the bean can only be injected by an interface it implements.
