In a servlet, instance fields are not thread safe because the servlet is effectively a singleton (unless it implements SingleThreadModel). See this article: https://www.fortify.com/vulncat/en/vulncat/java/singleton_member_field_race_condition.html
For EJB 3, however, I cannot find a similar document. Since the container creates a pool of EJB instances, I think an instance field should be safe. Is that correct?
For example, classVar1 is an instance field that I initialize in the constructor and use later. In a servlet this could be a problem, but in EJB 3 it should be fine, right?
@Stateless
public class HelloBean implements Hello {

    private ObjectXXX classVar1;

    public HelloBean() {
        classVar1 = new ObjectXXX();
    }

    public String doHello(String message) {
        return message + classVar1.method1();
    }
}
Another question: is a resource injected into the EJB (e.g. the JPA EntityManager) thread safe?
The container allows only one thread into a particular EJB instance at a time, so each method is executed by a single thread and your variable is 'safe' (as long as you initialize it in the constructor or in a @PostConstruct method).
However, a stateless session bean (SLSB) should not be used to keep state. The EJB is pooled, so you have no guarantee that you will return to the same instance. A stateful session bean (SFSB) is made for that purpose, for example as sketched below.
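A minimal sketch of the stateful alternative, using a hypothetical CartBean (not from the question): with @Stateful the container dedicates an instance to each client, so keeping state in fields is fine.
import java.util.ArrayList;
import java.util.List;
import javax.ejb.Stateful;

@Stateful
public class CartBean {

    // Safe to keep state here: each client gets its own stateful instance
    // instead of an arbitrary instance taken from a pool.
    private final List<String> items = new ArrayList<String>();

    public void addItem(String item) {
        items.add(item);
    }

    public List<String> getItems() {
        return items;
    }
}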
The EntityManager, like every other injected field in the EJB, can be used safely from the bean's methods thanks to this single-threaded access.
However, the EntityManager itself is not thread safe and cannot be shared in an environment where more than one thread can access it (e.g. a servlet). An EntityManagerFactory should be used instead in such cases, roughly as in the sketch below.
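A minimal sketch of the servlet case, using hypothetical names (BookServlet, a Book entity): the thread-safe EntityManagerFactory is injected once, and each request creates and closes its own EntityManager.
import java.io.IOException;
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/books")
public class BookServlet extends HttpServlet {

    // The EntityManagerFactory is thread safe and may be shared by all requests.
    @PersistenceUnit
    private EntityManagerFactory emf;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Create a short-lived EntityManager per request instead of sharing
        // a single EntityManager field across threads.
        EntityManager em = emf.createEntityManager();
        try {
            List<?> books = em.createQuery("select b from Book b").getResultList(); // Book is a hypothetical entity
            resp.getWriter().println("Found " + books.size() + " books");
        } finally {
            em.close();
        }
    }
}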
Related
When using a post-construct approach to conditionally initialise some of a bean's fields, do we need to worry about the visibility (volatility) of those fields, given that this is a multithreaded environment?
Say, we have something like this:
@ApplicationScoped
public class FooService {

    private final ConfigurationService configurationService;

    private FooBean fooBean;

    @Inject
    FooService(ConfigurationService configurationService) {
        this.configurationService = configurationService;
    }

    void init(@Observes @Initialized(ApplicationScoped.class) Object ignored) {
        if (configurationService.isFooBeanInitialisationEnabled()) {
            fooBean = initialiseFooBean(configurationService); // some initialisation
        }
    }

    void cleanup(@Observes @Destroyed(ApplicationScoped.class) Object ignored) {
        if (fooBean != null) {
            fooBean.cleanup();
        }
    }
}
So should fooBean be wrapped in, say, an AtomicReference, or be declared volatile, or would that be redundant extra protection?
P.S. In this particular case the question can be reformulated as: are the post-construct and post-destroy events performed by the same thread or not? However, I would like an answer for the more general case.
I would say it depends on which thread actually initializes and destroys the contexts.
If you use regular events, they are synchronous (asynchronous events were added in CDI 2.0 with @ObservesAsync, see
Java EE 8: Sending asynchronous CDI 2.0 events with ManagedExecutorService), so observers are called on the caller's thread.
In general, I don't think the same thread is used for both (whether in application servers or standalone applications), so I would recommend using volatile to ensure the correct value is seen (basically, that the value constructed on the init thread is visible on the destroy thread), as in the sketch below. That said, initializing and destroying your application concurrently is not a use case that happens very often...
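A minimal sketch of that recommendation, reusing FooService from the question: only the field declaration changes, the observers stay as they are.
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

@ApplicationScoped
public class FooService {

    private final ConfigurationService configurationService;

    // volatile: the value written by the initializing thread is guaranteed
    // to be visible to whichever thread later runs cleanup()
    private volatile FooBean fooBean;

    @Inject
    FooService(ConfigurationService configurationService) {
        this.configurationService = configurationService;
    }

    // init(...) and cleanup(...) observers exactly as in the question
}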
FooService is a singleton which is shared between all managed beans in the application.
Annotation Type ApplicationScoped
private FooBean fooBean is state held by that singleton object.
By default, CDI does not manage concurrency, so it is the developer's responsibility.
In this particular case it can be reformulated as: are post construct and post destroy events performed by the same thread or not?
The CDI specification does not require containers to use the same thread for initialization and destruction of the application context. This behavior is implementation specific. In the general case those threads will be different, because initialization happens on the thread handling the first request to the application, while destruction happens on the thread handling a request from the management console.
You may delegate concurrency management to the EJB container, if your runtime environment includes one.
Neither volatile nor AtomicReference is needed in this case!
The following definition will do the job:
@javax.ejb.Startup   // initialize on application start
@javax.ejb.Singleton // EJB singleton
public class FooService {

    private final ConfigurationService configurationService;

    private FooBean fooBean;

    @javax.inject.Inject
    FooService(ConfigurationService configurationService) {
        this.configurationService = configurationService;
    }

    @javax.annotation.PostConstruct
    void init() {
        if (configurationService.isFooBeanInitialisationEnabled()) {
            fooBean = initialiseFooBean(configurationService); // some initialisation
        }
    }

    @javax.annotation.PreDestroy
    void cleanup() {
        if (fooBean != null) {
            fooBean.cleanup();
        }
    }
}
According to the specification:
An event with qualifier #Initialized(ApplicationScoped.class) is synchronously fired when the application context is initialized.
An event with qualifier #BeforeDestroyed(ApplicationScoped.class) is synchronously fired when the application context is about to be destroyed, i.e. before the actual destruction.
An event with qualifier #Destroyed(ApplicationScoped.class) is synchronously fired when the application context is destroyed, i.e. after the actual destruction.
And according to this presentation of the bean manager lifecycle: the lifecycle of the bean manager is synchronous across the different states of the process, and the sequence is preserved: "destroy not before init".
JBoss is the specification lead of CDI 2.0.
I do not see any scenario that would require volatile or other protection. Even if T1 initializes and then T2 destroys, it will be T1 then T2, not T1 and T2 running concurrently.
And even if it were concurrent, having an issue would imply a weird edge-case scenario outside the CDI runtime:
T2 calls destroy (fooBean is null and is now 'cached' in a register),
then T1 calls init: destroy before init, at this point we are in the 4th dimension of CDI,
then T2 calls destroy again (fooBean is already cached in a register, so its value is null).
Or
T2 calls a method that accesses fooBean (fooBean is null and is now 'cached' in a register),
then T1 calls init: T1 is initialized even though fooBean has already been used by T2, at this point we are in the 4th dimension of CDI,
then T2 calls destroy (fooBean is already cached in a register, so its value is null).
Similar questions have been asked, but don't quite address what I'm trying to do. We have an older Seam 2.x-based application with a batch job framework that we are converting to CDI. The job framework uses the Seam Contexts object to initiate a conversation. The job framework also loads a job-specific data holder (basically a Map) that can then be accessed, via the Seam Contexts object, by any service down the chain, including from SLSBs. Some of these services can update the Map, so that job state can change and be detected from record to record.
It looks like in CDI the job will @Inject a CDI Conversation object and manually begin/end the conversation. We would also define a new @ConversationScoped bean that holds the Map (MapBean). Two things are not clear to me:
First, the job needs to also @Inject the MapBean so that it can be loaded with job-specific data before the Conversation.begin() method is called. Would the container know to pass this instance to services down the call chain?
Related to that, according to this question Is it possible to @Inject a @RequestScoped bean into a @Stateless EJB? it should be possible to inject a @ConversationScoped bean into an SLSB, but it seems a bit magical. If the SLSB is used by a different process (job, UI call, etc.), does it get a separate instance for each call?
Edits for clarification and a simplified class structure:
MapBean would need to be a ConversationScoped object, containing data for a specific instance/run of a job.
@ConversationScoped
public class MapBean implements Serializable {

    private Map<String, Object> data;

    // accessors
    public Object getData(String key) {
        return data.get(key);
    }

    public void setData(String key, Object value) {
        data.put(key, value);
    }
}
The job would be ConversationScoped:
@ConversationScoped
public class BatchJob {

    @Inject private MapBean mapBean;
    @Inject private Conversation conversation;
    @Inject private JobProcessingBean jobProcessingBean;

    public void runJob() {
        try {
            conversation.begin();
            mapBean.setData("key", "value"); // is this MapBean instance now bound to the conversation?
            jobProcessingBean.doWork();
        } catch (Exception e) {
            // catch something
        } finally {
            conversation.end();
        }
    }
}
The job might call an SLSB, and the current conversation-scoped instance of MapBean needs to be available to it:
@Stateless
public class JobProcessingBean {

    @Inject private MapBean mapBean;

    public void doWork() {
        // when this is called, is "mapBean" the current conversation instance?
        Object value = mapBean.getData("key");
    }
}
Our job and SLSB framework is quite complex: the SLSB can call numerous other services or locally instantiated business-logic classes, and each of these would need access to the conversation-scoped MapBean.
First, the job needs to also @Inject the MapBean so that it can be loaded with job-specific data before the Conversation.begin() method is called. Would the container know to pass this instance to services down the call chain?
Yes, since MapBean is @ConversationScoped it is tied to the call chain for the duration starting from conversation.begin() until conversation.end(). You can think of @ConversationScoped (and @RequestScoped and @SessionScoped) beans as instances in a ThreadLocal - while there exists an instance of them for every thread, each instance is tied to that single thread.
Related to that, according to this question Is it possible to @Inject a @RequestScoped bean into a @Stateless EJB? it should be possible to inject a @ConversationScoped bean into an SLSB, but it seems a bit magical. If the SLSB is used by a different process (job, UI call, etc.), does it get a separate instance for each call?
It's not as magical as you think once you see that this pattern is the same as the one I explained above. The SLSB indeed gets a separate instance, but not just any instance: the one that belongs to the scope from which the SLSB was called.
In addition to the link you posted, see also this answer.
I've tested code similar to what you posted and it works as expected - the MapBean is the same instance injected throughout the call. Just be careful with two things:
BatchJob is also @ConversationScoped but does not implement Serializable, which will not allow the bean to passivate.
data is not initialized, so you will get an NPE in runJob().
Without any code samples, I'll have to do some guessing, so let's see if I got you right.
Would the container know to pass this instance to services down the call chain?
If you mean to use the same instance elsewhere in the call, then this can easily be achieved by making MapBean an @ApplicationScoped bean (or, alternatively, an EJB @Singleton), as in the sketch below.
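A minimal sketch of that alternative, with one caveat added by me (not part of the answer): an application-scoped map is shared by all jobs, so it should use a thread-safe map and job-specific keys.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class MapBean {

    // shared by the whole application, so use a thread-safe map
    private final Map<String, Object> data = new ConcurrentHashMap<String, Object>();

    public Object getData(String key) {
        return data.get(key);
    }

    public void setData(String key, Object value) {
        data.put(key, value);
    }
}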
it should be possible to inject a @ConversationScoped bean into an SLSB, but it seems a bit magical.
Here I suppose the reason it seems magical is that an SLSB is, in CDI terms, a @Dependent bean. And as you probably know, CDI always creates a new instance for a dependent-bean injection point. So yes, you get a different SLSB/dependent-bean instance for each call.
Perhaps some other scope would fit you better here? Like @RequestScoped or @SessionScoped? Hard to tell without more details.
Please consider the sample below, from a Struts 2 and Spring based web application.
BookManager has an action that returns a Map of books to the client. It gets the map from the service layer, which is injected by Spring.
public class BookManager extends ActionSupport {

    // with setter and getter
    private Map<String, BookVO> books;

    @Inject
    BookService bookservice;

    @Action("book-form")
    public String form() {
        setBooks(bookservice.getAllBooks());
        return SUCCESS;
    }
}
The service layer gets the book list from the DB and returns a Map.
@Named
public class BookService {

    private Map<String, BookVO> books;

    public Map<String, BookVO> getAllBooks() {
        books = new HashMap<String, BookVO>();
        // fill books from DB
        return books;
    }
}
I have tested and found that the above implementation is not thread safe.
I can make the code thread safe by removing the private field books from BookService and using a local variable in the method instead: Map<String, BookVO> books = new HashMap<String, BookVO>();. Why does this change make the code thread safe?
The Struts action is thread safe; shouldn't this ensure that even a non-thread-safe Spring service runs in a thread-safe manner?
If I use the non-thread-safe version of the service in my action but create a new service object myself instead of using Spring injection, I face no issue. Why? If the service is not thread safe, why is creating a new instance and calling it thread safe?
I can make the code thread safe by removing the private field books from BookService and using a local variable in the method instead. Why does this change make the code thread safe?
Because method-local variables are confined to the thread executing the method, while instance fields are shared between all threads using that instance and are therefore not thread safe. See the sketch below.
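A minimal sketch of the fixed service, reusing BookVO from the question: the map becomes a local variable, so each calling thread works on its own instance.
import java.util.HashMap;
import java.util.Map;
import javax.inject.Named;

@Named
public class BookService {

    public Map<String, BookVO> getAllBooks() {
        // local variable: every calling thread gets its own map
        Map<String, BookVO> books = new HashMap<String, BookVO>();
        // fill books from DB
        return books;
    }
}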
The Struts action is thread safe; shouldn't this ensure that even a non-thread-safe Spring service runs in a thread-safe manner?
Nope. It depends.
If I use the non-thread-safe version of the service in my action but create a new service object myself instead of using Spring injection, I face no issue. Why? If the service is not thread safe, why is creating a new instance and calling it thread safe?
If you instantiate it manually in the action, you are creating an instance of that object that is private to that action; it is thread safe because actions are thread-local, and it is managed by you (which means that if your BookService class has any @Inject points in it, the container won't resolve them).
If instead the DI is managed by the container, the instance is not thread safe; what you're using (@Inject, @Named) is more than "Spring", it's Java EE: an implementation of JSR-330 (Dependency Injection) available in CDI-enabled applications (JSR-299).
CDI beans are not thread safe by default. You could use EJB 3's @Singleton for this to be thread safe, but you really don't need to keep that attribute at class level at all, since it's only used to be returned and is then left there to be overwritten the next time.
BTW, consider using the CDI reference implementation (Weld, in JBoss) with the Struts 2 CDI plugin; it's worth a try.
I am reviewing some code and have some questions regarding the implementation. This class is exposed as a web service and annotated as a singleton. A stateless EJB is injected into this class. This is what the code looks like:
@Singleton
@WebService
public class AImpl implements A, ARemote {

    // ...

    @EJB
    private B b;

    @WebMethod
    public String someWebService() {
        String value = this.b.someMethod();
        return value;
    }
}
Here is the definition of B:
@Local
public interface B {
    public String someMethod();
}

@Stateless(mappedName = "B")
public class BImpl implements B, BRemote {
    public String someMethod() {
        // do something
        return "result"; // placeholder
    }
}
My Question:
Since the web service is a singleton, there will be only one instance, and so only one thread (please correct me if I am wrong here). Although B can be backed by an instance pool, since it is stateless, does that matter, given that the web service it is called from is a singleton?
I am currently learning EJB concepts. Any input/clarification on my understanding will help.
Thanks.
By default, singleton EJBs are @ConcurrencyManagement(CONTAINER) and their methods are @Lock(WRITE), so you're right that methods on your singleton bean can only be called from a single thread at a time. However, if you use @ConcurrencyManagement(BEAN) or @Lock(READ), this would no longer be true. Anyway, in your example, if the singleton session bean is the only caller of BImpl, then even if your EJB server implements pooling for stateless session beans, it will only ever need to create one instance.
(Note that while stateless session beans can have an instance pool, they do not have a thread pool. The application server might use a thread pool for inbound requests, but they are not directly associated with the stateless session bean or related instances.)
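For completeness, a minimal sketch (my variation on AImpl, not code from the question) of relaxing the default locking so that the singleton can serve calls concurrently; in that case several pooled instances of B could be in use at once.
import javax.ejb.ConcurrencyManagement;
import javax.ejb.ConcurrencyManagementType;
import javax.ejb.EJB;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Singleton;
import javax.jws.WebMethod;
import javax.jws.WebService;

@Singleton
@WebService
@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER)
public class AImpl {

    @EJB
    private B b;

    // READ lock: multiple threads may run this method concurrently,
    // so calls can be dispatched to different pooled instances of B
    @Lock(LockType.READ)
    @WebMethod
    public String someWebService() {
        return b.someMethod();
    }
}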
We have a Java EE 5 stateless EJB that passes its injected EntityManager to its helper classes.
Is this safe? It has worked well until now, but I found an Oracle document stating that its implementation of EntityManager is thread safe. Now I wonder whether the reason we have had no issues so far is only that the implementation we were using happened to be thread safe (we use Oracle).
@Stateless
public class SomeBean {

    @PersistenceContext
    private EntityManager em;

    private SomeHelper helper;

    @PostConstruct
    public void init() {
        helper = new SomeHelper(em);
    }

    public void business() {
        helper.doSomethingWithEm();
    }
}
Actually it makes sense: if EntityManager were not thread safe, the container would have to do something like
intercept business():
    this.em = newEntityManager();
    business();
and that swap would not propagate to the helper classes.
If so, what is the best practice in this kind of situation? Passing an EntityManagerFactory instead of the EntityManager?
EDIT: This topic is very interesting, so if you are interested in this question you probably want to check out this one, too:
EDIT: More info.
EJB 3.0 spec
4.7.11 Non-reentrant Instances
The container must ensure that only one thread can be executing an instance at any time. If a client request arrives for an instance while the instance is executing another request, the container may throw the javax.ejb.ConcurrentAccessException to the second client[24]. If the EJB 2.1 client view is used, the container may throw the java.rmi.RemoteException to the second request if the client is a remote client, or the javax.ejb.EJBException if the client is a local client.[25] Note that a session object is intended to support only a single client. Therefore, it would be an application error if two clients attempted to invoke the same session object. One implication of this rule is that an application cannot make loopback calls to a session bean instance.
And,
4.3.2 Dependency Injection
A session bean may use dependency injection mechanisms to acquire references to resources or other objects in its environment (see Chapter 16, "Enterprise Bean Environment"). If a session bean makes use of dependency injection, the container injects these references after the bean instance is created, and before any business methods are invoked on the bean instance. If a dependency on the SessionContext is declared, or if the bean class implements the optional SessionBean interface (see Section 4.3.5), the SessionContext is also injected at this time. If dependency injection fails, the bean instance is discarded. Under the EJB 3.0 API, the bean class may acquire the SessionContext interface through dependency injection without having to implement the SessionBean interface. In this case, the Resource annotation (or resource-env-ref deployment descriptor element) is used to denote the bean's dependency on the SessionContext. See Chapter 16, "Enterprise Bean Environment".
I used a similar pattern, but the helper was created in @PostConstruct and the injected entity manager was passed to it as a constructor parameter. Each EJB instance had its own helper, so thread safety was guaranteed.
I also had a variant where the entity manager was not injected (because the EJB itself wasn't using it at all), so the helper had to look it up with InitialContext. In that case, the persistence context must still be "imported" in the parent EJB with @PersistenceContext:
@Stateless
@PersistenceContext(name = "OrderEM")
public class MySessionBean implements MyInterface {

    @Resource
    SessionContext ctx;

    public void doSomething() {
        EntityManager em = (EntityManager) ctx.lookup("OrderEM");
        // ...
    }
}
But it's actually easier to inject it (even if the EJB doesn't use it) than to look it up, especially for testability.
But to come back to your main question, I think that the entity manager that is injected or looked up is a wrapper that forwards to the underlying active entity manager that is bound to the transaction.
Hope it helps.
EDIT
Sections § 3.3 and § 5.6 of the spec cover the topic a bit.
I've been using helper methods and passing the EntityManager to them, and it is perfectly OK.
So I'd recommend either passing it to methods whenever needed, or making the helper a bean itself, injecting it (using @EJB) and injecting the EntityManager there as well, as sketched below.
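A minimal sketch of the second option, with hypothetical names (OrderHelper, OrderBean): the helper is itself a stateless bean with its own injected EntityManager, and the parent EJB injects the helper rather than passing the EntityManager around.
// OrderHelper.java
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class OrderHelper {

    // the helper gets its own container-managed EntityManager
    @PersistenceContext
    private EntityManager em;

    public void doSomethingWithEm(Object entity) {
        em.persist(entity);
    }
}

// OrderBean.java
import javax.ejb.EJB;
import javax.ejb.Stateless;

@Stateless
public class OrderBean {

    // inject the helper instead of handing it the EntityManager
    @EJB
    private OrderHelper helper;

    public void business(Object entity) {
        helper.doSomethingWithEm(entity);
    }
}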
Well, personally, I wouldn't like to have to pass the EntityManager to all my POJOs through constructors or methods, especially in non-trivial programs where the number of POJOs is large.
I would try to create POJOs/helper classes that deal with the entities returned by the EntityManager, instead of using the EntityManager directly, for example as sketched below.
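A minimal sketch of that idea, with hypothetical names (Order, OrderCalculator): the EJB does the EntityManager work and hands the loaded entities to a plain helper that never touches the persistence layer.
import java.math.BigDecimal;
import java.util.List;

// hypothetical entity, assumed for illustration
class Order {
    private BigDecimal amount;
    public BigDecimal getAmount() { return amount; }
}

// plain helper: works on already-loaded entities, never touches the EntityManager
public class OrderCalculator {

    public BigDecimal totalOf(List<Order> orders) {
        BigDecimal total = BigDecimal.ZERO;
        for (Order order : orders) {
            total = total.add(order.getAmount());
        }
        return total;
    }
}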
If that's not possible, I guess I'd create a new EJB bean.