The singleton pattern ensures that at most one instance of a class exists per application (more precisely, per JVM classloader), not one per thread.
How can I make sure only a single instance of the Guava ServiceManager is running per JVM? That way, whenever a new separate Java thread is launched, it can check whether the service manager is already running.
Why do you think that simply not creating multiple instances wouldn't work? Implement a ServiceManagerProvider as a singleton and use only serviceManagerProvider.get() to access the ServiceManager.
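For instance, without any DI container, a minimal plain-Java sketch of such a provider (class and member names are illustrative, not from the original answer) using the initialization-on-demand holder idiom, which gives lazy and thread-safe creation of the single ServiceManager:

import com.google.common.collect.ImmutableList;
import com.google.common.util.concurrent.ServiceManager;

public final class ServiceManagerProvider {

    private ServiceManagerProvider() {}

    // Holder idiom: the JVM guarantees this static initializer runs at most once.
    private static class Holder {
        static final ServiceManager INSTANCE =
                new ServiceManager(ImmutableList.of(/* your Services here */));
    }

    public static ServiceManager get() {
        return Holder.INSTANCE;
    }
}

Any thread can then call ServiceManagerProvider.get() and is guaranteed to see the same ServiceManager.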
Consider using Dependency Injection instead of the singleton (anti-)pattern:
import javax.inject.Provider;
import javax.inject.Singleton;
import com.google.common.util.concurrent.ServiceManager;

@Singleton
public class ServiceManagerProvider implements Provider<ServiceManager> {

    private final ServiceManager serviceManager = ...

    @Override
    public ServiceManager get() {
        return serviceManager;
    }
}
Here, you get a single instance per injector, which is exactly what you (should) want.
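For illustration, a minimal Guice wiring sketch (the module name is hypothetical) showing that every injection point in one injector then sees the same ServiceManager:

import com.google.common.util.concurrent.ServiceManager;
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.Singleton;

public class ServiceManagerModule extends AbstractModule {
    @Override
    protected void configure() {
        // One ServiceManager per injector, created by the provider above.
        bind(ServiceManager.class)
                .toProvider(ServiceManagerProvider.class)
                .in(Singleton.class);
    }
}

// Usage:
// Injector injector = Guice.createInjector(new ServiceManagerModule());
// ServiceManager sm = injector.getInstance(ServiceManager.class); // always the same instance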
Similar questions have been asked, but they don't quite address what I'm trying to do. We have an older Seam 2.x-based application with a batch job framework that we are converting to CDI. The job framework uses the Seam Contexts object to initiate a conversation. It also loads a job-specific data holder (basically a Map) that can then be accessed, via the Seam Contexts object, by any service down the chain, including from SLSBs. Some of these services can update the Map, so that job state can change and be detected from record to record.
It looks like in CDI the job will @Inject a CDI Conversation object and manually begin/end the conversation. We would also define a new @ConversationScoped bean that holds the Map (MapBean). What's not clear to me are two things:
First, the job needs to also @Inject the MapBean so that it can be loaded with job-specific data before the Conversation.begin() method is called. Would the container know to pass this instance to services down the call chain?
Related to that, according to the question Is it possible to @Inject a @RequestScoped bean into a @Stateless EJB?, it should be possible to inject a @ConversationScoped bean into an SLSB, but it seems a bit magical. If the SLSB is used by a different process (job, UI call, etc.), does it get a separate instance for each call?
Edits for clarification and a simplified class structure:
MapBean would need to be a @ConversationScoped object, containing data for a specific instance/run of a job.
@ConversationScoped
public class MapBean implements Serializable {

    private Map<String, Object> data;

    // accessors
    public Object getData(String key) {
        return data.get(key);
    }

    public void setData(String key, Object value) {
        data.put(key, value);
    }
}
The job would be ConversationScoped:
@ConversationScoped
public class BatchJob {

    @Inject private MapBean mapBean;
    @Inject private Conversation conversation;
    @Inject private JobProcessingBean jobProcessingBean;

    public void runJob() {
        try {
            conversation.begin();
            mapBean.setData("key", "value"); // is this MapBean instance now bound to the conversation?
            jobProcessingBean.doWork();
        } catch (Exception e) {
            // catch something
        } finally {
            conversation.end();
        }
    }
}
The job might call a SLSB, and the current conversation-scoped instance of MapBean needs to be available:
@Stateless
public class JobProcessingBean {

    @Inject private MapBean mapBean;

    public void doWork() {
        // when this is called, is "mapBean" the current conversation instance?
        Object value = mapBean.getData("key");
    }
}
Our job and SLSB framework is quite complex: the SLSB can call numerous other services or locally instantiated business logic classes, and each of these would need access to the conversation-scoped MapBean.
First, the job needs to also @Inject the MapBean so that it can be loaded with job-specific data before the Conversation.begin() method is called. Would the container know to pass this instance to services down the call chain?
Yes. Since MapBean is @ConversationScoped, it is tied to the call chain for the duration from conversation.begin() until conversation.end(). You can think of @ConversationScoped (and @RequestScoped and @SessionScoped) beans roughly like values in a ThreadLocal: an instance exists for every active context, but each instance is visible only to the single chain of calls it belongs to.
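As a loose analogy only (this is not how the container actually implements scopes), the propagation resembles a ThreadLocal:

import java.util.HashMap;
import java.util.Map;

// Rough analogy: each thread sees "its own" map, and every method invoked
// on that thread observes the same one without it being passed explicitly --
// just as CDI routes an @ConversationScoped proxy to the instance bound to
// the active conversation.
public class ConversationAnalogy {

    private static final ThreadLocal<Map<String, Object>> CURRENT =
            ThreadLocal.withInitial(HashMap::new);

    static void caller() {
        CURRENT.get().put("key", "value");
        downstream();
    }

    static void downstream() {
        Object value = CURRENT.get().get("key"); // same map as in caller()
    }
}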
Related to that, according to the question Is it possible to @Inject a @RequestScoped bean into a @Stateless EJB?, it should be possible to inject a @ConversationScoped bean into an SLSB, but it seems a bit magical. If the SLSB is used by a different process (job, UI call, etc.), does it get a separate instance for each call?
It's not as magical as you think once you see that this pattern is the same as the one explained above. The SLSB indeed gets a separate instance, but not just any instance: the injected reference is a proxy that resolves, on each invocation, to the instance belonging to the scope from which the SLSB was called.
In addition to the link you posted, see also this answer.
I've tested code similar to what you posted and it works as expected: the MapBean is the same one injected throughout the call. Just be careful with two things:
BatchJob is also @ConversationScoped but does not implement Serializable, which will not allow the bean to passivate.
data is not initialized, so you will get an NPE in runJob().
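A minimal corrected sketch addressing both caveats (purely illustrative, matching the classes above):

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;
import javax.enterprise.context.ConversationScoped;

@ConversationScoped
public class MapBean implements Serializable {

    // initialize eagerly to avoid the NPE in runJob()
    private Map<String, Object> data = new HashMap<>();

    // accessors as before ...
}

// and BatchJob should likewise declare:
// @ConversationScoped
// public class BatchJob implements Serializable { ... }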
Without any code samples, I'll have to do some guessing, so let's see if I got you right.
Would the container know to pass this instance to services down the call chain?
If you mean to use the same instance elsewhere in the call, then this can easily be achieved by making MapBean an @ApplicationScoped bean (or, alternatively, an EJB @Singleton).
it should be possible to inject a @ConversationScoped bean into an SLSB, but it seems a bit magical.
Here I suppose the reason it seems magical is that an SLSB is, in CDI terms, a @Dependent bean. And as you probably know, CDI always creates a new instance for each dependent-scoped injection point. So yes, you get a different SLSB/@Dependent bean instance for each call.
Perhaps some other scope would fit you better here, like @RequestScoped or @SessionScoped? Hard to tell without more details.
Suppose I have a stateful class that requires a utility-like stateless class to perform an operation on it, and these stateful classes are kept in a list in a (stateful) container class. This is what I would do in plain Java:
class Container {
    List<Stateful> sfList = new ArrayList<>();
}

class Stateful {
    List<String> list = new ArrayList<>();

    List<String> getList() {
        return list;
    }

    void someMethod() {
        list.add("SF");
        Stateless.foo(this);
    }
}

class Stateless {
    public static void foo(Stateful sf) {
        sf.getList().add("SL");
    }
}
and inside main this is the procedure:
Container container = new Container();
Stateful sf = new Stateful();
container.sfList.add(sf);
sf.someMethod();
Now with Java EE CDI I would do:
class StatefulBean {
    List<String> list = new ArrayList<>();

    @Inject
    StatelessBean slsb;

    void someMethod() {
        list.add("SF");
        slsb.add(this);
    }

    void add(String s) {
        list.add(s);
    }
}

@Stateless
class StatelessBean {
    public void add(StatefulBean sfsb) {
        sfsb.add("SL");
    }
}
In this case all StatefulBeans could access a shared pool of StatelessBeans with no concurrency issues, and it would scale properly with requests.
However, since Stateful is not a managed bean, I can't inject into it, so I used the utility class instead. Also, I'm creating Stateful with a constructor, so I can't inject the stateless bean into it (I would get an NPE when using it).
My questions are:
Are there concurrency and scalabilty differences between the stateless injection approach (provided it would work) and the utility class approach?
How can I make the EE injection approach work?
Are there concurrency and scalabilty differences between the stateless injection approach (provided it would work) and the utility class approach?
Yes there are, primarily around the loss of management. The way you're instantiating Stateful on every invocation of the method, there's no pooling involved. This is going to lead to you creating more instances than you probably need.
Another loss is on the scalability side. Where the container will manage the passivation and activation of your stateful bean in a distributed environment, the manual approach will see to it that you manage your own activation and passivation.
Since Stateful is not a managed bean...
Incorrect. According to the CDI spec, any Java class that meets the listed criteria (in your case, having a default no-arg constructor) is a managed bean. That means you could @Inject StatelessBean into Stateful and the container would oblige. To allow this level of management, you'll need to set bean-discovery-mode="all" in your beans.xml.
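For reference, a minimal CDI 1.1 beans.xml enabling that discovery mode (a sketch; adjust the schema version to your container):

<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                           http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
       version="1.1" bean-discovery-mode="all">
</beans>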
Even with the (apparently needless) circular reference, normal Java concurrency rules apply: as long as the state you're manipulating is not static or held in a static class, you're thread-safe. Each threaded call to that static method operates on its own stack, so no problems there.
How can I make the EE injection approach work?
If you need on-demand instantiation of Stateful (or any other bean, really), use CDI's Instance to programmatically obtain a managed instance of any class you want. You can then add something like this to Container:
@Inject @Dependent Instance<Stateful> stateful;

@PostConstruct
public void createStateful() {
    sfList = new ArrayList<>(); // instantiate sfList
    sfList.add(stateful.get()); // execute as many times as you need
}
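One caveat worth adding (an assumption based on CDI 1.1+ semantics, not part of the original answer): @Dependent instances obtained from Instance.get() are not released until the injecting bean is destroyed, so release them explicitly when you remove them, for example:

public void release(Stateful sf) {
    sfList.remove(sf);
    // destroy() ends the lifecycle of a programmatically obtained instance
    stateful.destroy(sf);
}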
In a Struts 2 and Spring web-based application, please consider the sample below.
BookManager has an action which returns a Map of books to the client. It gets the map from the service layer, which is injected by Spring:
public class BookManager extends ActionSupport {

    // with setter and getter
    private Map<String, BookVO> books;

    @Inject
    BookService bookservice;

    @Action("book-form")
    public String form() {
        setBooks(bookservice.getAllBooks());
        return SUCCESS;
    }
}
The service layer gets the book list from the DB and returns a Map:
@Named
public class BookService {

    private Map<String, BookVO> books;

    public Map<String, BookVO> getAllBooks() {
        books = new HashMap<String, BookVO>();
        // fill books from DB
        return books;
    }
}
I have tested and found that the above implementation is not thread safe.
I can make the code thread safe by removing the private field books from BookService and using a local variable instead: Map<String, BookVO> books = new HashMap<String, BookVO>();. Why does this change make the code thread safe?
The Struts action is thread safe; shouldn't that ensure that even a non-thread-safe Spring service runs in a thread-safe manner?
If I use the non-thread-safe version of the service in my action, by creating a new service object instead of using Spring injection, I face no issue. Why? If the service is not thread safe, why is it thread safe to create a new instance and call it?
I can make the code thread safe by removing the private field books from BookService and using a local variable instead: Map<String, BookVO> books = new HashMap<String, BookVO>();. Why does this change make the code thread safe?
Because method-local variables live on the stack of the calling thread and are therefore thread safe, while instance fields are shared by every thread calling the same bean instance, and are not.
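Concretely, the thread-safe variant keeps the map on each invocation's stack instead of in shared bean state (a sketch of the same service with the field removed):

@Named
public class BookService {

    public Map<String, BookVO> getAllBooks() {
        // local variable: each calling thread gets its own map on its own
        // stack, so concurrent requests can no longer see each other's state
        Map<String, BookVO> books = new HashMap<String, BookVO>();
        // fill books from DB
        return books;
    }
}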
The struts action is thread safe, shouldn't this assure that the even non thread safe spring service runs in a thread safe manner ?
Nope. It depends.
If I use the non-thread-safe version of the service in my action, by creating a new service object instead of using Spring injection, I face no issue. Why? If the service is not thread safe, why is it thread safe to create a new instance and call it?
If you instantiate it manually in the action, you are creating an instance of that object that is private to that action; it is thread-safe because actions are instantiated per request (effectively ThreadLocal), and it is managed by you (which means that if your BookService class has any @Inject points in it, the container won't resolve them).
If instead you have the DI managed by the container, the instance is not thread-safe; and what you're using (@Inject, @Named) is more than "Spring": it's Java EE's implementation of JSR-330 (Dependency Injection), available only in CDI-enabled applications (JSR-299).
CDI beans are not thread safe by default. You would need EJB3's @Singleton for this to be thread-safe, but you really don't need to retain that attribute at class level anyway, since it's used only to be returned, then left there to be overwritten on the next call.
BTW, consider using the reference CDI implementation (Weld, in JBoss) with the Struts2 CDI plugin; it's worth a try.
I was wondering whether the following scenario is thread-safe.
I have a Spring controller with this method:
@Autowired
private JobService jobService;

public String launch(@ModelAttribute("profile") Profile profile) {
    JobParameters jobParams = MyUtils.transform(profile);
    jobService.launch(profile.getJobName(), jobParams);
    return "job";
}
and I have a MyUtils class with a static method that transforms one kind of object into another, like so:
public class MyUtils {

    public static JobParameters transform(Profile profile) {
        JobParametersBuilder jpb = new JobParametersBuilder();
        jpb.addString("profile.name", profile.getProfileName());
        jpb.addString("profile.number", String.valueOf(profile.getNumber()));
        return jpb.toJobParameters();
    }
}
The classes JobParametersBuilder, JobParameters and JobService come from the Spring Batch Core project. Profile is a simple POJO.
The question really is: is this static transform method thread-safe, given that it deals with object instances, even though all of those instances are created locally within the method?
This concrete code IS thread safe, provided some conditions are met. Here is the explanation:
The launch method is called from a Spring Boot controller. Every call that reaches a controller runs on a different thread, and that thread carries the execution to the end of the call stack (unless you make an asynchronous call somewhere inside it). By default Tomcat can handle 200 threads at a time, which means you can have 200 simultaneous calls to your controllers, each running on a separate thread. So launch is thread safe.
Profile is passed to the transform method, and if it is a simple POJO this is thread safe, because each call gets its own new instance of Profile.
Inside transform you instantiate a new JobParametersBuilder on every call, which is thread safe as long as the code inside toJobParameters is itself thread safe and doesn't keep any state in the JobParametersBuilder class or anywhere else.
I'm trying to change some legacy code to use DI with the Spring framework. I have a concrete case and am wondering what the most appropriate way to implement it is.
It is a Java desktop application. There is a DataManager interface used to query/change data in the data store. Currently there is only one implementation, which uses an XML file as the store, but in the future it may be possible to add a SQL implementation. Also, for unit testing, I may need to mock it.
Currently, every piece of code that needs the data manager retrieves it by using a factory. Here is the source code of the factory:
public class DataManagerFactory
{
    private static DataManagerIfc dataManager;

    public static DataManagerIfc getInstance()
    {
        // Let's assume synchronization is not needed
        if (dataManager == null)
            dataManager = new XMLFileDataManager();
        return dataManager;
    }
}
Now I see 3 ways to change the application to use DI and Spring.
I. Inject the dependency only into the factory and do not change any other code.
Here is the new code:
public class DataManagerFactory
{
    private DataManagerIfc dataManager;

    public DataManagerFactory(DataManagerIfc dataManager)
    {
        this.dataManager = dataManager;
    }

    public DataManagerIfc getDataManager()
    {
        return dataManager;
    }

    public static DataManagerIfc getInstance()
    {
        return getFactoryInstance().getDataManager();
    }

    public static DataManagerFactory getFactoryInstance()
    {
        ApplicationContext context =
            new ClassPathXmlApplicationContext(new String[] {"com/mypackage/SpringConfig.xml"});
        return context.getBean(DataManagerFactory.class);
    }
}
And the XML with the bean description:
<bean id="dataManagerFactory"
class="com.mypackage.DataManagerFactory">
<constructor-arg ref="xmlFileDataManager"/>
</bean>
<bean id="xmlFileDataManager"
class="com.mypackage.datamanagers.xmlfiledatamanager.XMLFileDataManager">
</bean>
II. Change every class that uses the data manager so that it takes it through the constructor and stores it in a field. Define Spring beans only for the "root" classes where the chain of creation starts.
III. Same as II, but for every class that uses the data manager, create a Spring bean definition and have the Spring IoC container instantiate every such class.
As I'm new to the DI concept, I would appreciate any advice on what the correct, best-practice solution would be.
Many thanks in advance.
Use option 3.
The first option keeps your code untestable. You won't be able to easily mock the static factory method so that it returns a mock DataManager.
The second option will force you to have the root classes know all the dependencies of all the non-root classes in order to make the code testable.
The third option really uses dependency injection: each bean knows only about its direct dependencies, which are injected by the DI container.
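To make that concrete, here is a sketch of option III (ReportGenerator is a hypothetical consumer of the data manager): the class declares its direct dependency in its constructor, and a bean definition in the same style as the XML above wires everything, so no code ever calls the factory or the container directly.

public class ReportGenerator
{
    private final DataManagerIfc dataManager;

    public ReportGenerator(DataManagerIfc dataManager)
    {
        this.dataManager = dataManager;
    }

    // business methods use this.dataManager; no factory lookup needed
}

<bean id="reportGenerator" class="com.mypackage.ReportGenerator">
    <constructor-arg ref="xmlFileDataManager"/>
</bean>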
Well... why did you write the factory in the first place? Spring is not intended to make you change how you write code (not just to suit Spring, that is), so keeping the factory is correct, as it uses a well-known pattern. Injecting the dependency into the factory will retain that behaviour.
Option 3 is the correct route to take. By using such a configuration you can usefully take components of your configuration and use them in new configurations, and everything will work as expected.
As a rule of thumb, I would expect one call to Spring to instantiate the application context and get the top-level bean. I wouldn't expect to make repeated calls to the Spring framework to get multiple beans. Everything should be injected at the correct level to reflect responsibilities etc.
Beware (since you're new to this) of plumbing your data manager into every class available! This is quite a common mistake to make, and if you've not abstracted out and centralised responsibilities sufficiently, you'll find you're configuring classes with lots of managers. When you see that happening, it's a good time to step back and look at your abstractions and componentisation.