Spring MVC @Scope("session") not creating threads - java

Say I have the following class...
@Controller
public class WebController {

    @Autowired
    private PlayerService playerService;

    @RequestMapping(value = "/get", method = RequestMethod.GET)
    @ResponseBody
    @Scope("session")
    public List<Player> getPerson(String personName) {
        return playerService.getByName(personName);
    }
}
Now this invokes the following service...
#Service("playerService")
public class PlayerServiceImpl implements PlayerService {
private List<Player> players;
#Override
#Transactional
public List<Player> getByName(final String name) {
if (players == null) {
players = getAll();
}
return getValidPlayers(name);
}
When I first start my application, players is null, as expected; when I invoke this method again in the same session with a new value, players is no longer null, which is also what I'd expect. However, no new thread appears to be created: if I open a new browser window (therefore creating a new session) and invoke this method, it still has the values from the previous session.
Why is @Scope("session") not creating a new thread in the thread pool?
I've specified <context:component-scan base-package="com." /> in my servlet-context as expected, and everything works fine except that the service methods all act as singletons rather than creating a new thread per session, like, say, a Java EE container would.
If players were marked as static I'd understand.
I've also tried marking my controller as @Scope("session") (as shown below), but this appears to have no impact either. What's the best way to make my Spring app create a new thread for a new session?
@Controller
@Scope("session")
public class PlayerController {

You are using the @Scope annotation the wrong way.
Quoting the docs:
When used as a type-level annotation in conjunction with the @Component annotation, indicates the name of a scope to use for instances of the annotated type.
When used as a method-level annotation in conjunction with the @Bean annotation, indicates the name of a scope to use for the instance returned from the method.
So you can annotate either a Spring component bean or, if you're using Java config, a method that creates a bean. Java config support is the only reason this even compiles (it wouldn't in pre-3.0 Spring).
In your case the annotation sits on a normal bean method, where it doesn't mean anything.
Solving the right problem
It looks like you're trying to implement a DB cache by storing query results in the List<Player> players field.
Don't do that. Use one of the prebuilt cache abstractions instead (Spring has a very nice one).
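For example, a rough sketch using Spring's cache abstraction (the cache name and the loadPlayersByName call are illustrative placeholders, not part of the original code):

@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        // simple in-memory cache backed by a ConcurrentHashMap; fine for a single node
        return new ConcurrentMapCacheManager("playersByName");
    }
}

@Service("playerService")
public class PlayerServiceImpl implements PlayerService {

    @Override
    @Transactional(readOnly = true)
    @Cacheable("playersByName")
    public List<Player> getByName(final String name) {
        // the body only runs on a cache miss for this name;
        // the players field and the null check are no longer needed
        return loadPlayersByName(name); // placeholder for the actual query
    }
}

Note that, like @Transactional, @Cacheable is applied through a proxy, so it only takes effect on calls that come in from outside the bean.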
So where should @Scope go?
Annotating the @Controller with @Scope("session") won't help, because it will create session-scoped controllers while the service they have injected is still a singleton.
Annotating only the @Service bean won't work either, because the @Controller is a singleton and its dependencies are autowired at application startup.
Annotating both the @Service and the @Controller might work, but it seems a bit heavy-handed.
It's better to avoid state altogether.

New threads are created for each request.
Your service has an instance variable (players) which is not thread-safe: it is shared by all threads. Any Spring bean, including controllers and services, is a singleton by default; you need to specify the scope on the service annotation.
#Service("playerService")
#Scope("session")
public class PlayerServiceImpl
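Note that when a session-scoped service is injected into a singleton controller, a scoped proxy is normally required so that each session gets its own instance behind the proxy; a minimal sketch:

@Service("playerService")
@Scope(value = WebApplicationContext.SCOPE_SESSION, proxyMode = ScopedProxyMode.TARGET_CLASS)
public class PlayerServiceImpl implements PlayerService {
    // the singleton controller holds a proxy; each call is routed to the
    // PlayerServiceImpl instance bound to the current HTTP session
}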
But it's best (simpler, easier to scale) to keep beans singletons and not rely on instance variables (unless they are themselves Spring-managed, thread-safe singletons).

Related

Why is @Bean(initMethod="") not detecting given method in spring?

Edit: Fixed by changing the package.
I have this configuration file for the Spring framework:
@Configuration
public class AppConfig {

    @Bean(initMethod = "populateCache")
    public AccountRepository accountRepository() {
        return new JdbcAccountRepository();
    }
}
JdbcAccountRepository looks like this.
@Repository
public class JdbcAccountRepository implements AccountRepository {

    @Override
    public Account findByAccountId(long accountId) {
        return new SavingAccount();
    }

    public void populateCache() {
        System.out.println("Populating Cache");
    }

    public void clearCache() {
        System.out.println("Clearing Cache");
    }
}
I'm new to the Spring framework and trying to use initMethod and destroyMethod. Both of them produce the following error:
Caused by: org.springframework.beans.factory.support.BeanDefinitionValidationException: Could not find an init method named 'populateCache' on bean with name 'accountRepository'
Here is my main method.
public class BeanLifeCycleDemo {

    public static void main(String[] args) {
        ConfigurableApplicationContext applicationContext =
                new AnnotationConfigApplicationContext(AppConfig.class);
        AccountRepository bean = applicationContext.getBean(AccountRepository.class);
        applicationContext.close();
    }
}
Edit
I was practicing from a book and had created many packages for different chapters. The error was that I was importing a different JdbcAccountRepository from another package, one that did not have that method. I fixed the import and it works now. The answers hinted me towards this.
Like you said, if you are mixing configuration styles, it can be confusing. Besides, even though you declared the bean as type AccountRepository, because Spring does a lot of things at runtime, it can still call your initMethod even when the compiler couldn't.
So yes, if you have many beans of the same type, Spring can get confused and not know which one to call, hence your exception.
Oh, and by the way: since a @Configuration class creates the accountRepository bean, you can remove @Repository from JdbcAccountRepository. It's either @Configuration + @Bean, or @Component/@Repository/@Service + @ComponentScan.
TL;DR
Here is more information on how Spring creates your bean and what objects are injected by Spring.
@Bean(initMethod = "populateCache")
public AccountRepository accountRepository() {
    return new JdbcAccountRepository();
}
With this code, Spring will:
Detect that you want to add a bean to the application context.
Retrieve the bean information from the method signature. In your case, it will create a bean of type AccountRepository named accountRepository... that's all Spring knows; it won't look inside your method body.
Once Spring is done analysing your classpath, or scanning the bean definitions, it will start instantiating your objects.
It will therefore create your bean accountRepository of type AccountRepository.
But Spring is "clever" and nice to us. Even though you couldn't write this code without your compiler yelling at you, Spring can still call your method.
To see this for yourself, try writing this code:
AccountRepository accountRepository = new JdbcAccountRepository();
accountRepository.populateCache(); // compiler error: the method is not defined on AccountRepository
But it works for Spring... magic.
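Under the hood this is plain reflection (java.lang.reflect) against the runtime class rather than the declared type; a simplified illustration, not Spring's actual code, with exception handling omitted:

AccountRepository repo = new JdbcAccountRepository();
// the compiler only sees the AccountRepository interface, but reflection
// looks at the runtime class, which does declare populateCache()
Method initMethod = repo.getClass().getMethod("populateCache");
initMethod.invoke(repo); // prints "Populating Cache"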
My recommendation (but you might be thinking the same by now): if you have classes across many packages answering different business cases, then rely on @Configuration classes. @ComponentScan is great to kickstart your development, but it reaches its limits when your application grows.
You are mixing two different ways of declaring Spring beans:
Using @Configuration classes. Spring finds all beans annotated with @Configuration and uses them as a reference for which beans should be created.
So if you follow this path of configuration, don't use @Repository on the beans. Spring will detect them anyway.
Using @Repository, the other way around: you don't need @Configuration in this case. If you decide to use @Repository, put the @PostConstruct annotation on the method and Spring will call it; in this case remove the @Configuration class altogether (or at least remove the @Bean method that creates JdbcAccountRepository).
Annotate the populateCache method with @PostConstruct and remove initMethod from @Bean. It will work.
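With the component-scanning route, that would look roughly like this (findByAccountId omitted for brevity; @PostConstruct and @PreDestroy come from javax.annotation, or jakarta.annotation on newer versions):

@Repository
public class JdbcAccountRepository implements AccountRepository {

    // other methods omitted

    @PostConstruct
    public void populateCache() {
        // called by the container once the bean is constructed and injected
        System.out.println("Populating Cache");
    }

    @PreDestroy
    public void clearCache() {
        // called when the application context is closed
        System.out.println("Clearing Cache");
    }
}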

Why does Spring's @Transactional work without a proxy?

I got interested in how Spring's @Transactional works internally, but everywhere I read about it there's the concept of a proxy. Proxies are supposed to be autowired in place of the real bean and "decorate" the base method with additional transaction-handling logic.
The theory is quite clear to me and makes perfect sense, so I tried to check how it works in action.
I created a Spring Boot application with basic controller and service layers and marked one service method with the @Transactional annotation. The service looks like this:
public class TestService implements ITestService {

    @PersistenceContext
    EntityManager entityManager;

    @Transactional
    public void doSomething() {
        System.out.println("Service...");
        entityManager.persist(new TestEntity("XYZ"));
    }
}
The controller calls the service:
public class TestController {

    @Autowired
    ITestService testService;

    @PostMapping("/doSomething")
    public ResponseEntity addHero() {
        testService.doSomething();
        System.out.println(Proxy.isProxyClass(testService.getClass()));
        System.out.println(testService);
        return new ResponseEntity(HttpStatus.OK);
    }
}
The whole thing works, the new entity is persisted to the DB, but the whole point of my concern is the output:
Service...
false
com.example.demo.TestService@7fb48179
It seems that the service class was injected directly instead of a proxy class. Not only does isProxyClass return false, but the class output ("com.example.demo.TestService@7fb48179") also suggests it's not a proxy.
Could you please help me out with this? Why wasn't a proxy injected, and how does it even work without one? Is there any way I can "force" it to be proxied, and if so, why isn't the proxy injected by default by Spring?
There's not much more to add; this is a really simple app. The application properties are nothing fancy either:
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.username=root
spring.datasource.password=superSecretPassword
spring.datasource.url=jdbc:mysql://localhost:3306/heroes?serverTimezone=UTC
spring.jpa.hibernate.ddl-auto=create-drop
Thank you in advance!
Your understanding is correct, but your test is flawed:
When the Spring docs say "proxy", they are referring to the pattern, not a particular implementation. Spring supports various strategies for creating proxy objects. One of these is the java.lang.reflect.Proxy you tested for, but by default Spring uses a more advanced technique: it generates a new class definition at runtime that subclasses the actual implementation class of the service (and overrides all methods to apply the transaction advice). You can see this in action by checking testService.getClass(), which will refer to that generated class, or by halting execution in a debugger and inspecting the fields of testService.
The reason toString() refers to the original object is that the proxy implements toString() by delegating to its target object, which uses its own class name to build the string.
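If you want to confirm this from inside the controller, checks along these lines (using org.springframework.aop.support.AopUtils) should report a CGLIB proxy in a default Spring Boot setup:

System.out.println(testService.getClass());                      // something like TestService$$EnhancerBySpringCGLIB$$...
System.out.println(AopUtils.isAopProxy(testService));            // true
System.out.println(AopUtils.isCglibProxy(testService));          // true
System.out.println(AopUtils.isJdkDynamicProxy(testService));     // false
System.out.println(Proxy.isProxyClass(testService.getClass()));  // false, as you observed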

Starting a CDI conversation and injecting a @ConversationScoped bean into a stateless session bean

Similar questions have been asked, but they don't quite address what I'm trying to do. We have an older Seam 2.x-based application with a batch job framework that we are converting to CDI. The job framework uses the Seam Contexts object to initiate a conversation. The job framework also loads a job-specific data holder (basically a Map) that can then be accessed, via the Seam Contexts object, by any service down the chain, including from SLSBs. Some of these services can update the Map, so that job state can change and be detected from record to record.
It looks like in CDI, the job will @Inject a CDI Conversation object and manually begin/end the conversation. We would also define a new @ConversationScoped bean that holds the Map (MapBean). What's not clear to me are two things:
First, the job needs to also @Inject the MapBean so that it can be loaded with job-specific data before the Conversation.begin() method is called. Would the container know to pass this instance to services down the call chain?
Related to that, according to this question (Is it possible to @Inject a @RequestScoped bean into a @Stateless EJB?) it should be possible to inject a @ConversationScoped bean into an SLSB, but it seems a bit magical. If the SLSB is used by a different process (job, UI call, etc.), does it get a separate instance for each call?
Edits for clarification and a simplified class structure:
MapBean would need to be a @ConversationScoped object, containing data for a specific instance/run of a job.
@ConversationScoped
public class MapBean implements Serializable {

    private Map<String, Object> data;

    // accessors
    public Object getData(String key) {
        return data.get(key);
    }

    public void setData(String key, Object value) {
        data.put(key, value);
    }
}
The job would be @ConversationScoped:
@ConversationScoped
public class BatchJob {

    @Inject private MapBean mapBean;
    @Inject private Conversation conversation;
    @Inject private JobProcessingBean jobProcessingBean;

    public void runJob() {
        try {
            conversation.begin();
            mapBean.setData("key", "value"); // is this MapBean instance now bound to the conversation?
            jobProcessingBean.doWork();
        } catch (Exception e) {
            // catch something
        } finally {
            conversation.end();
        }
    }
}
The job might call an SLSB, and the current conversation-scoped instance of MapBean needs to be available:
@Stateless
public class JobProcessingBean {

    @Inject private MapBean mapBean;

    public void doWork() {
        // when this is called, is "mapBean" the current conversation instance?
        Object value = mapBean.getData("key");
    }
}
Our job and SLSB framework is quite complex; the SLSB can call numerous other services or locally instantiated business logic classes, and each of these would need access to the conversation-scoped MapBean.
First, the job needs to also @Inject the MapBean so that it can be loaded with job-specific data before the Conversation.begin() method is called. Would the container know to pass this instance to services down the call chain?
Yes; since MapBean is @ConversationScoped, it is tied to the call chain for the duration from conversation.begin() until conversation.end(). You can think of @ConversationScoped (and @RequestScoped and @SessionScoped) much like instances in a ThreadLocal: while an instance exists for every thread, each instance is tied to that single thread.
Related to that, according to this question (Is it possible to @Inject a @RequestScoped bean into a @Stateless EJB?) it should be possible to inject a @ConversationScoped bean into an SLSB, but it seems a bit magical. If the SLSB is used by a different process (job, UI call, etc.), does it get a separate instance for each call?
It's not as magical as you think once you see that this pattern is the same as the one explained above. The SLSB indeed gets a separate instance, but not just any instance: the one which belongs to the scope from which the SLSB was called.
In addition to the link you posted, see also this answer.
I've tested code similar to what you posted and it works as expected: the MapBean is the same one injected throughout the call. Just be careful with two things (a sketch addressing both follows the list):
BatchJob is also @ConversationScoped but does not implement Serializable, which will not allow the bean to passivate.
data is not initialized, so you will get an NPE in runJob().
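A sketch that addresses both points (just adding Serializable and eagerly initializing the map):

@ConversationScoped
public class MapBean implements Serializable {

    private static final long serialVersionUID = 1L;

    // initialized eagerly so runJob() no longer hits an NPE
    private Map<String, Object> data = new HashMap<>();

    public Object getData(String key) {
        return data.get(key);
    }

    public void setData(String key, Object value) {
        data.put(key, value);
    }
}

BatchJob would likewise declare implements Serializable.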
Without any code samples, I'll have to do some guessing, so let's see if I got you right.
Would the container know to pass this instance to services down the call chain?
If you mean to use the same instance elsewhere in the call, then this can easily be achieved by making MapBean an @ApplicationScoped bean (or, alternatively, an EJB @Singleton).
it should be possible to inject a @ConversationScoped bean into an SLSB, but it seems a bit magical.
Here I suppose that the reason it seems magical is that an SLSB is, in CDI terms, a @Dependent bean. And as you probably know, CDI always creates a new instance for a dependent-bean injection point. So yes, you get a different SLSB/@Dependent bean instance for each call.
Perhaps some other scope would fit you better here? Like @RequestScoped or @SessionScoped? Hard to tell without more details.

What is the right design pattern to get a prototype-bean from a component-bean?

I am just wondering what a good architecture design looks like.
Let's say we have a CarRepository which manages all beans of type Car in a car rental application.
Car beans are of type prototype
CarRepository bean is of type repository (singleton)
Now, the CarRepository is asked to create a new Car bean, e.g. when the rental company has bought a new car.
Of course, I could implement ApplicationContextAware and use context.getBean("car"), but for me it doesn't fit well with the idea of dependency injection. What is best practice for injecting a shorter-lived bean into a singleton?
Update: Maybe I should add an example to make it clearer.
@Repository
public class CarRepository {

    private List<Car> cars;

    public void registerNewCar(int id, String model) {
        // I don't want to access the Spring context via ApplicationContextAware here
        // Car tmp = (Car) context.getBean("car");
        // tmp.setId(id);
        // tmp.setModel(model);
        // cars.add(tmp);
    }
}

@Scope("prototype")
public class Car {

    private int id;
    private String model;

    // getters and setters
}
Spring offers a mechanism for injecting a shorter-lived bean into a longer-lived one: the scoped proxy. How it works is that the singleton is injected with a proxy that handles method calls by looking up a bean instance in the shorter scope (like session or request) and delegating to that instance.
You didn't specify whether you are using XML or annotations to configure your application, or what version of Spring you are using. You can read more about configuring the scoped proxy with XML in the reference guide. I'm going to give you an example of how to configure it with annotations in a Spring 4-ish environment.
For me the best way is to use the meta-annotation mechanism. It allows you to create your own annotations that are later used by Spring to configure your app. For example:
@Retention(RUNTIME)
@Scope(value = WebApplicationContext.SCOPE_SESSION, proxyMode = ScopedProxyMode.TARGET_CLASS)
public @interface SessionScoped {
}
Such an annotation, when specified on a @Component (or @Service, or any other specialization) class or on a @Bean method in your Java config, will cause that bean to be injected as a proxy. For example:
@Configuration
@EnableAspectJAutoProxy
public class MyConfig {

    @SessionScoped
    @Bean
    public MyClass myBean() {
        // return your bean
    }
}
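A singleton can then simply inject the proxied bean and call it; every invocation is routed to the instance bound to the current session (SomeSingletonService and someMethod() below are placeholders, only MyClass comes from the config above):

@Service
public class SomeSingletonService {

    // this field holds a scoped proxy, not a concrete MyClass instance
    @Autowired
    private MyClass myBean;

    public void doSomething() {
        // each call is delegated to the MyClass bound to the current session
        myBean.someMethod();
    }
}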
All that being said, your example really makes me think you should be working with entities (Car) and repositories. See Spring Data if you are designing the model layer and want to store Car data in your database, etc.
If you do not want to use context.getBean(...), then you can construct the car with new Car(...), as it will have the same effect.
Those are the only two ways!

Spring MVC Scoped Bean Dependencies and Race Conditions

I have some serious doubts about scoped bean dependencies because of my lack of Spring knowledge.
I have read the reference manual, section 3.5.4.5 "Scoped beans as dependencies", and have successfully implemented an example of it.
However, before going further, I wanted to share my concerns.
Let me share my use case and a few implementation details.
For each user request I would like to create a city per user.
@Configuration
public class CityFactory {

    @Bean(name = {"currentCity", "loggedInCity"})
    @Scope(value = WebApplicationContext.SCOPE_REQUEST, proxyMode = ScopedProxyMode.TARGET_CLASS)
    @Autowired
    public CityBean getCityBean(HttpServletRequest request) {
        return CityUtil.findCityWithHostName(request.getServerName());
    }
}
For each request I want to inject this city into a singleton-scoped controller, which is the default scope for controllers.
#RequestMapping("/demo")
#Controller
public class DemoController {
#Autowired
CityBean city;
#RequestMapping(value = "/hello/{name}", method = RequestMethod.GET)
public ModelAndView helloWorld(#PathVariable("name") String name, Model model) {
Map<String, Object> myModel = new HashMap<String, Object>();
model.addAttribute("hello", name);
model.addAttribute("test", "test in " + city.getDomainName() + " !!! ");
return new ModelAndView("v3/test", "m", model);
}
}
My questions:
1) Is there any race condition? I am afraid of context switches destroying my application in a multi-request environment.
2) I am aware of another solution, creating a controller per request, but it is more error-prone than the current solution, because another developer can forget to scope the controllers to request.
How can I make controllers request-scoped globally? Just out of curiosity.
Thanks...
No race conditions: each request has its own thread.
But I think there's an easier way to do what you want. You can have your CityBean:
@Service
public class CityBean {

    public String getDomainName(String serverName) {
        // obtain the name based on the server name
    }
}
And in your controller:
@Autowired CityBean cityBean;
Pass the HttpServletRequest as an argument to the handler method and call cityBean.getDomainName(request.getServerName());
(If you use some ORM, perhaps you'll have a City entity, which you can fetch and pass around; just beware of lazy collections.)
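Put together, the stateless variant looks roughly like this (a sketch based on the controller from the question):

@RequestMapping("/demo")
@Controller
public class DemoController {

    @Autowired
    CityBean cityBean;

    @RequestMapping(value = "/hello/{name}", method = RequestMethod.GET)
    public ModelAndView helloWorld(@PathVariable("name") String name,
                                   HttpServletRequest request, Model model) {
        // no per-request state is kept in any bean; the server name is read
        // from the current request, so there is nothing for threads to race on
        model.addAttribute("hello", name);
        model.addAttribute("test", "test in " + cityBean.getDomainName(request.getServerName()) + " !!! ");
        return new ModelAndView("v3/test", "m", model);
    }
}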
There are no race conditions here.
That's the point of scoped proxies: the instance of CityBean injected into DemoController is a proxy which delegates calls of its methods to the actual request-bound instance of CityBean, so each request works with its own CityBean.
I agree that you shouldn't make the controller itself request-scoped; it would be confusing for other people since it's not a typical approach in Spring MVC applications.
You can also follow the approach suggested by Bozho and get rid of the request-scoped bean entirely, though that approach has a drawback: it requires you to add an extra argument to your controller methods.
