Spring lazy loading only after the bean is really called - java

I would like to have a (singleton) bean initialized only when it is actually used (instead of when it is merely autowired). Let's say I have a Client that I want to initialize only when I call any of its methods:
@Component
@Lazy(true)
public class Client {

    @PostConstruct
    void init() {}

    void action() {}
}
And I have a Service class that sometimes uses it (and sometimes maybe not).
@Service
public class Service {

    @Autowired
    Client client;

    void action1WithClient() {}
    void action2WithClient() {}
    void actionWithoutClient() {}
}
As it is now, the client is initialized right at application startup without actually being used, due to @Autowired and the fact that Service is eagerly loaded.
Currently the only solution that comes to mind is to do a kind of double-checked locking, explicitly asking the Spring application context for the Client bean when someone tries to use it (i.e. without @Autowired), or (maybe even better) to do "manual" lazy loading inside the Client.
Question: Is there a "Spring" way to postpone the initialization of the client until any of its methods is actually called (e.g. something like the lazy loading that works for Hibernate collections)?
I am using Spring 4.

Ah, ok, I should have read the javadoc more carefully... The solution seems to be to add the annotation to each autowired dependency:
@Autowired
@Lazy
Client client;
Anyway - does anybody know if it is possible to omit such declarations? This approach might be error prone - one can easily forget to use @Lazy at each injection point.

Edit:
The easiest way is @ComponentScan(lazyInit = true, basePackages=...).
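For reference, a minimal sketch of that variant (the package name is only a placeholder):

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

@Configuration
@ComponentScan(lazyInit = true, basePackages = "com.example.app") // placeholder package; scanned beans default to lazy-init
public class AppConfig {
}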
Previous answer:
There is http://docs.spring.io/spring/docs/3.0.x/javadoc-api/org/springframework/aop/target/LazyInitTargetSource.html
A bean wrapped with LazyInitTargetSource will not be created until the first actual usage.
How to nicely wrap most of your beans with this wrapper? One possible way is to create your own BeanFactoryPostProcessor...
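For a single bean, the wiring could look roughly like this. This is only a sketch under assumptions: the Client is registered manually here as a lazy bean named "realClient" (instead of being picked up by component scanning), and the BeanFactoryPostProcessor that would generalize this to many beans is left out.

import org.springframework.aop.framework.ProxyFactory;
import org.springframework.aop.target.LazyInitTargetSource;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Lazy;
import org.springframework.context.annotation.Primary;

@Configuration
public class LazyClientConfig {

    // The real bean; @Lazy keeps the container from creating it at startup.
    @Bean
    @Lazy
    public Client realClient() {
        return new Client();
    }

    // A CGLIB proxy around "realClient". LazyInitTargetSource asks the BeanFactory
    // for the target only when a method is first invoked on the proxy, so
    // @PostConstruct on Client runs at first use, not at injection time.
    @Bean
    @Primary
    public Client client(BeanFactory beanFactory) {
        LazyInitTargetSource targetSource = new LazyInitTargetSource();
        targetSource.setTargetBeanName("realClient");
        targetSource.setBeanFactory(beanFactory);

        ProxyFactory proxyFactory = new ProxyFactory();
        proxyFactory.setTargetSource(targetSource);
        proxyFactory.setProxyTargetClass(true); // Client is a class, not an interface
        return (Client) proxyFactory.getProxy();
    }
}

Service can then keep its plain @Autowired Client field; it receives the @Primary proxy, and the real Client is only initialized the first time one of its methods is called.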

Related

Why does Spring's @Transactional work without a proxy?

I got interested in how Spring's @Transactional works internally, but everywhere I read about it there's the concept of a proxy. Proxies are supposed to be autowired in place of the real bean and to "decorate" the base method with additional transaction-handling behavior.
The theory is quite clear to me and makes perfect sense, so I tried to check how it works in practice.
I created a Spring Boot application with basic controller and service layers and marked one method with the @Transactional annotation. The service looks like this:
public class TestService implements ITestService {

    @PersistenceContext
    EntityManager entityManager;

    @Transactional
    public void doSomething() {
        System.out.println("Service...");
        entityManager.persist(new TestEntity("XYZ"));
    }
}
Controller calls the service:
public class TestController {

    @Autowired
    ITestService testService;

    @PostMapping("/doSomething")
    public ResponseEntity addHero() {
        testService.doSomething();
        System.out.println(Proxy.isProxyClass(testService.getClass()));
        System.out.println(testService);
        return new ResponseEntity(HttpStatus.OK);
    }
}
The whole thing works, a new entity is persisted to the DB, but the whole point of my concern is the output:
Service...
false
com.example.demo.TestService@7fb48179
It seems that the service class was injected directly instead of a proxy class. Not only does Proxy.isProxyClass return false, but the class output ("com.example.demo.TestService@7fb48179") also suggests it's not a proxy.
Could you please help me out with that? Why wasn't a proxy injected, and how does it even work without one? Is there any way I can "force" it to be proxied, and if so - why isn't the proxy injected by default by Spring?
There's not much to be added; this is a really simple app. The application properties are nothing fancy either:
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.username=root
spring.datasource.password=superSecretPassword
spring.datasource.url=jdbc:mysql://localhost:3306/heroes?serverTimezone=UTC
spring.jpa.hibernate.ddl-auto=create-drop
Thank you in advance!
Your understanding is correct, but your test is flawed:
When the Spring docs say "proxy", they are referring to the pattern, not a particular implementation. Spring supports various strategies for creating proxy objects. One of these is the java.lang.reflect.Proxy you tested for, but by default Spring uses a more advanced technique: it generates a new class definition at runtime that subclasses the actual implementation class of the service (and overrides all methods to apply the transaction advice). You can see this in action by checking testService.getClass(), which will refer to that generated class, or by halting execution in a debugger and inspecting the fields of testService.
The reason that toString() refers to the original object is that the proxy implements toString() by delegating to its target object, which uses its class name to build the String.
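If you want to verify this from the controller instead of relying on toString(), Spring's AopUtils recognizes both proxy flavors. A few illustrative lines that could be added inside addHero() (a sketch, not required code):

import org.springframework.aop.support.AopUtils;

System.out.println(testService.getClass());                  // the generated subclass, e.g. TestService$$EnhancerBySpringCGLIB$$...
System.out.println(AopUtils.isAopProxy(testService));         // true for both CGLIB and JDK proxies
System.out.println(AopUtils.isCglibProxy(testService));       // true in the subclass-proxy case described above
System.out.println(AopUtils.isJdkDynamicProxy(testService));  // true only for interface-based java.lang.reflect.Proxy proxies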

Create a proxy for an interface that can be injected with @Autowired, where the proxy invokes a different implementation based on a key parameter

I want to define an annotation like @PlatformRelated. Once an interface is marked with it, there should be a proxy bean in the Spring context, and this proxy bean should be @Priority. I want this proxy to invoke a different implementation according to the key parameter marked with @KeyPrameter. And I still want to use Spring features like @Async, @Transactional, etc. in my Implement1 and Implement2.
@PlatformRelated
interface MyInterface {
    void method(@KeyPrameter String parameter);
}

@Component
class Implement1 implements MyInterface {
    public void method(String parameter) {
        // do something 111
    }
}

@Component
class Implement2 implements MyInterface {
    public void method(String parameter) {
        // do something 222
    }
}
@Service
class BusinessService {

    @Autowired
    private MyInterface myInterface;

    public void doSomething() {
        myInterface.method("key1");
        // Implement1 work
        myInterface.method("key2");
        // Implement2 work
    }
}
Do you guys have any good ideas for how to accomplish this?
I must admit I haven't totally understood the meaning of @Priority here; however, I can say that if you want to implement this feature in Spring, you should probably take a look at bean post processors.
BeanPostProcessors are essentially a hook into the bean creation process in Spring, intended for altering bean behavior.
Among other things, they allow wrapping the underlying bean in a proxy (CGLIB, java.lang.reflect.Proxy if you're working with interfaces, or even Spring AOP used programmatically). These proxies can hook into method execution, read your annotations (like the mentioned @KeyPrameter) and execute code, much like the aspect code you already make use of.
Not all bean post processors wrap the bean in a proxy. For example, if you want to implement a BPP that handles @Autowired, you return the same bean and just "inject" (read: set by reflection) its dependencies. On the other hand, if you want to implement @Transactional behavior with a BPP, then yes, you should wrap the bean in a proxy that takes care of transaction management before and after the method execution.
It's totally ok for a Spring bean to be "altered" by many post processors: some of them wrap it in a proxy, others just modify and return the same bean. If there are many BPPs that wrap the bean in a proxy, we get a "proxy inside a proxy inside a proxy" (you get the idea). Each layer of proxy handles one specific behavior.
As an example, I suggest you take a look at the existing Spring post processors or, for instance, at the source code of the following library: Spring Boot metering integration library
This library contains some implementations of post processors that allow metrics infrastructure integration by defining annotations on methods of Spring Beans.
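For the dispatching itself (not the full BeanPostProcessor machinery), a minimal sketch could register a JDK dynamic proxy for MyInterface as a @Primary bean. The configuration class name and the key-to-implementation mapping below are made up, and the key is simply assumed to be the first argument:

import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class PlatformRoutingConfig {

    // @Primary so that "@Autowired MyInterface" receives the dispatcher
    // rather than one of the concrete implementations.
    @Bean
    @Primary
    public MyInterface routingMyInterface(Implement1 implement1, Implement2 implement2) {
        Map<String, MyInterface> routes = new HashMap<>();
        routes.put("key1", implement1);
        routes.put("key2", implement2);

        return (MyInterface) Proxy.newProxyInstance(
                MyInterface.class.getClassLoader(),
                new Class<?>[] { MyInterface.class },
                (proxy, method, args) -> {
                    // Assumes the key is the first argument; a real version would locate
                    // the parameter annotated with @KeyPrameter via method.getParameterAnnotations().
                    MyInterface target = routes.get(args[0]);
                    return method.invoke(target, args);
                });
    }
}

Because the proxy delegates to the Implement1/Implement2 beans that Spring itself injected, any @Async or @Transactional proxies already wrapped around them keep working.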

Starting a CDI conversation and injecting #ConversationScoped bean into stateless session bean

Similar questions have been asked, but don't quite address what I'm trying to do. We have an older Seam 2.x-based application with a batch job framework that we are converting to CDI. The job framework uses the Seam Contexts object to initiate a conversation. The job framework also loads a job-specific data holder (basically a Map) that can then be accessed, via the Seam Contexts object, by any service down the chain, including from SLSBs. Some of these services can update the Map, so that job state can change and be detected from record to record.
It looks like in CDI the job will @Inject a CDI Conversation object and manually begin/end the conversation. We would also define a new @ConversationScoped bean that holds the Map (MapBean). What's not clear to me are two things:
First, the job needs to also @Inject the MapBean so that it can be loaded with job-specific data before the Conversation.begin() method is called. Would the container know to pass this instance to services down the call chain?
Related to that, according to this question (Is it possible to @Inject a @RequestScoped bean into a @Stateless EJB?) it should be possible to inject a @ConversationScoped bean into an SLSB, but it seems a bit magical. If the SLSB is used by a different process (job, UI call, etc.), does it get a separate instance for each call?
Edits for clarification and a simplified class structure:
MapBean would need to be a ConversationScoped object, containing data for a specific instance/run of a job.
@ConversationScoped
public class MapBean implements Serializable {

    private Map<String, Object> data;

    // accessors
    public Object getData(String key) {
        return data.get(key);
    }

    public void setData(String key, Object value) {
        data.put(key, value);
    }
}
The job would be ConversationScoped:
@ConversationScoped
public class BatchJob {

    @Inject private MapBean mapBean;
    @Inject private Conversation conversation;
    @Inject private JobProcessingBean jobProcessingBean;

    public void runJob() {
        try {
            conversation.begin();
            mapBean.setData("key", "value"); // is this MapBean instance now bound to the conversation?
            jobProcessingBean.doWork();
        } catch (Exception e) {
            // catch something
        } finally {
            conversation.end();
        }
    }
}
The job might call an SLSB, and the current conversation-scoped instance of MapBean needs to be available:
@Stateless
public class JobProcessingBean {

    @Inject private MapBean mapBean;

    public void doWork() {
        // when this is called, is "mapBean" the current conversation instance?
        Object value = mapBean.getData("key");
    }
}
Our job and SLSB framework is quite complex; the SLSB can call numerous other services or locally instantiated business logic classes, and each of these would need access to the conversation-scoped MapBean.
First, the job needs to also @Inject the MapBean so that it can be loaded with job-specific data before the Conversation.begin() method is called. Would the container know to pass this instance to services down the call chain?
Yes, since MapBean is @ConversationScoped, it is tied to the call chain for the duration from conversation.begin() until conversation.end(). You can think of @ConversationScoped (and @RequestScoped and @SessionScoped) beans like instances in a ThreadLocal - while an instance of them exists for every thread, each instance is tied to that single thread.
Related to that, according to this question (Is it possible to @Inject a @RequestScoped bean into a @Stateless EJB?) it should be possible to inject a @ConversationScoped bean into an SLSB, but it seems a bit magical. If the SLSB is used by a different process (job, UI call, etc.), does it get a separate instance for each call?
It's not as magical as you think if you see that this pattern is the same as the one I explained above. The SLSB indeed gets a separate instance, but not just any instance, the one which belongs to the scope from which the SLSB was called.
In addition to the link you posted, see also this answer.
I've tested code similar to what you posted and it works as expected - the MapBean is the same one injected throughout the call. Just be careful with two things:
BatchJob is also @ConversationScoped but does not implement Serializable, which will not allow the bean to passivate.
data is not initialized, so you will get an NPE in runJob().
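A minimal sketch of those two fixes (only the changed parts are shown; everything else stays as in the question):

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

import javax.enterprise.context.ConversationScoped;

@ConversationScoped
public class MapBean implements Serializable {

    // initialize the map so getData()/setData() cannot hit a null reference
    private Map<String, Object> data = new HashMap<>();

    public Object getData(String key) {
        return data.get(key);
    }

    public void setData(String key, Object value) {
        data.put(key, value);
    }
}

@ConversationScoped
public class BatchJob implements Serializable { // conversation-scoped beans must be passivation capable
    // fields and runJob() as in the question
}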
Without any code samples, I'll have to do some guessing, so let's see if I got you right.
Would the container know to pass this instance to services down the call chain?
If you mean to use the same instance elsewhere in the call, then this can easily be achieved by making the MapBean an @ApplicationScoped bean (or, alternatively, an EJB @Singleton).
it should be possible to inject a @ConversationScoped bean into an SLSB, but it seems a bit magical.
Here I suppose that the reason it seems magical is that an SLSB is, in CDI terms, a @Dependent bean. And as you probably know, CDI always creates a new instance for a dependent-bean injection point. I.e., yes, you get a different SLSB/@Dependent bean instance for each call.
Perhaps some other scope would fit you better here? Like @RequestScoped or @SessionScoped? Hard to tell without more details.

Is it allowed to have instance variables in Spring Services?

I have a Spring service that provides configuration data. When the service is invoked by the GUI, it loads the configuration data from the database. It turns out that this happens quite often during the rendering of a single request. I want to optimize this by caching the configuration data. However, I am not sure whether this is good programming style, or whether it is "allowed" to have an instance variable in a service.
Here is some example code of what I am thinking of doing:
@Service("MyConfigService")
public class MyConfigServiceImpl implements MyConfigService {

    private Config cachedConfig;

    @Override
    public Config loadConfig() {
        if (cachedConfig == null) {
            // load the configuration from the database into loadedConfig (omitted)
            Config loadedConfig = null;
            cachedConfig = loadedConfig;
        }
        return cachedConfig;
    }

    @Override
    public void saveConfig(Config config) {
        cachedConfig = null;
        // save the configuration
    }
}
Having an instance variable (not managed by Spring) introduces the possibility of the service becoming thread-unsafe, so I try to avoid them, or make sure they are thread safe.
You may want to look at the @Configurable and @PostConstruct annotations to achieve your goals.
Are instance variables allowed in Spring service layer classes? Sure.
Is it a good idea to use one to save a reference to the Config object here?
Maybe, maybe not.
You're not showing how Config normally gets loaded... Does the same Config instance get returned to all users? I.e., when User1 calls saveConfig and then User2 calls loadConfig, does User2 get the Config object User1 saved?
If so, you should be able to cache the value with no problems.
Also, instead of implementing it yourself, you could use Spring's annotation-based caching.
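A minimal sketch of that alternative, assuming caching is switched on elsewhere with @EnableCaching and a CacheManager bean; the cache name "config" is arbitrary and the database read is elided just as in the question:

import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service("MyConfigService")
public class MyConfigServiceImpl implements MyConfigService {

    @Override
    @Cacheable("config") // the result is cached after the first call
    public Config loadConfig() {
        Config loadedConfig = null; // load the configuration from the database (omitted)
        return loadedConfig;
    }

    @Override
    @CacheEvict(value = "config", allEntries = true) // drop the cached value when the config changes
    public void saveConfig(Config config) {
        // save the configuration (omitted)
    }
}

This removes the hand-written cachedConfig field entirely, so the thread-safety concern raised above goes away as well.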
Instance variables are what the Spring IoC container is all about; what is dubious in your design is that you have your own lazy-loading logic in loadConfig - that's a concern you should leave to Spring via lazy-init="true" (or similar, I don't remember exactly). The design will also probably involve lookup methods and possibly request-scoped beans.

Spring circular reference example

I have a circular reference in one of my projects at work (using Spring) which I am unable to fix; it fails with the following error at startup:
'org.springframework.security.authenticationManager': Requested bean is currently in creation: Is there an unresolvable circular reference?
I tried to recreate the same problem on a smaller scale in a sample project (without all the details of my work project). However, I have been unable to come up with a plausible scenario where Spring fails with that error.
Here's what I have:
public class ClassA {

    @Autowired
    ClassB classB;
}

public class ClassB {

    @Autowired
    ClassC classC;
}

@Component
public class ClassC {

    @Autowired
    ClassA classA;
}

@Configuration
public class Config {

    @Bean
    public ClassA classA() {
        return new ClassA();
    }

    @Bean
    public ClassB classB() {
        return new ClassB();
    }
}
I have a similar scenario in my project, which fails, and I was expecting Spring to complain in my sample project as well. But it works fine! Can someone give me a simple example of how to break Spring with the circular reference error?
Edit: I fixed the issue using javax.inject.Provider. The only other difference between the two projects was that the annotations used were javax.inject.Inject and javax.annotation.ManagedBean in place of @Autowired and @Component.
You could use @Lazy to indicate that the bean is lazily created, breaking the eager cycle of autowiring.
The idea is that some bean in the cycle can be instantiated as a proxy, and only at the moment it is really needed will it be initialized. This means all beans are initialized except the one that is a proxy; using it for the first time will trigger its configuration, and since the other beans are already configured, that will not be a problem.
From an issue in the Spring JIRA:
@Lazy annotation that can be used in conjunction with @Configuration to indicate that all beans within that configuration class should be lazily initialized. Of course, @Lazy may also be used in conjunction with individual @Bean methods to indicate lazy initialization on a one-by-one basis.
https://jira.springsource.org/browse/SJC-263
Meaning that annotating your bean with @Lazy should be enough. Or, if you prefer, just annotate the configuration class with @Lazy as follows:
@Configuration
@Lazy
public class Config {

    @Bean
    public ClassA classA() {
        return new ClassA();
    }

    @Bean
    public ClassB classB() {
        return new ClassB();
    }
}
If your beans implement an interface, this will work quite well.
This is an old thread, so I guess you have almost forgotten about the issue, but I want to let you know the solution to the mystery. I encountered the same problem, and mine didn't go away magically, so I had to resolve it. I'll address your questions step by step.
1. Why couldn't you reproduce the circular reference exception?
Because Spring takes care of it. It creates beans and injects them as required.
2. Then why does your project produce the exception?
As @sperumal said, Spring may produce the circular reference exception if you use constructor injection.
According to the log, you use Spring Security in your project.
In the Spring Security config, they do use constructor injection.
Your beans which inject the authenticationManager had the circular reference.
3. Then why did the exception go away mysteriously?
The exception may or may not occur depending on the creation order of the beans. I guess you made several *context.xml files or so, and load them with a config something like the one below in web.xml:
<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>classpath:*-context.xml</param-value>
</context-param>
The XML files will be loaded by the XmlWebApplicationContext class, and the loading order of the files is not guaranteed; it just loads files from the file system. The problem is here. There's no problem if the class loads the application context file first, because your beans are already created when they are used for the constructor injection of Spring Security. But if it loads the Spring Security context file first, the circular reference problem occurs, because Spring tries to use your beans in the constructor injection before they have been created.
4. How to solve the problem?
Force the loading order of the XML files. In my case, I loaded the security context XML file at the end of the application context file by using <import resource="">. The loading order can change depending on the environment even with the same code, so I recommend setting the order explicitly to remove potential problems.
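For example (a sketch; the file name is only a placeholder), at the end of the main application context file:

<!-- end of the application context file: import the security context last,
     so the beans it constructor-injects have already been created -->
<import resource="classpath:security-context.xml"/>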
According to the Spring documentation, it is possible to get a circular dependency issue (a BeanCurrentlyInCreationException) when using constructor injection.
The solution is to use setter injection instead of constructor injection.
Reference http://static.springsource.org/spring/docs/3.1.x/spring-framework-reference/html/beans.html.
