In our software we are using Spring Java config. We have a setup where one configuration extends an abstract configuration. Please have a look at this test case:
import java.util.concurrent.atomic.AtomicInteger;
import org.junit.Test;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
public class SpringConfigTest {

    @Test
    public void test() {
        final AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(MyConfig.class);
        ctx.getBeansOfType(AtomicInteger.class).entrySet().stream()
                .forEach(b -> System.out.println(b.getKey() + " : " + b.getValue() + " (" + b.getValue().hashCode() + ")"));
    }

    @Configuration
    public static class MyConfig extends AbstractConfig {

        @Bean(name = "anotherName")
        public AtomicInteger myBean() {
            return new AtomicInteger(5);
        }
    }

    public static abstract class AbstractConfig {

        @Bean
        public AtomicInteger myBean() {
            return new AtomicInteger(10);
        }
    }
}
The idea is that MyConfig overrides AbstractConfig, so in the created ApplicationContext there should be only one bean of type AtomicInteger, under the name anotherName.
The result was:
anotherName : 5 (2109798150)
myBean : 5 (1074389766)
So it says there are two beans (two instances, one for each name) - and even more surprising: the same method (MyConfig#myBean()) was used to create both of them.
This behaviour looks odd to us: we expected that Spring would either respect the usual Java rules of inheritance and create only the bean from MyConfig... or at least create two independent beans ("10" and "5") in case it sees AbstractConfig as an independent config.
While investigating this, we also tried registering the method name on the MyConfig class as well:

public static class MyConfig extends AbstractConfig {

    @Bean(name = {"anotherName", "myBean"})
    public AtomicInteger myBean() {
        ...
and this time we got only one bean:
anotherName : 5 (2109798150)
... which surprised us even more.
Does anybody know whether this is really the correct behaviour, or are we just using it wrong? Should we raise a ticket in Spring's JIRA?
Thanks in advance!
I'm not a Spring pro, but I'd say that behaviour is by design. To achieve what you want (I hope I guessed right: "inject this bean instead of the other"), you would use @Primary on a bean; to selectively enable a configuration depending on circumstances, you would use a @Conditional, e.g. @Profile.
In Spring, beans can be wired by type first, but then by name.
Hence @Qualifier("myBeanName") can disambiguate autowiring when multiple beans have the same type, e.g.:
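A minimal sketch of such disambiguation (the consumer class and bean name here are made up for illustration, not taken from the question):

```java
import java.util.concurrent.atomic.AtomicInteger;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Component;

// Hypothetical consumer: two AtomicInteger beans exist in the context,
// so @Qualifier selects the one registered under the name "myBeanName".
@Component
public class CounterHolder {

    private final AtomicInteger counter;

    @Autowired
    public CounterHolder(@Qualifier("myBeanName") AtomicInteger counter) {
        this.counter = counter;
    }

    public int current() {
        return counter.get();
    }
}
```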
So:
the very fact that the non-abstract bean has been given another name, causes it to be considered a different bean in the application context.
You can declare beans in a non-configuration class. That is called "lite" mode, but it is still a bean in the application context.
See also this answer on lite mode.
I didn't know one could give a bean more than one name, but since Spring beans are singletons by default, it stands to reason only one bean would be created in the second case, as "myBean" already exists and only one bean with that name can be in the application context.
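For reference, multiple names on a single @Bean method are aliases for one and the same singleton; a small sketch (class names here are made up) to check that:

```java
import java.util.concurrent.atomic.AtomicInteger;

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class AliasConfig {

    // One bean definition with two names: "anotherName" and "myBean" are aliases.
    @Bean(name = {"anotherName", "myBean"})
    public AtomicInteger myBean() {
        return new AtomicInteger(5);
    }
}

class AliasDemo {
    public static void main(String[] args) {
        try (AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext(AliasConfig.class)) {
            // Both names resolve to the same singleton instance.
            System.out.println(ctx.getBean("anotherName") == ctx.getBean("myBean"));
        }
    }
}
```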
In a real project, I found out that @Component may be omitted in the following code:

// no @Component !!!!!
public class MovieRecommender {

    private final CustomerPreference customerPreference;

    @Autowired
    public MovieRecommender(CustomerPreference customerPreference) {
        this.customerPreference = customerPreference;
    }

    // ...
}

@Component
public class CustomerPreference {...}
(The example is taken from the official Spring docs https://docs.spring.io/spring-framework/docs/4.3.x/spring-framework-reference/htmlsingle/#beans-autowired-annotation , and the docs show no @Component at all, which may mean either that it is not needed, or that it is just not shown.)
The project where I work does not use any XML bean declarations, but it uses frameworks other than just Spring, so it is possible that something declares the class as a bean. Or it may be a feature of the version of Spring that we use, and if that feature is not documented, it may be dropped later.
Question:
Must the class that uses @Autowired be annotated with @Component (well, be a bean)? Is there any official documentation about that?
UPD Folks, there is no @Configuration and no XML config in the project. I know that such declarations make a bean of a class, but the question is not about them; I even wrote "(well, be a bean)" in the question above to cover that. Does @Autowired work in a class that is not a bean? Or maybe it declares the class that uses it as a bean?
There are several ways to instantiate a bean in Spring.
One of them indeed is the @Component annotation: with it, Spring will scan all the packages defined for component scanning and initialize all annotated classes (annotated either with @Component or with one of the annotations composed of it - @Controller, @Service, etc.).
Another way to initialize beans is a configuration class (annotated with @Configuration) that includes methods annotated with @Bean; each of these methods creates a bean.
There is also the option to create beans using XML configuration, but this is becoming less and less common, as the annotation-based approach is more convenient.
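A minimal sketch of the two annotation-based styles side by side (class and bean names are made up for illustration):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;

// Style 1: the class itself is annotated and picked up by component scanning.
@Component
class ScannedService {
}

// Style 2: a @Configuration class declares the bean explicitly in a @Bean method.
@Configuration
@ComponentScan // enables style 1 for this class's package
class AppConfig {

    @Bean
    public String greeting() { // bean named "greeting", of type String
        return "hello";
    }
}
```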
According to https://stackoverflow.com/a/3813725/755804 , with autowireBean() it is possible to autowire a bean into an instance of a class not declared as a bean:

@Autowired
private AutowireCapableBeanFactory beanFactory;

public void sayHello() {
    System.out.println("Hello World");
    Bar bar = new Bar();           // created manually, not a Spring bean
    beanFactory.autowireBean(bar); // injects Bar's @Autowired fields
    bar.sayHello();
}
and

package com.example.demo;

import org.springframework.beans.factory.annotation.Autowired;

public class Bar {

    @Autowired
    private Foo foo;

    public void sayHello() {
        System.out.println("Bar: Hello World! foo=" + foo);
    }
}
On the other hand, by default the latest Spring does not assume that classes using @Autowired are @Component beans.
UPD
As to the mentioned real project, the stack trace shows that the constructor is called from createBean(). That is, the framework creates beans from classes declared in the framework's configs.
Edit: Fixed by changing package.
I have this configuration file for spring framework
@Configuration
public class AppConfig {

    @Bean(initMethod = "populateCache")
    public AccountRepository accountRepository() {
        return new JdbcAccountRepository();
    }
}
JdbcAccountRepository looks like this.
@Repository
public class JdbcAccountRepository implements AccountRepository {

    @Override
    public Account findByAccountId(long accountId) {
        return new SavingAccount();
    }

    public void populateCache() {
        System.out.println("Populating Cache");
    }

    public void clearCache() {
        System.out.println("Clearing Cache");
    }
}
I'm new to the Spring framework and trying to use initMethod or destroyMethod. Both of these methods produce the following error:
Caused by: org.springframework.beans.factory.support.BeanDefinitionValidationException: Could not find an init method named 'populateCache' on bean with name 'accountRepository'
Here is my main method.
public class BeanLifeCycleDemo {

    public static void main(String[] args) {
        ConfigurableApplicationContext applicationContext =
                new AnnotationConfigApplicationContext(AppConfig.class);
        AccountRepository bean = applicationContext.getBean(AccountRepository.class);
        applicationContext.close();
    }
}
Edit
I was practicing from a book and had created many packages for different chapters. The error was that it was importing a different JdbcAccountRepository from a different package, one that did not have that method. I fixed the import and it works now. The answers hinted me at this.
Like you said, if you are mixing configuration styles, it can be confusing. Besides, even though your @Bean method is declared to return AccountRepository, Spring does a lot of things at runtime, so it can call your initMethod even when the compiler could not.
So yes, if you have many beans with the same type, Spring can get confused about which one to call, hence your exception.
Oh, and by the way: since a @Configuration class creates the accountRepository bean, you can remove @Repository from your JdbcAccountRepository... It is either @Configuration + @Bean or @Component/@Repository/@Service + @ComponentScan.
TL;DR
Here is more information on how Spring creates your bean: What objects are injected by Spring?
@Bean(initMethod = "populateCache")
public AccountRepository accountRepository() {
    return new JdbcAccountRepository();
}
With this code, Spring will:
Detect that you want to add a bean to the application context.
Retrieve the bean information from the method signature. In your case, it will create a bean of type AccountRepository named accountRepository... That's all Spring knows; it won't look inside your method body.
Once Spring is done analysing your classpath, or scanning the bean definitions, it will start instantiating your objects.
It will therefore create your bean accountRepository of type AccountRepository.
But Spring is "clever" and nice to us. Even though the compiler would yell at you if you wrote the equivalent call yourself, Spring can still call your init method by name.
To make sure, try writing this code:

AccountRepository accountRepository = new JdbcAccountRepository();
accountRepository.populateCache(); // compiler error => the method is not found on AccountRepository

But it works for Spring... Magic.
My recommendation, but you might be thinking the same by now: if you have classes across many packages answering different business cases, then rely on @Configuration classes. @ComponentScan is great to kickstart your development, but reaches its limits when your application grows...
You mix two different ways of Spring bean declaration:
Using @Configuration classes. Spring finds all classes annotated with @Configuration and uses them as a reference for which beans should be created.
So if you follow this path of configuration, don't use @Repository on those beans; Spring will detect them anyway.
Using @Repository - the other way around - you don't need @Configuration in this case. If you decide to use @Repository, put the @PostConstruct annotation on the method and Spring will call it; in this case remove the @Configuration class altogether (or at least remove the @Bean method that creates JdbcAccountRepository).
Annotate the populateCache method with @PostConstruct and remove initMethod from @Bean. It will work.
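A sketch of that component-scanning variant (assuming AccountRepository, Account and SavingAccount exist as in the question):

```java
import javax.annotation.PostConstruct;

import org.springframework.stereotype.Repository;

// Detected via component scanning; no @Bean method for it is needed anymore.
@Repository
public class JdbcAccountRepository implements AccountRepository {

    @Override
    public Account findByAccountId(long accountId) {
        return new SavingAccount();
    }

    // Called by Spring after dependency injection completes,
    // replacing initMethod = "populateCache" on the @Bean definition.
    @PostConstruct
    public void populateCache() {
        System.out.println("Populating Cache");
    }
}
```

(With Spring 6 / Boot 3 the import becomes jakarta.annotation.PostConstruct.)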
Here's the class with the primary bean:
@Configuration
public class AppConfig {

    @Bean
    @Primary
    public WeatherGauge weatherGauge() {
        return () -> "40 F";
    }
}
and here's the class defining and using the competing bean:
@Configuration
public class WeatherConfig {

    @Bean
    public WeatherGauge weatherGauge() {
        return () -> "20 C";
    }

    @Bean
    public StateReporter stateReporter(WeatherGauge weatherGauge) {
        return new StateReporter(weatherGauge);
    }
}
I'm using spring-boot-starter-parent:2.1.9.RELEASE. If I use StateReporter and print the gauged weather, I get 20 C, which does not come from the primary bean. The @Primary is ignored. Is this by design or a flaw? Just the way @Configuration works? If I define the primary implementation as a @Component class, the @Primary is in fact honored.
Edit: I forgot to say that AppConfig does get picked up if the other bean is not present. Everything is in the same package as the main class, and I do use the allow-override=true property.
You're defining two beans with the same name and type: only one will be created, and the other will be overridden.
The @Primary annotation is for when two or more beans of the same type exist. It designates one of them as the primary bean used in dependency injection.
You can see how this works by making a small code change.
@Bean
@Primary
public WeatherGauge weatherGauge2() {
    return () -> "40 F";
}

@Bean
public WeatherGauge weatherGauge() {
    return () -> "20 C";
}
Now two beans are defined, with one of them, weatherGauge2, being the primary.
I assume that you used the property:
spring.main.allow-bean-definition-overriding=true
to run this code. This link will probably clear the problem up for you.
If you don't want to read the whole article:
The mechanism that caused this problem is called bean overriding. It's almost impossible to predict which bean will override another with Java-based configs. When you use both kinds of configuration (XML and Java-based), the Java-based configuration is always loaded first and the XML configuration last, so the latter overrides everything else. That's why your @Component class with @Primary is honored - it is loaded after the configuration.
I am moderately confused about the DI mechanism in Spring when having multiple beans with the same name/type.
According to the exam slides from Pivotal's "Core Spring" course, Spring's behaviour with identical beans can be boiled down to:
One can define the same bean more than once.
Spring injects the bean defined last.
Using @Order, the loading mechanism (and thus which bean is loaded last) can be modified.
However, in the following example, Spring ignores any @Order annotations and injects the bean from the config class mentioned last in the @Import statement.
I'm therefore wondering whether the order of config classes in the @Import annotation overrides any @Order annotations. Or am I missing another important point?
Any hints are highly appreciated. Thanks Stack Overflow!
Main Configuration class
@Configuration
@Import({RogueConfig.class, RewardsConfig.class})
public class TestInfrastructureConfig {
    // nothing interesting here, just importing configs
}
RewardsConfig
@Configuration
@Order(1)
public class RewardsConfig {

    @Bean
    public RewardNetwork rewardNetwork() {
        System.out.println("This Bean has been loaded from: " + this.getClass().getName());
        return new RewardNetworkImpl(null, null, null);
    }
}
RogueConfig
@Configuration
@Order(2)
public class RogueConfig {

    @Bean
    public RewardNetwork rewardNetwork() {
        System.out.println("This Bean has been loaded from: " + this.getClass().getName());
        return new RewardNetworkImpl(null, null, null);
    }
}
Test class
public class RewardNetworkTests {

    ApplicationContext applicationContext;

    @BeforeEach
    void setUp() {
        applicationContext = SpringApplication.run(TestInfrastructureConfig.class);
    }

    @Test
    void injectingRewardNetworkBeanWithOrdering() {
        RewardNetwork rewardNetwork = applicationContext.getBean(RewardNetwork.class);
        assertNotNull(rewardNetwork);
    }
}
No matter what values I assign to @Order, or whether I use ordering at all, the result is always:
This Bean has been loaded from: config.RewardsConfig$$EnhancerBySpringCGLIB$$62461c55
The only way to change this is to modify the @Import annotation in my TestInfrastructureConfig like so:
@Import({RewardsConfig.class, RogueConfig.class}), which yields:
This Bean has been loaded from: config.RogueConfig$$EnhancerBySpringCGLIB$$6ca7bc89
I am wondering what needs to be done for the values defined in @Order to take any effect.
I've been able to get Spring to use the @Order annotations by loading the configurations directly (i.e. without the detour through a @Configuration class using @Import):
@SpringJUnitConfig({RogueConfig.class, RewardsConfig.class})
public class CdiTest {

    @Test
    public void testCdiWithIdenticalBeans(@Autowired RewardNetwork rewardNetwork) {
        assertThat(rewardNetwork).isNotNull();
    }
}
With the @Order(2) annotation on the RogueConfig class, this bean got loaded last, as shown on stdout:
This Bean has been loaded from: config.RogueConfig$$EnhancerBySpringCGLIB$$552b937f
It seems that when using @Import on config classes, Spring loads the bean definitions in the order given in the annotation, thus making any @Order annotations on the respective config classes useless.
In the following example, is there any difference between the result and result2 bean definitions?
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;
@Configuration
public class Application {

    @Bean
    public Integer data1() {
        return 42;
    }

    @Bean
    public Integer data2() {
        return 7;
    }

    @Bean
    public Integer result(
            @Qualifier("data1") Integer a,
            @Qualifier("data2") Integer b) {
        return a + b;
    }

    @Bean
    public Integer result2() {
        return data1() + data2();
    }

    public static void main(final String... args) {
        final ApplicationContext context = new AnnotationConfigApplicationContext(Application.class);
        System.out.println(context.getBean("result"));
        System.out.println(context.getBean("result2"));
    }
}
Are there any related best practices?
Any drawbacks?
First question: are there any differences?
Yes. The result version gets the two beans it depends on via dependency injection and follows all rules regarding the scopes of these beans. The result2 version calls the two factory methods itself and does not take advantage of dependency injection.
Second and third question: Are there any best practices or drawbacks?
The first version, which actually lets Spring inject the dependencies, benefits from all the advantages that come with Spring dependency injection: you can specify scopes and override the specification of which beans to inject in other contexts.
The other version just makes hardcoded calls to the two factory methods, which means the factory methods themselves cannot have any dependencies injected and annotations like @Scope are not respected.
My recommendation is to go with version one, which follows the dependency injection paradigm. Otherwise, at least treat the two factory methods as regular methods and remove the Spring annotations, in order not to trick readers of your code into thinking that Spring manages the beans' lifecycle.
Imagine a non-trivial example where data1 and data2 create complex beans that are used by several other beans, and where you may want to change the actual instances based on context, such as unit tests, a test/stage environment, or production...