@ConditionalOnBean(KafkaTemplate.class) crashes entire application - java

I have a Spring Boot application that consumes data from a Kafka topic and sends email notifications with the data received from Kafka:
@Bean
public EmailService emailService() {
    return new EmailServiceImpl(getJavaMailSender());
}
It works perfectly, but after I added @ConditionalOnBean:
@Bean
@ConditionalOnBean(KafkaTemplate.class)
public EmailService emailService() {
    return new EmailServiceImpl(getJavaMailSender());
}
the application failed to start:
required a bean of type 'com.acme.EmailService' that could not be
found.
I can't find any explanation for how this is possible, because the KafkaTemplate bean is automatically created by Spring in the KafkaAutoConfiguration class.
Could you please give me an explanation?

From the documentation:
The condition can only match the bean definitions that have been
processed by the application context so far and, as such, it is
strongly recommended to use this condition on auto-configuration
classes only. If a candidate bean may be created by another
auto-configuration, make sure that the one using this condition runs
after.
This documentation says exactly what is wrong here. KafkaAutoConfiguration creates the KafkaTemplate, but that bean definition may not have been processed yet at the moment your condition is evaluated, because user @Configuration classes are processed before auto-configuration classes. Either move the conditional bean definition into an auto-configuration of your own that is ordered after KafkaAutoConfiguration, or otherwise arrange the ordering of your configuration classes so that the KafkaTemplate is guaranteed to be in the bean registry before that conditional check.
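A minimal sketch of the first option, assuming the EmailServiceImpl from the question and injecting a JavaMailSender bean instead of calling the question's getJavaMailSender() helper (the class name EmailServiceAutoConfiguration is made up):
import org.springframework.boot.autoconfigure.AutoConfigureAfter;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.mail.javamail.JavaMailSender;
// imports for the question's own EmailService / EmailServiceImpl classes omitted

@Configuration
@AutoConfigureAfter(KafkaAutoConfiguration.class)
public class EmailServiceAutoConfiguration {

    @Bean
    @ConditionalOnBean(KafkaTemplate.class)
    public EmailService emailService(JavaMailSender mailSender) {
        // By the time this condition is evaluated, KafkaAutoConfiguration has already
        // contributed the KafkaTemplate bean definition, so the condition can match.
        return new EmailServiceImpl(mailSender);
    }
}
For the ordering to take effect, the class has to be registered as an auto-configuration (under the EnableAutoConfiguration key in META-INF/spring.factories) rather than picked up by component scanning; @AutoConfigureAfter has no effect on plain user configurations.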

Related

Spring Boot 2 - Wire Two LDAP Templates

I need to configure multiple LDAP data sources / LdapTemplates in my Spring Boot 2 application. The first LdapTemplate will be used for most of the work, while the second will be used for a once-in-a-while subset of data (housed elsewhere).
I have read these StackOverflow questions regarding doing that, but they seem to be for Spring Boot 1.
Can a spring ldap repository project access two different ldap directories?
Multiple LDAP repositories with Spring LDAP Repository
From what I can gather, much of that configuration/setup had to be done anyway, even for just one LDAP data source, back in Spring Boot 1. With Spring Boot 2, I just put the properties in my config file like so
ldap.url=ldap://server.domain.com:389
ldap.base:DC=domain,DC=com
ldap.username:domain\ldap.svc.acct
ldap.password:secret
and autowire the template in my repository like so
@Autowired
private LdapTemplate ldapTemplate;
and I'm good to go. (See: https://stackoverflow.com/a/53474188/3669288)
For a second LDAP data source, can I just add the properties and configuration elements for "ldap2" and be done (see linked questions)? Or does adding this configuration cause Spring Boot 2's auto-configuration to think I'm overriding it, so that I lose my first LdapTemplate and now need to explicitly configure that as well?
If so, do I need to configure everything, or will only a partial configuration work? For example, if I add the context source configuration and mark it as @Primary (does that work for LDAP data sources?), can I skip explicitly assigning it to the first LdapTemplate? On a related note, do I still need to add the @EnableLdapRepositories annotation, which is otherwise autoconfigured by Spring Boot 2?
TLDR: What's the minimum configuration I need to add in Spring Boot 2 to wire in a second LdapTemplate?
This takes what I've learned over the weekend and applies it as an answer to my own question. I'm still not an expert in this so I welcome more experienced answers or comments.
The Explanation
First, I still don't know for certain if I need the @EnableLdapRepositories annotation. I don't yet make use of those features, so I can't say if not having it matters, or if Spring Boot 2 is still taking care of that automatically. I suspect Spring Boot 2 is, but I'm not certain.
Second, Spring Boot's autoconfigurations all happen after any user configurations, such as my code configuring a second LDAP data source. The autoconfiguration is using a couple of conditional annotations for whether or not it runs, based on the existence of a context source or an LdapTemplate.
This means that it sees my "second" LDAP context source (the condition is just that a context source bean exists, regardless of what its name is or what properties it is using) and skips creating one itself, meaning that I no longer have that piece of my primary data source configured.
It will also see my "second" LdapTemplate (again, the condition is just that an LdapTemplate bean exists, regardless of what its name is or what context source or properties it is using) and skip creating one itself, so I again no longer have that piece of my primary data source configured.
Unfortunately, those conditions mean that in this case there is no in-between either (where I can manually configure the context source, for example, and then allow the autoconfiguration of the LdapTemplate to still happen). So the solution is to either make my configuration run after the autoconfiguration, or to not leverage the autoconfiguration at all and set them both up myself.
As for making my configuration run after the autoconfiguration: the only way to do that is to make my configuration an autoconfiguration itself and specify its order to be after Spring's built-in autoconfiguration (see: https://stackoverflow.com/a/53474188/3669288). That's not appropriate for my use case, so for my situation (because Spring Boot's setup does make sense for a standard single-source situation) I'm stuck forgoing the autoconfiguration and setting them both up myself.
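For completeness, a rough sketch of what that alternative would involve (package and class names here are made up; I did not go this route): the configuration class is ordered after Spring Boot's LdapAutoConfiguration and registered as an auto-configuration of its own.
import org.springframework.boot.autoconfigure.AutoConfigureAfter;
import org.springframework.boot.autoconfigure.ldap.LdapAutoConfiguration;
import org.springframework.context.annotation.Configuration;

@Configuration
@AutoConfigureAfter(LdapAutoConfiguration.class)
public class SecondLdapAutoConfiguration {
    // only the contextSource2 / ldapTemplate2 beans would go here; because this class
    // runs after LdapAutoConfiguration, its conditions still see no user-defined
    // context source or template, so the primary data source stays auto-configured
}
It then has to be registered under the EnableAutoConfiguration key so Spring Boot treats it as an auto-configuration instead of a regular user configuration:
# src/main/resources/META-INF/spring.factories
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
com.example.config.SecondLdapAutoConfiguration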
The Code
Setting up two data sources is pretty well covered in the following two answers (though partly for other reasons), as linked in my question, but I'll also detail my setup here.
Can a spring ldap repository project access two different ldap directories?
Multiple LDAP repositories with Spring LDAP Repository
First up, the configuration class needs to be created, as one was not previously needed at all with Spring Boot 2. Again, I left out the @EnableLdapRepositories annotation partly because I don't use it yet, and partly because I think Spring Boot 2 will still cover that for me. (Note: All of this code was typed up in the Stack Overflow answer box as I don't have a development environment where I'm writing this, so imports are skipped and the code may not be perfectly compilable and function correctly, though I hope it's good.)
@Configuration
public class LdapConfiguration {
}
Second is manually configuring the primary data source; the one that used to be autoconfigured but no longer will be. There is one piece of Spring Boot's autoconfiguration that can be leveraged here, and that is its reading in of the standard spring.ldap.* properties (into a properties object), but since it wasn't given a name, you have to reference it by its fully qualified class name. This means you can skip straight to setting up the context source for the primary data source. This code is not quite as full featured as the actual autoconfiguration code (See: Spring Code)
I marked this LdapTemplate as @Primary because for my use, this is the primary data source and so it's what all other autowired calls should default to. This also means you don't need a @Qualifier where you autowire this source up (as seen later).
@Configuration
public class LdapConfiguration {

    @Bean(name = "contextSource")
    public LdapContextSource ldapContextSource(@Qualifier("spring.ldap-org.springframework.boot.autoconfigure.ldap.LdapProperties") LdapProperties properties) {
        LdapContextSource source = new LdapContextSource();
        source.setUrls(properties.getUrls());
        source.setUserDn(properties.getUsername());
        source.setPassword(properties.getPassword());
        source.setBaseEnvironmentProperties(Collections.unmodifiableMap(properties.getBaseEnvironment()));
        return source;
    }

    @Bean(name = "ldapTemplate")
    @Primary
    public LdapTemplate ldapTemplate(@Qualifier("contextSource") LdapContextSource source) {
        return new LdapTemplate(source);
    }
}
Third is to manually configure the secondary data source, the one that caused all of this to begin with. For this one, you do need to configure the reading of your properties into an LdapProperties object. This code builds on the previous code, so you can see the complete class for context.
@Configuration
public class LdapConfiguration {

    @Bean(name = "contextSource")
    public LdapContextSource ldapContextSource(@Qualifier("spring.ldap-org.springframework.boot.autoconfigure.ldap.LdapProperties") LdapProperties properties) {
        LdapContextSource source = new LdapContextSource();
        source.setUrls(properties.getUrls());
        source.setUserDn(properties.getUsername());
        source.setPassword(properties.getPassword());
        source.setBaseEnvironmentProperties(Collections.unmodifiableMap(properties.getBaseEnvironment()));
        return source;
    }

    @Bean(name = "ldapTemplate")
    @Primary
    public LdapTemplate ldapTemplate(@Qualifier("contextSource") LdapContextSource source) {
        return new LdapTemplate(source);
    }

    @Bean(name = "ldapProperties2")
    @ConfigurationProperties("app.ldap2")
    public LdapProperties ldapProperties2() {
        return new LdapProperties();
    }

    @Bean(name = "contextSource2")
    public LdapContextSource ldapContextSource2(@Qualifier("ldapProperties2") LdapProperties properties) {
        LdapContextSource source = new LdapContextSource();
        source.setUrls(properties.getUrls());
        source.setUserDn(properties.getUsername());
        source.setPassword(properties.getPassword());
        source.setBaseEnvironmentProperties(Collections.unmodifiableMap(properties.getBaseEnvironment()));
        return source;
    }

    @Bean(name = "ldapTemplate2")
    public LdapTemplate ldapTemplate2(@Qualifier("contextSource2") LdapContextSource source) {
        return new LdapTemplate(source);
    }
}
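The @ConfigurationProperties("app.ldap2") binding above needs matching entries in the configuration file. Something along these lines should work (server, base and credentials are placeholders; the property names mirror the fields of LdapProperties):
app.ldap2.urls=ldap://other-server.domain.com:389
app.ldap2.base=DC=other,DC=com
app.ldap2.username=ldap2.svc.acct@domain.com
app.ldap2.password=secret2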
Finally, in your class that uses these LdapTemplates, you can autowire them as normal. This uses constructor autowiring instead of the field autowiring the other two answers used. Either is technically valid though constructor autowiring is recommended.
@Component
public class LdapProcessing {

    protected LdapTemplate ldapTemplate;
    protected LdapTemplate ldapTemplate2;

    @Autowired
    public LdapProcessing(LdapTemplate ldapTemplate, @Qualifier("ldapTemplate2") LdapTemplate ldapTemplate2) {
        this.ldapTemplate = ldapTemplate;
        this.ldapTemplate2 = ldapTemplate2;
    }
}
TLDR: Defining a "second" LDAP data source stops the autoconfiguration of the first LDAP data source, so both must be (nearly fully) manually configured if using more than one; Spring's autoconfiguration can not be leveraged even for the first LDAP data source.

@ConditionalOnMissingBean vs @ConditionalOnSingleCandidate

We're working on a multi-module project where each module is a custom Spring Boot starter that holds several retryable tasks (using spring-retry).
In order to ensure that retry annotations are activated only once across starters, a configuration bean is added to each starter's auto-configuration submodule:
@EnableRetry
@Configuration
@ConditionalOnMissingBean(RetryConfiguration.class)
public class StarterRetryAutoConfiguration {}
The solution is working as expected.
Question: what's the difference between @ConditionalOnSingleCandidate and @ConditionalOnMissingBean?
I've read the Spring documentation more than once. However, I still don't get when and where we should use each of them.
ConditionalOnSingleCandidate:
@Conditional that only matches when a bean of the specified class is
already contained in the BeanFactory and a single candidate can be
determined. The condition will also match if multiple matching bean
instances are already contained in the BeanFactory but a primary
candidate has been defined; essentially, the condition matches if
auto-wiring a bean with the defined type will succeed.
The condition can only match the bean definitions that have been
processed by the application context so far and, as such, it is
strongly recommended to use this condition on auto-configuration
classes only. If a candidate bean may be created by another
auto-configuration, make sure that the one using this condition runs
after.
ConditionalOnMissingBean:
@Conditional that only matches when no beans meeting the specified
requirements are already contained in the BeanFactory. None of the
requirements must be met for the condition to match and the
requirements do not have to be met by the same bean.
We can use @ConditionalOnMissingBean if we want to load a bean only if a certain other bean is not in the application context:
@Configuration
class OnMissingBeanModule {

    @Bean
    @ConditionalOnMissingBean
    DataSource dataSource() {
        return new InMemoryDataSource();
    }
}
In this example, we only register an in-memory DataSource in the application context if there is not already a DataSource available. This is very similar to what Spring Boot does internally to provide an in-memory database in a test context.
We can use @ConditionalOnSingleCandidate if we want to load a bean only if a single candidate for the given bean class can be determined:
@Configuration
@ConditionalOnSingleCandidate(DataSource.class)
class OnSingleCandidateModule {
    ...
}
In this example, the condition matches if there is exactly one DataSource bean in the application context, or if there are several DataSource beans and exactly one of them is marked as @Primary.
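For contrast, here is a small sketch of the "multiple beans but one primary candidate" case from the javadoc, reusing the hypothetical InMemoryDataSource from the example above: because exactly one of the two DataSource beans is marked @Primary, auto-wiring a single DataSource would still succeed, so @ConditionalOnSingleCandidate(DataSource.class) still matches.
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
class TwoDataSourcesModule {

    @Bean
    @Primary
    DataSource mainDataSource() {
        // the primary candidate that the condition resolves to
        return new InMemoryDataSource();
    }

    @Bean
    DataSource reportingDataSource() {
        // a second candidate; without @Primary on exactly one of them, the condition would not match
        return new InMemoryDataSource();
    }
}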

Spring Boot - metrics for MeterBinder never registered

Situation: you have a metric registered in Spring Boot via a MeterBinder. Maybe it is one of the auto-configured metrics like jvm.gc.pause, or maybe it is a custom metric of your own. But one day you start your application and it is missing. It isn't reported, it doesn't show in Actuator, it's just gone.
Root cause: probably your code, or a library you are using, is injecting the MeterRegistry. There are lots of legitimate reasons to do this, so don't blame yourself. But injecting the MeterRegistry means it will be created and initialised before all of your beans are created, including any MeterBinders.
It is also possible nothing is injecting MeterRegistry, but Spring has decided to create it before the MeterBinders for some other reason. Whatever the case, MeterBinders will stop working for you and there isn’t much you can do about it.
My solution is to create my own post-processor:
@Component
class FixMeterBinders implements BeanPostProcessor {

    @Autowired
    ObjectProvider<MeterRegistry> meters;

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) {
        if (bean instanceof MeterBinder) {
            ((MeterBinder) bean).bindTo(meters.getObject());
        }
        return bean;
    }
}
There is a big downside to this approach: If Spring’s post-processor is working as intended, each MeterBinder will be run twice, so you need to make sure the work they do is idempotent.
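As an illustration of what idempotent means here, a custom binder can check whether its meter is already registered before binding it again (the metric name and class below are made up for this example):
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.MeterBinder;
import java.util.concurrent.atomic.AtomicInteger;

public class QueueDepthMetrics implements MeterBinder {

    private final AtomicInteger queueDepth = new AtomicInteger();

    @Override
    public void bindTo(MeterRegistry registry) {
        // Guard against being bound twice (once by Spring, once by the post-processor above).
        if (registry.find("app.queue.depth").gauge() != null) {
            return;
        }
        Gauge.builder("app.queue.depth", queueDepth, AtomicInteger::doubleValue)
             .register(registry);
    }
}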

Spring @EnableAsync breaks bean initialization order?

I wanted to introduce @Async methods (for sending mails in parallel) in my Spring Boot application.
But when I put the @EnableAsync annotation on our application's main @Configuration class (annotated with @SpringBootApplication), the Flyway DB migrations are executed before the DataSourceInitializer (which runs schema.sql and data.sql for my tests) has run.
The first operation involving a 'should-be-migrated' database table then fails.
Removing the @EnableAsync puts everything back to normal. Why does this happen and how could I fix this (or work around the issue)?
Update: some more findings: @EnableAsync(mode = AdviceMode.ASPECTJ) keeps the original order of the DB setup, but the @Async method then runs on the same thread as the caller. I also saw that the bean 'objectPostProcessor' is created early (as the 3rd bean) when @EnableAsync is not present or @EnableAsync(mode = AdviceMode.ASPECTJ) is used. With plain @EnableAsync, this bean is created much later.
Update 2: while I haven't been able to create a minimal project that reproduces the problem yet, I found out that the proper DB setup order is restored in my affected application when I comment out @EnableWebSocketMessageBroker in the following:
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer
{
    ...
}
Bean 'webSocketConfig' is the first bean created (as per INFO-level console output) if @EnableWebSocketMessageBroker is present.
It turned out that having both @EnableAsync and @EnableWebSocketMessageBroker present in my application caused the described effect.
Removing either one restored the expected behavior, in which case the DataSourceInitializerPostProcessor created the DataSourceInitializer, which triggered execution of schema.sql and data.sql before the Flyway migrations took place.
When both annotations were present, the registration of the BeanPostProcessor named internalAsyncAnnotationProcessor happened before the DataSourceInitializerPostProcessor was registered.
The cause of the problem was that the registration of internalAsyncAnnotationProcessor caused the creation of the dataSource bean as a side effect. The side effect came from Spring looking for a TaskExecutor bean to use for @Async method execution. Spring unexpectedly picked up the clientInboundChannelExecutor bean, which was present because of @EnableWebSocketMessageBroker. Using this bean caused the instantiation of WebSocketMessagingAutoConfiguration, which created the objectMapper bean (for JSON serialization), which uses services that use DAO repositories, which depend on dataSource. So all of those beans got created.
Because DataSourceInitializerPostProcessor wasn't even registered at that time, the DataSourceInitializer was created much later, after the Flyway migration had already taken place.
The javadoc for @EnableAsync says the following:
By default, a SimpleAsyncTaskExecutor will be used to process async method invocations. Besides, annotated methods having a void return type cannot transmit any exception back to the caller. By default, such uncaught exceptions are only logged.
I assumed that a SimpleAsyncTaskExecutor would be created to run the @Async methods, but instead Spring picked up an existing bean with a matching type.
So the solution for this issue was to implement AsyncConfigurer and provide my own Executor (a sketch follows below). This is also suggested in the javadoc of @EnableAsync:
To customize all this, implement AsyncConfigurer and provide:
* your own Executor through the getAsyncExecutor() method, and
* your own AsyncUncaughtExceptionHandler through the getAsyncUncaughtExceptionHandler() method.
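A minimal sketch of that fix (the pool sizes and thread name prefix are arbitrary example values; @EnableAsync can equally stay on the main application class):
import java.util.concurrent.Executor;
import org.springframework.aop.interceptor.AsyncUncaughtExceptionHandler;
import org.springframework.aop.interceptor.SimpleAsyncUncaughtExceptionHandler;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.AsyncConfigurer;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
@EnableAsync
public class AsyncConfig implements AsyncConfigurer {

    @Override
    public Executor getAsyncExecutor() {
        // A dedicated executor for @Async methods, so Spring no longer searches the
        // context for a TaskExecutor bean (and no longer drags in the WebSocket
        // clientInboundChannelExecutor as a side effect).
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(2);
        executor.setMaxPoolSize(10);
        executor.setThreadNamePrefix("mail-async-");
        executor.initialize();
        return executor;
    }

    @Override
    public AsyncUncaughtExceptionHandler getAsyncUncaughtExceptionHandler() {
        return new SimpleAsyncUncaughtExceptionHandler();
    }
}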
With this tweak the DB setup is again executed as expected.

Detecting unused Spring beans

Given a Spring configuration that exclusively contains eager (non-lazy) singleton beans, i.e. the defaults, is it possible to have Spring throw an exception in the case where any of those beans is not injected anywhere? I'm essentially looking for a way to detect dead code in the form of Spring beans.
My question is somewhat similar to these.
http://forum.spring.io/forum/spring-projects/container/116494-any-tools-or-method-to-identify-unused-spring-beans
Spring Instantiation and 'unused beans'
How to detect unused properties in Spring
However,
I'm not interested in manually inspecting a graph or parsing log data.
I don't have the added complexity of multiple context files, overriding beans, bean post-processing, or xml. It's a simple, straightforward, annotation-driven configuration.
I'm using Spring Boot 1.2.6 which is several years newer than those questions (maybe new functionality exists).
Spring will certainly throw an exception if a necessary bean is missing. Can it also throw an exception in the opposite scenario where a bean is found but unnecessary?
Spring will certainly throw an exception if a necessary bean is
missing. Can it also throw an exception in the opposite scenario where
a bean is found but unnecessary?
TL/DR:
Spring does not support this (and probably never will).
Long version:
Detecting if a bean is used can be really hard.
First, let's define when Spring throws the "missing bean" exception.
During the initialisation of the Spring context, Spring creates the beans in an order that allows all of their dependencies to be satisfied (if possible). If a bean is missing a dependency, Spring will throw an exception (as you said).
So, the exception is thrown during the Spring context initialisation process.
Now, you could say that we could monitor this process and look for a bean that was not used as a dependency in any other bean.
The problem is that not all bean dependencies are defined during the Spring context initialisation process.
Let's look at the following example:
First, we have a simple interface, DataService
public interface DataService {
    String getData();
}
Now we have two Spring beans that implement this interface:
@Service("firstDataService")
public class FirstDataService implements DataService {
    @Override
    public String getData() {
        return "FIRST DATA SERVICE";
    }
}

@Service("secondDataService")
public class SecondDataService implements DataService {
    @Override
    public String getData() {
        return "SECOND DATA SERVICE";
    }
}
Now, imagine that there is no bean that depends on these two beans directly. When I say directly, I mean there is no bean that depends on these beans via constructor-based, setter-based or field-based dependency injection.
Because of that, Spring will not inject these beans into any other bean during the context initialisation process.
Now, consider the following bean:
@Service
public class DataCollector {

    @Autowired
    ApplicationContext applicationContext;

    String getDataFromService(String beanName) {
        DataService ds = (DataService) applicationContext.getBean(beanName);
        return ds.getData();
    }
}
If I call the getDataFromService method of the DataCollector bean with the value "firstDataService" for the beanName parameter, the method will return "FIRST DATA SERVICE" as a result.
If I call the method with "secondDataService", it will return "SECOND DATA SERVICE".
Now, when Spring looks at the definition of DataCollector during context initialisation, there is no way for it to determine which beans DataCollector depends on.
It all depends on the application logic and on the value we pass for the beanName parameter when we call the getDataFromService method.
Because of that, Spring is not capable of determining whether a bean is never used (because bean usage can be dynamic, as in the case above).
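If a rough heuristic is good enough, you can at least ask the bean factory which beans no other bean declares a dependency on. This is only a sketch built on Spring's public API, and, for exactly the reason described above, it will also flag beans that are only looked up dynamically (firstDataService and secondDataService would show up in its output):
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.context.ConfigurableApplicationContext;

public class UnusedBeanReport {

    // Prints every bean definition that no other bean declares a dependency on.
    // Beans that are only fetched via getBean(...) at runtime are reported too,
    // so treat the output as a starting point, not as proof of dead code.
    public static void print(ConfigurableApplicationContext context) {
        ConfigurableListableBeanFactory beanFactory = context.getBeanFactory();
        for (String name : beanFactory.getBeanDefinitionNames()) {
            if (beanFactory.getDependentBeans(name).length == 0) {
                System.out.println("No declared dependents: " + name);
            }
        }
    }
}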
