Handle Redis cache availability with Spring Boot - Java

I have the cache repository below (other methods omitted):
@Component
@CacheConfig(cacheNames = "enroll", cacheManager = "springtoolCM")
public class EnrollCasheRepository {

    /** The string redis template. */
    @Autowired
    private StringRedisTemplate stringRedisTemplate;
}
I am using spring-boot-starter-redis in the pom.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-redis</artifactId>
</dependency>
I am using EnrollCasheRepository in my filter with @Autowired.
Even if I comment out the Redis properties in application.properties and my Redis server is down, I still get an EnrollCasheRepository object. What would be a better way to check whether Redis is installed on my machine and only proceed with EnrollCasheRepository if it is?
I am looking for a better way than handling the exception thrown when Redis is not available and proceeding from there.

Spring Boot's Redis configuration is provided by RedisAutoConfiguration.
This class creates the connection factory and initializes the StringRedisTemplate bean.
The configuration depends on having Jedis available on the classpath.
It appears that there is no test to check whether the connection details are valid or not.
If you want your EnrollCasheRepository bean to be created only when Jedis is configured, the simplest way to achieve that is probably to annotate it with @ConditionalOnProperty and create a feature-flag config value.
@Component
@ConditionalOnProperty("redis.enabled")
@CacheConfig(cacheNames = "enroll", cacheManager = "springtoolCM")
public class EnrollCasheRepository {
Add the flag to your application.properties (or equivalent):
redis.enabled=true
If you want to be more intelligent about it, such as detecting whether the configured Redis server is available before creating the bean, that would be more complex.
You could look at using #Conditional with your own implementation of Condition, but that is probably more trouble than it is worth.
You are probably better off creating the bean, then testing if it works after the fact.
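As a rough sketch of that "test after the fact" approach (the checker class and method names are my own, not from the original answer), you can ping Redis through the already-created StringRedisTemplate and treat any connection failure as "Redis unavailable":
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.connection.RedisConnection;
import org.springframework.data.redis.core.RedisCallback;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

@Component
public class RedisAvailabilityChecker {

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    // Returns true only if Redis answers a PING; any connection failure means "unavailable".
    public boolean isRedisAvailable() {
        try {
            String pong = stringRedisTemplate.execute((RedisCallback<String>) RedisConnection::ping);
            return "PONG".equalsIgnoreCase(pong);
        } catch (Exception e) {
            return false;
        }
    }
}
Callers (for example your filter) can then consult isRedisAvailable() and fall back to the non-cached path when it returns false.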

Related

Spring Boot 2.7.2 with Spring Kafka fails during initialisation with Exception: The 'ProducerFactory' must support transactions

I'm using Spring Boot v2.7.2 and the latest version of Spring Kafka provided by spring-boot-dependencies:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
I want the app to load all configuration from the properties file, hence I created the beans with this bare-minimum configuration:
@Configuration
public class KafkaConfig {

    @Bean
    public ProducerFactory<Integer, FileUploadEvent> producerFactory() {
        return new DefaultKafkaProducerFactory<>(Collections.emptyMap());
    }

    @Bean
    public KafkaTemplate<Integer, FileUploadEvent> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
It works and loads the configuration from the application.yaml below as expected.
spring:
  application:
    name: my-app
  kafka:
    bootstrap-servers: localhost:9092
    producer:
      client-id: ${spring.application.name}
      # transaction-id-prefix: "tx-"
    template:
      default-topic: my-topic
However, if I uncomment the transaction-id-prefix line, the application fails to start with the exception
java.lang.IllegalArgumentException: The 'ProducerFactory' must support transactions
The documentation here reads:
If you provide a custom producer factory, it must support
transactions. See ProducerFactory.transactionCapable().
The only way I managed to make it work was removing the transaction prefix from application.yaml and configuring it in code, as shown below:
@Bean
public ProducerFactory<Integer, FileUploadEvent> fileUploadProducerFactory() {
    var pf = new DefaultKafkaProducerFactory<Integer, FileUploadEvent>(Collections.emptyMap());
    pf.setTransactionIdPrefix("tx-");
    return pf;
}
Any thoughts on how I can configure everything using the application properties file? Is this a bug?
The only solution at the moment is really to set the transaction-id-prefix in code while creating the ProducerFactory, even though it is already defined in application.yaml.
The Spring Boot team replied as follows:
The intent is that transactions should be used and that the ProducerFactory should support them. The transaction-id-prefix property can be set and this results in the auto-configuration of the kafkaTransactionManager bean. However, if you define your own ProducerFactory (to constrain the types, for example) there's no built-in way to have the transaction-id-prefix applied to that ProducerFactory.
It's a fundamental principle of auto-configuration that it backs off when a user defines a bean of their own. If we post-processed the user's bean to change its configuration, it would no longer be possible for your code to take complete control of how things are configured. Unfortunately, this flexibility does sometimes require you to write a little bit more code. This is one such time.
If we want to keep the prefix as a property in the application.yaml file, we can inject it to avoid config duplication:
@Value("${spring.kafka.producer.transaction-id-prefix}")
private String transactionIdPrefix;

@Bean
public ProducerFactory<Integer, FileUploadEvent> fileUploadProducerFactory() {
    var pf = new DefaultKafkaProducerFactory<Integer, FileUploadEvent>(Collections.emptyMap());
    pf.setTransactionIdPrefix(transactionIdPrefix);
    return pf;
}
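For what it's worth, once the prefix is applied the factory is transaction-capable and the template can run sends inside a Kafka transaction. A short, hedged sketch (the topic comes from the YAML above; the key value is a placeholder):
public void sendInTransaction(KafkaTemplate<Integer, FileUploadEvent> kafkaTemplate, FileUploadEvent event) {
    kafkaTemplate.executeInTransaction(ops -> {
        ops.send("my-topic", 1, event); // runs inside a Kafka transaction
        return true;                    // committed when the callback returns normally
    });
}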

Spring Boot 2 - Wire Two LDAP Templates

I need to configure multiple LDAP data sources / LdapTemplates in my Spring Boot 2 application. The first LdapTemplate will be used for most of the work, while the second will be used for a once-in-a-while subset of data (housed elsewhere).
I have read these StackOverflow questions regarding doing that, but they seem to be for Spring Boot 1.
Can a spring ldap repository project access two different ldap directories?
Multiple LDAP repositories with Spring LDAP Repository
From what I can gather, much of that configuration/setup had to be done anyway, even for just one LDAP data source, back in Spring Boot 1. With Spring Boot 2, I just put the properties in my config file like so
ldap.url=ldap://server.domain.com:389
ldap.base=DC=domain,DC=com
ldap.username=domain\ldap.svc.acct
ldap.password=secret
and autowire the template in my repository like so:
@Autowired
private LdapTemplate ldapTemplate;
and I'm good to go. (See: https://stackoverflow.com/a/53474188/3669288)
For a second LDAP data source, can I just add the properties and configuration elements for "ldap2" and be done (see linked questions)? Or does adding this configuration cause Spring Boot 2's autoconfiguration to think I'm overriding it, so that I lose my first LdapTemplate and now need to configure that explicitly as well?
If so, do I need to configure everything, or will only a partial configuration work? For example, if I add the context source configuration and mark it as @Primary (does that work for LDAP data sources?), can I skip explicitly assigning it to the first LdapTemplate? On a related note, do I still need to add the @EnableLdapRepositories annotation, which is otherwise autoconfigured by Spring Boot 2?
TLDR: What's the minimum configuration I need to add in Spring Boot 2 to wire in a second LdapTemplate?
This takes what I've learned over the weekend and applies it as an answer to my own question. I'm still not an expert in this so I welcome more experienced answers or comments.
The Explanation
First, I still don't know for certain if I need the @EnableLdapRepositories annotation. I don't yet make use of those features, so I can't say if not having it matters, or if Spring Boot 2 is still taking care of that automatically. I suspect Spring Boot 2 is, but I'm not certain.
Second, Spring Boot's autoconfigurations all happen after any user configurations, such as my code configuring a second LDAP data source. The autoconfiguration is using a couple of conditional annotations for whether or not it runs, based on the existence of a context source or an LdapTemplate.
This means that it sees my "second" LDAP context source (the condition is just that a context source bean exists, regardless of what its name is or what properties it is using) and skips creating one itself, meaning that I no longer have that piece of my primary data source configured.
It will also see my "second" LdapTemplate (again, the condition is just that an LdapTemplate bean exists, regardless of what its name is or what context source or properties it is using) and skip creating one itself, so I again no longer have that piece of my primary data source configured.
Unfortunately, those conditions mean that in this case there is no in-between either (where I can manually configure the context source, for example, and then allow the autoconfiguration of the LdapTemplate to still happen). So the solution is to either make my configuration run after the autoconfiguration, or to not leverage the autoconfiguration at all and set them both up myself.
As for making my configuration run after the autoconfiguration: the only way to do that is to make my configuration an autoconfiguration itself and order it after Spring's built-in autoconfiguration (see: https://stackoverflow.com/a/53474188/3669288). That's not appropriate for my use case, so for my situation (because Spring Boot's setup does make sense for a standard single-source situation) I'm stuck forgoing the autoconfiguration and setting them both up myself.
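For reference, a minimal sketch of that ordering approach (the class name and registration entry are my own illustration, not from the linked answer): the configuration is itself registered as an auto-configuration and ordered after Spring's LdapAutoConfiguration.
// Registered in META-INF/spring.factories under
// org.springframework.boot.autoconfigure.EnableAutoConfiguration=com.example.Ldap2AutoConfiguration
@Configuration
@AutoConfigureAfter(LdapAutoConfiguration.class)
public class Ldap2AutoConfiguration {
    // the second context source and LdapTemplate beans would be declared here
}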
The Code
Setting up two data sources is pretty well covered in the following two answers (though partly for other reasons), as linked in my question, but I'll also detail my setup here.
Can a spring ldap repository project access two different ldap directories?
Multiple LDAP repositories with Spring LDAP Repository
First up, the configuration class needs to be created, as one was not previously needed at all with Spring Boot 2. Again, I left out the @EnableLdapRepositories annotation partly because I don't use it yet, and partly because I think Spring Boot 2 will still cover that for me. (Note: All of this code was typed up in the Stack Overflow answer box as I don't have a development environment where I'm writing this, so imports are skipped and the code may not be perfectly compilable and function correctly, though I hope it's good.)
@Configuration
public class LdapConfiguration {
}
Second is manually configuring the primary data source, the one that used to be autoconfigured but no longer will be. There is one piece of Spring Boot's autoconfiguration that can be leveraged here: its reading of the standard spring.ldap.* properties into a properties object. Since that bean wasn't given a name, you have to reference it by its fully qualified class name. This means you can skip straight to setting up the context source for the primary data source. This code is not quite as full-featured as the actual autoconfiguration code (see the Spring code).
I marked this LdapTemplate as @Primary because, for my use, this is the primary data source, so it's what all other autowired calls should default to. This also means you don't need a @Qualifier where you wire this source in (as seen later).
@Configuration
public class LdapConfiguration {

    @Bean(name = "contextSource")
    public LdapContextSource ldapContextSource(
            @Qualifier("spring.ldap-org.springframework.boot.autoconfigure.ldap.LdapProperties") LdapProperties properties) {
        LdapContextSource source = new LdapContextSource();
        source.setUrls(properties.getUrls());
        source.setUserDn(properties.getUsername());
        source.setPassword(properties.getPassword());
        source.setBaseEnvironmentProperties(Collections.unmodifiableMap(properties.getBaseEnvironment()));
        return source;
    }

    @Bean(name = "ldapTemplate")
    @Primary
    public LdapTemplate ldapTemplate(@Qualifier("contextSource") LdapContextSource source) {
        return new LdapTemplate(source);
    }
}
Third is to manually configure the secondary data source, the one that caused all of this to begin with. For this one, you do need to configure the reading of your properties into an LdapProperties object. This code builds on the previous code, so you can see the complete class for context.
@Configuration
public class LdapConfiguration {

    @Bean(name = "contextSource")
    public LdapContextSource ldapContextSource(
            @Qualifier("spring.ldap-org.springframework.boot.autoconfigure.ldap.LdapProperties") LdapProperties properties) {
        LdapContextSource source = new LdapContextSource();
        source.setUrls(properties.getUrls());
        source.setUserDn(properties.getUsername());
        source.setPassword(properties.getPassword());
        source.setBaseEnvironmentProperties(Collections.unmodifiableMap(properties.getBaseEnvironment()));
        return source;
    }

    @Bean(name = "ldapTemplate")
    @Primary
    public LdapTemplate ldapTemplate(@Qualifier("contextSource") LdapContextSource source) {
        return new LdapTemplate(source);
    }

    @Bean(name = "ldapProperties2")
    @ConfigurationProperties("app.ldap2")
    public LdapProperties ldapProperties2() {
        return new LdapProperties();
    }

    @Bean(name = "contextSource2")
    public LdapContextSource ldapContextSource2(@Qualifier("ldapProperties2") LdapProperties properties) {
        LdapContextSource source = new LdapContextSource();
        source.setUrls(properties.getUrls());
        source.setUserDn(properties.getUsername());
        source.setPassword(properties.getPassword());
        source.setBaseEnvironmentProperties(Collections.unmodifiableMap(properties.getBaseEnvironment()));
        return source;
    }

    @Bean(name = "ldapTemplate2")
    public LdapTemplate ldapTemplate2(@Qualifier("contextSource2") LdapContextSource source) {
        return new LdapTemplate(source);
    }
}
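The app.ldap2.* values then live in application.properties (or equivalent). The property names below follow the fields of Spring Boot's LdapProperties (urls, base, username, password); the values are placeholders for illustration:
app.ldap2.urls=ldap://server2.domain.com:389
app.ldap2.base=DC=other,DC=com
app.ldap2.username=other\ldap.svc.acct
app.ldap2.password=secret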
Finally, in your class that uses these LdapTemplates, you can autowire them as normal. This uses constructor autowiring instead of the field autowiring the other two answers used. Either is technically valid though constructor autowiring is recommended.
@Component
public class LdapProcessing {

    protected LdapTemplate ldapTemplate;
    protected LdapTemplate ldapTemplate2;

    @Autowired
    public LdapProcessing(LdapTemplate ldapTemplate, @Qualifier("ldapTemplate2") LdapTemplate ldapTemplate2) {
        this.ldapTemplate = ldapTemplate;
        this.ldapTemplate2 = ldapTemplate2;
    }
}
TLDR: Defining a "second" LDAP data source stops the autoconfiguration of the first one, so both must be (nearly fully) manually configured when using more than one; Spring's autoconfiguration cannot be leveraged even for the first LDAP data source.

How to set Spring Data Cassandra keyspace dynamically?

We're using Spring Boot 1.5.10 with Spring Data for Apache Cassandra and that's all working well.
We've had a new requirement coming along where we need to connect to a different keyspace while the service is up and running.
Through the use of Spring Cloud Config Server, we can easily set the value of spring.data.cassandra.keyspace-name; however, we're not certain whether there's a way to dynamically switch (force) the service to use this new keyspace without having to restart it first.
Any ideas or suggestions?
Using @RefreshScope with properties/repositories doesn't work as the keyspace is bound to the Cassandra Session bean.
Using Spring Data Cassandra 1.5 with Spring Boot 1.5 you have at least two options:
Declare a @RefreshScope CassandraSessionFactoryBean, see also CassandraDataAutoConfiguration. This will interrupt all Cassandra operations upon refresh and re-create all dependent beans; a rough sketch follows after the listener example below.
Listen to RefreshScopeRefreshedEvent and change the keyspace via USE my-new-keyspace;. This approach is less invasive and doesn't interrupt running queries. You'd basically use an event listener.
@Component
class MySessionRefresh {

    private final Session session;
    private final Environment environment;

    // constructors omitted for brevity

    @EventListener
    @Order(Ordered.LOWEST_PRECEDENCE)
    public void handle(RefreshScopeRefreshedEvent event) {
        String keyspace = environment.getProperty("spring.data.cassandra.keyspace-name");
        session.execute("USE " + keyspace + ";");
    }
}
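As referenced in the first option above, here is a rough sketch of a refresh-scoped session factory bean, loosely mirroring what CassandraDataAutoConfiguration declares in Boot 1.5 (imports omitted in the style of this page; class and setter names should be verified against your Spring Data Cassandra 1.5 version):
@Configuration
public class CassandraSessionConfig {

    // Re-created with the new keyspace the next time it is used after a /refresh.
    @Bean
    @RefreshScope
    public CassandraSessionFactoryBean session(Cluster cluster, CassandraConverter converter,
            @Value("${spring.data.cassandra.keyspace-name}") String keyspace) {
        CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
        session.setCluster(cluster);
        session.setConverter(converter);
        session.setKeyspaceName(keyspace);
        session.setSchemaAction(SchemaAction.NONE);
        return session;
    }
}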
With Spring Data Cassandra 2, we introduced the SessionFactory abstraction providing AbstractRoutingSessionFactory for code-controlled routing of CQL/session calls.
Yes, you can use the @RefreshScope annotation on the bean(s) holding the spring.data.cassandra.keyspace-name value.
After changing the config value through Spring Cloud Config Server, you have to issue a POST on the /refresh endpoint of your application.
From the Spring cloud documentation:
A Spring #Bean that is marked as #RefreshScope will get special treatment when there is a configuration change. This addresses the problem of stateful beans that only get their configuration injected when they are initialized. For instance if a DataSource has open connections when the database URL is changed via the Environment, we probably want the holders of those connections to be able to complete what they are doing. Then the next time someone borrows a connection from the pool he gets one with the new URL.
From the RefreshScope class javadoc:
A Scope implementation that allows for beans to be refreshed dynamically at runtime (see refresh(String) and refreshAll()). If a bean is refreshed then the next time the bean is accessed (i.e. a method is executed) a new instance is created. All lifecycle methods are applied to the bean instances, so any destruction callbacks that were registered in the bean factory are called when it is refreshed, and then the initialization callbacks are invoked as normal when the new instance is created. A new bean instance is created from the original bean definition, so any externalized content (property placeholders or expressions in string literals) is re-evaluated when it is created.

Spring Boot with session-based data source

I've been tearing my hair out with what should be a pretty common use case for a web application. I have a Spring-Boot application which uses REST Repositories, JPA, etc. The problem is that I have two data sources:
Embedded H2 data source containing user authentication information
MySQL data source for actual data which is specific to the authenticated user
Because the second data source is specific to the authenticated user, I'm attempting to use AbstractRoutingDataSource to route to the correct data source based on the authenticated Principal.
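(For context, a minimal sketch of such a routing data source keyed on the authenticated username; the class name and lookup-key scheme are illustrative only, not the poster's actual code.)
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;

public class UserRoutingDataSource extends AbstractRoutingDataSource {

    // The returned key is looked up in the map passed to setTargetDataSources();
    // returning null falls back to the default data source.
    @Override
    protected Object determineCurrentLookupKey() {
        Authentication auth = SecurityContextHolder.getContext().getAuthentication();
        return auth != null ? auth.getName() : null;
    }
}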
What's absolutely driving me crazy is that Spring Boot is fighting me tooth and nail to instantiate this data source at startup. I've tried everything I can think of, including the @Lazy and @Scope annotations. If I use session scope, the application throws an error about no session existing at startup. @Lazy doesn't appear to help at all. No matter what annotations I use, the database is instantiated at startup by Spring Boot and doesn't find any lookup key, which essentially crashes the entire application.
The other problem is that the Rest Repository API has, in my opinion, a terrible means of specifying the actual data source to be used. If you have multiple data sources with Spring Boot, you have to juggle @Qualifier annotations, which is a runtime debugging nightmare.
Any advice would be very much appreciated.
Your problem is with the authentication manager configuration. All the samples and guides set this up in a GlobalAuthenticationConfigurerAdapter, e.g. it would look like this as an inner class of your SimpleEmbeddedSecurityConfiguration:
@Configuration
public static class AuthenticationConfiguration extends GlobalAuthenticationConfigurerAdapter {

    @Bean(name = Global.AUTHENTICATION_DATA_QUALIFIER + "DataSource")
    public DataSource dataSource() {
        return new EmbeddedDatabaseBuilder().setName("authdb").setType(EmbeddedDatabaseType.H2)
                .addScripts("security/schema.sql", "security/data.sql").build();
    }

    @Override
    public void init(AuthenticationManagerBuilder auth) throws Exception {
        auth.jdbcAuthentication().dataSource(dataSource()).passwordEncoder(passwordEncoder());
    }
}
If you don't use GlobalAuthenticationConfigurerAdapter then the DataSource gets picked up by Spring Data REST during the creation of the Security filters (before the @Primary DataSource bean has even been registered) and the whole JPA initialization starts super early (bad idea).
UPDATE: the authentication manager is not the only problem. If you need to have a session-scoped @Primary DataSource (pretty unusual I'd say), you need to switch off everything that wants access to the database on startup (Hibernate and Spring Boot in various places). Example:
spring.datasource.initialize: false
spring.jpa.hibernate.ddlAuto: none
spring.jpa.properties.hibernate.temp.use_jdbc_metadata_defaults: false
spring.jpa.properties.hibernate.dialect: H2
FURTHER UPDATE: if you're using the Actuator, it also wants to use the primary data source on startup for a health indicator. You can override that by providing a bean of the same type, e.g.
@Bean
@Scope(value = "session", proxyMode = ScopedProxyMode.TARGET_CLASS)
@Lazy
public DataSourcePublicMetrics dataSourcePublicMetrics() {
    return new DataSourcePublicMetrics();
}
P.S. I believe the GlobalAuthenticationConfigurerAdapter might not be necessary in Spring Boot 1.2.2, but it is in 1.2.1 or 1.1.10.

Using Spring configuration without creating a Spring dependency for your clients

If I use Spring configuration to construct an object, is it possible for me to instantiate that object for clients without requiring them to import my Spring configuration?
If this can't be done, does using Spring in a library always necessitate that its clients use Spring as well?
If this can be done, what is the correct way to do it? Something like this...
Library code:
public class MyFactory {

    @Autowired
    private InitializedObject obj;

    public InitializedObject getInstance() {
        return obj;
    }
}
Client code:
import com.code.package.something.myfactory.MyFactory;
...
InitializedObject obj = MyFactory.getInstance();
One of the options is to avoid Spring annotations altogether, working only with Spring configuration files to get constructor- or setter-based injection. That way your classes stay independent of Spring code, and your clients can use your code as long as they provide the dependencies correctly, using a factory or whatever else.
Autowiring is done by the Spring container; if there is none, nothing gets autowired. You need to either ensure the client has a Spring container configured properly to autowire your dependency, or set one up yourself (e.g. using ClassPathXmlApplicationContext).
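As a small sketch of the annotation-free approach (the bean id and XML file name are made up for illustration): the library class takes its dependency through the constructor, a Spring-using client wires it in XML, and a non-Spring client can simply call new.
// Library code: no Spring imports at all.
public class MyFactory {

    private final InitializedObject obj;

    public MyFactory(InitializedObject obj) { // plain constructor injection
        this.obj = obj;
    }

    public InitializedObject getInstance() {
        return obj;
    }
}

// A Spring-based client could declare it in a hypothetical beans.xml:
//   <bean id="myFactory" class="com.code.package.something.myfactory.MyFactory">
//       <constructor-arg ref="initializedObject"/>
//   </bean>
// and load it with:
//   MyFactory factory = new ClassPathXmlApplicationContext("beans.xml").getBean(MyFactory.class);
// A non-Spring client just calls: new MyFactory(new InitializedObject(...))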
