How to set Spring Data Cassandra keyspace dynamically?

We're using Spring Boot 1.5.10 with Spring Data for Apache Cassandra and that's all working well.
A new requirement has come along where we need to connect to a different keyspace while the service is up and running.
Through the use of Spring Cloud Config Server we can easily set the value of spring.data.cassandra.keyspace-name; however, we're not certain whether there's a way to dynamically switch (force) the service to use this new keyspace without having to restart it first.
Any ideas or suggestions?

Using @RefreshScope with properties/repositories doesn't work, as the keyspace is bound to the Cassandra Session bean.
With Spring Data Cassandra 1.5 on Spring Boot 1.5, you have at least two options:
Declare a @RefreshScope CassandraSessionFactoryBean, see also CassandraDataAutoConfiguration. This will interrupt all Cassandra operations upon refresh and re-create all dependent beans; a sketch of this option follows the listener example below.
Listen to RefreshScopeRefreshedEvent and change the keyspace via USE my-new-keyspace;. This approach is less invasive and doesn't interrupt running queries. You'd basically use an event listener:
import com.datastax.driver.core.Session;
import org.springframework.cloud.context.scope.refresh.RefreshScopeRefreshedEvent;
import org.springframework.context.event.EventListener;
import org.springframework.core.Ordered;
import org.springframework.core.annotation.Order;
import org.springframework.core.env.Environment;
import org.springframework.stereotype.Component;

@Component
class MySessionRefresh {

    private final Session session;
    private final Environment environment;

    // constructor omitted for brevity

    @EventListener
    @Order(Ordered.LOWEST_PRECEDENCE)
    public void handle(RefreshScopeRefreshedEvent event) {
        String keyspace = environment.getProperty("spring.data.cassandra.keyspace-name");
        session.execute("USE " + keyspace + ";");
    }
}
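For the first option, a minimal sketch of the refresh-scoped session bean, assuming the Cluster and CassandraConverter beans come from Boot's auto-configuration and mirroring what CassandraDataAutoConfiguration sets up:

@Bean
@RefreshScope
public CassandraSessionFactoryBean session(Cluster cluster, CassandraConverter converter,
        @Value("${spring.data.cassandra.keyspace-name}") String keyspaceName) {
    // Re-created on /refresh, so the Session is re-opened against the
    // keyspace currently present in the Environment.
    CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
    session.setCluster(cluster);
    session.setConverter(converter);
    session.setKeyspaceName(keyspaceName);
    return session;
}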
With Spring Data Cassandra 2, we introduced the SessionFactory abstraction providing AbstractRoutingSessionFactory for code-controlled routing of CQL/session calls.
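A rough sketch of what that routing could look like, assuming AbstractRoutingSessionFactory mirrors the contract of JDBC's AbstractRoutingDataSource (target factories registered under lookup keys, with determineCurrentLookupKey() selecting among them); the KeyspaceContext holder is hypothetical:

// Hypothetical ThreadLocal holder for the active keyspace key.
public final class KeyspaceContext {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();
    public static void set(String keyspace) { CURRENT.set(keyspace); }
    public static String get() { return CURRENT.get(); }
}

public class KeyspaceRoutingSessionFactory extends AbstractRoutingSessionFactory {

    // Invoked per lookup; returning null falls back to the default target factory.
    @Override
    protected Object determineCurrentLookupKey() {
        return KeyspaceContext.get();
    }
}

Each keyspace gets its own target SessionFactory registered under a key, and the ThreadLocal decides which one serves the current call.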

Yes, you can use the @RefreshScope annotation on the bean(s) holding the spring.data.cassandra.keyspace-name value.
After changing the config value through Spring Cloud Config Server, you have to issue a POST to the /refresh endpoint of your application.
From the Spring Cloud documentation:
A Spring #Bean that is marked as #RefreshScope will get special treatment when there is a configuration change. This addresses the problem of stateful beans that only get their configuration injected when they are initialized. For instance if a DataSource has open connections when the database URL is changed via the Environment, we probably want the holders of those connections to be able to complete what they are doing. Then the next time someone borrows a connection from the pool he gets one with the new URL.
From the RefreshScope class javadoc:
A Scope implementation that allows for beans to be refreshed dynamically at runtime (see refresh(String) and refreshAll()). If a bean is refreshed then the next time the bean is accessed (i.e. a method is executed) a new instance is created. All lifecycle methods are applied to the bean instances, so any destruction callbacks that were registered in the bean factory are called when it is refreshed, and then the initialization callbacks are invoked as normal when the new instance is created. A new bean instance is created from the original bean definition, so any externalized content (property placeholders or expressions in string literals) is re-evaluated when it is created.

Related

@ConditionalOnBean(KafkaTemplate.class) crashes entire application

I have a Spring Boot application that consumes data from a Kafka topic and sends email notifications with the data received from Kafka:
@Bean
public EmailService emailService() {
    return new EmailServiceImpl(getJavaMailSender());
}
It works perfectly, but after I added @ConditionalOnBean:
@Bean
@ConditionalOnBean(KafkaTemplate.class)
public EmailService emailService() {
    return new EmailServiceImpl(getJavaMailSender());
}
the application failed to start:
required a bean of type 'com.acme.EmailService' that could not be found.
And I can't find any explanation of how this is possible, because the KafkaTemplate bean is automatically created by Spring in the KafkaAutoConfiguration class.
Could you please explain?
From the documentation:
The condition can only match the bean definitions that have been processed by the application context so far and, as such, it is strongly recommended to use this condition on auto-configuration classes only. If a candidate bean may be created by another auto-configuration, make sure that the one using this condition runs after.
This documentation says exactly what might be wrong here: KafkaAutoConfiguration creates the KafkaTemplate bean, but that bean definition may not have been processed yet at the moment the condition is checked. Either define your conditional bean in an auto-configuration class that is guaranteed to run after KafkaAutoConfiguration, or otherwise order your configuration classes so that KafkaTemplate is in the bean registry before that conditional check; see the sketch below.
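A minimal sketch of the first approach, assuming a hypothetical EmailAutoConfiguration that is registered as an auto-configuration class (listed under EnableAutoConfiguration in META-INF/spring.factories) so that @AutoConfigureAfter ordering applies:

import org.springframework.boot.autoconfigure.AutoConfigureAfter;
import org.springframework.boot.autoconfigure.condition.ConditionalOnBean;
import org.springframework.boot.autoconfigure.kafka.KafkaAutoConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.mail.javamail.JavaMailSender;

@Configuration
@AutoConfigureAfter(KafkaAutoConfiguration.class)
public class EmailAutoConfiguration {

    // Evaluated only after KafkaAutoConfiguration has contributed its bean
    // definitions, so @ConditionalOnBean can actually see KafkaTemplate.
    @Bean
    @ConditionalOnBean(KafkaTemplate.class)
    public EmailService emailService(JavaMailSender mailSender) {
        return new EmailServiceImpl(mailSender);
    }
}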

Unable to look at cache metrics (hit/miss/size) in Spring Boot?

I have implemented two CacheManagers: one using Caffeine and one using Redis.
I have exposed them as beans and they are working as expected.
There isn't any cache endpoint in the list available at the /actuator/metrics path either. I was only able to load the /actuator/caches and /actuator/caches/{cacheName} endpoints, which show just the name and class of the cache being used. I am unable to see any metrics related to them.
I am using Spring Boot 2.1.3 and spring-boot-actuator.
@Bean
public CacheManager caffeine() {
    CaffeineCacheManager caffeineCacheManager = new CaffeineCacheManager();
    caffeineCacheManager.setCaffeine(caffeineCacheBuilder());
    return caffeineCacheManager;
}

private Caffeine<Object, Object> caffeineCacheBuilder() {
    return Caffeine.newBuilder()
            .initialCapacity(initialCapacity)
            .maximumSize(maxCapacity)
            .expireAfterAccess(Duration.parse(ttl))
            .recordStats();
}
It turned out that @Cacheable creates the cache dynamically at runtime, so if we want the annotated caches to have metrics, we have to set the cache names explicitly on the CacheManager bean, which in turn makes the cache manager static.
Alternatively, create a custom registry binder as stated in "after upgrade to Spring Boot 2, how to expose cache metrics to prometheus?" and call the register method only after the cache is created.
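A sketch of that first option, reusing the bean from the question (the cache names here are examples; recordStats() in the builder is still required for hit/miss counters):

@Bean
public CacheManager caffeine() {
    // Fixing the names up front makes the caches exist at startup, so
    // Boot's CacheMetricsRegistrar can bind them to the meter registry.
    CaffeineCacheManager caffeineCacheManager = new CaffeineCacheManager("users", "orders");
    caffeineCacheManager.setCaffeine(caffeineCacheBuilder());
    return caffeineCacheManager;
}

The trade-off is that @Cacheable can no longer create arbitrary new caches on demand; only the declared names are served.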

Handle Redis cache availability with Spring Boot

I have the below cache repository, with other methods omitted:
@Component
@CacheConfig(cacheNames = "enroll", cacheManager = "springtoolCM")
public class EnrollCasheRepository {

    /** The string redis template. */
    @Autowired
    private StringRedisTemplate stringRedisTemplate;
}
I am using spring-boot-starter-redis in the pom:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-redis</artifactId>
</dependency>
I am using EnrollCasheRepository in my filter with @Autowired.
Even if I comment out the Redis properties in application.properties and my Redis server is down, I still get an EnrollCasheRepository object. What would be a better way to check whether Redis is available on my machine, and proceed with EnrollCasheRepository only if it is?
I am looking for something better than handling the exception thrown when Redis is not installed.
Spring Boot's Redis configuration is provided by RedisAutoConfiguration.
This class creates the connection factory and initializes the StringRedisTemplate bean.
The configuration is dependent on having Jedis available on the classpath.
It appears that there is no check of whether the connection details are valid.
If you want your EnrollCasheRepository bean created only when Jedis is configured, the simplest way to achieve it is probably annotating it with @ConditionalOnProperty and creating a feature-flag config value:
@Component
@ConditionalOnProperty("redis.enabled")
@CacheConfig(cacheNames = "enroll", cacheManager = "springtoolCM")
public class EnrollCasheRepository {
Add the flag to your application.properties (or equivalent):
redis.enabled=true
If you want to be more intelligent about it, such as detecting whether the configured Redis server is available before creating the bean, that would be more complex.
You could look at using @Conditional with your own implementation of Condition, but that is probably more trouble than it is worth.
You are probably better off creating the bean, then testing whether it works after the fact.
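For completeness, a rough sketch of that more complex option, assuming a plain TCP probe against the configured host and port is an acceptable availability check (the property names and defaults here are illustrative):

import java.io.IOException;
import java.net.Socket;
import org.springframework.context.annotation.Condition;
import org.springframework.context.annotation.ConditionContext;
import org.springframework.core.type.AnnotatedTypeMetadata;

public class RedisAvailableCondition implements Condition {

    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        String host = context.getEnvironment().getProperty("spring.redis.host", "localhost");
        int port = Integer.parseInt(
                context.getEnvironment().getProperty("spring.redis.port", "6379"));
        // If something accepts a TCP connection on the Redis port, assume it is up.
        try (Socket socket = new Socket(host, port)) {
            return true;
        } catch (IOException e) {
            return false;
        }
    }
}

EnrollCasheRepository would then carry @Conditional(RedisAvailableCondition.class) instead of @ConditionalOnProperty. Note that conditions are evaluated once at startup, so this still won't react to Redis going down later.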

Using different databases with Spring JdbcDaoSupport

What is the best approach to using several different databases in one project with Spring JdbcDaoSupport?
I have several DBs with different datasources and syntax: MySQL and Postgres, for example. In pure Java JDBC projects I used the Factory Method and Abstract Factory patterns, with multiple DAO implementation classes (one per DB) behind common DAO interfaces, to switch between databases. Now I use Spring JDBC and want to implement similar behavior.
I faced the same issue two years ago and finally chose an implementation based on a Spring custom scope (http://docs.spring.io/spring/docs/current/spring-framework-reference/htmlsingle/#beans-factory-scopes-custom).
The Spring framework allows multiple instances of the same bean definition to coexist; they differ only in some contextual settings.
For instance, this bean definition will create a different loginAction bean depending on the HTTP request currently being processed:
<bean id="loginAction" class="com.foo.LoginAction" scope="request"/>
If you create a custom scope called "domain", you will be able to instantiate several datasources based on the same bean definition.
A datasource bean definition based on JndiObjectFactoryBean would let the servlet container manage the database connections (through the web.xml file). However, you would have to parameterize your datasource name with a Spring property.
Beans like the database transaction manager must also be marked with this scope.
Next you need to activate the scope each time an HTTP request is processed: I suggest defining the datasource name as a prefix of the request URL.
Because most web frameworks allow you to intercept HTTP requests, you can retrieve the expected datasource before processing the request.
Then create (or reuse) the set of beans specific to the selected datasource and store it inside a ThreadLocal variable (which your custom scope implementation will rely on).
This implementation may look a little complex at first glance, but its usage is transparent; a sketch of such a scope follows.
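A minimal sketch under those assumptions (class and scope names are illustrative; destruction callbacks are omitted). The request filter that parses the datasource name from the URL would call DomainScope.setCurrentDomain(...) before the request is handled:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.beans.factory.ObjectFactory;
import org.springframework.beans.factory.config.Scope;

public class DomainScope implements Scope {

    // Set by a request filter before the request is processed.
    private static final ThreadLocal<String> CURRENT_DOMAIN = new ThreadLocal<>();

    // One bean cache per datasource name.
    private final Map<String, Map<String, Object>> domains = new ConcurrentHashMap<>();

    public static void setCurrentDomain(String domain) {
        CURRENT_DOMAIN.set(domain);
    }

    @Override
    public Object get(String name, ObjectFactory<?> objectFactory) {
        Map<String, Object> beans =
                domains.computeIfAbsent(CURRENT_DOMAIN.get(), d -> new ConcurrentHashMap<>());
        return beans.computeIfAbsent(name, n -> objectFactory.getObject());
    }

    @Override
    public Object remove(String name) {
        Map<String, Object> beans = domains.get(CURRENT_DOMAIN.get());
        return beans != null ? beans.remove(name) : null;
    }

    @Override
    public void registerDestructionCallback(String name, Runnable callback) {
        // omitted for brevity
    }

    @Override
    public Object resolveContextualObject(String key) {
        return null;
    }

    @Override
    public String getConversationId() {
        return CURRENT_DOMAIN.get();
    }
}

The scope still has to be registered, e.g. via a CustomScopeConfigurer bean mapping "domain" to a DomainScope instance.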

Set name of Spring Boot embedded database

How can I set the name of an embedded database started in a Spring Boot app running in a test?
I'm starting two instances of the same Spring Boot app as part of a test, as they collaborate. They are both correctly starting an HSQL database, but defaulting to a database name of testdb despite being provided different values for spring.datasource.name.
How can I provide different database names, or some other means of isolating the two databases? What is going to be the 'lightest touch'? If I can avoid it, I'd rather control this with properties than adding beans to my test config - the test config classes shouldn't be cluttered up because of this one coarse-grained collaboration test.
Gah - setting spring.datasource.name changes the name of the datasource, but not the name of the database.
Setting spring.datasource.url=jdbc:hsqldb:mem:mydbname does exactly what I need. It's a bit crap that I have to hardcode the embedded database implementation, but Spring Boot uses an enum for the default connection values, which would mean a bigger rewrite if it were to try getting the database name from a property.
You can try it like this:
spring.datasource1.name=testdb
spring.datasource2.name=otherdb
And then declare the datasources in your ApplicationConfig like this:
@Bean
@ConfigurationProperties(prefix="spring.datasource1")
public DataSource dataSource1() {
    ...
}

@Bean
@ConfigurationProperties(prefix="spring.datasource2")
public DataSource dataSource2() {
    ...
}
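The elided bodies could, for example, use Boot's DataSourceBuilder (a sketch; it assumes the usual url, username, and password keys are present under each prefix, and the DataSourceBuilder package differs between Boot 1.x and 2.x):

@Bean
@ConfigurationProperties(prefix="spring.datasource1")
public DataSource dataSource1() {
    // @ConfigurationProperties binds all spring.datasource1.* settings
    // onto the DataSource created by the builder.
    return DataSourceBuilder.create().build();
}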
See official docs for more details: https://docs.spring.io/spring-boot/docs/current/reference/html/howto-data-access.html#howto-configure-a-datasource
