I am using Spring Cloud Stream with the Kafka binder to connect to a Kafka cluster using SASL. The SASL config looks as follows:
spring.cloud.stream.kafka.binder.configuration.sasl.mechanism=SCRAM-SHA-512
spring.cloud.stream.kafka.binder.configuration.sasl.jaas.config= .... required username="..." password="..."
spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
I want to update the username and password programmatically at runtime. How can I do that in Spring Cloud Stream using the Kafka binder?
Side note:
Using BinderFactory I can get a reference to the KafkaMessageChannelBinder, which holds KafkaBinderConfigurationProperties; in its configuration map I can see those properties. How can I update the configuration at runtime so that the changes are reflected in the connections as well?
@Autowired
BinderFactory binderFactory;
....
public void foo() {
    KafkaMessageChannelBinder k = (KafkaMessageChannelBinder) binderFactory.getBinder(null, MessageChannel.class);
    // Using the debugger I inspected k.configurationProperties.configuration, which has the SASL properties I need to update
}
The JAAS username and password can be provided through configuration, which also means they can be overridden using the same properties at runtime.
Here is an example: https://github.com/spring-cloud/spring-cloud-stream-samples/blob/master/multi-binder-samples/kafka-multi-binder-jaas/src/main/resources/application.yml#L26
At runtime you can override the values set in application.properties. For example, if you are running the app using java -jar, you can pass the property on the command line, e.g. spring.cloud.stream.kafka.binder.jaas.options.username. The new value will then take effect for the duration of the application run.
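For instance (property names taken from the linked sample; the jar name and credential values are placeholders):
java -jar my-app.jar \
  --spring.cloud.stream.kafka.binder.jaas.options.username=newUser \
  --spring.cloud.stream.kafka.binder.jaas.options.password=newPassword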
I came across this problem yesterday and spent about 3-4 hours figuring out how to programmatically update the username and password in Spring Cloud Stream with the Kafka binder, since one cannot/should not store passwords in Git (Spring Boot version 2.5.2).
Overriding the bean KafkaBinderConfigurationProperties works.
@Bean
@Primary
public KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties(KafkaBinderConfigurationProperties properties) {
    // Values come from an external system such as Vault; note the quoted values
    // and the trailing semicolon the JAAS config parser expects
    String saslJaasConfigString = "org.apache.kafka.common.security.scram.ScramLoginModule required "
            + "username=\"${USERNAME_FROM_EXTERNAL_SYSTEM_LIKE_VAULT}\" "
            + "password=\"${PASSWORD_FROM_EXTERNAL_SYSTEM_LIKE_VAULT}\";";
    Map<String, String> configMap = properties.getConfiguration();
    configMap.put(SaslConfigs.SASL_JAAS_CONFIG, saslJaasConfigString);
    return properties;
}
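As a variation, the credentials could be injected as ordinary properties and concatenated into the JAAS string. This is only a sketch: the kafka.sasl.username and kafka.sasl.password property names are hypothetical stand-ins for however your external system exposes the values.
@Bean
@Primary
public KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties(
        KafkaBinderConfigurationProperties properties,
        @Value("${kafka.sasl.username}") String username,   // hypothetical property name
        @Value("${kafka.sasl.password}") String password) { // hypothetical property name
    String saslJaasConfig = String.format(
            "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"%s\" password=\"%s\";",
            username, password);
    properties.getConfiguration().put(SaslConfigs.SASL_JAAS_CONFIG, saslJaasConfig);
    return properties;
}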
I'm using Spring Boot v2.7.2 and the latest version of Spring Kafka provided by spring-boot-dependencies:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
I want the app to load all configuration from file, hence I created the beans with this bare-minimum configuration:
public class KafkaConfig {

    @Bean
    public ProducerFactory<Integer, FileUploadEvent> producerFactory() {
        return new DefaultKafkaProducerFactory<>(Collections.emptyMap());
    }

    @Bean
    public KafkaTemplate<Integer, FileUploadEvent> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
It works and loads the configuration from the application.yaml below as expected.
spring:
  application:
    name: my-app
  kafka:
    bootstrap-servers: localhost:9092
    producer:
      client-id: ${spring.application.name}
      # transaction-id-prefix: "tx-"
    template:
      default-topic: my-topic
However, if I uncomment the transaction-id-prefix line, the application fails to start with the exception
java.lang.IllegalArgumentException: The 'ProducerFactory' must support transactions
The documentation here reads:
If you provide a custom producer factory, it must support
transactions. See ProducerFactory.transactionCapable().
The only way I managed to make it work was to remove the transaction prefix from the application.yaml and configure it in code, as below:
@Bean
public ProducerFactory<Integer, FileUploadEvent> fileUploadProducerFactory() {
    var pf = new DefaultKafkaProducerFactory<Integer, FileUploadEvent>(Collections.emptyMap());
    pf.setTransactionIdPrefix("tx-");
    return pf;
}
Any thoughts on how I can configure everything using the application properties file? Is this a bug?
The only solution at the moment is really setting the transaction-id-prefix in the code whilst creating the ProducerFactory, despite it already being defined in the application.yaml.
The Spring Boot team replied as per below:
The intent is that transactions should be used and that the ProducerFactory should support them. The transaction-id-prefix property can be set and this results in the auto-configuration of the kafkaTransactionManager bean. However, if you define your own ProducerFactory (to constrain the types, for example) there's no built-in way to have the transaction-id-prefix applied to that ProducerFactory.
It's a fundamental principle of auto-configuration that it backs off when a user defines a bean of their own. If we post-processed the user's bean to change its configuration, it would no longer be possible for your code to take complete control of how things are configured. Unfortunately, this flexibility does sometimes require you to write a little bit more code. This is one such time.
If we want to keep the prefix as a property in the application.yaml file, we can inject it to avoid config duplication:
@Value("${spring.kafka.producer.transaction-id-prefix}")
private String transactionIdPrefix;

@Bean
public ProducerFactory<Integer, FileUploadEvent> fileUploadProducerFactory() {
    var pf = new DefaultKafkaProducerFactory<Integer, FileUploadEvent>(Collections.emptyMap());
    pf.setTransactionIdPrefix(transactionIdPrefix);
    return pf;
}
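If the property might be absent in some environments, a placeholder default keeps startup from failing (a minimal sketch; the "tx-" fallback value is just an example):
@Value("${spring.kafka.producer.transaction-id-prefix:tx-}")
private String transactionIdPrefix;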
I have implemented two CacheManagers.
One using Caffeine and one using Redis.
I have exposed them as beans and they are working as expected.
There aren't any cache-related metrics in the list available at the /actuator/metrics path either. I was only able to load the /actuator/caches and /actuator/caches/{cacheName} endpoints. These endpoints only show the name and class of the cache being used. I am unable to see any metrics related to them.
I am using Spring Boot 2.1.3 and spring-boot-actuator.
@Bean
public CacheManager caffeine() {
    CaffeineCacheManager caffeineCacheManager = new CaffeineCacheManager();
    caffeineCacheManager.setCaffeine(caffeineCacheBuilder());
    return caffeineCacheManager;
}

private Caffeine<Object, Object> caffeineCacheBuilder() {
    return Caffeine.newBuilder()
            .initialCapacity(initialCapacity)
            .maximumSize(maxCapacity)
            .expireAfterAccess(Duration.parse(ttl))
            .recordStats();
}
It turned out that @Cacheable creates the cache dynamically at runtime, so if we want the annotated caches to have metrics, we have to set the cache names explicitly on the CacheManager bean, which in turn makes the cache manager static.
Alternatively, create a custom registry binder as described in "after upgrade to Spring Boot 2, how to expose cache metrics to prometheus?" and call the register method only after the cache is created.
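For the first option, a minimal sketch (the cache names here are just examples) that declares the caches up front so their statistics can be picked up for metrics:
@Bean
public CacheManager caffeine() {
    CaffeineCacheManager caffeineCacheManager = new CaffeineCacheManager();
    caffeineCacheManager.setCaffeine(caffeineCacheBuilder());
    // Declaring the names up front makes the manager static: the caches
    // exist at startup instead of being created on the first @Cacheable hit.
    caffeineCacheManager.setCacheNames(Arrays.asList("enroll", "otherCache"));
    return caffeineCacheManager;
}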
We're using Spring Boot 1.5.10 with Spring Data for Apache Cassandra and that's all working well.
We've had a new requirement coming along where we need to connect to a different keyspace while the service is up and running.
Through the use of Spring Cloud Config Server, we can easily set the value of spring.data.cassandra.keyspace-name; however, we're not certain if there's a way that we can dynamically switch (force) the service to use this new keyspace without having to restart it first?
Any ideas or suggestions?
Using #RefreshScope with properties/repositories doesn't work as the keyspace is bound to the Cassandra Session bean.
Using Spring Data Cassandra 1.5 with Spring Boot 1.5 you have at least two options:
Declare a @RefreshScope CassandraSessionFactoryBean (see also CassandraDataAutoConfiguration). This will interrupt all Cassandra operations upon refresh and re-create all dependent beans. A sketch of this option follows the listener example below.
Listen for RefreshScopeRefreshedEvent and change the keyspace via USE my-new-keyspace;. This approach is less invasive and doesn't interrupt running queries. You'd basically use an event listener:
@Component
class MySessionRefresh {

    private final Session session;
    private final Environment environment;

    // constructors omitted for brevity

    @EventListener
    @Order(Ordered.LOWEST_PRECEDENCE)
    public void handle(RefreshScopeRefreshedEvent event) {
        String keyspace = environment.getProperty("spring.data.cassandra.keyspace-name");
        session.execute("USE " + keyspace + ";");
    }
}
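For the first option, here is a rough sketch of a refresh-scoped session bean, loosely following the wiring in Boot 1.5's CassandraDataAutoConfiguration (the setter names are assumed from the Spring Data Cassandra 1.5 API):
@Bean
@RefreshScope
public CassandraSessionFactoryBean session(Cluster cluster, CassandraConverter converter,
        @Value("${spring.data.cassandra.keyspace-name}") String keyspaceName) {
    CassandraSessionFactoryBean session = new CassandraSessionFactoryBean();
    session.setCluster(cluster);
    session.setConverter(converter);
    // Re-evaluated whenever the refresh-scoped bean is re-created
    session.setKeyspaceName(keyspaceName);
    session.setSchemaAction(SchemaAction.NONE);
    return session;
}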
With Spring Data Cassandra 2, we introduced the SessionFactory abstraction providing AbstractRoutingSessionFactory for code-controlled routing of CQL/session calls.
Yes, you can use the @RefreshScope annotation on the bean(s) holding the spring.data.cassandra.keyspace-name value.
After changing the config value through Spring Cloud Config Server, you have to issue a POST on the /refresh endpoint of your application.
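For example, assuming the application runs locally on port 8080 with the default Boot 1.5 endpoint path:
curl -X POST http://localhost:8080/refresh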
From the Spring Cloud documentation:
A Spring #Bean that is marked as #RefreshScope will get special treatment when there is a configuration change. This addresses the problem of stateful beans that only get their configuration injected when they are initialized. For instance if a DataSource has open connections when the database URL is changed via the Environment, we probably want the holders of those connections to be able to complete what they are doing. Then the next time someone borrows a connection from the pool he gets one with the new URL.
From the RefreshScope class javadoc:
A Scope implementation that allows for beans to be refreshed dynamically at runtime (see refresh(String) and refreshAll()). If a bean is refreshed then the next time the bean is accessed (i.e. a method is executed) a new instance is created. All lifecycle methods are applied to the bean instances, so any destruction callbacks that were registered in the bean factory are called when it is refreshed, and then the initialization callbacks are invoked as normal when the new instance is created. A new bean instance is created from the original bean definition, so any externalized content (property placeholders or expressions in string literals) is re-evaluated when it is created.
I have the cache repository below (other methods omitted):
@Component
@CacheConfig(cacheNames = "enroll", cacheManager = "springtoolCM")
public class EnrollCasheRepository {

    /** The string redis template. */
    @Autowired
    private StringRedisTemplate stringRedisTemplate;
}
I am using spring-boot-starter-redis in the pom.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-redis</artifactId>
</dependency>
I am using EnrollCasheRepository in my filter with @Autowired.
Even if I comment out the Redis properties in application.properties, and even when my Redis server is down, I still get an EnrollCasheRepository object. What would be a better way to check whether Redis is installed on my machine and only proceed with EnrollCasheRepository if so?
I am looking for a better way than handling the exception thrown when Redis is not installed and proceeding from there.
Spring Boot's Redis configuration is provided by RedisAutoConfiguration.
This class creates the connection factory and initializes the StringRedisTemplate bean.
The configuration is dependent on having Jedis available on the classpath.
It appears that there is no test to check if the connection details are valid or not.
If you want your EnrollCasheRepository bean to be created only when Redis is configured, the simplest way to achieve it is probably annotating it with @ConditionalOnProperty and creating a feature-flag config value.
@Component
@ConditionalOnProperty("redis.enabled")
@CacheConfig(cacheNames = "enroll", cacheManager = "springtoolCM")
public class EnrollCasheRepository {
Add the flag to your application.properties (or equivalent):
redis.enabled=true
If you want to be more intelligent about it, such as detecting whether the configured Redis server is available before creating the bean, that would be more complex.
You could look at using @Conditional with your own implementation of Condition, but that is probably more trouble than it is worth.
You are probably better off creating the bean and then testing whether it works after the fact.
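A minimal sketch of such an after-the-fact check, assuming the auto-configured StringRedisTemplate can be injected:
@Autowired
private StringRedisTemplate stringRedisTemplate;

public boolean isRedisAvailable() {
    try {
        // PING the configured server; any connection failure means Redis is unavailable
        return "PONG".equalsIgnoreCase(
                stringRedisTemplate.getConnectionFactory().getConnection().ping());
    } catch (Exception e) {
        return false;
    }
}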
How can I set the name of an embedded database started in a Spring Boot app running in a test?
I'm starting two instances of the same Spring Boot app as part of a test, as they collaborate. They are both correctly starting an HSQL database, but defaulting to a database name of testdb despite being provided different values for spring.datasource.name.
How can I provide different database names, or some other means of isolating the two databases? What is going to be the 'lightest touch'? If I can avoid it, I'd rather control this with properties than adding beans to my test config - the test config classes shouldn't be cluttered up because of this one coarse-grained collaboration test.
Gah - setting spring.datasource.name changes the name of the datasource, but not the name of the database.
Setting spring.datasource.url=jdbc:hsqldb:mem:mydbname does exactly what I need. It's a bit of a shame that I have to hardcode the embedded database implementation, but Spring Boot uses an enum for the default values, which would mean a bigger rewrite if it were to try getting the name from a property.
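So each instance just needs its own URL, for example (the database names here are arbitrary):
# instance one
spring.datasource.url=jdbc:hsqldb:mem:appone
# instance two
spring.datasource.url=jdbc:hsqldb:mem:apptwo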
You can try it like this:
spring.datasource1.name=testdb
spring.datasource2.name=otherdb
And then declare the datasources in your ApplicationConfig like this:
@Bean
@ConfigurationProperties(prefix = "spring.datasource1")
public DataSource dataSource1() {
    ...
}

@Bean
@ConfigurationProperties(prefix = "spring.datasource2")
public DataSource dataSource2() {
    ...
}
See official docs for more details: https://docs.spring.io/spring-boot/docs/current/reference/html/howto-data-access.html#howto-configure-a-datasource
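The bean bodies are elided above; one way to fill them in (a sketch, following the pattern in the linked how-to guide) is Spring Boot's DataSourceBuilder:
@Bean
@ConfigurationProperties(prefix = "spring.datasource1")
public DataSource dataSource1() {
    // Properties under spring.datasource1.* (url, username, password, ...) are bound to the pool the builder creates
    return DataSourceBuilder.create().build();
}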