How to update the Kafka config of an already existing topic - Java

For creation I am using
@Bean
public NewTopic userTopic(final KafkaTopicConfiguration userConfig) {
    return TopicBuilder.name(userConfig.getName())
            .partitions(userConfig.getPartitions())
            .replicas(userConfig.getReplicas())
            .configs(userConfig.getConfigs())
            .build();
}
Initially, I thought that restarting the app with new config properties would update the topic configuration, as it does for partitions, but neither the configs nor the replicas are updated.
How can I update the topic config specifically during app start? Is there a specific bean that can be defined to do that automatically?
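Out of the box, Spring's KafkaAdmin only adds partitions to existing topics; it does not alter topic configs or the replication factor. One option, sketched below under the assumption that a KafkaAdmin bean is available and that the names from the question are reused, is to push the config values to the broker at startup with the plain Kafka AdminClient:

@Bean
public ApplicationRunner topicConfigUpdater(KafkaAdmin kafkaAdmin, KafkaTopicConfiguration userConfig) {
    return args -> {
        try (AdminClient client = AdminClient.create(kafkaAdmin.getConfigurationProperties())) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, userConfig.getName());
            // Turn each entry of the topic config map into a SET operation
            Collection<AlterConfigOp> ops = userConfig.getConfigs().entrySet().stream()
                    .map(e -> new AlterConfigOp(new ConfigEntry(e.getKey(), e.getValue()), AlterConfigOp.OpType.SET))
                    .collect(Collectors.toList());
            Map<ConfigResource, Collection<AlterConfigOp>> updates = Map.of(topic, ops);
            client.incrementalAlterConfigs(updates).all().get();
        }
    };
}

Newer Spring Kafka versions (2.8.7+, if I recall correctly) also offer KafkaAdmin.setModifyTopicConfigs(true), which applies config changes from NewTopic beans automatically; check whether your version supports it before hand-rolling the above.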

Related

How to use the @Cacheable annotation in a Spring Boot app when it already uses Jedis as fast storage

I'm working on a Java Spring project (Spring Boot v2.3.3.RELEASE, Spring v5.2.8.RELEASE).
I want to use Spring's @Cacheable annotation with a customized cache (not Spring's default cache).
I already have Jedis configured in this app to be used as fast storage.
In order to use this annotation I need to configure a CacheManager, but I'm not sure how to implement it so that it makes use of Jedis; all I find is Redis.
Any help will be appreciated!
Found this Stack Overflow post on setting up CacheManager -> RedisCacheManager -> RedisTemplate -> JedisConnectionFactory -> Redis(Cluster/Standalone)Configuration + JedisClientConfiguration.
I think RedisStandaloneConfiguration best suits your needs:
Configuration class used for setting up RedisConnection via
RedisConnectionFactory using connecting to a single node Redis
installation.
There is no JedisCacheManager, only a RedisCacheManager like you said, therefore you have to set up a RedisTemplate with a JedisConnectionFactory: redisTemplate.setConnectionFactory(getJedisConnectionFactory()). Then you can set up the JedisConnectionFactory with your specific configs, e.g. the database index, Redis host name, JedisClientConfiguration, timeouts, etc. See JedisConnectionFactory.
Example setup of the JedisConnectionFactory:
redisTemplate.setConnectionFactory(getJedisConnectionFactory());

private JedisConnectionFactory getJedisConnectionFactory() {
    RedisStandaloneConfiguration serverConfig = new RedisStandaloneConfiguration("localhost", 6379);
    JedisClientConfiguration clientConfig = JedisClientConfiguration.builder()
            .clientName("myClientName")
            .build();
    return new JedisConnectionFactory(serverConfig, clientConfig);
}
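If the goal is ultimately to back @Cacheable with Jedis, a minimal wiring sketch could look like the following (class name, host and port are illustrative; RedisCacheManager accepts any RedisConnectionFactory, including the Jedis-backed one):

@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public JedisConnectionFactory jedisConnectionFactory() {
        RedisStandaloneConfiguration serverConfig = new RedisStandaloneConfiguration("localhost", 6379);
        return new JedisConnectionFactory(serverConfig, JedisClientConfiguration.builder().build());
    }

    @Bean
    public CacheManager cacheManager(RedisConnectionFactory connectionFactory) {
        // Backs the @Cacheable/@CachePut/@CacheEvict annotations
        return RedisCacheManager.builder(connectionFactory).build();
    }
}

With this in place, annotating a method with @Cacheable("myCache") stores its results in Redis through Jedis.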

Spring Boot 2.7.2 with Spring Kafka fails during initialisation with exception: The 'ProducerFactory' must support transactions

I'm using Spring Boot v2.7.2 and the latest version of Spring Kafka provided by spring-boot-dependencies:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
I want the app to load all configuration from file, hence I created the beans with this bare minimum configuration:
@Configuration
public class KafkaConfig {

    @Bean
    public ProducerFactory<Integer, FileUploadEvent> producerFactory() {
        return new DefaultKafkaProducerFactory<>(Collections.emptyMap());
    }

    @Bean
    public KafkaTemplate<Integer, FileUploadEvent> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
It works and loads the configuration from the application.yaml below as expected.
spring:
  application:
    name: my-app
  kafka:
    bootstrap-servers: localhost:9092
    producer:
      client-id: ${spring.application.name}
      # transaction-id-prefix: "tx-"
    template:
      default-topic: my-topic
However, if I uncomment the transaction-id-prefix line, the application fails to start with the exception
java.lang.IllegalArgumentException: The 'ProducerFactory' must support transactions
The documentation here reads:
If you provide a custom producer factory, it must support
transactions. See ProducerFactory.transactionCapable().
The only way I managed to make it work was removing the transaction prefix from the application.yaml and configuring it in the code as per below:
@Bean
public ProducerFactory<Integer, FileUploadEvent> fileUploadProducerFactory() {
    var pf = new DefaultKafkaProducerFactory<Integer, FileUploadEvent>(Collections.emptyMap());
    pf.setTransactionIdPrefix("tx-");
    return pf;
}
Any thoughts on how I can configure everything using the application properties file? Is this a bug?
The only solution atm is really setting the transaction-id-prefix in the code whilst creating the ProducerFactory, even though it's already defined in the application.yaml.
The Spring Boot team replied as follows:
The intent is that transactions should be used and that the ProducerFactory should support them. The transaction-id-prefix property can be set and this results in the auto-configuration of the kafkaTransactionManager bean. However, if you define your own ProducerFactory (to constrain the types, for example) there's no built-in way to have the transaction-id-prefix applied to that ProducerFactory.
It's a fundamental principle of auto-configuration that it backs off when a user defines a bean of their own. If we post-processed the user's bean to change its configuration, it would no longer be possible for your code to take complete control of how things are configured. Unfortunately, this flexibility does sometimes require you to write a little bit more code. This is one such time.
If we want to keep the prefix as a property in the application.yaml file, we can inject it to avoid config duplication:
@Value("${spring.kafka.producer.transaction-id-prefix}")
private String transactionIdPrefix;

@Bean
public ProducerFactory<Integer, FileUploadEvent> fileUploadProducerFactory() {
    var pf = new DefaultKafkaProducerFactory<Integer, FileUploadEvent>(Collections.emptyMap());
    pf.setTransactionIdPrefix(transactionIdPrefix);
    return pf;
}
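Once the custom factory is transaction-capable, the auto-configured KafkaTransactionManager and transactional template operations work as usual. As a quick sanity check (topic and payload are illustrative), something like this should now run without the IllegalArgumentException:

kafkaTemplate.executeInTransaction(ops -> ops.send("my-topic", 1, new FileUploadEvent()));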

Spring AMQP: remove old bindings and queues

I'm using Spring AMQP and Spring Boot @Configuration and @Bean annotations in order to create all required queues, exchanges and bindings.
@Bean
public Queue queue() {
    return new Queue("my_old_queue", true, false, false);
}

@Bean
public Exchange exchange() {
    return new DirectExchange("MY_OLD_EXCHANGE");
}

@Bean
public Binding binding() {
    return BindingBuilder.bind(queue())
            .to(exchange())
            .with("old_binding")
            .noargs();
}
But I've faced a problem when upgrading my topology:
I want to add a new queue/binding/exchange,
and remove an old queue/binding/exchange (even if it was a durable entity).
Does an annotation exist for removing or unbinding (like @Unbind)?
I've seen the example where RabbitManagementTemplate was suggested, but it's a completely different way of configuration; I want to keep everything in a single @Configuration class and use annotations or config beans only (is that possible?).
Does a common pattern exist for creating/removing and updating the rabbit topology (maybe I missed something)?
You cannot delete entities with annotations or configuration; use the RabbitAdmin.delete*() methods to remove them, as in that answer - the management template was used to list the bindings, while the RabbitAdmin (amqpAdmin) does the removals.
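A minimal sketch of that approach, reusing the names from the question and running the cleanup once at startup:

@Bean
public ApplicationRunner removeOldTopology(AmqpAdmin amqpAdmin) {
    return args -> {
        // The binding must be described the same way it was declared
        amqpAdmin.removeBinding(new Binding("my_old_queue", Binding.DestinationType.QUEUE,
                "MY_OLD_EXCHANGE", "old_binding", null));
        amqpAdmin.deleteQueue("my_old_queue");       // returns false if the queue does not exist
        amqpAdmin.deleteExchange("MY_OLD_EXCHANGE");
    };
}

Deleting the queue also drops its bindings, so the explicit removeBinding call matters mainly when the queue itself should survive.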

Spring Cloud Stream Kafka Binder Configuration update at runtime

I am using Spring Cloud Stream along with the Kafka binder to connect to a Kafka cluster using SASL. The SASL config looks as follows:
spring.cloud.stream.kafka.binder.configuration.sasl.mechanism=SCRAM-SHA-512
spring.cloud.stream.kafka.binder.configuration.sasl.jaas.config= .... required username="..." password="..."
spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL
I want to update the username and password programmatically/at runtime; how can I do that in Spring Cloud Stream using the Spring Kafka binder?
Side note:
Using BinderFactory I can get a reference to KafkaMessageChannelBinder, which has KafkaBinderConfigurationProperties; in its configuration map I can see those configurations, but how can I update the configuration at runtime such that the changes are reflected in the connections as well?
@Autowired
BinderFactory binderFactory;

// ...

public void foo() {
    KafkaMessageChannelBinder k = (KafkaMessageChannelBinder) binderFactory.getBinder(null, MessageChannel.class);
    // Using the debugger I inspected k.configurationProperties.configuration,
    // which has the SASL properties I need to update
}
The JAAS username and password can be provided using configuration, which also means that they can be overridden using the same properties at runtime.
Here is an example: https://github.com/spring-cloud/spring-cloud-stream-samples/blob/master/multi-binder-samples/kafka-multi-binder-jaas/src/main/resources/application.yml#L26
At runtime, you can override the values set in application.properties. For example, if you are running the app using java -jar, you can simply pass the property along with it: spring.cloud.stream.kafka.binder.jaas.options.username. The new value will then take effect for the duration of the application run.
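For instance, with the property name from the linked sample (the values are illustrative):

java -jar my-app.jar --spring.cloud.stream.kafka.binder.jaas.options.username=myUser --spring.cloud.stream.kafka.binder.jaas.options.password=myPassword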
I came across this problem yesterday and spent about 3-4 hours figuring out how to programmatically update the username and password in Spring Cloud Stream with the Spring Kafka binder, as one cannot/should not store passwords in Git (Spring Boot version 2.5.2).
Overriding the bean KafkaBinderConfigurationProperties works:
@Bean
@Primary
public KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties(KafkaBinderConfigurationProperties properties) {
    // usernameFromVault/passwordFromVault stand for values fetched from an external system like Vault
    String saslJaasConfig = String.format(
            "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"%s\" password=\"%s\";",
            usernameFromVault, passwordFromVault);
    properties.getConfiguration().put(SaslConfigs.SASL_JAAS_CONFIG, saslJaasConfig);
    return properties;
}

Managing Kafka Topic with spring

We are planning to use Kafka for queueing in our application. I have some experience with RabbitMQ and Spring.
With RabbitMQ and Spring, we used to manage queue creation while starting up the Spring service.
With Kafka, I'm not sure what the best way to create the topics would be. Is there a way to manage the topics with Spring?
Or should we write a separate script which helps in creating topics? Maintaining a separate script for creating topics seems a bit weird to me.
Any suggestions will be appreciated.
In Spring it is possible to create topics during the start of the application using beans:
@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    // kafkaEmbedded() comes from the docs sample; in a real app, point this at your bootstrap servers
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
            StringUtils.arrayToCommaDelimitedString(kafkaEmbedded().getBrokerAddresses()));
    return new KafkaAdmin(configs);
}

@Bean
public NewTopic topic1() {
    return new NewTopic("foo", 10, (short) 2);
}
Alternatively, you can create topics yourself by autowiring the KafkaAdmin, for instance to read the topic list from an input file or to specify advanced properties such as the number of partitions (a completion sketch follows below):
@Autowired
private KafkaAdmin admin;

// ...your implementation
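A possible completion of that stub (the topic settings are illustrative) builds a plain AdminClient from the KafkaAdmin configuration and creates the topics in bulk:

public void createTopics(List<String> topicNames) {
    try (AdminClient client = AdminClient.create(admin.getConfigurationProperties())) {
        List<NewTopic> topics = topicNames.stream()
                .map(name -> new NewTopic(name, 10, (short) 2))
                .collect(Collectors.toList());
        client.createTopics(topics); // returns a CreateTopicsResult with per-topic futures
    }
}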
Also note that the broker property auto.create.topics.enable is enabled by default (see Broker configs), so topics may also be created lazily on first use.
For more information refer to the spring-kafka docs.
To automatically create a Kafka topic in Spring Boot, only this is required:
@Bean
public NewTopic topic1() {
    return new NewTopic("foo", 10, (short) 2);
    // foo: topic name
    // 10: number of partitions
    // 2: replication factor
}
The KafkaAdmin bean is created and configured automatically by Spring Boot.
Version 2.3 of Spring Kafka introduced a TopicBuilder class, to make building topics fluent and more intuitive:
@Bean
public NewTopic topic() {
    return TopicBuilder.name("foo")
            .partitions(10)
            .replicas(2)
            .build();
}
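TopicBuilder also accepts per-topic configs, which ties back to declaring settings such as retention alongside the topic; a small sketch (topic name and retention value are illustrative):

@Bean
public NewTopic retentionTopic() {
    return TopicBuilder.name("bar")
            .partitions(10)
            .replicas(2)
            .config(TopicConfig.RETENTION_MS_CONFIG, "604800000") // 7 days
            .build();
}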
