Managing Kafka Topics with Spring - Java

We are planning to use Kafka for queueing in our application. I have some experience with RabbitMQ and Spring.
With RabbitMQ and Spring, we used to manage queue creation while starting up the Spring service.
With Kafka, I'm not sure what the best way to create the topics would be. Is there a way to manage the topics with Spring?
Or should we write a separate script that creates the topics? Maintaining a separate script for creating topics seems a bit odd to me.
Any suggestions will be appreciated.

In Spring it is possible to create topics during application startup using beans:
@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
            StringUtils.arrayToCommaDelimitedString(kafkaEmbedded().getBrokerAddresses()));
    return new KafkaAdmin(configs);
}

@Bean
public NewTopic topic1() {
    return new NewTopic("foo", 10, (short) 2);
}
Alternatively, you can write your own topic-creation logic by autowiring the KafkaAdmin, for instance to read the topic list from an input file or to specify advanced properties such as the number of partitions:
@Autowired
private KafkaAdmin admin;
//...your implementation
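For example, here is a minimal sketch of such an implementation (the createTopics helper and the "orders" topic are mine, not part of the original answer): it reuses the KafkaAdmin's configuration to build a plain AdminClient and creates whatever topics you pass in.
@Autowired
private KafkaAdmin admin;

// Sketch only. AdminClient and NewTopic are from org.apache.kafka.clients.admin.
public void createTopics(List<NewTopic> topics) throws Exception {
    // Reuse the KafkaAdmin's configuration (bootstrap servers etc.) for a plain AdminClient
    try (AdminClient client = AdminClient.create(admin.getConfigurationProperties())) {
        client.createTopics(topics).all().get(); // block until the broker has created them
    }
}

// Usage (illustrative): createTopics(Collections.singletonList(new NewTopic("orders", 10, (short) 2)));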
Also note that, since Kafka 1.1.0, auto.create.topics.enable is enabled by default (see Broker configs).
For more information, refer to the spring-kafka docs.

To automatically create a Kafka topic in Spring Boot, only this is required:
@Bean
public NewTopic topic1() {
    // "foo" = topic name, 10 = number of partitions, (short) 2 = replication factor
    return new NewTopic("foo", 10, (short) 2);
}
The KafkaAdmin is created and configured automatically by Spring Boot.
Version 2.3 of Spring Kafka introduced a TopicBuilder class, which makes building topics more fluent and intuitive:
@Bean
public NewTopic topic() {
    return TopicBuilder.name("foo")
            .partitions(10)
            .replicas(2)
            .build();
}
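TopicBuilder also lets you set per-topic configs fluently. A small sketch (the "bar" topic and the retention value are purely illustrative, not from the original answer; TopicConfig comes from org.apache.kafka.common.config):
@Bean
public NewTopic topic2() {
    return TopicBuilder.name("bar")
            .partitions(10)
            .replicas(2)
            .config(TopicConfig.RETENTION_MS_CONFIG, "604800000") // retention.ms = 7 days
            .build();
}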

Related

Spring AMQP: remove old bindings and queues

I’m using Spring AMQP and Spring Boot @Configuration and @Bean annotations in order to create all required queues, exchanges and bindings.
@Bean
public Queue queue() {
    return new Queue("my_old_queue", true, false, false);
}

@Bean
public Exchange exchange() {
    return new DirectExchange("MY_OLD_EXCHANGE");
}

@Bean
public Binding binding() {
    return BindingBuilder.bind(queue())
            .to(exchange())
            .with("old_binding")
            .noargs();
}
But I've run into a problem when upgrading my topology:
I want to add a new queue/binding/exchange,
and remove an old queue/binding/exchange (even if it was a durable entity).
Does an annotation exist for removing or unbinding (like @Unbind)?
I've seen the example where RabbitManagementTemplate was suggested, but that's a completely different way of configuring things - I want to keep everything in a single @Configuration class and use annotations or config beans only (is that possible?).
Is there a common pattern for creating/removing and updating Rabbit topology (maybe I missed something)?
You cannot delete entities with annotations or configuration; use the RabbitAdmin.delete*() methods to remove them, as in that answer - the management template was used there to list the bindings, and the RabbitAdmin (amqpAdmin) does the removals.
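A rough sketch of what that can look like (the ApplicationRunner bean is my own illustration; the entity names are taken from the question) is to run the removals once at startup:
@Bean
public ApplicationRunner removeOldTopology(AmqpAdmin amqpAdmin) {
    return args -> {
        // Remove the old binding first, then the queue and exchange themselves.
        amqpAdmin.removeBinding(BindingBuilder.bind(new Queue("my_old_queue"))
                .to(new DirectExchange("MY_OLD_EXCHANGE"))
                .with("old_binding"));
        amqpAdmin.deleteQueue("my_old_queue");
        amqpAdmin.deleteExchange("MY_OLD_EXCHANGE");
    };
}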

How to update the Kafka config of an already existing topic

For creation I am using
@Bean
public NewTopic userTopic(final KafkaTopicConfiguration userConfig) {
    return TopicBuilder.name(userConfig.getName())
            .partitions(userConfig.getPartitions())
            .replicas(userConfig.getReplicas())
            .configs(userConfig.getConfigs())
            .build();
}
Initially, I thought that restarting the app with new config properties would update the topic configuration, as it does for partitions, but neither the configs nor the replicas are updated.
How can the config be updated during app start? Is there a specific bean that can be defined to do that automatically?
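One possible approach (a sketch only, not from the original thread) is to alter the existing topic's configs at startup via the Kafka AdminClient; the "user-topic" name and the retention value below are illustrative:
@Bean
public ApplicationRunner alterTopicConfigs(KafkaAdmin kafkaAdmin) {
    return args -> {
        try (AdminClient client = AdminClient.create(kafkaAdmin.getConfigurationProperties())) {
            // ConfigResource is from org.apache.kafka.common.config, the rest from org.apache.kafka.clients.admin
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "user-topic");
            AlterConfigOp setRetention = new AlterConfigOp(
                    new ConfigEntry("retention.ms", "604800000"), AlterConfigOp.OpType.SET);
            client.incrementalAlterConfigs(Map.of(topic, List.of(setRetention))).all().get();
        }
    };
}
Note that the replication factor cannot be changed this way; changing replicas requires a partition reassignment on the broker side.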

Unable to synchronise Kafka and MQ transactions using ChainedKafkaTransactionManager

We have a Spring Boot application which consumes messages from IBM MQ, does some transformation, and publishes the result to a Kafka topic. We use https://spring.io/projects/spring-kafka for this. I am aware that Kafka does not support XA; however, in the documentation I found some guidance about using a ChainedKafkaTransactionManager to chain multiple transaction managers and synchronise the transactions. The same documentation also provides an example of how to synchronise Kafka and a database while reading messages from Kafka and storing them in the database.
I followed the same example in my use case and chained the JmsTransactionManager with the KafkaTransactionManager under the umbrella of a ChainedKafkaTransactionManager. The bean definitions follow below:
@Bean({"mqListenerContainerFactory"})
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(this.connectionFactory());
    factory.setTransactionManager(this.jmsTransactionManager());
    return factory;
}

@Bean
public JmsTransactionManager jmsTransactionManager() {
    return new JmsTransactionManager(this.connectionFactory());
}

@Bean("chainedKafkaTransactionManager")
public ChainedKafkaTransactionManager<?, ?> chainedKafkaTransactionManager(
        JmsTransactionManager jmsTransactionManager, KafkaTransactionManager kafkaTransactionManager) {
    return new ChainedKafkaTransactionManager<>(kafkaTransactionManager, jmsTransactionManager);
}
@Transactional(transactionManager = "chainedKafkaTransactionManager", rollbackFor = Throwable.class)
@JmsListener(destination = "${myApp.sourceQueue}", containerFactory = "mqListenerContainerFactory")
public void receiveMessage(@Headers Map<String, Object> jmsHeaders, String message) {
    // Processing the message here, then publishing it to Kafka using KafkaTemplate
    kafkaTemplate.send(sourceTopic, transformedMessage);
    // Then throw an exception just to test the transaction behaviour
    throw new RuntimeException("Not good Pal!");
}
When running the application, the message keeps getting rolled back onto the MQ queue, but messages keep growing in the Kafka topic, which tells me the kafkaTemplate interaction does not get rolled back.
If I understand the documentation correctly, this should not be the case: "If a transaction is active, any KafkaTemplate operations performed within the scope of the transaction use the transaction’s Producer."
In our application.yaml we configured the Kafka producer to use transactions by setting spring.kafka.producer.transaction-id-prefix.
The question is: what am I missing here, and how should I fix it?
Thank you in advance for your inputs.
Consumers can see uncommitted records by default; set the isolation.level consumer property to read_committed to avoid receiving records from rolled-back transactions.
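For example, with a manually defined consumer factory this is just one extra property (the bootstrap server and group id here are placeholders; with Spring Boot you can set the same thing through the spring.kafka.consumer properties instead):
@Bean
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // placeholder
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");   // skip records from uncommitted/rolled-back transactions
    return new DefaultKafkaConsumerFactory<>(props);
}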

Spring AMQP cannot create bean to return a List<Binding>

I am trying to use Spring AMQP, version 2.1.2.RELEASE, to create multiple bindings to a topic exchange.
I found this question: How to setup multiple topics in a RabbitMQ Java config class using Spring Framework?
It seemed to have the answer. I also found the documentation, which provides the same solution.
However, the Bindings are not created when I return a List<Binding> from my bean; if I return a single Binding, it does work. I cannot add a comment to that question due to lack of reputation.
Here is my code:
@Bean
public TopicExchange topicExchange() {
    return new TopicExchange("topicExchange");
}

@Bean
public Queue testQueue() {
    return new Queue("testQueue");
}

@Bean
List<Binding> multipleBindings() {
    return Arrays.asList(
            BindingBuilder.bind(testQueue()).to(topicExchange()).with("t1"),
            BindingBuilder.bind(testQueue()).to(topicExchange()).with("t2"));
}

@Bean
Binding singleBinding() {
    return BindingBuilder.bind(testQueue()).to(topicExchange()).with("t3");
}
With this code, I get the "t3" topic binding, but do not see "t1" or "t2" when I look at the RabbitMQ management console.
Please help, as this code looks very simple and follows the documentation. What am I missing?
Thank you.
You are referring to very old documentation. For the version you are using, there is a Declarables container to use instead of a List: https://docs.spring.io/spring-amqp/docs/2.1.4.RELEASE/reference/#collection-declaration
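Applied to the code from the question, that looks roughly like this (a sketch based on the linked documentation section):
@Bean
public Declarables multipleBindings() {
    // A Declarables container is declared by the admin, unlike a plain List<Binding>
    return new Declarables(
            BindingBuilder.bind(testQueue()).to(topicExchange()).with("t1"),
            BindingBuilder.bind(testQueue()).to(topicExchange()).with("t2"));
}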

Configuring Spring Data Redis with Lettuce for ElastiCache Master/Slave

I have an ElastiCache setup with one master and two slaves. I am still not sure how to pass in a list of master/slave RedisURIs to construct a StatefulRedisMasterSlaveConnection for LettuceConnectionFactory. I only see support for standaloneConfiguration with a single host and port.
LettuceClientConfiguration configuration = LettuceTestClientConfiguration.builder().readFrom(ReadFrom.SLAVE).build();
LettuceConnectionFactory factory = new LettuceConnectionFactory(SettingsUtils.standaloneConfiguration(),configuration);
I know there is a similar question: Configuring Spring Data Redis with Lettuce for Redis master/slave.
But I don't think it works for an ElastiCache Master/Slave setup, as the above code would currently try to use MasterSlaveTopologyProvider to discover slave IPs. However, the slave IP addresses are not reachable. So what's the right way to configure Spring Data Redis to make it compatible with a Master/Slave ElastiCache? It seems to me LettuceConnectionFactory needs to take in a list of endpoints and use StaticMasterSlaveTopologyProvider in order to work.
There have been further improvements in AWS and Lettuce that make it easier to support Master/Slave.
One recent improvement in AWS is the launch of reader endpoints for Redis, which distribute load among replicas: Amazon ElastiCache launches reader endpoints for Redis.
Hence the best way to connect to Redis using Spring Data Redis is to use the primary endpoint (master) and the reader endpoint (for replicas) of the Redis cluster.
You can get both of them from the AWS console. Here is a sample code:
@Bean
public LettuceConnectionFactory redisConnectionFactory() {
    LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
            .readFrom(ReadFrom.SLAVE_PREFERRED)
            .build();
    RedisStaticMasterReplicaConfiguration redisStaticMasterReplicaConfiguration =
            new RedisStaticMasterReplicaConfiguration(REDIS_CLUSTER_PRIMARY_ENDPOINT, redisPort);
    redisStaticMasterReplicaConfiguration.addNode(REDIS_CLUSTER_READER_ENDPOINT, redisPort);
    redisStaticMasterReplicaConfiguration.setPassword(redisPassword);
    return new LettuceConnectionFactory(redisStaticMasterReplicaConfiguration, clientConfig);
}
Right now, static Master/Slave with provided endpoints is not supported by Spring Data Redis. I filed a ticket to add support for that.
You can implement this functionality yourself by subclassing LettuceConnectionFactory and creating your own configuration and connection provider.
You would start with something like:
public static class MyLettuceConnectionFactory extends LettuceConnectionFactory {

    private final MyMasterSlaveConfiguration configuration;

    public MyLettuceConnectionFactory(MyMasterSlaveConfiguration standaloneConfig,
            LettuceClientConfiguration clientConfig) {
        super(standaloneConfig, clientConfig);
        this.configuration = standaloneConfig;
    }

    @Override
    protected LettuceConnectionProvider doCreateConnectionProvider(AbstractRedisClient client, RedisCodec<?, ?> codec) {
        return new ElasticacheConnectionProvider((RedisClient) client, codec,
                getClientConfiguration().getReadFrom(), this.configuration);
    }
}

static class MyMasterSlaveConfiguration extends RedisStandaloneConfiguration {

    private final List<RedisURI> endpoints;

    public MyMasterSlaveConfiguration(List<RedisURI> endpoints) {
        this.endpoints = endpoints;
    }

    public List<RedisURI> getEndpoints() {
        return endpoints;
    }
}
You can find all the code in this gist; I am not posting it all here as it would be a wall of code.