I am new to Kafka and created a consumer using Spring Boot's @KafkaListener.
My use case is: once a message is read from a Kafka partition, I need to process it, and if any exception arises, I need to re-process the message after some time. In the exception scenario, I should not update the offset, and after a server restart, I need to process the message again.
Following is the configuration:
@Configuration
@EnableKafka
public class ReceiverConfiguration {

    @Bean
    KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(10);
        factory.getContainerProperties().setPollTimeout(3000);
        factory.getContainerProperties().setAckMode(AckMode.MANUAL);
        factory.getContainerProperties().setSyncCommits(true);
        return factory;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> propsMap = new HashMap<>();
        propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "<some broker configuration>");
        propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        propsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "6000");
        propsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        propsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, CustomDeserializer.class);
        propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, "Test-Consumer-Group");
        propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return propsMap;
    }

    @Bean
    public Listener listener() {
        System.out.println("%%%%%%%%% Initializing Listener %%%%%%%");
        return new Listener();
    }
}
Following is the listener class:
public class Listener {

    private static final Logger logger = LoggerFactory.getLogger(Listener.class);

    private CountDownLatch countDownLatch1 = new CountDownLatch(1);

    public CountDownLatch getCountDownLatch1() {
        return countDownLatch1;
    }

    @KafkaListener(topics = "topic")
    public void listen(ConsumerRecord<String, CustomObject> record, Acknowledgment ack) throws Exception {
        logger.info("******** 1 message: " + record);
        //ack.acknowledge();
    }
}
Scenario 1: While the consumer service is running and the producer sends a message, the Listener class reads the message and does not update the offset; up to this point everything looks good. But if I stop the consumer, the offset is updated in the consumer group.
Problem: The offset should not be updated in the server-stop scenario. Once my back-end processing issue is resolved and I restart the consumer service, I need to consume the message again, which is only possible when the offset was not committed. Here, however, the offset is committed, and there is no chance I can consume the message from the partition again.
Scenario 2: Assuming my consumer service is down and the producer sends a message to the topic partition, I can see the offset is not incremented and the lag is 1. The service still does not call ack.acknowledge() (the code is commented out), yet the offset is committed in the consumer group once the service starts.
Problem: Until I acknowledge the message, the offset should not be committed. The problem is noticed on server start.
Please help me resolve this issue; I was not able to find proper guidance.
I appreciate your help.
I have a simple REST API with an H2 database, so my plan is that when I run multiple instances of the same app, they will each have a different in-memory database. Now I want to synchronize these databases between them. I thought Kafka would be a good solution, so, for example, when the instance on port 8080 gets a POST, it should also be posted to all the other instances. Now my app acts as a producer and a consumer at the same time, and I do not know why only one instance receives the message.
The code:
@EnableKafka
@Configuration
public class KafkaProducerConfigForDepartment {

    @Value(value = "${kafka.bootstrapAddress}")
    private String bootstrapAddress;

    @Bean
    public ProducerFactory<String, MessageEventForDepartment> producerFactoryForDepartment() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return new DefaultKafkaProducerFactory<>(configProps);
    }

    @Bean
    public KafkaTemplate<String, MessageEventForDepartment> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactoryForDepartment());
    }
}
@Configuration
public class KafkaTopicConfig {

    @Value(value = "${kafka.bootstrapAddress}")
    private String bootstrapAddress;

    @Bean
    public ConsumerFactory<String, MessageEventForDepartment> consumerFactoryForDepartments() {
        Map<String, Object> props = new HashMap<>();
        props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "groupId");
        return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new JsonDeserializer<>(MessageEventForDepartment.class));
    }

    @Bean
    public NewTopic topic1() {
        return TopicBuilder.name("topic12")
                .partitions(10)
                .replicas(10)
                .build();
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, MessageEventForDepartment> kafkaListenerContainerFactoryForDepartments() {
        ConcurrentKafkaListenerContainerFactory<String, MessageEventForDepartment> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactoryForDepartments());
        return factory;
    }
}
@Component
@Slf4j
public class DepartmentKafkaService {

    @Autowired
    private DepartmentService departmentService;

    @KafkaListener(topics = "topic12", groupId = "groupId", containerFactory = "kafkaListenerContainerFactoryForDepartments")
    public void listenGroupFoo(MessageEventForDepartment message) {
        log.info(message.toString());
    }
}
Why is this happening? Or maybe my approach is not very good; what are your thoughts, guys?
Have you considered Kafka Streams? In my opinion, what you want to build is already provided by the internal RocksDB store and the GlobalKTable implementation in Kafka Streams.
RocksDB will behave much like the H2 database you've mentioned, and the GlobalKTable functionality allows you to broadcast the current state to all running KafkaStreams instances and read the data with ease.
Example:
Producer part:
@RestController
class MessageEventForDepartmentController {

    @Autowired
    KafkaTemplate<String, MessageEventForDepartment> kafkaTemplate;

    @PostMapping(path = "/departments", consumes = "application/json")
    @ResponseStatus(HttpStatus.ACCEPTED)
    void post(@RequestBody MessageEventForDepartment event) {
        kafkaTemplate.send("topic-a", event.getId(), event);
    }
}
Consumer part - KafkaStreams GlobalKTable
@Component
public class StreamsBuilderMessageEventForDepartment {

    @Autowired
    void buildPipeline(StreamsBuilder streamsBuilder) {
        KeyValueBytesStoreSupplier storeSupplier = Stores.inMemoryKeyValueStore("MessageEventForDepartmentGlobalStateStore");

        Materialized<String, MessageEventForDepartment, KeyValueStore<Bytes, byte[]>> materialized =
                Materialized.<String, MessageEventForDepartment>as(storeSupplier)
                        .withKeySerde(Serdes.String())
                        .withValueSerde(new JsonSerde<>(MessageEventForDepartment.class));

        GlobalKTable<String, MessageEventForDepartment> messagesCount =
                streamsBuilder.globalTable("topic-a", materialized);
    }
}
Read data from RocksDB
@RestController
class MessageEventForDepartmentReadModelController {

    @Autowired
    KafkaStreams kafkaStreams;

    @GetMapping(path = "/departments")
    MessageEventForDepartment getMessageEventForDepartment(String eventId) {
        ReadOnlyKeyValueStore<String, MessageEventForDepartment> store =
                kafkaStreams.store(StoreQueryParameters.fromNameAndType("MessageEventForDepartmentGlobalStateStore", QueryableStoreTypes.keyValueStore()));
        return store.get(eventId);
    }
}
The reason why only one instance of the application receives each message is that each instance has the same ConsumerConfig.GROUP_ID_CONFIG. Kafka's consumer protocol is such that each consumer group gets each message delivered once (obviously, there's a lot more nuance to it, but this is basically how it works).
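If the goal is for every instance to receive every message with a plain @KafkaListener, one option (a sketch, not taken from the answer above) is to give each instance its own consumer group, for example by deriving the group id from a random UUID at startup:
@Bean
public ConsumerFactory<String, MessageEventForDepartment> consumerFactoryForDepartments() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
    // Each instance joins its own group, so every instance receives every record.
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "department-sync-" + java.util.UUID.randomUUID());
    return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(),
            new JsonDeserializer<>(MessageEventForDepartment.class));
}
The trade-off is that every restart creates a new consumer group and re-reads from the configured auto.offset.reset position, which is part of why the Kafka Streams approach mentioned next can be cleaner.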
Pawel's suggestion to use KafkaStreams is a good one—a GlobalKTable would provide what you want.
Luca Pette wrote a great primer on Kafka Streams here: https://lucapette.me/writing/getting-started-with-kafka-streams/
My understanding of your question is that you are running multiple instances of the same app, each with an in-memory database, and for eventual consistency you are going with Kafka Streams.
My solutions:
I have used RabbitMQ mirroring, which solves the same problem you have; Kafka also supports mirroring, see the doc: https://cwiki.apache.org/confluence/plugins/servlet/mobile?contentId=27846330#content/view/27846330
Consider a Redis cluster or a master-slave setup for the in-memory db.
I have written a Kafka consumer using the spring-kafka library (spring-boot-starter-parent 2.3.4.RELEASE).
I have the following consumer configuration in my code:
/**
 * configuration for kafka consumers at container level
 *
 * @return
 */
@Bean
KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setConcurrency(1);
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
    return factory;
}

/**
 * default kafka consumer factory
 *
 * @return
 */
@Bean
public ConsumerFactory<String, String> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs());
}

/**
 * configuration for kafka consumer at thread level.
 *
 * @return
 */
@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, GenericDeserializer.class);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    return props;
}
Following is my listener method:
@KafkaListener(id = "client", topics = "MyTopic", clientIdPrefix = "client")
public void listen(@Payload UserNotification data, Acknowledgment ack) {
    // Business logic
    ack.acknowledge();
}
Here I am reading one message at a time, applying the business logic, and using ack.acknowledge() to commit the offset. What I have seen is that the offset commit sometimes succeeds, but many times I get org.apache.kafka.clients.consumer.CommitFailedException on the line ack.acknowledge(). I can confirm that the business logic completes in 2 seconds at most. Following is the detailed exception:
2022-06-03|04:27:04.326|INSTANCEID_IS_UNDEFINED|xyz-856495f857-8nqx7|client-0-C-1|ERROR||||o.s.k.l.KafkaMessageListenerContainer$ListenerConsumer|149|Error handler threw an exception
org.springframework.kafka.KafkaException: Seek to current after exception; nested exception is org.springframework.kafka.listener.ListenerExecutionFailedException: Listener method 'public void com.xyz.listen(java.lang.String,org.springframework.kafka.support.Acknowledgment)' threw exception; nested exception is org.apache.kafka.clients.consumer.CommitFailedException: Offset commit cannot be completed since the consumer is not part of an active group for auto partition assignment; it is likely that the consumer was kicked out of the group.; nested exception is org.apache.kafka.clients.consumer.CommitFailedException: Offset commit cannot be completed since the consumer is not part of an active group for auto partition assignment; it is likely that the consumer was kicked out of the group.
at org.springframework.kafka.listener.SeekUtils.seekOrRecover(SeekUtils.java:157)
at org.springframework.kafka.listener.SeekToCurrentErrorHandler.handle(SeekToCurrentErrorHandler.java:113)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeErrorHandler(KafkaMessageListenerContainer.java:2012)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:1911)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:1838)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:1735)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:1465)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1128)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1031)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Can someone please help me understand why this is happening? The default poll interval is 5 minutes, and the group coordinator should not kick the consumer out of the group if processing takes just 2 seconds.
You must be able to process max.poll.records (default 500) in max.poll.interval.ms (default 300000 - 5 mins).
If it takes 2 seconds per record, it will take up to 16.6667 minutes to process the batch, and you will be kicked out of the group.
Reduce max.poll.records and/or increase max.poll.interval.ms.
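For example, both settings can be adjusted in the consumer configuration shown above; the concrete values below are illustrative only:
@Bean
public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, GenericDeserializer.class);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    // At ~2 seconds per record, 100 records fit well inside a 10-minute poll interval.
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 600000);
    return props;
}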
I'm using spring-kafka 2.8.0 and I'm trying to implement non-blocking retries for a batch Kafka consumer. Here are my config and consumer:
@Configuration
public class KafkaConfig {

    @Bean
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, GenericRecord>>
            batchListenerFactory(ConsumerFactory<Object, Object> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, GenericRecord> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.setBatchListener(true);
        return factory;
    }
}
@Component
public class MyConsumer {

    @KafkaListener(
            topics = "my-topic",
            containerFactory = "batchListenerFactory"
    )
    @RetryableTopic(
            backoff = @Backoff(delay = 1000, multiplier = 2.0),
            attempts = "4",
            topicSuffixingStrategy = SUFFIX_WITH_INDEX_VALUE,
            autoCreateTopics = "false"
    )
    public void consume(List<ConsumerRecord<String, GenericRecord>> messages) {
        // do some stuff
    }
}
But on startup I'm getting the following exception:
java.lang.IllegalArgumentException: The provided class BatchMessagingMessageListenerAdapter is not assignable from AcknowledgingConsumerAwareMessageListener
My questions are:
Is there any way to combine a batch consumer with @RetryableTopic?
Is there another way to implement non-blocking retries for a batch consumer? Is it possible to use RetryTemplate for this purpose?
@RetryableTopic is not supported with batch listeners.
The RecoveringBatchErrorHandler (DefaultErrorHandler for 2.8 and later) supports sending a failed record within a batch to a dead letter topic, with the help of the listener throwing a BatchListenerFailedException indicating which record failed.
You would then have to implement your own listener on that topic.
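A rough sketch of that approach follows; the error handler is wired into the batch container factory, process(...) and the topic names are placeholders, and the retries here are blocking, unlike @RetryableTopic:
@Bean
public DefaultErrorHandler batchErrorHandler(KafkaTemplate<String, GenericRecord> template) {
    // After the back-off is exhausted, the failed record is published to <topic>.DLT by the recoverer.
    return new DefaultErrorHandler(new DeadLetterPublishingRecoverer(template),
            new FixedBackOff(1000L, 3L));
}

// wired into the container factory with factory.setCommonErrorHandler(batchErrorHandler)

@KafkaListener(topics = "my-topic", containerFactory = "batchListenerFactory")
public void consume(List<ConsumerRecord<String, GenericRecord>> messages) {
    for (int i = 0; i < messages.size(); i++) {
        try {
            process(messages.get(i)); // placeholder for the real business logic
        } catch (Exception e) {
            // Identifies which record in the batch failed, so the error handler can commit
            // the earlier records and retry or recover only from the failed one.
            throw new BatchListenerFailedException("processing failed", e, i);
        }
    }
}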
I'm trying to commit a message just after reading it from the topic. I've followed this link (https://www.confluent.io/blog/apache-kafka-spring-boot-application) to create a Kafka consumer with Spring. Normally it works perfectly: the consumer gets a message and waits until another one enters the queue. But the problem is that when processing a message takes a long time (around 10 minutes), Kafka thinks the message has not been consumed (committed) and the consumer reads it again and again. I have to say that when my processing time is less than 5 minutes it works well, but when it lasts longer it doesn't commit the message.
I've looked for some answers, but they don't help me because I'm not using the same source code (and of course a different structure). I've tried calling asynchronous methods and also committing the message asynchronously, but I've failed.
Some of the sources are:
Spring Boot Kafka: Commit cannot be completed since the group has already rebalanced
https://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0-9-consumer-client/
https://dzone.com/articles/kafka-clients-at-most-once-at-least-once-exactly-o
Kafka 0.10 Java consumer not reading message from topic
https://github.com/confluentinc/confluent-kafka-dotnet/issues/470
The main class is here:
@SpringBootApplication
@EnableAsync
public class SpringBootKafkaApp {

    public static void main(String[] args) {
        SpringApplication.run(SpringBootKafkaApp.class, args);
    }
}
The consumer class (where I need to commit my message)
@Service
public class Consumer {

    @Autowired
    AppPropert prop;

    Consumer cons;

    @KafkaListener(topics = "${app.topic.pro}", groupId = "group_id")
    public void consume(String message) throws IOException {
        /* HERE I MUST CONSUME THE MESSAGE AND COMMIT IT */
        Properties props = prop.startProp(); // just getting my properties from my config-file
        ControllerPRO pro = new ControllerPRO();
        List<Future<String>> async = new ArrayList<Future<String>>(); // call this method asynchronously, doesn't help me
        try {
            CompletableFuture<String> ret = pro.processLaunch(message, props); // here I call the process method
            /* This works fine when the processLaunch method takes less than 5 minutes;
               if it takes longer the consumer will get the same message from the topic and start again with this operation */
        } catch (Exception e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        System.out.println("End of consumer method ");
    }
}
How can I commit the message just after I read it from the queue?
I want to be sure that when I receive the message, I commit it immediately. Right now the message is committed only when the method finishes executing, just after the System.out.println. So can anybody tell me how to do this?
----- update -------
Sorry for the late reply, but as @GirishB suggested, I've been looking at that configuration; however, I don't see where I can define the topic I want to read/listen to from my configuration file (applications.yml). All the examples that I see use a structure similar to this (http://tutorials.jenkov.com/java-util-concurrent/blockingqueue.html). Is there any option so that I can read a topic that is declared on another server? Using something similar to this: @KafkaListener(topics = "${app.topic.pro}", groupId = "group_id")
=========== SOLUTION 1 ========================================
I followed @victor gallet's advice and included the declaration of the consumer properties in order to accommodate the "Acknowledgment" object in the consume method. I've also followed this link (https://www.programcreek.com/java-api-examples/?code=SpringOnePlatform2016/grussell-spring-kafka/grussell-spring-kafka-master/s1p-kafka/src/main/java/org/s1p/CommonConfiguration.java) to get all the methods I've used to declare and set all the properties (consumerProperties, consumerFactory, kafkaListenerContainerFactory). The only problem I found is the "new SeekToCurrentErrorHandler()" declaration, because I'm getting an error and for the moment I'm not able to resolve it (it would be great if someone could explain it to me).
@Service
public class Consumer {

    @Autowired
    AppPropert prop;

    Consumer cons;

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.getContainerProperties().setAckOnError(false);
        factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
        //factory.setErrorHandler(new SeekToCurrentErrorHandler()); // getting an error here despite having loaded the library
        return factory;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerProperties());
    }

    @Bean
    public Map<String, Object> consumerProperties() {
        Map<String, Object> props = new HashMap<>();
        Properties propsManu = prop.startProperties(); // here I'm getting my properties file, where I retrieve the configuration from a remote server (you have to trust that this method works)
        //props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, this.configProperties.getBrokerAddress());
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, propsManu.getProperty("bootstrap-servers"));
        //props.put(ConsumerConfig.GROUP_ID_CONFIG, "s1pGroup");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, propsManu.getProperty("group-id"));
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 15000);
        //props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, propsManu.getProperty("key-deserializer"));
        //props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, propsManu.getProperty("value-deserializer"));
        return props;
    }

    @KafkaListener(topics = "${app.topic.pro}", groupId = "group_id")
    public void consume(String message, Acknowledgment acknowledgment) throws IOException {
        /* HERE I MUST CONSUME THE MESSAGE AND COMMIT IT */
        acknowledgment.acknowledge(); // commit immediately
        Properties props = prop.startProp(); // just getting my properties from my config-file
        ControllerPRO pro = new ControllerPRO();
        List<Future<String>> async = new ArrayList<Future<String>>(); // call this method asynchronously, doesn't help me
        try {
            CompletableFuture<String> ret = pro.processLaunch(message, props); // here I call the process method
            /* This works fine when the processLaunch method takes less than 5 minutes;
               if it takes longer the consumer will get the same message from the topic and start again with this operation */
        } catch (Exception e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        System.out.println("End of consumer method ");
    }
}
You have to modify your consumer configuration and set the property enable.auto.commit to false:
properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
Then, you have to modify the Spring Kafka listener factory and set the ack-mode to MANUAL_IMMEDIATE. Here's an example of a ConcurrentKafkaListenerContainerFactory:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.getContainerProperties().setAckOnError(false);
    factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
    factory.setErrorHandler(new SeekToCurrentErrorHandler());
    return factory;
}
As explained in the documentation, MANUAL_IMMEDIATE means: commit the offset immediately when the Acknowledgment.acknowledge() method is called by the listener.
You can find all committing methods here.
Then, in your listener code, you can commit the offset manually by adding an Acknowledgment object, for example:
@KafkaListener(topics = "${app.topic.pro}", groupId = "group_id")
public void consume(String message, Acknowledgment acknowledgment) {
    // commit immediately
    acknowledgment.acknowledge();
}
You may use a java.util.concurrent.BlockingQueue to push the message onto a queue as you consume it and commit the Kafka offset. Then, using another thread, take the message from the blocking queue and process it. This way you don't have to wait until processing completes.
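A minimal sketch of that hand-off, assuming a single worker thread and that processMessage(...) stands in for the long-running business logic (neither is from the original question):
@Service
public class Consumer {

    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public Consumer() {
        // Worker thread drains the queue and does the slow processing,
        // independently of the Kafka consumer thread.
        Thread worker = new Thread(() -> {
            while (true) {
                try {
                    processMessage(queue.take()); // hypothetical long-running logic
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    @KafkaListener(topics = "${app.topic.pro}", groupId = "group_id")
    public void consume(String message, Acknowledgment acknowledgment) {
        queue.offer(message);          // hand the message off to the worker
        acknowledgment.acknowledge();  // commit the offset immediately
    }

    private void processMessage(String message) {
        // ... slow work ...
    }
}
Note the trade-off: because the offset is committed before processing, any messages still sitting in the queue are lost if the application crashes.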
properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
After setting the above property, if you want to process in batches, you can use the following configuration:
factory.getContainerProperties().setAckMode(AckMode.MANUAL);
// you can set either MANUAL or MANUAL_IMMEDIATE because KafkaMessageListenerContainer
// invokes ConsumerBatchAcknowledgment for any kind of manual ack mode
factory.getContainerProperties().setAckOnError(true);
// specifying a batch error handler because I have enabled listening to records in batches
factory.setBatchErrorHandler(new SeekToCurrentBatchErrorHandler());
factory.setBatchListener(true);
factory.getContainerProperties().setSyncCommits(false);
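With that factory, the matching batch listener receives the records as a list plus an Acknowledgment; a minimal sketch (topic, group id and payload type are placeholders):
@KafkaListener(topics = "${app.topic.pro}", groupId = "group_id")
public void consumeBatch(List<String> messages, Acknowledgment acknowledgment) {
    // Acknowledge the whole batch up front, then process the records.
    acknowledgment.acknowledge();
    messages.forEach(System.out::println);
}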
I'm building a Kafka Streams application with spring-kafka to group records by key and apply some business logic. I'm following the configuration stated in the spring-kafka streams documentation, but the problem is that when I want to retrieve a value from the local store, I get the following error:
org.apache.kafka.streams.errors.InvalidStateStoreException: The state store, user-data-response-count, may have migrated to another instance.
at org.apache.kafka.streams.state.internals.QueryableStoreProvider.getStore(QueryableStoreProvider.java:60)
at org.apache.kafka.streams.KafkaStreams.store(KafkaStreams.java:1053)
at com.umantis.management.service.UserDataManagementService.broadcastUserDataRequest(UserDataManagementService.java:121)
Here is my KafkaStreamsConfiguration:
@Configuration
@EnableConfigurationProperties(EventsKafkaProperties.class)
@EnableKafka
@EnableKafkaStreams
public class KafkaConfiguration {

    @Value("${app.kafka.streams.application-id}")
    private String applicationId;

    // This contains both the bootstrap servers and the schema registry url
    @Autowired
    private EventsKafkaProperties eventsKafkaProperties;

    @Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
    public StreamsConfig streamsConfig() {
        Map<String, Object> props = new HashMap<>();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId);
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, this.eventsKafkaProperties.getBrokers());
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, SpecificAvroSerde.class);
        props.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, this.eventsKafkaProperties.getSchemaRegistryUrl());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return new StreamsConfig(props);
    }

    @Bean
    public KGroupedStream<String, UserDataResponse> responseKStream(StreamsBuilder streamsBuilder, TopicUtils topicUtils) {
        final Map<String, String> serdeConfig = Collections.singletonMap("schema.registry.url", this.eventsKafkaProperties.getSchemaRegistryUrl());
        final Serde<UserDataResponse> valueSpecificAvroSerde = new SpecificAvroSerde<>();
        valueSpecificAvroSerde.configure(serdeConfig, false);
        return streamsBuilder
                .stream("myTopic", Consumed.with(Serdes.String(), valueSpecificAvroSerde))
                .groupByKey();
    }
}
And here is my service code failing on getKafkaStreams().store:
@Slf4j
@Service
public class UserDataManagementService {

    private static final String RESPONSE_COUNT_STORE = "user-data-response-count";

    @Autowired
    private StreamsBuilderFactoryBean streamsBuilderFactory;

    @Autowired
    private KGroupedStream<String, UserDataResponse> responseGroupStream;

    public UserDataResponse broadcastUserDataRequest() {
        this.responseGroupStream.count(Materialized.as(RESPONSE_COUNT_STORE));

        if (!this.streamsBuilderFactory.isRunning()) {
            throw new KafkaStoreNotAvailableException();
        }

        // here we should have a single running kafka instance
        ReadOnlyKeyValueStore<String, Long> countStore =
                this.streamsBuilderFactory.getKafkaStreams().store(RESPONSE_COUNT_STORE, QueryableStoreTypes.keyValueStore());
        ...
    }
Context: I'm running the app as a single instance in a Spring Boot test, and I'm ensuring the Kafka instance is in a running state. I've searched the Apache documentation on this issue, but my case does not appear to match.
Can anyone point out what I'm doing wrong and suggest a possible solution?
I'm quite new to Kafka Streams, so any help would be highly appreciated.
OK, I just saw that I was checking whether the streams factory was running, but I wasn't checking whether the KafkaStreams instance itself was actually running.
Polling on streamsBuilderFactory.getKafkaStreams().state() until it reaches RUNNING solved the issue.
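A minimal sketch of that wait, assuming a simple sleep loop before querying the store is acceptable:
KafkaStreams kafkaStreams = this.streamsBuilderFactory.getKafkaStreams();

// Block until the Streams instance is actually RUNNING before touching the state store.
while (kafkaStreams.state() != KafkaStreams.State.RUNNING) {
    try {
        Thread.sleep(100);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        break;
    }
}

ReadOnlyKeyValueStore<String, Long> countStore =
        kafkaStreams.store(RESPONSE_COUNT_STORE, QueryableStoreTypes.keyValueStore());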