Spring Kafka Rejoin After Kick - java

Our Spring Kafka consumer group runs on multiple instances, each instance with about 10 concurrent consumers.
The problem is that sometimes, probably due to stuck or long-running processing, some of the Kafka consumers get kicked out of the group; the consumer group therefore gets smaller and smaller, until it becomes practically inoperative.
The primary symptom is, of course, frequent rebalances and a shrinking consumer group. (Notice the very long max.poll.interval.ms, as we lost hope at some point...)
To battle rebalancing, we've switched to the CooperativeStickyAssignor and made sure our consumers have a static group.instance.id. We're running in a StatefulSet on K8s (GKE Autopilot).
The question is: how do we get the consumers to rejoin the group, or otherwise compensate for being kicked out of it?
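One direction we're exploring (a hedged sketch, not a confirmed fix): have the container's monitoring publish a NonResponsiveConsumerEvent when a consumer thread stops polling, and restart the container from an event listener. Whether this event fires in our exact failure mode is an assumption on our part:

import org.springframework.context.event.EventListener;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.event.NonResponsiveConsumerEvent;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.stereotype.Component;

// Hedged sketch: restart the listener container when the framework reports a
// non-responsive consumer. Requires monitorInterval/noPollThreshold on the
// container properties (see the container factory below) so the event fires.
@Component
public class ConsumerWatchdog {

    private final KafkaListenerEndpointRegistry registry;

    public ConsumerWatchdog(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    @EventListener
    public void onNonResponsiveConsumer(NonResponsiveConsumerEvent event) {
        MessageListenerContainer container =
                registry.getListenerContainer("externalResourceIndexer");
        if (container != null) {
            container.stop();   // safe to call even if the container already stopped
            container.start();  // rejoin the group with the same group.instance.id
        }
    }
}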
Here's our configuration:
public Map<String, Object> getConsumerProps(String groupId, Class<?> keyDeserializerClass, Class<?> valueDeserializerClass) {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializerClass);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializerClass);
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaAddress);
    props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 30 * 60 * 1000);
    props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 5000);
    props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 29 * 60 * 1000);
    props.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, 30 * 60 * 1000);
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 5);
    props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 3 * 60 * 1000);
    props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
            org.apache.kafka.clients.consumer.CooperativeStickyAssignor.class.getName());
    props.put(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, serverManagementService.getPodId());
    LOGGER.info(String.format("Pod id is %s", serverManagementService.getPodId()));
    return props;
}
Consumer Factory
@Bean
public ConsumerFactory<String, String> externalResourceIndexerKafkaConsumerFactory() {
    Map<String, Object> props = kafkaTopicConfig.getConsumerProps("externalResourceIndexer", StringDeserializer.class, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> externalResourceIndexerKafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(externalResourceIndexerKafkaConsumerFactory());
    factory.setConcurrency(10);
    return factory;
}
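If we go with the watchdog sketch above, the container also needs monitoring enabled so the non-responsive event is actually published; a sketch with arbitrarily chosen values:

// Hedged sketch: check the consumer thread every 10 seconds and publish a
// NonResponsiveConsumerEvent if poll() hasn't returned within
// noPollThreshold * pollTimeout. Both values here are arbitrary.
factory.getContainerProperties().setMonitorInterval(10);
factory.getContainerProperties().setNoPollThreshold(3.0f);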
The KafkaListener
@Override
@Transactional
@KafkaListener(
        id = "externalResourceIndexer",
        autoStartup = "false",
        topics = "externalResourceIndexer",
        containerFactory = "externalResourceIndexerKafkaListenerContainerFactory")
public void run(String payload) throws JsonProcessingException {
    // ...
}
Here's how we start the KafkaListener (yes, it's a bit of a hack)
// schedule to start kafka
@Override
@Scheduled(initialDelay = 1000 * 10, fixedDelay = Long.MAX_VALUE)
public synchronized void loadKafka() {
    kafkaListenerEndpointRegistry.getListenerContainer("externalResourceIndexer").start();
}
}
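A less hacky alternative for the deferred start (a sketch, assuming that delaying startup until the application is ready is all the hack is for):

import org.springframework.boot.context.event.ApplicationReadyEvent;
import org.springframework.context.event.EventListener;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.stereotype.Component;

// Hedged sketch: start the autoStartup=false container once the application
// context is fully up, instead of via a one-shot @Scheduled method.
@Component
public class KafkaStarter {

    private final KafkaListenerEndpointRegistry registry;

    public KafkaStarter(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    @EventListener(ApplicationReadyEvent.class)
    public void startListeners() {
        registry.getListenerContainer("externalResourceIndexer").start();
    }
}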

Related

Kafka is failing when the consumer starts

I use Kafka on Windows: I run ZooKeeper first through the console, then Kafka. Everything starts perfectly, and the producer runs fine as well. But as soon as I start the consumer, logs start pouring into the console and I get the "Map failed" error. I tried to change the allocated memory in the Kafka server start file.
At the moment my kafka-server-start.sh file looks like this:
export KAFKA_HEAP_OPTS="-Xmx1G -Xms512M"
And if I delete the KafkaListener, everything starts up perfectly as well, but the interaction between the topics is important to me.
Kafka version: 2.13-3.2.1
Consumer property:
@Value("${spring.kafka.bootstrap-servers}")
private String bootstrapService;

public Map<String, Object> getDefaultConsumerConfig() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapService);
    props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
    return props;
}
Consumer config:
@Bean
public ConsumerFactory<String, ConfigurationEventDto> configurationEventDtoConsumerFactory() {
    return new DefaultKafkaConsumerFactory<>(kafkaService.getDefaultConsumerConfig(),
            new JsonDeserializer<>(),
            new JsonDeserializer<>(ConfigurationEventDto.class, false));
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, ConfigurationEventDto> configurationEventDtoKafkaFactory(
        ConsumerFactory<String, ConfigurationEventDto> configurationEventDtoConsumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, ConfigurationEventDto> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(configurationEventDtoConsumerFactory);
    return factory;
}
Kafka listener:
@KafkaListener(topics = "activity-record-configuration-event",
        groupId = "activity-record-configuration-event",
        containerFactory = "configurationEventDtoKafkaFactory")
void listen(ConfigurationEventDto configurationEventDto) {
    log.info("new configurationEventDto received");
    service.save(configurationEventDto);
}
And when I start my consumer microservice, the Kafka logs are:
java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:938)
at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:124)
at kafka.log.OffsetIndex.<init>(OffsetIndex.scala:54)
at kafka.log.LazyIndex$.$anonfun$forOffset$1(LazyIndex.scala:106)
at kafka.log.LazyIndex.$anonfun$get$1(LazyIndex.scala:63)
at kafka.log.LazyIndex.get(LazyIndex.scala:60)
at kafka.log.LogSegment.offsetIndex(LogSegment.scala:64)
at kafka.log.LogSegment.readNextOffset(LogSegment.scala:453)
at kafka.log.LogLoader.$anonfun$recoverLog$6(LogLoader.scala:457)
at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.scala:17)
at scala.Option.getOrElse(Option.scala:201)
at kafka.log.LogLoader.recoverLog(LogLoader.scala:457)
at kafka.log.LogLoader.load(LogLoader.scala:162)
at kafka.log.UnifiedLog$.apply(UnifiedLog.scala:1810)
at kafka.log.LogManager.$anonfun$getOrCreateLog$1(LogManager.scala:901)
at scala.Option.getOrElse(Option.scala:201)
at kafka.log.LogManager.getOrCreateLog(LogManager.scala:852)
at kafka.cluster.Partition.createLog(Partition.scala:372)
at kafka.cluster.Partition.maybeCreate$1(Partition.scala:347)
at kafka.cluster.Partition.createLogIfNotExists(Partition.scala:354)
at kafka.cluster.Partition.$anonfun$makeLeader$1(Partition.scala:566)
at kafka.cluster.Partition.makeLeader(Partition.scala:543)
at kafka.server.ReplicaManager.$anonfun$makeLeaders$5(ReplicaManager.scala:1592)
at kafka.utils.Implicits$MapExtensionMethods$.$anonfun$forKeyValue$1(Implicits.scala:62)
at scala.collection.mutable.HashMap$Node.foreachEntry(HashMap.scala:633)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:499)
at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:1590)
at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:269)
at kafka.server.KafkaApis.handle(KafkaApis.scala:176)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:75)
at java.lang.Thread.run(Thread.java:750)
And of course I cleaned the log folders!

Kafka consumer batch does not return records as per the max.poll.records config

My Kafka listener details are below:
@KafkaListener(
        topics = "test",
        groupId = "groupid",
        containerFactory = "kafkaBatchListenerContainerFactory")
public void onMessage(List<ConsumerRecord<String, String>> messages, Acknowledgment acknowledgment) throws Exception {
    log.info("Batch size is {}", messages.size());
    acknowledgment.acknowledge();
}
Bean Configuration detail:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaBatchListenerContainerFactory() throws IOException {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setConcurrency(3); // equal to the partition count
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
    factory.getContainerProperties().setIdleBetweenPolls(1000L);
    factory.getContainerProperties().setPollTimeout(kafkaConfigProperties.getPollTimeout() * 1000L);
    factory.setMissingTopicsFatal(false);
    factory.setBatchListener(true);
    factory.getContainerProperties().setAsyncAcks(true);
    factory.setCommonErrorHandler(new DefaultErrorHandler(new FixedBackOff(1000L, 4)));
    return factory;
}
Consumer Factory:
@Bean
public ConsumerFactory<String, String> consumerFactory() throws IOException {
    Map<String, Object> config = new HashMap<>();
    config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9080");
    config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class);
    config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ErrorHandlingDeserializer.class);
    config.put(ErrorHandlingDeserializer.KEY_DESERIALIZER_CLASS, StringDeserializer.class);
    config.put(ErrorHandlingDeserializer.VALUE_DESERIALIZER_CLASS, JsonDeserializer.class.getName());
    config.put(ConsumerConfig.GROUP_ID_CONFIG, "groupid");
    config.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    config.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, 60 * 1000);
    config.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 2 * 1000);
    config.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 30 * 1000);
    config.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 900 * 1000);
    config.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);
    config.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
    return new DefaultKafkaConsumerFactory<>(config, new StringDeserializer(), new StringDeserializer());
}
Whenever I stop and start the server, I am able to receive a batch of records. However, during live traffic, I receive one record at a time instead of a batch. Let me know any suggestions.
My logs are below:
Batch size is 1
Batch size is 1
Batch size is 1
Batch size is 1
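One knob that may be worth experimenting with (an assumption, not a confirmed fix): with fetch.min.bytes at its default of 1, the broker answers a fetch as soon as any data is available, so a slow live stream tends to yield one record per poll. Asking the broker to wait for more data can produce fuller batches:

// Hedged sketch: hold each fetch until ~64 KB has accumulated or
// fetch.max.wait.ms (2 s in the config above) elapses, whichever comes first.
// The 64 KB threshold is arbitrary; tune it against your latency budget.
config.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 64 * 1024);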

Spring Kafka polling using Consumer

I am using consumer.poll(Duration d) to fetch records. For testing purposes I have only 10 records in the Kafka topic, spread across 6 partitions. I have disabled auto-commit and am not committing manually either (again, for testing purposes only). When poll is executed, it does not fetch data from all partitions; I need to run the poll in a loop to get all the data. I haven't overridden parameters like max.poll.records or fetch.max.bytes from their default values. What could be the reason? Please note that I have only this one consumer for the given topic and group id, so I expect all the partitions to be assigned to it.
private Consumer<String, Object> createConsumer() {
    ConsumerFactory<String, Object> consumerFactory = deadLetterConsumerFactory();
    Consumer<String, Object> consumer = consumerFactory.createConsumer();
    consumer.subscribe(Collections.singletonList(kafkaConfigProperties.getDeadLetterTopic()));
    return consumer;
}

try {
    consumer = createConsumer();
    ConsumerRecords<String, Object> records = consumer.poll(Duration.ofMillis(5000));
    processMessages(records, ...);
} catch (Exception e) {
    ...
} finally {
    if (consumer != null) {
        consumer.unsubscribe();
        consumer.close();
    }
}
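For reference, here is roughly the loop I ended up with (a sketch; looping until an empty batch only terminates cleanly for a bounded test topic like this one):

// Hedged sketch: a single poll() may return records from only some of the
// assigned partitions, so keep polling until an empty batch comes back.
ConsumerRecords<String, Object> batch;
do {
    batch = consumer.poll(Duration.ofMillis(5000));
    processMessages(batch /* , ... elided args as above */);
} while (!batch.isEmpty());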
EDIT
Here are the details:
ConsumerFactory<String, Object> deadLetterConsumerFactory() {
    Properties properties = new Properties();
    properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, server);
    properties.put(SCHEMA_REGISTRY_URL, url);
    properties.put(ConsumerConfig.CLIENT_ID_CONFIG,
            "myid" + "-" + CONSUMER_CLIENT_ID_SEQUENCE.getAndIncrement());
    properties.put(SSL_ENDPOINT_IDFN_ALGM, alg);
    properties.put(SaslConfigs.SASL_MECHANISM, saslmech);
    properties.put(REQUEST_TIMEOUT, timeout);
    properties.put(SaslConfigs.SASL_JAAS_CONFIG, config);
    properties.put(SECURITY_PROTOCOL, protocol);
    properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
    properties.put(ConsumerConfig.GROUP_ID_CONFIG, "groupid");
    properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    Map<String, Object> map = new HashMap<>();
    properties.forEach((key, value) -> map.put((String) key, value));
    return new DefaultKafkaConsumerFactory<>(map);
}

Spring Kafka transaction causes the producer's per-message offset to increase by two

I have a consume-transform-produce workflow in a microservice using Spring (Boot) Kafka. I need to achieve the exactly-once semantics provided by Kafka transactions.
Here's the code snippet below:
Config
@Bean
public ProducerFactory<String, String> producerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
    props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1024 * 1024);
    DefaultKafkaProducerFactory<String, String> defaultKafkaProducerFactory = new DefaultKafkaProducerFactory<>(props);
    defaultKafkaProducerFactory.setTransactionIdPrefix("kafka-trx-");
    return defaultKafkaProducerFactory;
}
@Bean
public ConsumerFactory<String, String> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 5000);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
    return new DefaultKafkaConsumerFactory<>(props);
}
@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
}
@Bean
public KafkaTransactionManager<String, String> kafkaTransactionManager() {
    return new KafkaTransactionManager<>(producerFactory());
}
@Bean
@Qualifier("chainedKafkaTransactionManager")
public ChainedKafkaTransactionManager<String, Object> chainedKafkaTransactionManager(KafkaTransactionManager<String, String> kafkaTransactionManager) {
    return new ChainedKafkaTransactionManager<>(kafkaTransactionManager);
}
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> concurrentKafkaListenerContainerFactory(ChainedKafkaTransactionManager<String, Object> chainedKafkaTransactionManager) {
    ConcurrentKafkaListenerContainerFactory<String, String> concurrentKafkaListenerContainerFactory = new ConcurrentKafkaListenerContainerFactory<>();
    concurrentKafkaListenerContainerFactory.setConsumerFactory(consumerFactory());
    concurrentKafkaListenerContainerFactory.setBatchListener(true);
    concurrentKafkaListenerContainerFactory.setConcurrency(nexusConsumerConcurrency);
    //concurrentKafkaListenerContainerFactory.setReplyTemplate(kafkaTemplate());
    concurrentKafkaListenerContainerFactory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.BATCH);
    concurrentKafkaListenerContainerFactory.getContainerProperties().setTransactionManager(chainedKafkaTransactionManager);
    return concurrentKafkaListenerContainerFactory;
}
Listener
@KafkaListener(topics = "${kafka.xxx.consumerTopic}", groupId = "${kafka.xxx.consumerGroup}", containerFactory = "concurrentKafkaListenerContainerFactory")
public void listen(@Payload List<String> msgs, @Header(KafkaHeaders.RECEIVED_PARTITION_ID) List<Integer> partitions, @Header(KafkaHeaders.OFFSET) List<Integer> offsets) {
    int i = -1;
    for (String msg : msgs) {
        ++i;
        LOGGER.debug("partition={}; offset={}; msg={}", partitions.get(i), offsets.get(i), msg);
        String json = transform(msg);
        kafkaTemplate.executeInTransaction(kt -> kt.send(producerTopic, json));
    }
}
However, in the production environment I encounter a weird problem: the offset increases by two per message sent by the producer, and the consumer doesn't commit the consumed offset.
(Screenshots: consumer offsets from topic1; topic1 consumer detail; produce to topic2.)
However, the count of messages sent by the producer is the same as the count consumed, and the downstream of the producer receives the messages from topic2 continuously. There is no error or exception found in the log.
I wonder why the consume-transform-produce workflow seems OK (exactly-once semantics also guaranteed), but the consumed offset isn't committed and the produced message's offset increases by two instead of one per single message.
How do I fix it? Thanks!
That's the way it's designed. Kafka logs are immutable so an extra "slot" is used at the end of the transaction to indicate whether the transaction was committed or rolled back. This allows consumers with read_committed isolation level to skip over rolled-back transactions.
If you publish 10 records in a transaction, you will see the offset increase by 11. If you only publish one, it will increase by two.
If you want the publish to participate in the consumer-started transaction (for exactly-once), you should not be using executeInTransaction; that starts a new transaction.
/**
 * Execute some arbitrary operation(s) on the operations and return the result.
 * The operations are invoked within a local transaction and do not participate
 * in a global transaction (if present).
 * @param callback the callback.
 * @param <T> the result type.
 * @return the result.
 * @since 1.1
 */
<T> T executeInTransaction(OperationsCallback<K, V, T> callback);
I don't see why the consumer offset would not still be sent to the consumer-started transaction, though. You should turn on DEBUG logging to see what's happening (if it still happens after you fix the template code).
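A minimal sketch of the fix (assuming the container's transaction manager is configured, as in your factory above): replace executeInTransaction with a plain send, so the publish joins the transaction the container started:

// Hedged sketch: inside a listener running under a Kafka transaction manager,
// template.send() participates in the consumer-started transaction; no
// executeInTransaction (which begins a separate local transaction) is needed.
for (String msg : msgs) {
    kafkaTemplate.send(producerTopic, transform(msg));
}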
EDIT
The consumed offset (+1) is sent to the transaction by the listener container when the listener exits; turn on commit logging and you will see it...
@SpringBootApplication
public class So59152915Application {

    public static void main(String[] args) {
        SpringApplication.run(So59152915Application.class, args);
    }

    @Autowired
    private KafkaTemplate<String, String> template;

    @KafkaListener(id = "foo", topics = "so59152915-1", clientIdPrefix = "so59152915")
    public void listen1(String in, @Header(KafkaHeaders.OFFSET) long offset) throws InterruptedException {
        System.out.println(in + "@" + offset);
        this.template.send("so59152915-2", in.toUpperCase());
        Thread.sleep(2000);
    }

    @KafkaListener(id = "bar", topics = "so59152915-2")
    public void listen2(String in) {
        System.out.println(in);
    }

    @Bean
    public NewTopic topic1() {
        return new NewTopic("so59152915-1", 1, (short) 1);
    }

    @Bean
    public NewTopic topic2() {
        return new NewTopic("so59152915-2", 1, (short) 1);
    }

    @Bean
    public ApplicationRunner runner(KafkaListenerEndpointRegistry registry) {
        return args -> {
            this.template.executeInTransaction(t -> {
                IntStream.range(0, 11).forEach(i -> t.send("so59152915-1", "foo" + i));
                try {
                    System.out.println("Hit enter to commit sends");
                    System.in.read();
                }
                catch (IOException e) {
                    e.printStackTrace();
                }
                return null;
            });
        };
    }
}

@Component
class Configurer {

    Configurer(ConcurrentKafkaListenerContainerFactory<?, ?> factory) {
        factory.getContainerProperties().setCommitLogLevel(Level.INFO);
    }
}
and
spring.kafka.producer.transaction-id-prefix=tx-
spring.kafka.consumer.properties.isolation.level=read_committed
spring.kafka.consumer.auto-offset-reset=earliest
and
foo0@56
2019-12-04 10:07:18.551 INFO 55430 --- [ foo-0-C-1] essageListenerContainer$ListenerConsumer : Sending offsets to transaction: {so59152915-1-0=OffsetAndMetadata{offset=57, leaderEpoch=null, metadata=''}}
foo1@57
FOO0
2019-12-04 10:07:18.558 INFO 55430 --- [ bar-0-C-1] essageListenerContainer$ListenerConsumer : Sending offsets to transaction: {so59152915-2-0=OffsetAndMetadata{offset=63, leaderEpoch=null, metadata=''}}
2019-12-04 10:07:20.562 INFO 55430 --- [ foo-0-C-1] essageListenerContainer$ListenerConsumer : Sending offsets to transaction: {so59152915-1-0=OffsetAndMetadata{offset=58, leaderEpoch=null, metadata=''}}
foo2@58
Please pay attention to your auto-commit setup. As I see, you set it to false:
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
so, in this situation, you need to commit "manually" or set auto-commit to true.

Kafka Consumer: Stop processing messages when exception was raised

I'm a bit confused about the poll() behaviour of (Spring) Kafka after/when stopping the ConcurrentMessageListenerContainer.
What I want to achieve:
Stop the consumer after an exception was raised (for example, a message could not be saved to the database), do not commit the offset, restart the consumer after a given time, and start processing again from the previously failed message.
I read this issue, which says that the container will call the listener with the remaining records from the poll (https://github.com/spring-projects/spring-kafka/issues/451). This means there is no guarantee that, after the failed message, a later message that is processed successfully won't commit the offset past the failed one, which could end up in lost/skipped messages.
Is this really the case, and if so, is there a solution without upgrading to newer versions? (A DLQ is not a solution for my case.)
What I already did:
Setting setErrorHandler() and setAckOnError(false):
private Map<String, Object> getConsumerProps(CustomKafkaProps kafkaProps, Class<?> keyDeserializer) {
    Map<String, Object> props = new HashMap<>();
    // Set common props
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProps.getBootstrapServers());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaProps.getConsumerGroupId());
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // start with the first message when a new consumer group (app) arrives at the topic
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false); // we will use "RECORD" AckMode in the Spring listener container
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer);
    if (kafkaProps.isSslEnabled()) {
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
        props.put("ssl.keystore.location", kafkaProps.getKafkaKeystoreLocation());
        props.put("ssl.keystore.password", kafkaProps.getKafkaKeystorePassword());
        props.put("ssl.key.password", kafkaProps.getKafkaKeyPassword());
    }
    return props;
}
Consumer
public ConcurrentMessageListenerContainer<String, byte[]> kafkaReceiverContainer(CustomKafkaProps kafkaProps) throws Exception {
    StoppingErrorHandler stoppingErrorHandler = new StoppingErrorHandler();
    ContainerProperties containerProperties = new ContainerProperties(...);
    containerProperties.setAckMode(AbstractMessageListenerContainer.AckMode.RECORD);
    containerProperties.setAckOnError(false);
    containerProperties.setErrorHandler(stoppingErrorHandler);
    ConcurrentMessageListenerContainer<String, byte[]> container = ...
    container.setConcurrency(1); // use only one container
    stoppingErrorHandler.setConcurrentMessageListenerContainer(container);
    return container;
}
Error Handler
public class StoppingErrorHandler implements ErrorHandler {

    @Setter
    private ConcurrentMessageListenerContainer concurrentMessageListenerContainer;

    @Value("${backends.kafka.consumer.halt.timeout}")
    int consumerHaltTimeout;

    @Override
    public void handle(Exception thrownException, ConsumerRecord<?, ?> record) {
        if (concurrentMessageListenerContainer != null) {
            concurrentMessageListenerContainer.stop();
        }
        new Timer().schedule(new TimerTask() {
            @Override
            public void run() {
                if (concurrentMessageListenerContainer != null && !concurrentMessageListenerContainer.isRunning()) {
                    concurrentMessageListenerContainer.start();
                }
            }
        }, consumerHaltTimeout);
    }
}
What I'm using:
<groupId>org.springframework.integration</groupId>
<artifactId>spring-integration-kafka</artifactId>
<version>2.1.2.RELEASE</version>
<groupId>org.springframework.kafka</groupId>
<artifactId>spring-kafka</artifactId>
<version>1.1.7.RELEASE</version>
without upgrading to newer versions?
2.1 introduced the ContainerStoppingErrorHandler, which is a ContainerAwareErrorHandler; the remaining unconsumed messages are discarded (and will be re-fetched when the container is restarted).
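A minimal wiring sketch (assuming spring-kafka 2.1+, where the error handler is still set on the container properties):

// Hedged sketch: on a listener exception, stop the container instead of
// committing; records remaining from the current poll are discarded and
// re-fetched (starting at the failed record) when the container restarts.
containerProperties.setErrorHandler(new ContainerStoppingErrorHandler());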
With earlier versions, your listener will need to reject (fail) the remaining messages in the batch (or set max.poll.records=1).
