I have a multi-module Maven project with several modules (parent, service, updater1, updater2).
The @SpringBootApplication class is in the 'service' module, and the other modules don't have their own.
'updater1' is a module that has a Kafka listener and an HTTP client: when it receives a Kafka event, it sends a request to an external API. I want to create integration tests in this module with Testcontainers, so I've created the containers and a Kafka producer that uses a KafkaTemplate to send events to my consumer.
My problem is that the Kafka producer is autowired to null, so the tests throw a NullPointerException. I think it's a Spring configuration problem, but I can't find it. Can you help me? Thanks!
This is my test class:
@ExtendWith(SpringExtension.class)
@ContextConfiguration(classes = {KafkaConfiguration.class, CacheConfiguration.class, ClientConfiguration.class})
public class InvoicingTest {

    @ClassRule
    public static final Containers containers = Containers.Builder.aContainer()
            .withKafka()
            .withServer()
            .build();

    private final MockHttpClient mockHttpClient =
            new MockHttpClient(containers.getHost(SERVER),
                               containers.getPort(SERVER));

    @Autowired
    private KafkaEventProducer kafkaEventProducer;

    @BeforeEach
    @Transactional
    void setUp() {
        mockHttpClient.reset();
    }

    @Test
    public void createElementSuccessfulResponse() throws ExecutionException, InterruptedException, TimeoutException {
        mockHttpClient.whenPost("/v1/endpoint")
                .respond(HttpStatusCode.OK_200);
        kafkaEventProducer.produce("src/test/resources/event/invoiceCreated.json");
        mockHttpClient.verify();
    }
}
And this is the event producer:
@Component
public class KafkaEventProducer {

    private final KafkaTemplate<String, String> kafkaTemplate;
    private final String topic;

    @Autowired
    KafkaEventProducer(KafkaTemplate<String, String> kafkaTemplate,
                       @Value("${kafka.topic.invoicing.name}") String topic) {
        this.kafkaTemplate = kafkaTemplate;
        this.topic = topic;
    }

    public void produce(String event) {
        kafkaTemplate.send(topic, event);
    }
}
You haven't detailed how KafkaEventProducer is implemented (is it a @Component?), and your test class is annotated with neither @SpringBootTest nor a runner (@RunWith).
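For the field to be injected, Spring needs to bootstrap a test context. A minimal sketch of the missing piece, assuming JUnit 5 (where @SpringBootTest already registers SpringExtension; on JUnit 4 you would add @RunWith(SpringRunner.class) instead):

@SpringBootTest(classes = {KafkaConfiguration.class, CacheConfiguration.class, ClientConfiguration.class})
class InvoicingTest {

    @Autowired
    private KafkaEventProducer kafkaEventProducer; // resolved from the Spring context instead of staying null

    // ...
}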
Alternatively, check out this sample using Apache's KafkaProducer directly:
import org.apache.kafka.clients.producer.KafkaProducer;

public void sendRecord(String topic, String event) {
    try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(producerProps(bootstrapServers, false))) {
        send(producer, topic, event);
    }
}
where
public void send(KafkaProducer<String, byte[]> producer, String topic, String event) {
    try {
        ProducerRecord<String, byte[]> record = new ProducerRecord<>(topic, event.getBytes());
        producer.send(record).get();
    } catch (InterruptedException | ExecutionException e) {
        fail("Not expected exception: " + e.getMessage());
    }
}
protected Properties producerProps(String bootstrapServer, boolean transactional) {
    Properties producerProperties = new Properties();
    producerProperties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
    producerProperties.put(KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
    producerProperties.put(VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
    if (transactional) {
        producerProperties.put(TRANSACTIONAL_ID_CONFIG, "my-transactional-id");
    }
    return producerProperties;
}
and bootstrapServers is taken from the Kafka container:
KafkaContainer kafka = new KafkaContainer();
kafka.start();
bootstrapServers = kafka.getBootstrapServers();
I have a scheduler which produces an event, and my consumer consumes this event. The payload of this event is a JSON with the fields below:
private String topic;
private String partition;
private String filterKey;
private long custId;
Now I need to trigger one more consumer, which takes all of this information from the first consumer's response.
@KafkaListener(topics = "<**topic-name-from-first-consumer-response**>", groupId = "group", containerFactory = "kafkaListenerFactory")
public void consumeJson(List<User> data, Acknowledgment acknowledgment,
        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) List<Integer> partitions,
        @Header(KafkaHeaders.OFFSET) List<Long> offsets) {
    // consumer code goes here...
}
I need to create a dynamic variable that I can pass in place of the topic name.
Similarly, I am using filtering in the configuration file, and I need to pass the key dynamically in the configuration:
factory.setRecordFilterStrategy(new RecordFilterStrategy<String, Object>() {
    @Override
    public boolean filter(ConsumerRecord<String, Object> consumerRecord) {
        if (consumerRecord.key().equals("**Key will go here**")) {
            return false;
        } else {
            return true;
        }
    }
});
How can we dynamically inject these values from the response of the first consumer and trigger the second consumer? Both consumers are in the same application.
You cannot do that with an annotated listener; the configuration is only used during initialization. You would need to create a listener container yourself (using the ConcurrentKafkaListenerContainerFactory) to dynamically create a listener.
EDIT
Here's an example.
@SpringBootApplication
public class So69134055Application {

    public static void main(String[] args) {
        SpringApplication.run(So69134055Application.class, args);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so69134055").partitions(1).replicas(1).build();
    }
}
@Component
class Listener {

    private static final Logger log = LoggerFactory.getLogger(Listener.class);

    private static final Method otherListen;

    static {
        try {
            otherListen = Listener.class.getDeclaredMethod("otherListen", List.class);
        }
        catch (NoSuchMethodException | SecurityException ex) {
            throw new IllegalStateException(ex);
        }
    }

    private final ConcurrentKafkaListenerContainerFactory<String, String> factory;
    private final MessageHandlerMethodFactory methodFactory;
    private final KafkaAdmin admin;
    private final KafkaTemplate<String, String> template;

    public Listener(ConcurrentKafkaListenerContainerFactory<String, String> factory, KafkaAdmin admin,
            KafkaTemplate<String, String> template, KafkaListenerAnnotationBeanPostProcessor<?, ?> bpp) {
        this.factory = factory;
        this.admin = admin;
        this.template = template;
        this.methodFactory = bpp.getMessageHandlerMethodFactory();
    }

    @KafkaListener(id = "so69134055", topics = "so69134055")
    public void listen(String topicName) {
        try (AdminClient client = AdminClient.create(this.admin.getConfigurationProperties())) {
            NewTopic topic = TopicBuilder.name(topicName).build();
            client.createTopics(List.of(topic)).all().get(10, TimeUnit.SECONDS);
        }
        catch (Exception e) {
            log.error("Failed to create topic", e);
        }
        ConcurrentMessageListenerContainer<String, String> container =
                this.factory.createContainer(new TopicPartitionOffset(topicName, 0));
        BatchMessagingMessageListenerAdapter<String, String> adapter =
                new BatchMessagingMessageListenerAdapter<>(this, otherListen);
        adapter.setHandlerMethod(new HandlerAdapter(
                this.methodFactory.createInvocableHandlerMethod(this, otherListen)));
        FilteringBatchMessageListenerAdapter<String, String> filtered =
                new FilteringBatchMessageListenerAdapter<>(adapter, record -> !record.key().equals("foo"));
        container.getContainerProperties().setMessageListener(filtered);
        container.getContainerProperties().setGroupId("group.for." + topicName);
        container.setBeanName(topicName + ".container");
        container.start();
        IntStream.range(0, 10).forEach(i -> this.template.send(topicName, 0, i % 2 == 0 ? "foo" : "bar", "test" + i));
    }

    void otherListen(List<String> others) {
        log.info("Others: {}", others);
    }
}
spring.kafka.consumer.auto-offset-reset=earliest
Output, showing that the filter was applied to the records with "bar" in the key:
Others: [test0, test2, test4, test6, test8]
In my Spring Kafka application, I want to trigger the consumer at run time according to the input of a scheduler. The scheduler will tell the listener from which topic it can start consuming messages. There is a Spring Boot application with a custom ConcurrentKafkaListenerContainerFactory class. I need to perform three tasks:
1. Close the container after successfully reading all the messages available on the topic.
2. Store the current offset in a DB or file system.
3. Next time the consumer comes up, use the stored offset to process the records instead of the default offset managed by Kafka, so that in the future we can change the offset value in the DB and get the desired reports.
I know how to handle all of this with @KafkaListener, but I'm not sure how to hook it into ConcurrentKafkaListenerContainerFactory. The current code is listed below:
@SpringBootApplication
public class KafkaApp {

    public static void main(String[] args) {
        SpringApplication.run(KafkaApp.class, args);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("testTopic").partitions(1).replicas(1).build();
    }
}
@Component
class Listener {

    private static final Logger log = LoggerFactory.getLogger(Listener.class);

    private static final Method otherListen;

    static {
        try {
            otherListen = Listener.class.getDeclaredMethod("otherListen", List.class);
        }
        catch (NoSuchMethodException | SecurityException ex) {
            throw new IllegalStateException(ex);
        }
    }

    private final ConcurrentKafkaListenerContainerFactory<String, String> factory;
    private final MessageHandlerMethodFactory methodFactory;
    private final KafkaAdmin admin;
    private final KafkaTemplate<String, String> template;

    public Listener(ConcurrentKafkaListenerContainerFactory<String, String> factory, KafkaAdmin admin,
            KafkaTemplate<String, String> template, KafkaListenerAnnotationBeanPostProcessor<?, ?> bpp) {
        this.factory = factory;
        this.admin = admin;
        this.template = template;
        this.methodFactory = bpp.getMessageHandlerMethodFactory();
    }

    @KafkaListener(id = "myId", topics = "testTopic")
    public void listen(String topicName) {
        try (AdminClient client = AdminClient.create(this.admin.getConfigurationProperties())) {
            NewTopic topic = TopicBuilder.name(topicName).build();
            client.createTopics(List.of(topic)).all().get(10, TimeUnit.SECONDS);
        }
        catch (Exception e) {
            log.error("Failed to create topic", e);
        }
        ConcurrentMessageListenerContainer<String, String> container =
                this.factory.createContainer(new TopicPartitionOffset(topicName, 0));
        BatchMessagingMessageListenerAdapter<String, String> adapter =
                new BatchMessagingMessageListenerAdapter<>(this, otherListen);
        adapter.setHandlerMethod(new HandlerAdapter(
                this.methodFactory.createInvocableHandlerMethod(this, otherListen)));
        FilteringBatchMessageListenerAdapter<String, String> filtered =
                new FilteringBatchMessageListenerAdapter<>(adapter, record -> !record.key().equals("foo"));
        container.getContainerProperties().setMessageListener(filtered);
        container.getContainerProperties().setGroupId("group.for." + topicName);
        container.setBeanName(topicName + ".container");
        container.start();
        IntStream.range(0, 10).forEach(i -> this.template.send(topicName, 0, i % 2 == 0 ? "foo" : "bar", "test" + i));
    }

    void otherListen(List<String> others) {
        log.info("Others: {}", others);
    }
}
EDIT
@SpringBootApplication
public class KafkaApp {

    public static void main(String[] args) {
        SpringApplication.run(KafkaApp.class, args);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("testTopic").partitions(1).replicas(1).build();
    }
}
@Component
class Listener {

    private static final Logger log = LoggerFactory.getLogger(Listener.class);

    private static final Method otherListen;

    static {
        try {
            otherListen = Listener.class.getDeclaredMethod("otherListen", List.class);
        }
        catch (NoSuchMethodException | SecurityException ex) {
            throw new IllegalStateException(ex);
        }
    }

    private final ConcurrentKafkaListenerContainerFactory<String, String> factory;
    private final MessageHandlerMethodFactory methodFactory;
    private final KafkaAdmin admin;
    private final KafkaTemplate<String, String> template;

    public Listener(ConcurrentKafkaListenerContainerFactory<String, String> factory, KafkaAdmin admin,
            KafkaTemplate<String, String> template, KafkaListenerAnnotationBeanPostProcessor<?, ?> bpp) {
        this.factory = factory;
        this.admin = admin;
        this.template = template;
        this.methodFactory = bpp.getMessageHandlerMethodFactory();
    }

    @KafkaListener(id = "myId", topics = "testTopic")
    public void listen(String topicName) {
        try (AdminClient client = AdminClient.create(this.admin.getConfigurationProperties())) {
            NewTopic topic = TopicBuilder.name(topicName).build();
            client.createTopics(List.of(topic)).all().get(10, TimeUnit.SECONDS);
        }
        catch (Exception e) {
            log.error("Failed to create topic", e);
        }
        ConcurrentMessageListenerContainer<String, String> container =
                this.factory.createContainer(new TopicPartitionOffset(topicName, 0));
        BatchMessagingMessageListenerAdapter<String, String> adapter =
                new BatchMessagingMessageListenerAdapter<>(this, otherListen);
        adapter.setHandlerMethod(new HandlerAdapter(
                this.methodFactory.createInvocableHandlerMethod(this, otherListen)));
        FilteringBatchMessageListenerAdapter<String, String> filtered =
                new FilteringBatchMessageListenerAdapter<>(adapter, record -> !record.key().equals("foo"));
        container.getContainerProperties().setMessageListener(filtered);
        container.getContainerProperties().setGroupId("group.for." + topicName);
        container.setBeanName(topicName + ".container");
        container.getContainerProperties().setIdleEventInterval(3000L);
        container.start();
        IntStream.range(0, 10).forEach(i -> this.template.send(topicName, 0, i % 2 == 0 ? "foo" : "bar", "test" + i));
    }

    void otherListen(List<String> others) {
        log.info("Others: {}", others);
    }

    @EventListener
    public void eventHandler(ListenerContainerIdleEvent event) {
        log.info("No messages received for " + event.getIdleTime() + " milliseconds");
    }
}
You can receive a ListenerContainerIdleEvent when there are no messages left to process; you can use this event to stop the container. You should perform the stop() on a different thread (not the one that publishes the event).
See How to check if Kafka is empty using Spring Kafka?
You can get the partition/offset in several ways:
void otherListen(List<ConsumerRecord<..., ...>> records)
or
void otherListen(List<String> others,
        @Header(KafkaHeaders.RECEIVED_PARTITION) List<Integer> partitions,
        @Header(KafkaHeaders.OFFSET) List<Long> offsets)
You can specify the starting offset with
new TopicPartitionOffset(topicName, 0, startOffset)
when creating the container.
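For example, if the last processed offset was persisted, a minimal sketch (loadStoredOffset is a hypothetical helper that reads from your DB or file system):

long storedOffset = loadStoredOffset(topicName); // hypothetical: read the offset you persisted earlier
ConcurrentMessageListenerContainer<String, String> container =
        this.factory.createContainer(new TopicPartitionOffset(topicName, 0, storedOffset));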
EDIT
To stop the container when it is idle, set the idleEventInterval, add an @EventListener method, and stop the container there:
TaskExecutor exec = new SimpleAsyncTaskExecutor();

@EventListener
void idle(ListenerContainerIdleEvent event) {
    log...
    this.exec.execute(() -> event.getContainer(ConcurrentMessageListenerContainer.class).stop());
}
If you add concurrency to your containers, you would need to wait for each child container to go idle before stopping the parent container.
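A sketch of that bookkeeping, assuming each child container publishes its own idle event (the idleChildren set is an addition of mine, not from the original answer):

private final Set<String> idleChildren = ConcurrentHashMap.newKeySet();

@EventListener
public void idle(ListenerContainerIdleEvent event) {
    idleChildren.add(event.getListenerId()); // one id per idle child container
    ConcurrentMessageListenerContainer<?, ?> parent =
            event.getContainer(ConcurrentMessageListenerContainer.class);
    if (idleChildren.size() == parent.getConcurrency()) { // all children are idle
        this.exec.execute(parent::stop); // stop on a different thread
    }
}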
EDIT2
I just added it to the code I wrote for the answer to your other question and it works exactly as expected.
@KafkaListener(id = "so69134055", topics = "so69134055")
public void listen(String topicName) {
    try (AdminClient client = AdminClient.create(this.admin.getConfigurationProperties())) {
        NewTopic topic = TopicBuilder.name(topicName).build();
        client.createTopics(List.of(topic)).all().get(10, TimeUnit.SECONDS);
    }
    catch (Exception e) {
        log.error("Failed to create topic", e);
    }
    ConcurrentMessageListenerContainer<String, String> container =
            this.factory.createContainer(new TopicPartitionOffset(topicName, 0));
    BatchMessagingMessageListenerAdapter<String, String> adapter =
            new BatchMessagingMessageListenerAdapter<>(this, otherListen);
    adapter.setHandlerMethod(new HandlerAdapter(
            this.methodFactory.createInvocableHandlerMethod(this, otherListen)));
    FilteringBatchMessageListenerAdapter<String, String> filtered =
            new FilteringBatchMessageListenerAdapter<>(adapter, record -> !record.key().equals("foo"));
    container.getContainerProperties().setMessageListener(filtered);
    container.getContainerProperties().setGroupId("group.for." + topicName);
    container.getContainerProperties().setIdleEventInterval(3000L);
    container.setBeanName(topicName + ".container");
    container.start();
    IntStream.range(0, 10).forEach(i -> this.template.send(topicName, 0, i % 2 == 0 ? "foo" : "bar", "test" + i));
}

void otherListen(List<String> others) {
    log.info("Others: {}", others);
}

TaskExecutor exec = new SimpleAsyncTaskExecutor();

@EventListener
public void idle(ListenerContainerIdleEvent event) {
    log.info(event.toString());
    this.exec.execute(() -> {
        ConcurrentMessageListenerContainer container = event.getContainer(ConcurrentMessageListenerContainer.class);
        log.info("stopping container: " + container.getBeanName());
        container.stop();
    });
}
[foo.container-0-C-1] Others: [test0, test2, test4, test6, test8]
[foo.container-0-C-1] ListenerContainerIdleEvent [idleTime=5.007s, listenerId=foo.container-0, container=KafkaMessageListenerContainer [id=foo.container-0, clientIndex=-0, topicPartitions=[foo-0]], paused=false, topicPartitions=[foo-0]]
[SimpleAsyncTaskExecutor-1] stopping container: foo.container
[foo.container-0-C-1] [Consumer clientId=consumer-group.for.foo-2, groupId=group.for.foo] Unsubscribed all topics or patterns and assigned partitions
[foo.container-0-C-1] Metrics scheduler closed
[foo.container-0-C-1] Closing reporter org.apache.kafka.common.metrics.JmxReporter
[foo.container-0-C-1] Metrics reporters closed
[foo.container-0-C-1] App info kafka.consumer for consumer-group.for.foo-2 unregistered
[foo.container-0-C-1] group.for.foo: Consumer stopped
I have the following binding:
public interface KafkaBinding {

    String DATA_OUT = "dataout";

    @Output(DATA_OUT)
    MessageChannel dataOut();
}
Here is the Kafka utility:
@EnableBinding(KafkaBinding.class)
public class KafkaStreamUtil {

    private static final Logger log = LoggerFactory.getLogger(KafkaStreamUtil.class);

    private final MessageChannel out;

    public KafkaStreamUtil(KafkaBinding binding) {
        this.out = binding.dataOut();
    }

    public void SendToKafkaTopic(List<Data> dataList) {
        dataList
                .stream()
                .forEach(this::sendMessage);
    }

    private void sendMessage(Data data) {
        Message<Data> message = MessageBuilder
                .withPayload(data)
                .setHeader(KafkaHeaders.MESSAGE_KEY, data.getRequestId().getBytes())
                .build();
        try {
            this.out.send(message);
        } catch (Exception e) {
            log.error(e.getMessage(), e);
        }
    }
}
My test class:
@RunWith(SpringRunner.class)
@ActiveProfiles("test")
@SpringBootTest(classes = {KafkaStreamUtil.class, KafkaBinding.class})
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
public class KafkaStreamUtilTest {

    private static final String TEST_TOPIC1 = "data";
    private static final String GROUP_NAME = "embeddedKafkaApplication";

    @ClassRule
    public static EmbeddedKafkaRule kafkaRule =
            new EmbeddedKafkaRule(1, true, TEST_TOPIC1);

    public static EmbeddedKafkaBroker embeddedKafka = kafkaRule.getEmbeddedKafka();

    @Autowired
    private KafkaStreamUtil kUtil;

    @Autowired
    private KafkaBinding binding;

    @BeforeClass
    public static void setupProperties() {
        System.setProperty("spring.cloud.stream.kafka.binder.brokers", embeddedKafka.getBrokersAsString());
        System.setProperty("spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms", "1000");
        System.setProperty("spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde", "org.apache.kafka.common.serialization.Serdes$StringSerde");
        System.setProperty("spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde", "org.apache.kafka.common.serialization.Serdes$StringSerde");
        System.setProperty("spring.cloud.stream.bindings.dataout.destination", "data");
        System.setProperty("spring.cloud.stream.bindings.dataout.producer.header-mode", "raw");
        System.setProperty("spring.autoconfigure.exclude", "org.springframework.cloud.stream.test.binder.TestSupportBinderAutoConfiguration");
        System.setProperty("spring.kafka.consumer.value-deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        System.setProperty("spring.cloud.stream.bindings.input.consumer.headerMode", "raw");
        System.setProperty("spring.cloud.stream.bindings.input.group", "embeddedKafkaApplication");
        System.setProperty("spring.kafka.consumer.group-id", "EmbeddedKafkaIntTest");
    }

    @Before
    public void setUp() throws Exception {
        kUtil = new KafkaStreamUtil(binding);
    }

    @Test
    public void sendToKafkaTopic() {
        kUtil.SendToKafkaTopic(dataList);
        Map<String, Object> consumerProps = KafkaTestUtils.consumerProps(GROUP_NAME, "false", embeddedKafka);
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        consumerProps.put("key.deserializer", StringDeserializer.class);
        consumerProps.put("value.deserializer", StringDeserializer.class);
        DefaultKafkaConsumerFactory<String, String> cf = new DefaultKafkaConsumerFactory<>(consumerProps);
        Consumer<String, String> consumer = cf.createConsumer();
        consumer.subscribe(Collections.singleton(TEST_TOPIC1));
        ConsumerRecords<String, String> records = consumer.poll(10_000);
        consumer.commitSync();
        consumer.close();
        Assert.assertNotNull(records);
    }
}
I am unable to get the binding with the Kafka util in the test; the binding is always null. Please let me know what I am missing. I am able to test with @SpringBootTest, but it loads all beans, and I just want to load the components required for this test.
I have started an instance of EmbeddedKafka in a JUnit test. I can read the records that I have pushed to my stream correctly in my application, but one thing I have noticed is that I only have one partition per topic. Can anyone explain why?
In my application I have the following:
List<PartitionInfo> partitionInfos = consumer.partitionsFor(topic);
This returns a list with one item. When running against local Kafka with 3 partitions, it returns a list with 3 items as expected.
And my test looks like:
@RunWith(SpringRunner.class)
@SpringBootTest
@EmbeddedKafka(partitions = 3)
@ActiveProfiles("inmemory")
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
@TestPropertySource(
        locations = "classpath:application-test.properties",
        properties = {"app.onlyMonitorIfDataUpdated=true"})
public class MonitorRestKafkaIntegrationTest {

    @Autowired
    private EmbeddedKafkaBroker embeddedKafkaBroker;

    @Value("${spring.embedded.kafka.brokers}")
    private String embeddedBrokers;

    @Autowired
    private WebApplicationContext wac;

    @Autowired
    private JsonUtility jsonUtility;

    private MockMvc mockMvc;

    @Before
    public void setup() {
        mockMvc = webAppContextSetup(wac).build();
        UserGroupInformation.setLoginUser(UserGroupInformation.createRemoteUser("dummyUser"));
    }

    private ResultActions interactiveMonitoringREST(String eggID, String monitoringParams) throws Exception {
        return mockMvc.perform(post(String.format("/eggs/%s/interactive", eggID)).contentType(MediaType.APPLICATION_JSON_VALUE).content(monitoringParams));
    }

    @Test
    @WithMockUser("super_user")
    public void testEmbeddedKafka() throws Exception {
        Producer<String, String> producer = getKafkaProducer();
        sendRecords(producer, 3);
        updateConn();
        interactiveMonitoringREST(EGG_KAFKA, monitoringParams)
                .andExpect(status().isOk())
                .andDo(print())
                .andExpect(jsonPath("$.taskResults[0].resultDetails.numberOfRecordsProcessed").value(3))
                .andExpect(jsonPath("$.taskResults[0].resultDetails.numberOfRecordsSkipped").value(0));
    }

    private void sendRecords(Producer<String, String> producer, int records) {
        for (int i = 0; i < records; i++) {
            String val = "{\"auto_age\":" + String.valueOf(i + 10) + "}";
            producer.send(new ProducerRecord<>(testTopic, String.valueOf(i), val));
        }
        producer.flush();
    }

    private Producer<String, String> getKafkaProducer() {
        Map<String, Object> prodConfigs = new HashMap<>(KafkaTestUtils.producerProps(embeddedKafkaBroker));
        return new DefaultKafkaProducerFactory<>(prodConfigs, new StringSerializer(), new StringSerializer()).createProducer();
    }

    private void updateConn() throws Exception {
        String conn = getConnectionREST(CONN_KAFKA).andReturn().getResponse().getContentAsString();
        ConnectionDetail connectionDetail = jsonUtility.fromJson(conn, ConnectionDetail.class);
        connectionDetail.getDetails().put(ConnectionDetailConstants.CONNECTION_SERVER, embeddedBrokers);
        String updatedConn = jsonUtility.toJson(connectionDetail);
        updateConnectionREST(CONN_KAFKA, updatedConn).andExpect(status().isOk());
    }
}
You need to tell the broker to pre-create the topics...
@SpringBootTest
@EmbeddedKafka(topics = "foo", partitions = 3)
class So57481979ApplicationTests {

    @Test
    void testPartitions(@Autowired KafkaAdmin admin) throws InterruptedException, ExecutionException {
        AdminClient client = AdminClient.create(admin.getConfig());
        Map<String, TopicDescription> map = client.describeTopics(Collections.singletonList("foo")).all().get();
        System.out.println(map.values().iterator().next().partitions().size());
    }
}
Or set the num.partitions broker property if you want the broker to auto-create the topics for you on first use.
We should probably automatically do that, based on the partitions property.
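For the auto-create route, a sketch of passing that broker property through @EmbeddedKafka (the value 3 is just an example):

@SpringBootTest
@EmbeddedKafka(brokerProperties = "num.partitions=3")
class AutoCreatedTopicTests {
    // topics auto-created on first use will now get 3 partitions
}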
I found that bootstrapServersProperty is important in @EmbeddedKafka; it is used to populate the property in application-test.yml, which can then be used to create consumers/listener containers.
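For example (spring.kafka.bootstrap-servers is the standard Spring Boot property; pointing bootstrapServersProperty at it saves wiring the broker address by hand):

@SpringBootTest
@EmbeddedKafka(partitions = 3, bootstrapServersProperty = "spring.kafka.bootstrap-servers")
class EmbeddedBrokerTests {
    // Spring Boot's auto-configured consumers/producers now talk to the embedded broker
}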
I have a Spring Boot application and it needs to process some Kafka streaming data. I added an infinite loop to a CommandLineRunner class that will run on startup. In there is a Kafka consumer that can be woken up. I added a shutdown hook with Runtime.getRuntime().addShutdownHook(new Thread(consumer::wakeup));. Will I run into any problems? Is there a more idiomatic way of doing this in Spring? Should I use @Scheduled instead? The code below is stripped of specific Kafka-implementation stuff but otherwise complete.
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;
import java.time.Duration;
import java.util.Properties;
@Component
public class InfiniteLoopStarter implements CommandLineRunner {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    @Override
    public void run(String... args) {
        Consumer<AccountKey, Account> consumer = new KafkaConsumer<>(new Properties());
        Runtime.getRuntime().addShutdownHook(new Thread(consumer::wakeup));

        try {
            while (true) {
                ConsumerRecords<AccountKey, Account> records = consumer.poll(Duration.ofSeconds(10L));
                // process records
            }
        } catch (WakeupException e) {
            logger.info("Consumer woken up for exiting.");
        } finally {
            consumer.close();
            logger.info("Closed consumer, exiting.");
        }
    }
}
I'm not sure if you'll run into any issues there, but it's a bit dirty. Spring has really nice built-in support for working with Kafka, so I would lean towards that (there's plenty of documentation on the web; a nice one is https://www.baeldung.com/spring-kafka).
You'll need the following dependency:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.2.2.RELEASE</version>
</dependency>
Configuration is as easy as adding the @EnableKafka annotation to a config class and then setting up listener and ConsumerFactory beans.
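A minimal sketch of that configuration (the broker address and group id are placeholders):

@EnableKafka
@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
}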
Once configured, you can set up a consumer easily as follows:
@KafkaListener(topics = "topicName")
public void listenWithHeaders(
        @Payload String message,
        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition) {
    System.out.println("Received message: " + message + " from partition: " + partition);
}
The implementation looks OK, but CommandLineRunner is not made for this; CommandLineRunner is used to run some task only once, on startup. From a design perspective it's not very elegant. I would rather use a Spring Integration adapter component with Kafka. You can find an example here: https://github.com/raphaelbrugier/spring-integration-kafka-sample/blob/master/src/main/java/com/github/rbrugier/esb/consumer/Consumer.java
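Along the lines of the linked sample, the adapter wiring is roughly as follows (the container bean and the output channel are assumptions, not from the sample verbatim):

@Bean
public KafkaMessageDrivenChannelAdapter<String, String> kafkaInAdapter(
        ConcurrentMessageListenerContainer<String, String> container) {
    KafkaMessageDrivenChannelAdapter<String, String> adapter =
            new KafkaMessageDrivenChannelAdapter<>(container);
    adapter.setOutputChannel(new DirectChannel()); // records flow into this channel instead of an infinite loop
    return adapter;
}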
To just answer my own question: I had a look at Kafka integration libraries like Spring Kafka and Spring Cloud Stream, but the integration with Confluent's Schema Registry is either not finished or not quite clear to me. It's simple enough for primitives, but we need it for typed Avro objects that are validated by the schema registry. I have now implemented a Kafka-agnostic solution, based on the answer at Spring Boot - Best way to start a background thread on deployment.
The final code looks like this:
@Component
public class AccountStreamConsumer implements DisposableBean, Runnable {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    private final AccountService accountService;
    private final KafkaProperties kafkaProperties;
    private final Consumer<AccountKey, Account> consumer;

    @Autowired
    public AccountStreamConsumer(AccountService accountService, KafkaProperties kafkaProperties,
                                 ConfluentProperties confluentProperties) {
        this.accountService = accountService;
        this.kafkaProperties = kafkaProperties;

        if (!kafkaProperties.getEnabled()) {
            consumer = null;
            return;
        }

        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getBootstrapServers());
        props.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, confluentProperties.getSchemaRegistryUrl());
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, kafkaProperties.getSecurityProtocolConfig());
        props.put(SaslConfigs.SASL_MECHANISM, kafkaProperties.getSaslMechanism());
        props.put(SaslConfigs.SASL_JAAS_CONFIG, PlainLoginModule.class.getName() + " required username=\"" + kafkaProperties.getUsername() + "\" password=\"" + kafkaProperties.getPassword() + "\";");
        props.put(KafkaAvroDeserializerConfig.SPECIFIC_AVRO_READER_CONFIG, true);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, kafkaProperties.getAccountConsumerGroupId());
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, KafkaAvroDeserializer.class);

        consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList(kafkaProperties.getAccountsTopicName()));

        Thread thread = new Thread(this);
        thread.start();
    }

    @Override
    public void run() {
        if (!kafkaProperties.getEnabled())
            return;

        logger.debug("Started account stream consumer");
        try {
            //noinspection InfiniteLoopStatement
            while (true) {
                ConsumerRecords<AccountKey, Account> records = consumer.poll(Duration.ofSeconds(10L));
                List<Account> accounts = new ArrayList<>();
                records.iterator().forEachRemaining(record -> accounts.add(record.value()));
                if (accounts.size() != 0)
                    accountService.store(accounts);
            }
        } catch (WakeupException e) {
            logger.info("Account stream consumer woken up for exiting.");
        } finally {
            consumer.close();
        }
    }

    @Override
    public void destroy() {
        if (consumer != null)
            consumer.wakeup();
        logger.info("Woke up account stream consumer, exiting.");
    }
}