kafka-log4j-appender 0.9 does not work - java

I added a Kafka appender to my log4j.properties, but it did not work as I expected.
Before posting this question, I checked my log4j.properties against a similar Stack Overflow question about 0.8, but had no luck.
Here is my log4j.properties:
log4j.appender.Kafka=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.Kafka.topic=my-topic
log4j.appender.Kafka.brokerList=localhost:9092
log4j.appender.Kafka.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.Kafka.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
When I started my app, I could see that the Kafka producer had started:
[main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Kafka producer started
[kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.producer.internals.Sender - Starting Kafka producer I/O thread.
But the appender did not work and threw an exception:
[kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Exception occurred during message send:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
I also checked my Kafka + ZooKeeper environment, and the broker address in my log4j.properties is correct. At this point I have no idea what is wrong and hope someone can give me a hand. The following is the whole output:
[main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
compression.type = none
metric.reporters = []
metadata.max.age.ms = 300000
metadata.fetch.timeout.ms = 60000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [localhost:9092]
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
max.block.ms = 60000
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.truststore.password = null
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
client.id =
ssl.endpoint.identification.algorithm = null
ssl.protocol = TLS
request.timeout.ms = 30000
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
acks = 1
batch.size = 16384
ssl.keystore.location = null
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
retries = 0
max.request.size = 1048576
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
send.buffer.bytes = 131072
linger.ms = 0
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bufferpool-wait-time
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name buffer-exhausted-records
[main] DEBUG org.apache.kafka.clients.Metadata - Updated cluster metadata version 1 to Cluster(nodes = [Node(-1, localhost, 9092)], partitions = [])
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:client-id-producer-1
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name batch-size
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name compression-rate
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name queue-time
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name request-time
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name produce-throttle-time
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name records-per-request
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name record-retries
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name errors
[main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name record-size-max
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.9.0.0
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : fc7243c2af4b2b4a
[main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Kafka producer started
[kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.producer.internals.Sender - Starting Kafka producer I/O thread.
...
[kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.producer.KafkaProducer - Exception occurred during message send:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
Thanks

Finally, I fixed it. Here is my new log4j.properties:
log4j.rootLogger=DEBUG, Console
log4j.appender.Console=org.apache.log4j.ConsoleAppender
log4j.appender.Console.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.Console.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
log4j.category.foxgem=DEBUG, Kafka
log4j.additivity.foxgem=false
log4j.appender.Kafka=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.Kafka.topic=logTopic
log4j.appender.Kafka.brokerList=localhost:9092
log4j.appender.Kafka.layout=org.apache.log4j.EnhancedPatternLayout
log4j.appender.Kafka.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
log4j.logger.io.vertx=WARN
log4j.logger.io.netty=WARN
I also created an example on GitHub that shows how to use this appender.
The changes I made:
Remove the Kafka appender from rootLogger. In my old log4j.properties:
log4j.rootLogger=DEBUG, Console, Kafka
Add a log category for the packages whose log output should go to Kafka:
log4j.category.foxgem=DEBUG, Kafka
log4j.additivity.foxgem=false
I think the reason is: with the old rootLogger, the log output from the Kafka client itself also went to Kafka, which caused the timeout.
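As a minimal illustration of the category routing above (the class here is hypothetical), any logger whose name falls under foxgem is sent to the Kafka appender, while everything else stays on the console:

import org.apache.log4j.Logger;

public class LoggingDemo {

    // The logger name starts with "foxgem", so it matches log4j.category.foxgem
    // and is routed to the Kafka appender (topic logTopic); additivity=false
    // keeps it from also reaching the root Console appender.
    private static final Logger log = Logger.getLogger("foxgem.LoggingDemo");

    public static void main(String[] args) {
        log.info("this line ends up in the logTopic Kafka topic");
    }
}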

I had a problem like this with both log4j and logback.
When the appender level was INFO everything worked fine, but when I changed the level to DEBUG I started getting this error after some time:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
The problem was that KafkaProducer itself emits trace and debug logs and was trying to append those logs to Kafka too, so it was stuck in a loop.
Changing the org.apache.kafka package log level to INFO, or pointing its appender at a file or stdout, solved the problem.
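For example, a log4j.properties fragment along these lines breaks the loop (a sketch, not the poster's exact file; topic and broker are placeholders):

log4j.rootLogger=DEBUG, Kafka
log4j.appender.Kafka=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.Kafka.topic=logTopic
log4j.appender.Kafka.brokerList=localhost:9092
log4j.appender.Kafka.layout=org.apache.log4j.PatternLayout
log4j.appender.Kafka.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
# Keep the Kafka client's own logging at INFO and send it to the console only,
# so the appender's internal producer never feeds its DEBUG output back into Kafka.
log4j.logger.org.apache.kafka=INFO, Console
log4j.additivity.org.apache.kafka=false
log4j.appender.Console=org.apache.log4j.ConsoleAppender
log4j.appender.Console.layout=org.apache.log4j.PatternLayout
log4j.appender.Console.layout.ConversionPattern=%d [%t] %-5p %c - %m%n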

Related

Kafka Spring Boot automatically starting idempotent producer in the consumer application

We are experiencing this issue in the dev environment, which previously worked well. In the local environment the application runs without starting an idempotent producer, which is the expected behavior.
Issue: sometimes an idempotent producer starts automatically when the Spring Boot application is compiled, and the consumer then fails to consume the message produced by the actual producer.
Adding a snippet of relevant log info:
2022-07-05 15:17:54.449 WARN 7 --- [ntainer#0-0-C-1] o.s.k.l.DeadLetterPublishingRecoverer : Destination resolver returned non-existent partition consumer-topic.DLT-1, KafkaProducer will determine partition to use for this topic
2022-07-05 15:17:54.853 INFO 7 --- [ntainer#0-0-C-1] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = -1
batch.size = 16384
bootstrap.servers = [kafka server urls]
buffer.memory = 33554432
.
.
.
2022-07-05 15:17:55.047 INFO 7 --- [ntainer#0-0-C-1] o.a.k.clients.producer.KafkaProducer : [Producer clientId=producer-1] Instantiated an idempotent producer.
2022-07-05 15:17:55.347 INFO 7 --- [ntainer#0-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 3.0.1
2022-07-05 15:17:55.348 INFO 7 --- [ntainer#0-0-C-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId:
2022-07-05 15:17:57.162 INFO 7 --- [ad | producer-1] org.apache.kafka.clients.Metadata : [Producer clientId=producer-1] Cluster ID: XFGlM9HVScGD-PafRlFH7g
2022-07-05 15:17:57.169 INFO 7 --- [ad | producer-1] o.a.k.c.p.internals.TransactionManager : [Producer clientId=producer-1] ProducerId set to 6013 with epoch 0
2022-07-05 15:18:56.681 ERROR 7 --- [ntainer#0-0-C-1] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='byte[63]' to topic consumer-topic.DLT:
org.apache.kafka.common.errors.TimeoutException: Topic consumer-topic.DLT not present in metadata after 60000 ms.
2022-07-05 15:18:56.748 ERROR 7 --- [ntainer#0-0-C-1] o.s.k.l.DeadLetterPublishingRecoverer : Dead-letter publication to consumer-topic.DLTfailed for: consumer-topic-1#28
org.springframework.kafka.KafkaException: Send failed; nested exception is org.apache.kafka.common.errors.TimeoutException: Topic consumer-topic.DLT not present in metadata after 60000 ms.
at org.springframework.kafka.core.KafkaTemplate.doSend(KafkaTemplate.java:660) ~[spring-kafka-2.8.5.jar!/:2.8.5]
.
.
.
2022-07-05 15:18:56.751 ERROR 7 --- [ntainer#0-0-C-1] o.s.kafka.listener.DefaultErrorHandler : Recovery of record (consumer-topic-1#28) failed
2022-07-05 15:18:56.758 INFO 7 --- [ntainer#0-0-C-1] o.a.k.clients.consumer.KafkaConsumer : [Consumer clientId=consumer-consumer-topic-group-test-1, groupId=consumer-topic-group-test] Seeking to offset 28 for partition c-1
2022-07-05 15:18:56.761 ERROR 7 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : Error handler threw an exception
org.springframework.kafka.KafkaException: Seek to current after exception; nested exception is org.springframework.kafka.listener.ListenerExecutionFailedException: Listener failed; nested exception is org.springframework.kafka.support.serializer.DeserializationException: failed to deserialize; nested exception is java.lang.IllegalStateException: No type information in headers and no default type provided
As the logs show, the application started an idempotent producer automatically and then began throwing errors.
Context: we have two microservices. One publishes the messages and contains the producer config; the second only consumes messages and does not contain any producer config.
The YML configuration for the producer application:
kafka:
  bootstrap-servers: "kafka urls"
  properties:
    security.protocol: SASL_SSL
    sasl.mechanism: SCRAM-SHA-512
    sasl.jaas.config: "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"${KAFKA_USER}\" password=\"${KAFKA_PWD}\";"
  producer:
    key-serializer: org.apache.kafka.common.serialization.StringSerializer
    value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
    properties:
      spring.json.trusted.packages: "*"
      acks: 1
The YML configuration for the consumer application:
kafka:
  bootstrap-servers: "kafka URL, kafka url2"
  properties:
    security.protocol: SASL_SSL
    sasl.mechanism: SCRAM-SHA-512
    sasl.jaas.config: "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"${KAFKA_USER}\" password=\"${KAFKA_PWD}\";"
  consumer:
    enable-auto-commit: true
    auto-offset-reset: latest
    key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
    properties:
      spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
      spring.json.trusted.packages: "*"
consumer-topic:
  topic: consumer-topic
  group-abc:
    group-id: consumer-topic-group-abc
The Kafka bean for the default error handler:
@Bean
public CommonErrorHandler errorHandler(KafkaOperations<Object, Object> kafkaOperations) {
    return new DefaultErrorHandler(new DeadLetterPublishingRecoverer(kafkaOperations));
}
We know a temporary fix: if we delete the group id and recreate it, the application works again. But after some deployments the issue comes back, and we don't know the root cause.
Please guide.
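One hedged observation from the logs above: the send fails with "Topic consumer-topic.DLT not present in metadata", so the dead-letter topic may simply not exist and the cluster may not allow auto-creation. If that is the cause, declaring the topic in the consumer application would let Boot create it at startup; a sketch (partition and replica counts are assumptions):

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.TopicBuilder;

@Bean
public NewTopic consumerTopicDlt() {
    // Hypothetical: pre-create the dead-letter topic. The partition count should be
    // at least that of the source topic, because DeadLetterPublishingRecoverer
    // targets the same partition number by default (the log above shows it
    // resolving partition consumer-topic.DLT-1).
    return TopicBuilder.name("consumer-topic.DLT")
            .partitions(2)
            .replicas(1)
            .build();
}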

Creating Kafka topics with Java doesn't work

I'm trying to create Kafka topics using Java, but I get an Exception in thread "main" java.lang.NoSuchFieldError: DEFAULT_SSL_PRINCIPAL_MAPPING_RULES and I can't fix it.
My goal is to create a topic so that when I run my Kafka server and list my topics with bin/kafka-topics.sh --list --bootstrap-server localhost:9092, I can actually see my topic in the list.
(The command is from Kafka's official quickstart: https://kafka.apache.org/quickstart)
I looked at this question, How to create a Topic in Kafka through Java, which inspired my code, but not only does it not really help, it also seems to use deprecated classes and methods.
I tried to use what I believed to be more recent classes such as ZooKeeperClient, KafkaZkClient and AdminZkClient, and from what I understand the call adminZkClient.createTopic(topic, noOfPartitions, noOfReplication, topicConfiguration, RackAwareMode.Disabled$.MODULE$) is what throws the exception. I don't know what it is about that method that triggers it, or whether I forgot something in my application.properties file.
Here's my code:
import java.util.Properties;
import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.ZkConnection;
import org.apache.kafka.common.utils.Time;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.jpa.repository.config.EnableJpaAuditing;
import kafka.admin.AdminUtils;
import kafka.admin.RackAwareMode;
import kafka.utils.ZkUtils;
import kafka.zk.AdminZkClient;
import kafka.zk.KafkaZkClient;
import kafka.zookeeper.ZooKeeperClient;
import kafka.utils.*;

@SpringBootApplication
@EnableJpaAuditing
public class ChatApplication {

    @SuppressWarnings("deprecation")
    public static void main(String[] args) {
        try {
            String zookeeperHost = "127.0.0.1:2181";
            int sessionTimeOutInMs = 15 * 1000;
            int connectionTimeOutInMs = 10 * 1000;
            ZooKeeperClient zooKeeperClient = new ZooKeeperClient(zookeeperHost, sessionTimeOutInMs, connectionTimeOutInMs, 2, Time.SYSTEM, "BytesInPerSec", "BytesOutPerSec");
            KafkaZkClient kafkaZkClient = new KafkaZkClient(zooKeeperClient, true, Time.SYSTEM);
            String topic = "superTopic";
            int noOfPartitions = 2;
            int noOfReplication = 1;
            Properties topicConfiguration = new Properties();
            AdminZkClient adminZkClient = new AdminZkClient(kafkaZkClient);
            adminZkClient.createTopic(topic, noOfPartitions, noOfReplication, topicConfiguration, RackAwareMode.Disabled$.MODULE$);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
        // SpringApplication.run(ChatApplication.class, args);
    }
}
Here's the output I get:
06:50:06.796 [main] INFO kafka.utils.Log4jControllerRegistration$ - Registered kafka:type=kafka.Log4jController MBean
06:50:07.101 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient] Initializing a new session to 127.0.0.1:2181.
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=kunta
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=11.0.3
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Oracle Corporation
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/java-11-openjdk-amd64
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/home/robscientist/STS-WORKSPACE/poc_chat/target/classes:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-web/2.1.3.RELEASE/spring-boot-starter-web-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter/2.1.3.RELEASE/spring-boot-starter-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-logging/2.1.3.RELEASE/spring-boot-starter-logging-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar:/home/robscientist/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar:/home/robscientist/.m2/repository/org/apache/logging/log4j/log4j-to-slf4j/2.11.2/log4j-to-slf4j-2.11.2.jar:/home/robscientist/.m2/repository/org/apache/logging/log4j/log4j-api/2.11.2/log4j-api-2.11.2.jar:/home/robscientist/.m2/repository/org/slf4j/jul-to-slf4j/1.7.25/jul-to-slf4j-1.7.25.jar:/home/robscientist/.m2/repository/javax/annotation/javax.annotation-api/1.3.2/javax.annotation-api-1.3.2.jar:/home/robscientist/.m2/repository/org/yaml/snakeyaml/1.23/snakeyaml-1.23.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-json/2.1.3.RELEASE/spring-boot-starter-json-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/com/fasterxml/jackson/datatype/jackson-datatype-jsr310/2.9.8/jackson-datatype-jsr310-2.9.8.jar:/home/robscientist/.m2/repository/com/fasterxml/jackson/module/jackson-module-parameter-names/2.9.8/jackson-module-parameter-names-2.9.8.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-tomcat/2.1.3.RELEASE/spring-boot-starter-tomcat-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.16/tomcat-embed-core-9.0.16.jar:/home/robscientist/.m2/repository/org/apache/tomcat/embed/tomcat-embed-el/9.0.16/tomcat-embed-el-9.0.16.jar:/home/robscientist/.m2/repository/org/apache/tomcat/embed/tomcat-embed-websocket/9.0.16/tomcat-embed-websocket-9.0.16.jar:/home/robscientist/.m2/repository/org/hibernate/validator/hibernate-validator/6.0.14.Final/hibernate-validator-6.0.14.Final.jar:/home/robscientist/.m2/repository/javax/validation/validation-api/2.0.1.Final/validation-api-2.0.1.Final.jar:/home/robscientist/.m2/repository/org/jboss/logging/jboss-logging/3.3.2.Final/jboss-logging-3.3.2.Final.jar:/home/robscientist/.m2/repository/com/fasterxml/classmate/1.4.0/classmate-1.4.0.jar:/home/robscientist/.m2/repository/org/springframework/spring-web/5.1.5.RELEASE/spring-web-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-webmvc/5.1.5.RELEASE/spring-webmvc-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-aop/5.1.5.RELEASE/spring-aop-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-expression/5.1.5.RELEASE/spring-expression-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/io/debezium/debezium-core/0.9.5.Final/debezium-core-0.9.5.Final.jar:/home/robscientist/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.9.8/jackson-core-2.9.8.jar:/home/robscientist/.m2/repository/io/debezium/debezium-core/0.9.5.Final/debezium-core-0.9.5.Final-tests.jar:/home/robscientist/.m2/repository/org/springframework/spring-jdbc/5.1.5.RELEASE/spring-jdbc-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-beans/5.1.5.RELEASE/spring-beans-5.1.5.RELEASE.jar:/home/ro
bscientist/.m2/repository/org/springframework/spring-core/5.1.5.RELEASE/spring-core-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-jcl/5.1.5.RELEASE/spring-jcl-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-tx/5.1.5.RELEASE/spring-tx-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-websocket/2.1.3.RELEASE/spring-boot-starter-websocket-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-messaging/5.1.5.RELEASE/spring-messaging-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-websocket/5.1.5.RELEASE/spring-websocket-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-data-jpa/2.1.3.RELEASE/spring-boot-starter-data-jpa-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-aop/2.1.3.RELEASE/spring-boot-starter-aop-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/org/aspectj/aspectjweaver/1.9.2/aspectjweaver-1.9.2.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-jdbc/2.1.3.RELEASE/spring-boot-starter-jdbc-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/com/zaxxer/HikariCP/3.2.0/HikariCP-3.2.0.jar:/home/robscientist/.m2/repository/javax/transaction/javax.transaction-api/1.3/javax.transaction-api-1.3.jar:/home/robscientist/.m2/repository/javax/xml/bind/jaxb-api/2.3.1/jaxb-api-2.3.1.jar:/home/robscientist/.m2/repository/javax/activation/javax.activation-api/1.2.0/javax.activation-api-1.2.0.jar:/home/robscientist/.m2/repository/org/hibernate/hibernate-core/5.3.7.Final/hibernate-core-5.3.7.Final.jar:/home/robscientist/.m2/repository/javax/persistence/javax.persistence-api/2.2/javax.persistence-api-2.2.jar:/home/robscientist/.m2/repository/org/javassist/javassist/3.23.1-GA/javassist-3.23.1-GA.jar:/home/robscientist/.m2/repository/net/bytebuddy/byte-buddy/1.9.10/byte-buddy-1.9.10.jar:/home/robscientist/.m2/repository/antlr/antlr/2.7.7/antlr-2.7.7.jar:/home/robscientist/.m2/repository/org/jboss/jandex/2.0.5.Final/jandex-2.0.5.Final.jar:/home/robscientist/.m2/repository/org/dom4j/dom4j/2.1.1/dom4j-2.1.1.jar:/home/robscientist/.m2/repository/org/hibernate/common/hibernate-commons-annotations/5.0.4.Final/hibernate-commons-annotations-5.0.4.Final.jar:/home/robscientist/.m2/repository/org/springframework/data/spring-data-jpa/2.1.5.RELEASE/spring-data-jpa-2.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/data/spring-data-commons/2.1.5.RELEASE/spring-data-commons-2.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-orm/5.1.5.RELEASE/spring-orm-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-aspects/5.1.5.RELEASE/spring-aspects-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/com/h2database/h2/1.4.197/h2-1.4.197.jar:/home/robscientist/.m2/repository/org/webjars/stomp-websocket/2.3.3/stomp-websocket-2.3.3.jar:/home/robscientist/.m2/repository/org/webjars/bootstrap/3.3.7/bootstrap-3.3.7.jar:/home/robscientist/.m2/repository/org/webjars/jquery/1.11.1/jquery-1.11.1.jar:/home/robscientist/.m2/repository/org/springframework/kafka/spring-kafka/2.2.4.RELEASE/spring-kafka-2.2.4.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/spring-context/5.1.5.RELEASE/spring-context-5.1.5.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/retry/spring-retry/1.2.4.RELEASE/spring-retry-1.2.4.RELEASE.jar:/home/robscientist/.m2/repos
itory/org/apache/kafka/kafka-clients/2.0.1/kafka-clients-2.0.1.jar:/home/robscientist/.m2/repository/org/lz4/lz4-java/1.4.1/lz4-java-1.4.1.jar:/home/robscientist/.m2/repository/org/xerial/snappy/snappy-java/1.1.7.1/snappy-java-1.1.7.1.jar:/home/robscientist/.m2/repository/org/apache/kafka/kafka_2.12/2.2.0/kafka_2.12-2.2.0.jar:/home/robscientist/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.8/jackson-databind-2.9.8.jar:/home/robscientist/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.9.0/jackson-annotations-2.9.0.jar:/home/robscientist/.m2/repository/com/fasterxml/jackson/datatype/jackson-datatype-jdk8/2.9.8/jackson-datatype-jdk8-2.9.8.jar:/home/robscientist/.m2/repository/net/sf/jopt-simple/jopt-simple/5.0.4/jopt-simple-5.0.4.jar:/home/robscientist/.m2/repository/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/home/robscientist/.m2/repository/org/scala-lang/scala-library/2.12.8/scala-library-2.12.8.jar:/home/robscientist/.m2/repository/org/scala-lang/scala-reflect/2.12.8/scala-reflect-2.12.8.jar:/home/robscientist/.m2/repository/com/typesafe/scala-logging/scala-logging_2.12/3.9.0/scala-logging_2.12-3.9.0.jar:/home/robscientist/.m2/repository/org/slf4j/slf4j-api/1.7.25/slf4j-api-1.7.25.jar:/home/robscientist/.m2/repository/com/101tec/zkclient/0.11/zkclient-0.11.jar:/home/robscientist/.m2/repository/org/apache/zookeeper/zookeeper/3.4.13/zookeeper-3.4.13.jar:/home/robscientist/.m2/repository/org/slf4j/slf4j-log4j12/1.7.25/slf4j-log4j12-1.7.25.jar:/home/robscientist/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/home/robscientist/.m2/repository/jline/jline/0.9.94/jline-0.9.94.jar:/home/robscientist/.m2/repository/org/apache/yetus/audience-annotations/0.5.0/audience-annotations-0.5.0.jar:/home/robscientist/.m2/repository/io/netty/netty/3.10.6.Final/netty-3.10.6.Final.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-starter-reactor-netty/2.1.3.RELEASE/spring-boot-starter-reactor-netty-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/io/projectreactor/netty/reactor-netty/0.8.5.RELEASE/reactor-netty-0.8.5.RELEASE.jar:/home/robscientist/.m2/repository/io/netty/netty-codec-http/4.1.33.Final/netty-codec-http-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-common/4.1.33.Final/netty-common-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-buffer/4.1.33.Final/netty-buffer-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-transport/4.1.33.Final/netty-transport-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-resolver/4.1.33.Final/netty-resolver-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-codec/4.1.33.Final/netty-codec-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-codec-http2/4.1.33.Final/netty-codec-http2-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-handler/4.1.33.Final/netty-handler-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-handler-proxy/4.1.33.Final/netty-handler-proxy-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-codec-socks/4.1.33.Final/netty-codec-socks-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/netty/netty-transport-native-epoll/4.1.33.Final/netty-transport-native-epoll-4.1.33.Final-linux-x86_64.jar:/home/robscientist/.m2/repository/io/netty/netty-transport-native-unix-common/4.1.33.Final/netty-transport-native-unix-common-4.1.33.Final.jar:/home/robscientist/.m2/repository/io/projectreactor/reactor-core/3.2.6.RELEASE/reactor-core-3.2.6.R
ELEASE.jar:/home/robscientist/.m2/repository/org/reactivestreams/reactive-streams/1.0.2/reactive-streams-1.0.2.jar:/home/robscientist/.m2/repository/org/postgresql/postgresql/42.2.5/postgresql-42.2.5.jar:/home/robscientist/.m2/repository/org/projectlombok/lombok/1.18.6/lombok-1.18.6.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-devtools/2.0.0.RELEASE/spring-boot-devtools-2.0.0.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot/2.1.3.RELEASE/spring-boot-2.1.3.RELEASE.jar:/home/robscientist/.m2/repository/org/springframework/boot/spring-boot-autoconfigure/2.1.3.RELEASE/spring-boot-autoconfigure-2.1.3.RELEASE.jar
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
06:50:07.116 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
06:50:07.117 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=4.15.0-50-generic
06:50:07.117 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=robscientist
06:50:07.117 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/home/robscientist
06:50:07.117 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/home/robscientist/STS-WORKSPACE/poc_chat
06:50:07.118 [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=127.0.0.1:2181 sessionTimeout=15000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$#5bda8e08
06:50:07.120 [main] DEBUG org.apache.zookeeper.ClientCnxn - zookeeper.disableAutoWatchReset is false
06:50:07.129 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler.
06:50:07.131 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient] Waiting until connected.
06:50:07.138 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
06:50:07.144 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to localhost/127.0.0.1:2181, initiating session
06:50:07.145 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Session establishment request sent on localhost/127.0.0.1:2181
06:50:07.403 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1000016739e0016, negotiated timeout = 15000
06:50:07.409 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient] Received event: WatchedEvent state:SyncConnected type:None path:null
06:50:07.415 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient] Connected.
06:50:07.469 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x1000016739e0016, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 1,12 replyHeader:: 1,237,0 request:: '/brokers/ids,F response:: v{'0},s{5,5,1559484903664,1559484903664,0,5,0,0,0,1,191}
06:50:07.492 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x1000016739e0016, packet:: clientPath:/brokers/ids/0 serverPath:/brokers/ids/0 finished:false header:: 2,4 replyHeader:: 2,237,0 request:: '/brokers/ids/0,F response:: #7b226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b22504c41494e54455854223a22504c41494e54455854227d2c22656e64706f696e7473223a5b22504c41494e544558543a2f2f6b756e74613a39303932225d2c226a6d785f706f7274223a2d312c22686f7374223a226b756e7461222c2274696d657374616d70223a2231353539353332323533363131222c22706f7274223a393039322c2276657273696f6e223a347d,s{191,191,1559532253652,1559532253652,1,0,0,72057690466942976,180,0,191}
06:50:07.690 [main-SendThread(localhost:2181)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x1000016739e0016, packet:: clientPath:/brokers/topics/superTopic serverPath:/brokers/topics/superTopic finished:false header:: 3,3 replyHeader:: 3,237,-101 request:: '/brokers/topics/superTopic,F response::
Exception in thread "main" java.lang.NoSuchFieldError: DEFAULT_SSL_PRINCIPAL_MAPPING_RULES
at kafka.server.Defaults$.<init>(KafkaConfig.scala:224)
at kafka.server.Defaults$.<clinit>(KafkaConfig.scala)
at kafka.log.Defaults$.<init>(LogConfig.scala:36)
at kafka.log.Defaults$.<clinit>(LogConfig.scala)
at kafka.log.LogConfig$.<init>(LogConfig.scala:219)
at kafka.log.LogConfig$.<clinit>(LogConfig.scala)
at kafka.zk.AdminZkClient.validateTopicCreate(AdminZkClient.scala:128)
at kafka.zk.AdminZkClient.createTopicWithAssignment(AdminZkClient.scala:86)
at kafka.zk.AdminZkClient.createTopic(AdminZkClient.scala:56)
at com.predisurge.kafka.KafkaTopicCreator.createTopic(KafkaTopicCreator.java:41)
at com.predisurge.ChatApplication.main(ChatApplication.java:32)
After running the command to list my topics, I don't see the one I wanted to add.
IDE: STS 4,
Kafka version: 2.2.0,
ZK version: 3.5.5
Not sure why you get this exception. Here's a solution that works for me and just uses the KafkaAdminClient:
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.CreateTopicsResult;
import org.apache.kafka.clients.admin.KafkaAdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.jpa.repository.config.EnableJpaAuditing;

@SpringBootApplication
@EnableJpaAuditing
public class TopicCreator {

    public static void main(String[] args) throws InterruptedException, ExecutionException {
        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        AdminClient kafkaAdminClient = KafkaAdminClient.create(properties);
        CreateTopicsResult result = kafkaAdminClient.createTopics(
                Stream.of("foo", "bar", "baz")
                        .map(name -> new NewTopic(name, 3, (short) 1))
                        .collect(Collectors.toList()));
        // Block until the topics are actually created (or the call fails).
        result.all().get();
    }
}
As the above code shows, for newer versions of Kafka you can create topics directly through Kafka. Does this help?
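One hedged observation about the original error: the classpath in the question contains kafka-clients-2.0.1.jar next to kafka_2.12-2.2.0.jar, and a NoSuchFieldError thrown during class initialization is the classic symptom of such a version mismatch. A sketch of aligning the two in the project's pom.xml (versions taken from the question; that this is the actual cause is an assumption):

<!-- Hypothetical fix: keep the client library and the kafka_2.12 server-side
     artifact on the same version, so KafkaConfig finds the fields it expects. -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.2.0</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.12</artifactId>
    <version>2.2.0</version>
</dependency>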

Spring-Kafka consumer doesn't receive messages

I don't know what's going on: my Java client consumer annotated with @KafkaListener doesn't receive any messages. When I create a consumer via the command line it works, and the producer also works as expected (also in Java). Could someone help me understand this behavior?
application.yml
kafka:
  bootstrap-servers: localhost:9092
  topic: my-topic
Producer config:
@Configuration
public class KafkaProducerConfig {

    @Value("${kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(configProps);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
Consumer config:
@EnableKafka
@Configuration
class KafkaConsumerConfig {

    @Value("${kafka.bootstrap-servers}")
    String bootstrapServers;

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        return factory;
    }
}
Producer & Consumer:
@Service
class Producer {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @Value("${kafka.topic}")
    String kafkaTopic;

    public void send(String payload) {
        System.out.println("sending " + payload + " to " + kafkaTopic);
        kafkaTemplate.send(kafkaTopic, payload);
    }
}

@Service
public class Consumer {

    @KafkaListener(topics = "${kafka.topic}")
    public void receive(String payload) {
        System.out.println(payload + " aaaaaaaaaaaaaaaaaaaaaaaaaaa");
    }
}
Spring controller:
@RestController
@RequestMapping(value = "/kafka")
class WebRestController {

    @Autowired
    Producer producer;

    @GetMapping(value = "/producer")
    public String producer(String data) {
        producer.send(data);
        return "Done";
    }
}
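For reference, hitting the endpoint with something like the following (the app runs on port 8080, per the startup log below) is what produces the "sending Hello to my-topic" line:

curl "http://localhost:8080/kafka/producer?data=Hello"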
This is my console output. As you can see, it sends a message but the listener method doesn't receive anything. It works if I don't use spring-kafka, just the pure Kafka API. It also works when I attach a consumer on the command line: I see the messages sent by the Java producer.
2018-04-03 13:43:41.688 INFO 8068 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [localhost:9092]
check.crcs = true
client.id =
connections.max.idle.ms = 540000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id =
heartbeat.interval.ms = 3000
interceptor.classes = null
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
2018-04-03 13:43:41.743 INFO 8068 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version : 0.11.0.0
2018-04-03 13:43:41.743 INFO 8068 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : cb8625948210849f
2018-04-03 13:43:41.774 INFO 8068 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2018-04-03 13:43:41.777 INFO 8068 --- [ main] kafka.KafkaExample : Started KafkaExample in 3.653 seconds (JVM running for 4.195)
2018-04-03 13:43:47.245 INFO 8068 --- [nio-8080-exec-3] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring FrameworkServlet 'dispatcherServlet'
2018-04-03 13:43:47.245 INFO 8068 --- [nio-8080-exec-3] o.s.web.servlet.DispatcherServlet : FrameworkServlet 'dispatcherServlet': initialization started
2018-04-03 13:43:47.264 INFO 8068 --- [nio-8080-exec-3] o.s.web.servlet.DispatcherServlet : FrameworkServlet 'dispatcherServlet': initialization completed in 19 ms
sending Hello to my-topic
2018-04-03 13:43:47.300 INFO 8068 --- [nio-8080-exec-3] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [localhost:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
2018-04-03 13:43:47.315 INFO 8068 --- [nio-8080-exec-3] o.a.kafka.common.utils.AppInfoParser : Kafka version : 0.11.0.0
2018-04-03 13:43:47.315 INFO 8068 --- [nio-8080-exec-3] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : cb8625948210849f
EDIT:
kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 13 --topic my-topic
This is my server.properties file:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma seperated list of directories under which to store log files
log.dirs=/tmp/kafka-logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended for to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to exceessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
############################# Group Coordinator Settings #############################
# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
group.id =
You need a group.id for the consumer.
Set it in the consumer factory properties.
BTW, when using boot, you don't need a consumer factory bean or container factory bean, you can use boot properties for that.
Logging can be enabled with logging.level... in the properties/yaml.
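A minimal sketch of that fix (the group id value here is arbitrary), either directly in the consumer factory properties:

props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");

or, relying on Boot auto-configuration instead of the factory beans, in application.yml:

spring:
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      group-id: my-group
      auto-offset-reset: earliest
logging:
  level:
    org.springframework.kafka: DEBUG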

Consumer records iterating continuously even when no data is arriving

When I run my Kafka code, the consumer loop keeps iterating continuously even when the producer is not sending new records.
It loops like this:
[ Topic : 123456, Offset Id : 6, Partition : 0]
Consumed at :Wed Aug 17 10:59:24 IST 2016
10:59:24.359 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Committing offsets: {123456-0=OffsetAndMetadata{offset=7, metadata=''}}
10:59:24.361 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group HYDTS001 committed offset 7 for partition 123456-0
10:59:24.460 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Committing offsets: {123456-0=OffsetAndMetadata{offset=7, metadata=''}}
10:59:24.465 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group HYDTS001 committed offset 7 for partition 123456-0
10:59:24.561 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Committing offsets: {123456-0=OffsetAndMetadata{offset=7, metadata=''}}
10:59:24.568 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group HYDTS001 committed offset 7 for partition 123456-0
10:59:24.662 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Committing offsets: {123456-0=OffsetAndMetadata{offset=7, metadata=''}}
10:59:24.665 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group HYDTS001 committed offset 7 for partition 123456-0
10:59:24.763 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - Committing offsets: {123456-0=OffsetAndMetadata{offset=7, metadata=''}}
10:59:24.767 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group HYDTS001 committed offset 7 for partition 123456-0
How can I trigger my consumer only when data arrives on the topic?
Is that possible?
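For what it's worth, the repeating "Committing offsets" lines in the log are just auto-commit firing at its interval, not records being re-read; with the plain client the standard pattern is a blocking poll loop that returns an empty batch when nothing new has arrived, so you only act when records are present. A minimal sketch (group and topic names taken from the log; the Duration overload of poll() needs kafka-clients 2.0+, older clients use poll(long)):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "HYDTS001");
        props.put("enable.auto.commit", "true");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("123456"));
            while (true) {
                // poll() blocks for up to the timeout and returns an empty batch
                // when the topic has no new data; the loop itself always spins.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println("Offset Id : " + record.offset() + ", Partition : " + record.partition());
                }
            }
        }
    }
}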

Spring Integration Kafka adapter not producing messages

I have been struggling with this for days now.
I am using the SI adapter for Kafka under a Spring Boot container.
I have configured ZooKeeper and Kafka on my machine.
I also created a console producer and consumer and tested them, and everything works fine (I manage to produce messages via the console and have the console consumer consume them).
Now I tried to produce messages via the Spring Integration Kafka outbound adapter, but the console consumer won't consume the message.
SI/Spring XD XML:
<int:publish-subscribe-channel id="inputToKafka"/>

<int-kafka:outbound-channel-adapter id="kafkaOutboundChannelAdapter"
        kafka-producer-context-ref="kafkaProducerContext"
        auto-startup="true"
        order="1"
        channel="inputToKafka">
</int-kafka:outbound-channel-adapter>

<int-kafka:producer-context id="kafkaProducerContext">
    <int-kafka:producer-configurations>
        <int-kafka:producer-configuration broker-list="localhost:9092"
                async="true"
                topic="zerg.hydra"
                compression-codec="default"/>
    </int-kafka:producer-configurations>
</int-kafka:producer-context>

<task:executor id="taskExecutor" pool-size="5" keep-alive="120" queue-capacity="500"/>
</beans>
Java:
@Named
public class KafkaProducer {

    @Autowired
    @Qualifier("inputToKafka")
    MessageChannel inputToKafka;

    public void sendMessageToKafka(String message) {
        inputToKafka.send(
                MessageBuilder.withPayload(message)
                        .setHeader("messageKey", "3")
                        .setHeader("topic", "zerg.hydra")
                        .build());
    }
}
This is how I run the Kafka console consumer:
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic zerg.hydra --from-beginning
logs:
Testing started at 12:49 PM ...
12:49:54 PM: Executing external tasks 'cleanTest test'...
:cleanTest
:compileJava
:processResources UP-TO-DATE
:classes
:compileTestJava
:processTestResources UP-TO-DATE
:testClasses
:test
12:50:07,165 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
12:50:07,166 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
12:50:07,166 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/Users/idan/dev/Projects/CalcMicroService/build/resources/test/logback.xml]
12:50:07,167 |-WARN in ch.qos.logback.classic.LoggerContext[default] - Resource [logback.xml] occurs multiple times on the classpath.
12:50:07,167 |-WARN in ch.qos.logback.classic.LoggerContext[default] - Resource [logback.xml] occurs at [file:/Users/idan/dev/Projects/CalcMicroService/build/resources/test/logback.xml]
12:50:07,167 |-WARN in ch.qos.logback.classic.LoggerContext[default] - Resource [logback.xml] occurs at [file:/Users/idan/dev/Projects/CalcMicroService/build/resources/main/logback.xml]
12:50:07,247 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - debug attribute not set
12:50:07,250 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
12:50:07,259 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [stdout]
12:50:07,281 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
12:50:07,327 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [reactor] to INFO
12:50:07,327 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.projectreactor] to INFO
12:50:07,328 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.springframework] to WARN
12:50:07,328 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.springframework.integration] to DEBUG
12:50:07,328 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to INFO
12:50:07,328 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [stdout] to Logger[ROOT]
12:50:07,331 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
12:50:07,332 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator#3de433b4 - Registering current configuration as safe fallback point
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v1.1.9.RELEASE)
12:50:09.530 [Test worker] INFO o.s.i.config.IntegrationRegistrar - No bean named 'integrationHeaderChannelRegistry' has been explicitly defined. Therefore, a default DefaultHeaderChannelRegistry will be created.
12:50:09.544 [Test worker] DEBUG o.s.i.config.IntegrationRegistrar - SpEL function '#xpath' isn't registered: there is no spring-integration-xml.jar on the classpath.
12:50:10.717 [Test worker] INFO o.s.i.c.DefaultConfiguringBeanFactoryPostProcessor - No bean named 'errorChannel' has been explicitly defined. Therefore, a default PublishSubscribeChannel will be created.
12:50:10.719 [Test worker] INFO o.s.i.c.DefaultConfiguringBeanFactoryPostProcessor - No bean named 'taskScheduler' has been explicitly defined. Therefore, a default ThreadPoolTaskScheduler will be created.
12:50:10.973 DEBUG [Test worker][org.jboss.logging] Logging Provider: org.jboss.logging.Log4jLoggerProvider
12:50:10.974 INFO [Test worker][org.hibernate.validator.internal.util.Version] HV000001: Hibernate Validator 5.0.3.Final
12:50:10.991 DEBUG [Test worker][org.hibernate.validator.internal.engine.resolver.DefaultTraversableResolver] Cannot find javax.persistence.Persistence on classpath. Assuming non JPA 2 environment. All properties will per default be traversable.
12:50:11.006 DEBUG [Test worker][org.hibernate.validator.internal.engine.ConfigurationImpl] Setting custom MessageInterpolator of type org.springframework.validation.beanvalidation.LocaleContextMessageInterpolator
12:50:11.011 DEBUG [Test worker][org.hibernate.validator.internal.engine.ConfigurationImpl] Setting custom ConstraintValidatorFactory of type org.springframework.validation.beanvalidation.SpringConstraintValidatorFactory
12:50:11.018 DEBUG [Test worker][org.hibernate.validator.internal.engine.ConfigurationImpl] Setting custom ParameterNameProvider of type com.sun.proxy.$Proxy46
12:50:11.024 DEBUG [Test worker][org.hibernate.validator.internal.xml.ValidationXmlParser] Trying to load META-INF/validation.xml for XML based Validator configuration.
12:50:11.032 DEBUG [Test worker][org.hibernate.validator.internal.xml.ValidationXmlParser] No META-INF/validation.xml found. Using annotation based configuration only.
12:50:12.089 [Test worker] INFO o.a.catalina.core.StandardService - Starting service Tomcat
12:50:12.091 [Test worker] INFO o.a.catalina.core.StandardEngine - Starting Servlet Engine: Apache Tomcat/7.0.56
12:50:14.162 [localhost-startStop-1] INFO o.a.c.c.C.[Tomcat].[localhost].[/] - Initializing Spring embedded WebApplicationContext
12:50:16.567 [Test worker] INFO o.s.i.k.support.ProducerFactoryBean - Using producer properties => {metadata.broker.list=localhost:9092, compression.codec=0, producer.type=async}
12:50:17.036 INFO [Test worker][kafka.utils.VerifiableProperties] Verifying properties
12:50:17.096 INFO [Test worker][kafka.utils.VerifiableProperties] Property compression.codec is overridden to 0
12:50:17.096 INFO [Test worker][kafka.utils.VerifiableProperties] Property metadata.broker.list is overridden to localhost:9092
12:50:17.096 INFO [Test worker][kafka.utils.VerifiableProperties] Property producer.type is overridden to async
12:50:17.591 DEBUG [Test worker][org.hibernate.validator.internal.engine.resolver.DefaultTraversableResolver] Cannot find javax.persistence.Persistence on classpath. Assuming non JPA 2 environment. All properties will per default be traversable.
12:50:17.591 DEBUG [Test worker][org.hibernate.validator.internal.engine.ConfigurationImpl] Setting custom MessageInterpolator of type org.springframework.validation.beanvalidation.LocaleContextMessageInterpolator
12:50:17.591 DEBUG [Test worker][org.hibernate.validator.internal.engine.ConfigurationImpl] Setting custom ConstraintValidatorFactory of type org.springframework.validation.beanvalidation.SpringConstraintValidatorFactory
12:50:17.591 DEBUG [Test worker][org.hibernate.validator.internal.engine.ConfigurationImpl] Setting custom ParameterNameProvider of type com.sun.proxy.$Proxy46
12:50:17.592 DEBUG [Test worker][org.hibernate.validator.internal.xml.ValidationXmlParser] Trying to load META-INF/validation.xml for XML based Validator configuration.
12:50:17.592 DEBUG [Test worker][org.hibernate.validator.internal.xml.ValidationXmlParser] No META-INF/validation.xml found. Using annotation based configuration only.
12:50:18.967 [Test worker] DEBUG o.s.i.c.GlobalChannelInterceptorProcessor - No global channel interceptors.
12:50:18.978 [Test worker] INFO o.s.i.endpoint.EventDrivenConsumer - Adding {mongo:outbound-channel-adapter:mongoAdapter.adapter} as a subscriber to the 'mongoAdapter' channel
12:50:18.979 [Test worker] INFO o.s.i.channel.DirectChannel - Channel 'application:8091.mongoAdapter' has 1 subscriber(s).
12:50:18.979 [Test worker] INFO o.s.i.endpoint.EventDrivenConsumer - started mongoAdapter.adapter
12:50:18.979 [Test worker] INFO o.s.i.endpoint.EventDrivenConsumer - Adding {mongo:outbound-channel-adapter:adapterWithConverter.adapter} as a subscriber to the 'adapterWithConverter' channel
12:50:18.979 [Test worker] INFO o.s.i.channel.DirectChannel - Channel 'application:8091.adapterWithConverter' has 1 subscriber(s).
12:50:18.979 [Test worker] INFO o.s.i.endpoint.EventDrivenConsumer - started adapterWithConverter.adapter
12:50:18.979 [Test worker] INFO o.s.i.endpoint.EventDrivenConsumer - Adding {message-handler:kafkaOutboundChannelAdapter} as a subscriber to the 'inputToKafka' channel
12:50:18.979 [Test worker] INFO o.s.i.c.PublishSubscribeChannel - Channel 'application:8091.inputToKafka' has 1 subscriber(s).
12:50:18.979 [Test worker] INFO o.s.i.endpoint.EventDrivenConsumer - started kafkaOutboundChannelAdapter
12:50:18.979 [Test worker] INFO o.s.i.endpoint.EventDrivenConsumer - Adding {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
12:50:18.979 [Test worker] INFO o.s.i.c.PublishSubscribeChannel - Channel 'application:8091.errorChannel' has 1 subscriber(s).
12:50:18.979 [Test worker] INFO o.s.i.endpoint.EventDrivenConsumer - started _org.springframework.integration.errorLogger
12:50:19.026 [Test worker] INFO o.a.coyote.http11.Http11NioProtocol - Initializing ProtocolHandler ["http-nio-8091"]
12:50:19.040 [Test worker] INFO o.a.coyote.http11.Http11NioProtocol - Starting ProtocolHandler ["http-nio-8091"]
12:50:19.047 [Test worker] INFO o.a.tomcat.util.net.NioSelectorPool - Using a shared selector for servlet write/read
12:50:19.338 [Test worker] DEBUG o.s.i.c.PublishSubscribeChannel - preSend on channel 'inputToKafka', message: [Payload String content=Hello Kafka From SI][Headers={messageKey=3, topic=zerg.hydra, id=be46fb68-c762-f16e-6ccb-6841ef3fe868, timestamp=1416999019338}]
12:50:19.338 [Test worker] DEBUG o.s.i.k.o.KafkaProducerMessageHandler - org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler#0 received message: [Payload String content=Hello Kafka From SI][Headers={messageKey=3, topic=zerg.hydra, id=be46fb68-c762-f16e-6ccb-6841ef3fe868, timestamp=1416999019338}]
12:50:19.362 [Test worker] DEBUG o.s.i.c.PublishSubscribeChannel - postSend (sent=true) on channel 'inputToKafka', message: [Payload String content=Hello Kafka From SI][Headers={messageKey=3, topic=zerg.hydra, id=be46fb68-c762-f16e-6ccb-6841ef3fe868, timestamp=1416999019338}]
Empty test suite.
12:50:19.402 [Thread-4] INFO o.s.i.endpoint.EventDrivenConsumer - Removing {mongo:outbound-channel-adapter:mongoAdapter.adapter} as a subscriber to the 'mongoAdapter' channel
12:50:19.417 [Thread-4] INFO o.s.i.channel.DirectChannel - Channel 'application:8091.mongoAdapter' has 0 subscriber(s).
12:50:19.419 [Thread-4] INFO o.s.i.endpoint.EventDrivenConsumer - stopped mongoAdapter.adapter
12:50:19.420 [Thread-4] INFO o.s.i.endpoint.EventDrivenConsumer - Removing {mongo:outbound-channel-adapter:adapterWithConverter.adapter} as a subscriber to the 'adapterWithConverter' channel
12:50:19.422 [Thread-4] INFO o.s.i.channel.DirectChannel - Channel 'application:8091.adapterWithConverter' has 0 subscriber(s).
12:50:19.422 [Thread-4] INFO o.s.i.endpoint.EventDrivenConsumer - stopped adapterWithConverter.adapter
12:50:19.423 [Thread-4] INFO o.s.i.endpoint.EventDrivenConsumer - Removing {message-handler:kafkaOutboundChannelAdapter} as a subscriber to the 'inputToKafka' channel
12:50:19.424 [Thread-4] INFO o.s.i.c.PublishSubscribeChannel - Channel 'application:8091.inputToKafka' has 0 subscriber(s).
12:50:19.424 [Thread-4] INFO o.s.i.endpoint.EventDrivenConsumer - stopped kafkaOutboundChannelAdapter
12:50:19.425 [Thread-4] INFO o.s.i.endpoint.EventDrivenConsumer - Removing {logging-channel-adapter:_org.springframework.integration.errorLogger} as a subscriber to the 'errorChannel' channel
12:50:19.426 [Thread-4] INFO o.s.i.c.PublishSubscribeChannel - Channel 'application:8091.errorChannel' has 0 subscriber(s).
12:50:19.426 [Thread-4] INFO o.s.i.endpoint.EventDrivenConsumer - stopped _org.springframework.integration.errorLogger
BUILD SUCCESSFUL
Total time: 25.469 secs
12:50:20 PM: External tasks execution finished 'cleanTest test'.
In the same app I tried the official Kafka client producer directly, and it worked just fine:
import java.util.Properties;

import javax.annotation.PostConstruct;
import javax.inject.Named;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

@Named
public class KafkaProducerJava {

    private ProducerConfig config;
    private Producer<String, String> producer;
    private Properties props;

    @PostConstruct
    public void init() {
        props = new Properties();
        props.put("metadata.broker.list", "localhost:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // props.put("partitioner.class", "example.producer.SimplePartitioner");
        props.put("request.required.acks", "1");
    }

    public void sendMsgToKafka(String msg) {
        config = new ProducerConfig(props);
        producer = new Producer<String, String>(config);
        // Empty string as the key; topic is "test"
        KeyedMessage<String, String> data = new KeyedMessage<String, String>("test", "", msg);
        producer.send(data);
        producer.close();
    }
}
Any idea why the message never reaches my consumer via the Spring Integration Kafka adapter?
I just ran it in XD and it worked fine for me...
$ bin/xd-shell
[Spring XD ASCII-art banner]
eXtreme Data
1.1.0.BUILD-SNAPSHOT | Admin Server Target: http://localhost:9393
Welcome to the Spring XD shell. For assistance hit TAB or type "help".
xd:>stream create --name foo --definition "time | kafka --topic=test" --deploy
Created and deployed new stream 'foo'
xd:>stream destroy foo
Destroyed stream 'foo'
xd:>
$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
2014-11-26 10:03:09
2014-11-26 10:03:10
2014-11-26 10:03:11
2014-11-26 10:03:12
2014-11-26 10:03:13
I also wrote this test case...
import java.util.Collections;

import kafka.serializer.Encoder;
import kafka.serializer.StringEncoder;

import org.junit.Test;
import org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler;
import org.springframework.integration.kafka.support.KafkaProducerContext;
import org.springframework.integration.kafka.support.ProducerConfiguration;
import org.springframework.integration.kafka.support.ProducerFactoryBean;
import org.springframework.integration.kafka.support.ProducerMetadata;
import org.springframework.integration.support.MessageBuilder;

public class OutboundTests {

    @Test
    public void test() throws Exception {
        KafkaProducerContext<String, String> kafkaProducerContext = new KafkaProducerContext<String, String>();
        ProducerMetadata<String, String> producerMetadata = new ProducerMetadata<String, String>("test");
        producerMetadata.setValueClassType(String.class);
        producerMetadata.setKeyClassType(String.class);
        Encoder<String> encoder = new StringEncoder<String>();
        producerMetadata.setValueEncoder(encoder);
        producerMetadata.setKeyEncoder(encoder);
        ProducerFactoryBean<String, String> producer =
                new ProducerFactoryBean<String, String>(producerMetadata, "localhost:9092");
        ProducerConfiguration<String, String> config =
                new ProducerConfiguration<String, String>(producerMetadata, producer.getObject());
        kafkaProducerContext.setProducerConfigurations(Collections.singletonMap("test", config));
        KafkaProducerMessageHandler<String, String> handler =
                new KafkaProducerMessageHandler<String, String>(kafkaProducerContext);
        handler.handleMessage(MessageBuilder.withPayload("foo")
                .setHeader("messagekey", "3")
                .setHeader("topic", "test")
                .build());
    }
}
...to simulate what you are doing and that worked too. Are you seeing any logged exceptions? I see you are not setting the key/value types and encoders.
EDIT:
As discussed in the comments, the issue is that you are using an async producer. In your test example that "works", you are closing the producer, which flushes the queue. By default, the async queue is only flushed every 5 seconds (queue.buffering.max.ms), and your test case does not wait long enough.
I updated my test to include an XML-configured version, reduced queue.buffering.max.ms to 500ms, and added code to the test to wait a couple of seconds before terminating.
See the new commit for details.
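Not the committed code, but a minimal sketch of the same idea against the raw 0.8 producer API used earlier in this question; producer.type and queue.buffering.max.ms are the standard 0.8 producer settings, and the sleep stands in for the wait added to the test:
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class AsyncFlushSketch {

    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("producer.type", "async");
        // Flush the async queue after 500ms instead of the 5s default,
        // so short-lived tests see their messages delivered.
        props.put("queue.buffering.max.ms", "500");

        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
        producer.send(new KeyedMessage<String, String>("test", "3", "foo"));

        // Give the async queue a couple of seconds to flush before the
        // JVM exits; closing the producer also flushes anything queued.
        Thread.sleep(2000);
        producer.close();
    }
}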
I had been facing the same issue since this morning and finally found the solution: prefix the topic header with kafka_, as given below
.setHeader("kafka_topic", "zerg.hydra")
This is an issue with the API: internally, when looking up the key in the message headers, kafka_ is prefixed to the header name.
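For example, a minimal sketch adapting the handleMessage() call from the test case above; kafka_messageKey is assumed here as the prefixed form of the key header, mirroring kafka_topic:
handler.handleMessage(MessageBuilder.withPayload("Hello Kafka From SI")
        .setHeader("kafka_messageKey", "3")      // instead of "messagekey"
        .setHeader("kafka_topic", "zerg.hydra")  // instead of "topic"
        .build());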
