Kafka producer giving TimeoutException - java

I have Kafka running on a remote server, and I am using the Spring framework (Java) to produce and consume messages. For testing on my local machine, I am producing just one event. Here is simplified code showing how I produce messages:
import org.springframework.kafka.core.KafkaTemplate;
...
@Autowired
KafkaTemplate<String, String> kafkaTemplate;
...
kafkaTemplate.send("sampletopic", "1234").get();
...
Here the payload is just a user-id string. When I execute the send call, I get the following error:
kafka.. error: java.util.concurrent.ExecutionException: org.springframework.kafka.core.KafkaProducerException: Failed to send; nested exception is org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for sampletopic-0: 30028 ms has passed since batch creation plus linger time
Here are the relevant logs I get before getting the error:
[http-nio-8080-exec-3] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
acks = 1
batch.size = 16384
block.on.buffer.full = false
bootstrap.servers = [41.204.196.251:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.fetch.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
timeout.ms = 30000
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
[http-nio-8080-exec-3] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.10.2.0
[http-nio-8080-exec-3] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : 576d93a8dc0cf421

Related

Logback messages not sent to Kafka topic after batch job execution is finished

I am doing a POC where I have to send a batch job's execution summary to Kafka via Logback.
During the processing of the batch job I create a summary object, and when processing is done for all the records I send the summary to Kafka. But the logs are not sent to the topic.
With the same Logback configuration, if I send the logs during the processing of the batch job, they are sent to the topic successfully.
There is no error or timeout in the logs, so the cause of the issue is not clear.
Batch duration : 5-7 min
Configuration logs:
SLF4J: A number (135) of logging calls during the initialization phase have been intercepted and are
SLF4J: now being replayed. These are subject to the filtering rules of the underlying logging system.
SLF4J: See also http://www.slf4j.org/codes.html#replay
2022-09-02 14:53:33,403 INFO [main] o.a.k.c.p.ProducerConfig [NativeMethodAccessorImpl.java:-2] ProducerConfig values:
acks = -1
batch.size = 16384
bootstrap.servers = [xxx]
buffer.memory = 33554432
client.dns.lookup = use_all_dns_ips
client.id = IndividualDataProducer
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = true
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 7206000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = kafka
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism = GSSAPI
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
security.protocol = SASL_PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
socket.connection.setup.timeout.max.ms = 7206000
socket.connection.setup.timeout.ms = 720600
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer

How to disable Kafka internal log?

I want to disable Kafka's internal logging, which I did not write.
Below is my basic Kafka producer code.
package me.sclee.kafka.basic.producer;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.Properties;
import java.util.UUID;

import static me.sclee.kafka.basic.config.BasicKafkaConfig.*;

public class BasicKafkaProducer {

    private static final Logger logger = LoggerFactory.getLogger(BasicKafkaProducer.class);

    private static KafkaProducer<String, String> getKafkaProducer() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS);
        props.put(ProducerConfig.ACKS_CONFIG, ACK);
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, COMPRESSION_TYPE);
        props.put(ProducerConfig.RETRIES_CONFIG, RETIRES);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, BATCH_SIZE);
        props.put(ProducerConfig.LINGER_MS_CONFIG, LINGER_MS);
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, BUFFER_MEMORY);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, KEY_SER);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, VALUE_SER);
        return new KafkaProducer<>(props);
    }

    public void send() throws Exception {
        logger.info("Sending kafka message..");
        KafkaProducer<String, String> producer = null;
        int line = 3;
        try {
            for (int n = 0; n < line; n++) {
                // Note: a new producer is created on every iteration, which is
                // why the ProducerConfig block below is logged three times.
                producer = getKafkaProducer();
                String uuid = UUID.randomUUID().toString();
                ProducerRecord<String, String> record = new ProducerRecord<>(TOPIC_NAME, uuid);
                producer.send(record, new ProducerCallBack(n));
                Thread.sleep(1000);
            }
        } catch (Exception e) {
            logger.error("There was a problem while sending a message in producer: {}", e.getMessage());
            throw e;
        } finally {
            if (producer != null) {
                producer.flush();
                producer.close();
            }
        }
        logger.info("Exited the Kafka sending..");
    }

    public static void main(String[] args) throws Exception {
        BasicKafkaProducer basicKafkaProducer = new BasicKafkaProducer();
        basicKafkaProducer.send();
    }
}
As you can see, I put some log lines in for my own tracing, but when I executed the code I saw many other logs generated by Kafka's internal logging, as shown below.
I want to see my own log lines for better understanding.
How can I achieve this?
I used the following SLF4J libs:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>3.0.0</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>2.0.0-alpha1</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>2.0.0-alpha1</version>
</dependency>
Console output:
batch.size = 16384
bootstrap.servers = [localhost:9092]
buffer.memory = 33554432
client.dns.lookup = use_all_dns_ips
client.id = producer-1
compression.type = snappy
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = true
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 5
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 1
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
[Timer-0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.0.0
[Timer-0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 8cb0a5e9d3441962
[Timer-0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1642705323290
[kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.Metadata - [Producer clientId=producer-1] Cluster ID: oXvU2NlRS8m6jKC3Zh1FcA
[kafka-producer-network-thread | producer-1] INFO me.sclee.kafka.basic.producer.ProducerCallBack - Producer sends a message. Topic : 1642705323550, partition : 0, offset : 139, line : 0
[Timer-0] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
(same values as the first ProducerConfig block above, except client.id = producer-2)
[Timer-0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.0.0
[Timer-0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 8cb0a5e9d3441962
[Timer-0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1642705324831
[kafka-producer-network-thread | producer-2] INFO org.apache.kafka.clients.Metadata - [Producer clientId=producer-2] Cluster ID: oXvU2NlRS8m6jKC3Zh1FcA
[kafka-producer-network-thread | producer-2] INFO me.sclee.kafka.basic.producer.ProducerCallBack - Producer sends a message. Topic : 1642705324834, partition : 0, offset : 140, line : 1
[Timer-0] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
(same values as the first ProducerConfig block above, except client.id = producer-3)
[Timer-0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.0.0
[Timer-0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 8cb0a5e9d3441962
[Timer-0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1642705325844
[kafka-producer-network-thread | producer-3] INFO org.apache.kafka.clients.Metadata - [Producer clientId=producer-3] Cluster ID: oXvU2NlRS8m6jKC3Zh1FcA
[kafka-producer-network-thread | producer-3] INFO me.sclee.kafka.basic.producer.ProducerCallBack - Producer sends a message. Topic : 1642705325848, partition : 0, offset : 141, line : 2
You can create your own src/main/resources/log4j.properties file and configure whatever levels/packages/formats you wish.
For example,
log4j.logger.kafka=OFF
log4j.logger.org.apache.kafka=OFF
Defaults (for the broker and clients) are defined here: https://github.com/apache/kafka/blob/trunk/config/log4j.properties
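One caveat, based on the dependencies shown in the question: the slf4j-simple binding does not read log4j.properties; it is configured through a simplelogger.properties file on the classpath (or matching system properties) instead. A minimal sketch for that binding, assuming the off level is acceptable:

# src/main/resources/simplelogger.properties
org.slf4j.simpleLogger.defaultLogLevel=info
# silence the Kafka client loggers only, keeping application logs at INFO
org.slf4j.simpleLogger.log.org.apache.kafka=off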

Kafka Consumer reading messages only when two messages stack

We have a Kafka producer that produces some messages once in a while.
I wrote a consumer to consume these messages. The problem is that the messages are consumed only when two of them stack up. For example, if a message is produced at 13:00, the consumer does nothing; if another message is produced at 13:01, the consumer consumes both messages. In Kafka Tool, the consumer properties show a column called LAG that reads 1 while a message is not consumed.
Is there any config for this that I'm missing?
The Consumer Config:
16:43:04,472 INFO [org.apache.kafka.clients.consumer.ConsumerConfig] (http--0.0.0.0-8180-1) ConsumerConfig values:
request.timeout.ms = 180001
check.crcs = true
retry.backoff.ms = 100
ssl.truststore.password = null
ssl.keymanager.algorithm = SunX509
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.key.password = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.provider = null
sasl.kerberos.service.name = null
session.timeout.ms = 180000
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [mtxbuctra22.prod.orange.intra:9092]
client.id =
fetch.max.wait.ms = 180000
fetch.min.bytes = 1024
key.deserializer = class io.confluent.kafka.serializers.KafkaAvroDeserializer
sasl.kerberos.kinit.cmd = /usr/bin/kinit
auto.offset.reset = earliest
value.deserializer = class io.confluent.kafka.serializers.KafkaAvroDeserializer
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
ssl.endpoint.identification.algorithm = null
max.partition.fetch.bytes = 1048576
ssl.keystore.location = null
ssl.truststore.location = null
ssl.keystore.password = null
metrics.sample.window.ms = 30000
metadata.max.age.ms = 300000
security.protocol = PLAINTEXT
auto.commit.interval.ms = 1000
ssl.protocol = TLS
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.trustmanager.algorithm = PKIX
group.id = ifd_006
enable.auto.commit = true
metric.reporters = []
ssl.truststore.type = JKS
send.buffer.bytes = 131072
reconnect.backoff.ms = 50
metrics.num.samples = 2
ssl.keystore.type = JKS
heartbeat.interval.ms = 3000
16:43:04,493 INFO [io.confluent.kafka.serializers.KafkaAvroDeserializerConfig] (http--0.0.0.0-8180-1) KafkaAvroDeserializerConfig values:
max.schemas.per.subject = 1000
specific.avro.reader = true
schema.registry.url = [http://mtxbuctra22.prod.orange.intra:8081]
16:43:04,498 INFO [io.confluent.kafka.serializers.KafkaAvroDeserializerConfig] (http--0.0.0.0-8180-1) KafkaAvroDeserializerConfig values:
max.schemas.per.subject = 1000
specific.avro.reader = true
schema.registry.url = [http://mtxbuctra22.prod.orange.intra:8081]
Kafka Tool: (screenshot, showing the LAG column at 1 for the unconsumed message)
Figured it out.
The documentation for Kafka 0.9.0.1 states that fetch.min.bytes defaults to 1, but I have Kafka 0.9.0.0, where the default is 1024. The broker only answers a fetch once fetch.min.bytes are available (or fetch.max.wait.ms expires), so with these small messages the threshold was only crossed after two of them accumulated. I changed fetch.min.bytes to 1 and now it works fine.
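For reference, a minimal sketch of overriding this on the consumer; the bootstrap address and group id are taken from the logs above, and StringDeserializer stands in for the question's Avro deserializers to keep the snippet self-contained:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "mtxbuctra22.prod.orange.intra:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "ifd_006");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
// Answer fetches as soon as a single byte is available instead of
// waiting for 1024 bytes (the 0.9.0.0 default) to accumulate.
props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 1);
// Upper bound on how long the broker may hold a fetch open.
props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 500);
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);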

Sending Json Object to kafka topic in java

I want to send a JSON object to my Kafka topic, but I am facing a problem.
I use a POJO with a single instance variable, fileName, where I set the file name before sending the object to the Kafka topic.
KafkaJsonSend objSend = new KafkaJsonSend();
objSend.setFileName(filename);

// Configure the producer
Properties configProperties = new Properties();
configProperties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "10.10.51.10:9092");
configProperties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
configProperties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
Producer<String, JsonNode> producer = new KafkaProducer<String, JsonNode>(configProperties);

// Jackson ObjectMapper converts the POJO to a JsonNode for the JsonSerializer
ObjectMapper objectMapper = new ObjectMapper();
JsonNode jsonNode = objectMapper.valueToTree(objSend);
ProducerRecord<String, JsonNode> rec =
        new ProducerRecord<String, JsonNode>("BlueShifts", jsonNode);
producer.send(rec);
producer.close();
But when I run this code, I get an exception that is continuously logged to my console:
Error: Uncaught error in kafka producer I/O thread
IllegalStateException: No entry found for connection 0
I have also tried Spring Kafka, but then I got this in the console:
2019-02-04 16:43:13.938 INFO 4432 --- [nio-6020-exec-1] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
block.on.buffer.full = false
bootstrap.servers = [10.0.2.15:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.fetch.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
timeout.ms = 30000
value.serializer = class org.springframework.kafka.support.serializer.JsonSerializer
2019-02-04 16:43:14.158 INFO 4432 --- [nio-6020-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka version : 0.10.2.0
2019-02-04 16:43:14.160 INFO 4432 --- [nio-6020-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : 576d93a8dc0cf421
2019-02-04 16:44:14.206 ERROR 4432 --- [nio-6020-exec-1] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='KafkaJsonSend [fileName=a0a7caf336e8481fb6db2de70d39029e_1549278789987.mp3]' to topic BlueShifts:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
It worked: I added "10.10.51.10 kafka" to the hosts file at C:\Windows\System32\drivers\etc and it worked in both ways. Can you tell me why it is mapped to the name "kafka", and why it couldn't be found by IP?
You need to configure your Kafka brokers correctly for your network: https://rmoff.net/2018/08/02/kafka-listeners-explained/
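To the follow-up question: clients only use the bootstrap address for the first metadata request, and afterwards connect to whatever host each broker advertises in that metadata. Your hosts-file fix working suggests this broker advertises the hostname kafka, which your machine could not resolve on its own. A hedged broker-side sketch (the listener addresses are illustrative for this setup):

# server.properties on the broker
listeners=PLAINTEXT://0.0.0.0:9092
# The address handed back to clients in metadata; it must be
# resolvable and reachable from every client machine.
advertised.listeners=PLAINTEXT://10.10.51.10:9092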

Camel-Kafka java.io.EOFException - NetworkReceive.readFromReadableChannel

I am trying to produce messages to Kafka (Cloudera) from an ActiveMQ-Camel bridge using Kerberos.
ActiveMQ v5.15.4
Camel: 2.21.1
Kafka Clients:1.1.0
Server version: Apache/2.4.6 (CentOS)
The camel.xml snippet is:
<log message="Started The Producer Route" />
<to uri="kafka://10.100.70.00:9092?topic=MyEvents.s1.v1&brokers=10.100.70.00:9092&requestTimeoutMs=305000&retries=3&keySerializerClass=org.apache.kafka.common.serialization.ByteArraySerializer&saslMechanism=GSSAPI&serializerClass=org.apache.kafka.common.serialization.ByteArraySerializer&securityProtocol=PLAINTEXT&saslKerberosServiceName=kafka"/>
This is the Kafka client config from the log:
acks = 1
batch.size = 16384
bootstrap.servers = [10.148.70.74:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retries = 3
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = kafka
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT (**SASL_PLAINTEXT not supported**)
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
Log level: DEBUG
Jaas file:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="./user.keytab"
useTicketCache=false
serviceName="kafka"
principal=" Group/user#DOMAIN.LAN";
};
Export:
KAFKA_OPTS="-Djava.security.auth.login.config=/opt/activemq/conf/Jaas.conf"
When I send a message I receive the following log at DEBUG level and the message is not delivered:
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:124)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:93)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:235)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:196)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:557)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:495)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.common.network.Selector.poll(Selector.java:424)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)[kafka-clients-1.1.0.jar:]
at java.lang.Thread.run(Thread.java:748)[:1.8.0_171]
At INFO level I only see this in the log:
WARN | [Producer clientId=producer-1] Bootstrap broker 10.100.70.00:9092 (id: -1 rack: null) disconnected | org.apache.kafka.clients.NetworkClient | kafka-producer-network-thread | producer-1
Why am I getting this error? Please help!
This error is caused by the user not being authorised to produce messages to Kafka.
The problem can be mitigated by verifying the keytab file as a prerequisite:
Verify the service account name in the keytab file: klist -k -t <keytabFile>
Authenticate against Active Directory: kinit -k -t <keytabFile> <servicePrincipal>
Neither step should produce any errors.
