I am trying to produce messages to Kafka (Cloudera) from an ActiveMQ-Camel bridge using Kerberos.
ActiveMQ v5.15.4
Camel: 2.21.1
Kafka clients: 1.1.0
Server version: Apache/2.4.6 (CentOS)
The camel.xml snippet is:
<log message="Started The Producer Route" />
<to uri="kafka://10.100.70.00:9092?topic=MyEvents.s1.v1&brokers=10.100.70.00:9092&requestTimeoutMs=305000&retries=3&keySerializerClass=org.apache.kafka.common.serialization.ByteArraySerializer&saslMechanism=GSSAPI&serializerClass=org.apache.kafka.common.serialization.ByteArraySerializer&securityProtocol=PLAINTEXT&saslKerberosServiceName=kafka"/>
This is the Kafka client config from the log:
acks = 1
batch.size = 16384
bootstrap.servers = [10.148.70.74:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retries = 3
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = kafka
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT (SASL_PLAINTEXT not supported)
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
Log level: DEBUG
JAAS file:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="./user.keytab"
useTicketCache=false
serviceName="kafka"
principal=" Group/user#DOMAIN.LAN";
};
Export:
KAFKA_OPTS="-Djava.security.auth.login.config=/opt/activemq/conf/Jaas.conf"
When I send a message, I receive the following log at DEBUG level and the message is not delivered:
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:124)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:93)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:235)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:196)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:557)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:495)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.common.network.Selector.poll(Selector.java:424)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)[kafka-clients-1.1.0.jar:]
at java.lang.Thread.run(Thread.java:748)[:1.8.0_171]
At INFO level I only see this in the log:
WARN | [Producer clientId=producer-1] Bootstrap broker 10.100.70.00:9092 (id: -1 rack: null) disconnected | org.apache.kafka.clients.NetworkClient | kafka-producer-network-thread | producer-1
Why am I getting this error? Please help!
This error is caused by the user not being authorized to produce messages to Kafka.
Such a problem can be mitigated by verifying the keytab file as a prerequisite:
Verify the service account name in the keytab file: klist -k -t <keytabFile>
Authenticate to Active Directory: kinit -k -t <keytabFile> <servicePrincipal>
Neither command should produce any errors.
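For reference, a minimal sketch of a Kerberos-enabled producer using plain kafka-clients (the broker address, keytab path, and principal are placeholders; note that GSSAPI over an unencrypted connection requires security.protocol=SASL_PLAINTEXT, not PLAINTEXT):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KerberosProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker.example.com:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        // Kerberos over an unencrypted connection
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "GSSAPI");
        props.put("sasl.kerberos.service.name", "kafka");
        // Since kafka-clients 0.10.2 the JAAS section can be passed inline
        // instead of via -Djava.security.auth.login.config
        props.put("sasl.jaas.config",
                "com.sun.security.auth.module.Krb5LoginModule required "
                        + "useKeyTab=true storeKey=true "
                        + "keyTab=\"/opt/activemq/conf/user.keytab\" " // placeholder path
                        + "principal=\"user@DOMAIN.LAN\";");           // placeholder principal
        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("MyEvents.s1.v1", "test".getBytes()));
        } // close() flushes any buffered records
    }
}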
We have a Kafka producer that produces some messages once in a while.
I wrote a consumer to consume these messages. The problem is that the messages are consumed only once two of them have stacked up. For example, if a message is produced at 13:00, the consumer doesn't do anything; if another message is produced at 13:01, the consumer consumes both messages. In Kafka Tool, the consumer properties show a column called LAG, which is 1 while a message is not yet consumed.
Is there any config I'm missing?
The Consumer Config:
16:43:04,472 INFO [org.apache.kafka.clients.consumer.ConsumerConfig] (http--0.0.0.0-8180-1) ConsumerConfig values:
request.timeout.ms = 180001
check.crcs = true
retry.backoff.ms = 100
ssl.truststore.password = null
ssl.keymanager.algorithm = SunX509
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.key.password = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.provider = null
sasl.kerberos.service.name = null
session.timeout.ms = 180000
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [mtxbuctra22.prod.orange.intra:9092]
client.id =
fetch.max.wait.ms = 180000
fetch.min.bytes = 1024
key.deserializer = class io.confluent.kafka.serializers.KafkaAvroDeserializer
sasl.kerberos.kinit.cmd = /usr/bin/kinit
auto.offset.reset = earliest
value.deserializer = class io.confluent.kafka.serializers.KafkaAvroDeserializer
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
ssl.endpoint.identification.algorithm = null
max.partition.fetch.bytes = 1048576
ssl.keystore.location = null
ssl.truststore.location = null
ssl.keystore.password = null
metrics.sample.window.ms = 30000
metadata.max.age.ms = 300000
security.protocol = PLAINTEXT
auto.commit.interval.ms = 1000
ssl.protocol = TLS
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
ssl.trustmanager.algorithm = PKIX
group.id = ifd_006
enable.auto.commit = true
metric.reporters = []
ssl.truststore.type = JKS
send.buffer.bytes = 131072
reconnect.backoff.ms = 50
metrics.num.samples = 2
ssl.keystore.type = JKS
heartbeat.interval.ms = 3000
16:43:04,493 INFO [io.confluent.kafka.serializers.KafkaAvroDeserializerConfig] (http--0.0.0.0-8180-1) KafkaAvroDeserializerConfig values:
max.schemas.per.subject = 1000
specific.avro.reader = true
schema.registry.url = [http://mtxbuctra22.prod.orange.intra:8081]
16:43:04,498 INFO [io.confluent.kafka.serializers.KafkaAvroDeserializerConfig] (http--0.0.0.0-8180-1) KafkaAvroDeserializerConfig values:
max.schemas.per.subject = 1000
specific.avro.reader = true
schema.registry.url = [http://mtxbuctra22.prod.orange.intra:8081]
(Kafka Tool screenshot omitted.)
Figured it out.
The documentation for Kafka 0.9.0.1 states that fetch.min.bytes defaults to 1, but I have Kafka 0.9.0.0, where the default is 1024. So the threshold was only passed after 2 messages. I changed fetch.min.bytes to 1 and now it works fine.
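As an illustration, a minimal consumer setup with the fix applied might look like this (the broker address and topic are placeholders, and string deserializers stand in for the Avro ones; the key point is fetch.min.bytes):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class FetchMinBytesFix {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "ifd_006");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // The fix: return each fetch as soon as a single byte is available,
        // instead of waiting for fetch.min.bytes (1024 by default in 0.9.0.0)
        props.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, "1");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder topic
            // poll() as usual; single messages now arrive without stacking up
        }
    }
}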
I can't seem to get my Java Kafka client to work. Symptoms:
"Discovered coordinator" is seen in logs, then less than one second later, "Marking the coordinator ... dead" is seen. No more output appears after that.
Debugging the code shows that org.apache.kafka.clients.consumer.KafkaConsumer.poll() never returns. The code is stuck in this do-while loop in the ConsumerNetworkClient class:
public boolean awaitMetadataUpdate(long timeout) {
    long startMs = this.time.milliseconds();
    int version = this.metadata.requestUpdate();
    do {
        this.poll(timeout);
    } while (this.metadata.version() == version && this.time.milliseconds() - startMs < timeout);
    return this.metadata.version() > version;
}
The logs say:
2019-09-25 15:25:45.268 [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [localhost:9092]
check.crcs = true
client.id =
connections.max.idle.ms = 540000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = foo
heartbeat.interval.ms = 3000
interceptor.classes = null
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class com.mycompany.KafkaMessageJsonNodeDeserializer
2019-09-25 15:25:45.312 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.11.0.3
2019-09-25 15:25:45.312 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : 26ddb9e3197be39a
2019-09-25 15:25:47.700 [pool-2-thread-1] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Discovered coordinator ad0c03f60f39:9092 (id: 2147483647 rack: null) for group foo.
2019-09-25 15:25:47.705 [pool-2-thread-1] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Marking the coordinator ad0c03f60f39:9092 (id: 2147483647 rack: null) dead for group foo
If debug logging were turned on, the logs would also contain a message like:
Coordinator discovery failed for group foo, refreshing metadata
More details:
I'm running Kafka inside a Docker container. When running the console consumer within the Docker container, all is well: messages are received just fine by the console consumer. My app (where the issues occur) is running outside the Docker container.
The docker run command includes -p 2181:2181 -p 9001:9001 -p 9092:9092.
The stack looks like this when the Kafka client gets stuck in the loop:
awaitMetadataUpdate:134, ConsumerNetworkClient (org.apache.kafka.clients.consumer.internals)
ensureCoordinatorReady:226, AbstractCoordinator (org.apache.kafka.clients.consumer.internals)
ensureCoordinatorReady:203, AbstractCoordinator (org.apache.kafka.clients.consumer.internals)
poll:286, ConsumerCoordinator (org.apache.kafka.clients.consumer.internals)
pollOnce:1078, KafkaConsumer (org.apache.kafka.clients.consumer)
poll:1043, KafkaConsumer (org.apache.kafka.clients.consumer)
It looks like your broker is advertising itself as ad0c03f60f39, and you seem to be running the client from your host machine, which cannot resolve ad0c03f60f39 for obvious reasons. You need to configure the broker to advertise itself as something that is resolvable from the host. Look for advertised.listeners in server.properties; you can set something like PLAINTEXT://localhost:9092, as in the sketch below.
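A sketch of the relevant entries in server.properties, assuming port 9092 is mapped straight through to the host as in the docker run command above:

# bind inside the container on all interfaces
listeners=PLAINTEXT://0.0.0.0:9092
# the address handed back to clients in metadata; must resolve from the host
advertised.listeners=PLAINTEXT://localhost:9092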
I want to send a JSON object to my Kafka topic, but I am facing a problem.
I use a POJO with a single instance variable, fileName, where I set the filename and send it to the Kafka topic.
KafkaJsonSend objSend= new KafkaJsonSend();
objSend.setFileName(filename);
//Configure the Producer
Properties configProperties = new Properties();
configProperties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"10.10.51.10:9092");
configProperties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
configProperties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,JsonSerializer.class);
Producer<String, JsonNode> producer = new KafkaProducer<String, JsonNode>(configProperties);
JsonNode jsonNode = objectMapper.valueToTree(objSend);
ProducerRecord<String, JsonNode> rec = new ProducerRecord<String, JsonNode>("BlueShifts", jsonNode);
producer.send(rec);
producer.close();
But when I run this code, I get an exception that is continuously logged to my console:
Error: Uncaught error in kafka producer I/O thread
IllegalStateException: No entry found for connection 0
I have also tried with Spring Kafka, but I got this in the console:
2019-02-04 16:43:13.938 INFO 4432 --- [nio-6020-exec-1] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
block.on.buffer.full = false
bootstrap.servers = [10.0.2.15:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.fetch.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
timeout.ms = 30000
value.serializer = class org.springframework.kafka.support.serializer.JsonSerializer
2019-02-04 16:43:14.158 INFO 4432 --- [nio-6020-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka version : 0.10.2.0
2019-02-04 16:43:14.160 INFO 4432 --- [nio-6020-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : 576d93a8dc0cf421
2019-02-04 16:44:14.206 ERROR 4432 --- [nio-6020-exec-1] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='KafkaJsonSend [fileName=a0a7caf336e8481fb6db2de70d39029e_1549278789987.mp3]' to topic BlueShifts:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
It worked after I added the line "10.10.51.10 kafka" to the hosts file at C:\Windows\System32\drivers\etc; after that it worked both ways. Can you tell me why it's mapped to the name kafka, and why it couldn't be found by IP?
Most likely the broker is advertising itself under the hostname kafka, so the metadata your client receives points at kafka:9092, which your machine could not resolve until you added the hosts-file entry. You need to configure your Kafka brokers correctly for your network: https://rmoff.net/2018/08/02/kafka-listeners-explained/
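A hedged sketch of what the broker's server.properties might contain, and the change that would make it reachable by IP without the hosts-file workaround:

# hypothetical current setting: clients are told to connect to "kafka"
advertised.listeners=PLAINTEXT://kafka:9092
# advertising an address resolvable from the clients avoids the workaround
advertised.listeners=PLAINTEXT://10.10.51.10:9092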
I'm having a problem with sending a serialized XML to my Kafka topic. Whenever I run my code, I don't get any exceptions or error messages, but I still can't see any of my messages in the Kafka topic.
My Kafka producer settings are:
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer
import scala.util.control.Exception.allCatch

def WartungsdbKafkaConnector(args: Array[String]): Unit = {
  val xmlFile = args(0)
  val topic = args(1)
  val record = getRecord(xmlFile, topic)
  val kafkaProducer = getKafkaProducer
  kafkaProducer.send(record)
}

protected def getRecord(xmlFile: String, topic: String): ProducerRecord[String, String] = {
  val lines = scala.io.Source.fromFile(xmlFile).mkString
  val xml = scala.xml.XML.loadString(lines)
  val paramPress = xml \ "PARAMETER" \ "PRESS"
  val databaseId = allCatch.opt { paramPress \@ "NUMBER" }
  val key = databaseId.get
  new ProducerRecord(topic, key, lines)
}
protected def getKafkaProducer: KafkaProducer[String, String] = {
  val props = new Properties
  props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
    "ec-x.eu-west-1.compute.amazonaws.com:9092," +
      "ec2-x.eu-west-1.compute.amazonaws.com:9092," +
      "ec2-x.eu-west-1.compute.amazonaws.com:9092")
  props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true")
  props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
  props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
  props.put(ProducerConfig.LINGER_MS_CONFIG, "100")
  props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy")
  props.put(ProducerConfig.RETRIES_CONFIG, "20")
  props.put(ProducerConfig.ACKS_CONFIG, "all")
  new KafkaProducer[String, String](props)
}
When I run the code, I get:
[main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
acks = all
batch.size = 16384
bootstrap.servers = [ec2-x.eu-west-1.compute.amazonaws.com:9092, ec2-x.eu-west-1.compute.amazonaws.com:9092, ec2-x.eu-west-1.compute.amazonaws.com:9092]
buffer.memory = 33554432
client.id =
compression.type = snappy
connections.max.idle.ms = 540000
enable.idempotence = true
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 100
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
[main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=producer-1] Instantiated an idempotent producer.
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 2.0.0
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : 3402a8361b734732
[kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.Metadata - Cluster ID: xeb6oWNpTgSQ_9FHctZ2ng
[kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=producer-1] ProducerId set to 150671 with epoch 0
Any idea how to make it work?
Thanks in advance!
You're not flushing, waiting for, or closing the producer, so the app just stops without sending the data.
Producers batch data for a configurable amount of time and number of messages in order to reduce the number of send requests that actually reach the brokers.
Try
kafkaProducer.send(record) // optionally call get() on this to capture the result and potential errors
kafkaProducer.flush()
kafkaProducer.close()
Most importantly, never forget to close the producer (or consumer).
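In Java, the close can be made unconditional with try-with-resources, since KafkaProducer implements Closeable. A sketch (props and record are assumed to be built as in the question):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerCloseSketch {
    public static void sendAndClose(Properties props, ProducerRecord<String, String> record) throws Exception {
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(record).get(); // get() blocks until the broker acks, surfacing errors
        } // close() runs even if send fails, flushing buffered records first
    }
}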
I have Kafka running in a remote server and I am using spring framework (java) to produce and consume messages. For testing on my local machine, I am just producing 1 event. Here is a simplified code of how I produce messages:
import org.springframework.kafka.core.KafkaTemplate;
...
@Autowired
KafkaTemplate<String, String> kafkaTemplate;
...
kafkaTemplate.send("sampletopic", "1234").get();
...
Here the payload is just a user-id string. When I execute the send function, I get the following error:
kafka.. error: java.util.concurrent.ExecutionException: org.springframework.kafka.core.KafkaProducerException: Failed to send; nested exception is org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for sampletopic-0: 30028 ms has passed since batch creation plus linger time
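As an aside, one way to surface such failures without blocking on get() is to attach a callback to the ListenableFuture that KafkaTemplate.send returns (a sketch; the logger is hypothetical):

kafkaTemplate.send("sampletopic", "1234").addCallback(
        result -> logger.info("sent: {}", result.getRecordMetadata()), // success path
        ex -> logger.error("send failed", ex));                        // failure path, e.g. TimeoutException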
Here are the relevant logs I get before getting the error:
[http-nio-8080-exec-3] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
acks = 1
batch.size = 16384
block.on.buffer.full = false
bootstrap.servers = [41.204.196.251:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.fetch.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
timeout.ms = 30000
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
[http-nio-8080-exec-3] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.10.2.0
[http-nio-8080-exec-3] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : 576d93a8ds0cf421