I am using Kafka with Spring Boot.
Kafka Producer class:
@Service
public class MyKafkaProducer {

    private static final Logger LOGGER = LoggerFactory.getLogger(MyKafkaProducer.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    // Send a message to the given topic and log the outcome via an async callback
    public void sendMessage(String topicName, String message) throws Exception {
        LOGGER.debug("========topic Name===== " + topicName + "=========message=======" + message);
        ListenableFuture<SendResult<String, String>> result = kafkaTemplate.send(topicName, message);
        result.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {

            @Override
            public void onSuccess(SendResult<String, String> result) {
                LOGGER.debug("sent message='{}' with offset={}", message, result.getRecordMetadata().offset());
            }

            @Override
            public void onFailure(Throwable ex) {
                LOGGER.error(Constants.PRODUCER_MESSAGE_EXCEPTION.getValue() + " : " + ex.getMessage());
            }
        });
    }
}
Kafka-configuration:
spring.kafka.producer.retries=0
spring.kafka.producer.batch-size=100000
spring.kafka.producer.request.timeout.ms=30000
spring.kafka.producer.linger.ms=10
spring.kafka.producer.acks=0
spring.kafka.producer.buffer-memory=33554432
spring.kafka.producer.max.block.ms=5000
spring.kafka.bootstrap-servers=192.168.1.161:9092,192.168.1.162:9093
Problem:
I have 5 partitions of a topic, let's say my-topic.
What happens is, I get success logs (i.e. the message is reported as sent to Kafka successfully), but the offset of none of the partitions of topic my-topic gets incremented.
As you can see above, I have added logs in onSuccess and onFailure.
What I expect is that when the producer is not able to send a message to Kafka, I should get an error, but I do not receive any error message in this case.
The above behavior happens at a ratio of about 100:5 (i.e. after every 100 messages successfully sent to Kafka).
Edit 1: Adding Kafka producer logs for a successful case (i.e. the message was successfully received on the consumer side):
ProducerConfig - logAll:180] ProducerConfig values:
acks = 0
batch.size = 1000
block.on.buffer.full = false
bootstrap.servers = [10.20.1.19:9092, 10.20.1.20:9093, 10.20.1.26:9094]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 10
max.block.ms = 5000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.fetch.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.ms = 50
request.timeout.ms = 60000
retries = 0
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
timeout.ms = 30000
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
2017-10-24 14:30:09, [INFO] [karma-unified-notification-manager - ProducerConfig - logAll:180] ProducerConfig values:
acks = 0
batch.size = 1000
block.on.buffer.full = false
bootstrap.servers = [10.20.1.19:9092, 10.20.1.20:9093, 10.20.1.26:9094]
buffer.memory = 33554432
client.id = producer-1
compression.type = none
connections.max.idle.ms = 540000
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 10
max.block.ms = 5000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.fetch.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.ms = 50
request.timeout.ms = 60000
retries = 0
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
timeout.ms = 30000
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
It is not showing the errors because you have set spring.kafka.producer.acks to 0.
Set it to 1 and your failure callback should work. Then you can see whether the offset is getting incremented or not.
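For example, a minimal change to the application properties from the question (with acks=1 the partition leader must acknowledge each write, so failures are reported back to onFailure and the returned RecordMetadata carries a real offset):
spring.kafka.producer.acks=1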
Your code works fine for me...
@SpringBootApplication
public class So46892185Application {

    private static final Logger LOGGER = LoggerFactory.getLogger(So46892185Application.class);

    public static void main(String[] args) {
        SpringApplication.run(So46892185Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        return args -> {
            for (int i = 0; i < 10; i++) {
                send(template, "foo" + i);
            }
        };
    }

    public void send(KafkaTemplate<String, String> template, String message) {
        ListenableFuture<SendResult<String, String>> result = template.send(topic().name(), message);
        result.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {

            @Override
            public void onSuccess(SendResult<String, String> result) {
                LOGGER.info("sent message='{}' to partition={} with offset={}",
                        message, result.getRecordMetadata().partition(),
                        result.getRecordMetadata().offset());
            }

            @Override
            public void onFailure(Throwable ex) {
                LOGGER.error("Ex : " + ex.getMessage());
            }
        });
    }

    @Bean
    public NewTopic topic() {
        return new NewTopic("so46892185-3", 5, (short) 1);
    }
}
Result
2017-10-23 11:12:05.907 INFO 86390 --- [ad | producer-1] com.example.So46892185Application
: sent message='foo3' to partition=1 with offset=0
2017-10-23 11:12:05.907 INFO 86390 --- [ad | producer-1] com.example.So46892185Application
: sent message='foo8' to partition=1 with offset=1
2017-10-23 11:12:05.907 INFO 86390 --- [ad | producer-1] com.example.So46892185Application
: sent message='foo1' to partition=2 with offset=0
2017-10-23 11:12:05.907 INFO 86390 --- [ad | producer-1] com.example.So46892185Application
: sent message='foo6' to partition=2 with offset=1
2017-10-23 11:12:05.907 INFO 86390 --- [ad | producer-1] com.example.So46892185Application
: sent message='foo0' to partition=0 with offset=0
2017-10-23 11:12:05.907 INFO 86390 --- [ad | producer-1] com.example.So46892185Application
: sent message='foo5' to partition=0 with offset=1
2017-10-23 11:12:05.907 INFO 86390 --- [ad | producer-1] com.example.So46892185Application
: sent message='foo4' to partition=3 with offset=0
2017-10-23 11:12:05.907 INFO 86390 --- [ad | producer-1] com.example.So46892185Application
: sent message='foo9' to partition=3 with offset=1
2017-10-23 11:12:05.907 INFO 86390 --- [ad | producer-1] com.example.So46892185Application
: sent message='foo2' to partition=4 with offset=0
2017-10-23 11:12:05.907 INFO 86390 --- [ad | producer-1] com.example.So46892185Application
: sent message='foo7' to partition=4 with offset=1
Related
I want to disable the internal Kafka client logging, which I didn't write.
Below is my basic Kafka producer code.
package me.sclee.kafka.basic.producer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Properties;
import java.util.UUID;
import static me.sclee.kafka.basic.config.BasicKafkaConfig.*;
public class BasicKafkaProducer {
private static final Logger logger = LoggerFactory.getLogger(BasicKafkaProducer.class);
private static KafkaProducer<String, String> getKafkaProducer() {
Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS);
props.put(ProducerConfig.ACKS_CONFIG, ACK);
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, COMPRESSION_TYPE);
props.put(ProducerConfig.RETRIES_CONFIG, RETIRES);
props.put(ProducerConfig.BATCH_SIZE_CONFIG, BATCH_SIZE);
props.put(ProducerConfig.LINGER_MS_CONFIG, LINGER_MS);
props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, BUFFER_MEMORY);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, KEY_SER);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, VALUE_SER);
return new KafkaProducer<>(props);
}
public void send() throws Exception {
logger.info("Sending kafka message..");
KafkaProducer<String, String> producer = null;
int line = 3;
try {
for (int n = 0; n < line; n++) {
producer = getKafkaProducer();
String uuid = UUID.randomUUID().toString();
ProducerRecord<String, String> record = new ProducerRecord<>(TOPIC_NAME, uuid);
producer.send(record, new ProducerCallBack(n));
Thread.sleep(1000);
}
} catch (Exception e) {
logger.error("There was a problem while sending a message in producer, {}" + e.getMessage());
throw e;
} finally {
if (producer != null) {
producer.flush();
producer.close();
}
}
logger.info("Exited the Kafka sending..");
}
public static void main(String[] args) throws Exception {
BasicKafkaProducer basicKafkaProducer = new BasicKafkaProducer();
basicKafkaProducer.send();
}
}
As you can see, I put in some logs for my own tracing, but when I executed the code I saw many other log lines generated internally by the Kafka client, as shown below.
I want to see only my own logs, for better understanding.
How can I achieve this?
I used the following SLF4J libraries:
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>3.0.0</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>2.0.0-alpha1</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>2.0.0-alpha1</version>
</dependency>
Console output:
batch.size = 16384
bootstrap.servers = [localhost:9092]
buffer.memory = 33554432
client.dns.lookup = use_all_dns_ips
client.id = producer-1
compression.type = snappy
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = true
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 5
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 1
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
[Timer-0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.0.0
[Timer-0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 8cb0a5e9d3441962
[Timer-0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1642705323290
[kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.Metadata - [Producer clientId=producer-1] Cluster ID: oXvU2NlRS8m6jKC3Zh1FcA
[kafka-producer-network-thread | producer-1] INFO me.sclee.kafka.basic.producer.ProducerCallBack - Producer sends a message. Topic : 1642705323550, partition : 0, offset : 139, line : 0
[Timer-0] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
acks = -1
batch.size = 16384
bootstrap.servers = [localhost:9092]
buffer.memory = 33554432
client.dns.lookup = use_all_dns_ips
client.id = producer-2
compression.type = snappy
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = true
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 5
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 1
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
[Timer-0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.0.0
[Timer-0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 8cb0a5e9d3441962
[Timer-0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1642705324831
[kafka-producer-network-thread | producer-2] INFO org.apache.kafka.clients.Metadata - [Producer clientId=producer-2] Cluster ID: oXvU2NlRS8m6jKC3Zh1FcA
[kafka-producer-network-thread | producer-2] INFO me.sclee.kafka.basic.producer.ProducerCallBack - Producer sends a message. Topic : 1642705324834, partition : 0, offset : 140, line : 1
[Timer-0] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
acks = -1
batch.size = 16384
bootstrap.servers = [localhost:9092]
buffer.memory = 33554432
client.dns.lookup = use_all_dns_ips
client.id = producer-3
compression.type = snappy
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = true
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 5
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 1
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
[Timer-0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.0.0
[Timer-0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 8cb0a5e9d3441962
[Timer-0] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1642705325844
[kafka-producer-network-thread | producer-3] INFO org.apache.kafka.clients.Metadata - [Producer clientId=producer-3] Cluster ID: oXvU2NlRS8m6jKC3Zh1FcA
[kafka-producer-network-thread | producer-3] INFO me.sclee.kafka.basic.producer.ProducerCallBack - Producer sends a message. Topic : 1642705325848, partition : 0, offset : 141, line : 2
You can create your own src/main/resources/log4j.properties file and configure whatever levels/packages/formats you wish.
For example,
log4j.logger.kafka=OFF
log4j.logger.org.apache.kafka=OFF
Defaults (for the broker and clients) are defined here https://github.com/apache/kafka/blob/trunk/config/log4j.properties
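For example, a minimal log4j.properties sketch that keeps your own INFO logs on the console but silences the Kafka client internals (this assumes a log4j 1.x binding such as slf4j-log4j12 is on the classpath; with the slf4j-simple binding from the question you would instead set something like org.slf4j.simpleLogger.log.org.apache.kafka=warn in simplelogger.properties):
# Keep application logs at INFO on the console
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
# Silence the Kafka client internals
log4j.logger.org.apache.kafka=OFF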
I want to send a JSON object to my Kafka topic but I am facing a problem.
I use a POJO with a single instance variable, fileName, where I set the file name and send it to the Kafka topic.
KafkaJsonSend objSend= new KafkaJsonSend();
objSend.setFileName(filename);
//Configure the Producer
Properties configProperties = new Properties();
configProperties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"10.10.51.10:9092");
configProperties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
configProperties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,JsonSerializer.class);
Producer<String, JsonNode> producer = new KafkaProducer<String, JsonNode>(configProperties);
JsonNode jsonNode = objectMapper.valueToTree(objSend);
ProducerRecord<String, JsonNode> rec = new ProducerRecord<String, JsonNode>("BlueShifts", jsonNode);
producer.send(rec);
producer.close();
But when I run this code I get an exception which is continuously being logged in my console.
Error: Uncaught error in kafka producer I/O thread
IllegalStateException: No entry found for connection 0
I have also tried with Spring Kafka, but I got this in the console:
2019-02-04 16:43:13.938 INFO 4432 --- [nio-6020-exec-1]
o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
block.on.buffer.full = false
bootstrap.servers = [10.0.2.15:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.fetch.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
timeout.ms = 30000
value.serializer = class org.springframework.kafka.support.serializer.JsonSerializer
2019-02-04 16:43:14.158 INFO 4432 --- [nio-6020-exec-1]
o.a.kafka.common.utils.AppInfoParser : Kafka version : 0.10.2.0
2019-02-04 16:43:14.160 INFO 4432 --- [nio-6020-exec-1]
o.a.kafka.common.utils.AppInfoParser : Kafka commitId :
576d93a8dc0cf421 2019-02-04 16:44:14.206 ERROR 4432 ---
[nio-6020-exec-1] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and
payload='KafkaJsonSend [fileName=a0a7caf336e8481fb
6db2de70d39029e_1549278789987.mp3]' to topic BlueShifts:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
It worked after I added "10.10.51.10 kafka" to the hosts file at C:\Windows\System32\drivers\etc, and then it worked both ways. Can you tell me why it is mapped to the hostname kafka and why it couldn't be found via the IP?
You need to configure your Kafka brokers correctly for your network: https://rmoff.net/2018/08/02/kafka-listeners-explained/
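The hosts-file workaround suggests the broker is advertising the hostname kafka rather than an address the client can resolve. A sketch of the relevant broker-side settings in server.properties, assuming the broker should be reachable from clients at 10.10.51.10 (the address used in the question):
# Interface/port the broker binds to
listeners=PLAINTEXT://0.0.0.0:9092
# Address the broker advertises to clients in metadata responses;
# after the initial bootstrap, clients connect to this address,
# so it must be resolvable and reachable from the client machine
advertised.listeners=PLAINTEXT://10.10.51.10:9092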
I'm having a problem with sending serialized XML to my Kafka topic. Whenever I run my code, I don't get any exceptions or error messages, but still I can't see any of my messages in the Kafka topic.
My Kafka producer settings are:
def WartungsdbKafkaConnector(args: Array[String]): Unit = {
val xmlFile = args(0)
val record = getRecord(xmlFile)
val kafkaProducer = getKafkaProducer
kafkaProducer.send(record)
}
protected def getRecord(xmlFile: String): ProducerRecord[String, String] = {
val lines = scala.io.Source.fromFile(xmlFile).mkString
val xml = scala.xml.XML.loadString(lines)
val paramPress = xml \ "PARAMETER" \ "PRESS"
val databaseId = allCatch.opt {paramPress.\@("NUMBER")}
val key = databaseId.get
val topic = args(1)
new ProducerRecord(topic, key, lines)
}
protected def getKafkaProducer: KafkaProducer[String, String] = {
val props = new Properties
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
"ec-x.eu-west-1.compute.amazonaws.com:9092," +
"ec2-x.eu-west-1.compute.amazonaws.com:9092," +
"ec2-x.eu-west-1.compute.amazonaws.com:9092")
props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true")
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
props.put(ProducerConfig.LINGER_MS_CONFIG, "100")
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy")
props.put(ProducerConfig.RETRIES_CONFIG, "20")
props.put(ProducerConfig.ACKS_CONFIG, "all")
new KafkaProducer[String, String](props)
}
When I run the code, I get:
[main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
acks = all
batch.size = 16384
bootstrap.servers = [ec2-x.eu-west-1.compute.amazonaws.com:9092,
ec2-x.eu-west-1.compute.amazonaws.com:9092,
ec2-x.eu-west-1.compute.amazonaws.com:9092]
buffer.memory = 33554432
client.id =
compression.type = snappy
connections.max.idle.ms = 540000
enable.idempotence = true
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 100
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
[main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer
clientId=producer-1] Instantiated an idempotent producer.
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version :
2.0.0
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId :
3402a8361b734732
[kafka-producer-network-thread | producer-1] INFO
org.apache.kafka.clients.Metadata - Cluster ID: xeb6oWNpTgSQ_9FHctZ2ng
[kafka-producer-network-thread | producer-1] INFO
org.apache.kafka.clients.producer.internals.TransactionManager - [Producer
clientId=producer-1] ProducerId set to 150671 with epoch 0
Any idea how to make it work?
Thanks in advance!
You're not flushing, waiting for, or closing the producer, so the app just stops without sending data.
Producers batch data for a configurable amount of time and number of messages in order to reduce the number of send requests that actually go to the brokers.
Try
kafkaProducer.send(record) // optionally call get() on this to capture the result and potential errors
kafkaProducer.flush()
kafkaProducer.close()
Most importantly, never forget to close the producer (or a consumer).
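For illustration, a minimal standalone Java sketch of the same idea (the Scala code in the question uses the same KafkaProducer client, so the calls are identical; the broker address and topic name here are placeholders):
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class BlockingSendExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // try-with-resources closes (and flushes) the producer before the JVM exits
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record = new ProducerRecord<>("my-topic", "key", "value");
            // get() blocks until the broker responds, so send errors surface here as exceptions
            RecordMetadata metadata = producer.send(record).get();
            System.out.printf("wrote to partition %d at offset %d%n", metadata.partition(), metadata.offset());
        }
    }
}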
I am trying to produce messages to Kafka (Cloudera) from an ActiveMQ-Camel bridge using Kerberos.
ActiveMQ v5.15.4
Camel: 2.21.1
Kafka Clients:1.1.0
Server version: Apache/2.4.6 (CentOS)
The camel.xml snippet is:
<log message="Started The Producer Route" />
<to uri="kafka://10.100.70.00:9092?topic=MyEvents.s1.v1&brokers=10.100.70.00:9092&requestTimeoutMs=305000&retries=3&keySerializerClass=org.apache.kafka.common.serialization.ByteArraySerializer&saslMechanism=GSSAPI&serializerClass=org.apache.kafka.common.serialization.ByteArraySerializer&securityProtocol=PLAINTEXT&saslKerberosServiceName=kafka"/>
This is the kafka client config from log:
acks = 1
batch.size = 16384
bootstrap.servers = [10.148.70.74:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retries = 3
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = kafka
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT (**SASL_PLAINTEXT not supported**)
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
Log level: DEBUG
Jaas file:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="./user.keytab"
useTicketCache=false
serviceName="kafka"
principal=" Group/user#DOMAIN.LAN";
};
Export:
KAFKA_OPTS="-Djava.security.auth.login.config=/opt/activemq/conf/Jaas.conf"
When I send a message I receive the following log at DEBUG level and the message is not delivered:
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:124)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:93)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:235)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:196)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:557)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:495)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.common.network.Selector.poll(Selector.java:424)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)[kafka-clients-1.1.0.jar:]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)[kafka-clients-1.1.0.jar:]
at java.lang.Thread.run(Thread.java:748)[:1.8.0_171
At INFO level I only see this in the log:
WARN | [Producer clientId=producer-1] Bootstrap broker 10.100.70.00:9092 (id: -1 rack: null) disconnected | org.apache.kafka.clients.NetworkClient | kafka-producer-network-thread | producer-1
Why am I getting this error? Please help!
This error is caused by the user not being authorised to produce messages to Kafka.
Such a problem can be mitigated by verifying the keytab file as a prerequisite, for example as shown below:
Verify the service principal name in the keytab file: klist -k -t <keytabFile>
Authenticate against Active Directory: kinit -k -t <keytabFile> <servicePrincipal>
Neither command should report any errors.
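With the keytab and principal from the question's JAAS file, the checks would look roughly like this (a sketch; the keytab path and principal are taken from the snippet above):
klist -k -t ./user.keytab
kinit -k -t ./user.keytab Group/user@DOMAIN.LAN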
Recently I found that Kafka consumers require a lot of RAM.
For testing, I've just started a single-threaded consumer locally that listens to a single topic. The topic has 4 partitions. Kafka has only one broker.
From the producer I sent only 10 small messages (at around 11:44:30 PM, see the image attached at the link). Since then nobody has sent any more messages to this topic.
Since then I've been seeing constantly growing memory consumption on the diagram while the consumer is polling. The line keeps growing until GC is called.
The consumer just sends poll requests and returns nothing, yet it requires a lot of memory.
I think this is a problem.
I tried to do some tuning, i.e. configuring params such as FETCH_MAX_BYTES_CONFIG/MAX_PARTITION_FETCH_BYTES_CONFIG/MAX_POLL_RECORDS_CONFIG, but nothing actually worked out.
SSCCE:
KafkaConsumerConfig:
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import java.util.HashMap;
import java.util.Map;
@Configuration
@EnableKafka
public class KafkaConsumerConfig {
@Bean
public ConcurrentKafkaListenerContainerFactory<String, byte[]> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, byte[]> factory
= new ConcurrentKafkaListenerContainerFactory<>();
factory.setConcurrency(1);
factory.setConsumerFactory(consumerFactory());
factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.MANUAL);
return factory;
}
private ConsumerFactory<String, byte[]> consumerFactory() {
return new DefaultKafkaConsumerFactory<>(consumerProperties());
}
private Map<String, Object> consumerProperties() {
final Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
props.put(ConsumerConfig.GROUP_ID_CONFIG, "local-test-consumer-group");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
//props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 1000000); //1mb
//props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 5);
//props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 256000); //256kb
return props;
}
}
KafkaDataListener
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;
import java.util.Arrays;
@Component
public class KafkaDataListener {
@KafkaListener(topics = "local-test-topic", containerFactory = "kafkaListenerContainerFactory")
public void consumeEvent(byte[] eventData, final Acknowledgment ack) {
try {
System.out.println("consumer received message:" + Arrays.toString(eventData));
}finally {
ack.acknowledge();
}
}
}
Main App
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@SpringBootApplication
public class App {
public static void main(String[] args) {
SpringApplication.run(App.class, args);
}
}
build.gradle:
group 'org.test.kafka'
version '1.0-SNAPSHOT'
apply plugin: 'java'
sourceCompatibility = 1.8
repositories {
mavenLocal()
mavenCentral()
}
dependencies {
compile group: 'org.springframework.boot', name: 'spring-boot-starter', version: '1.5.4.RELEASE'
compile group: 'org.springframework.kafka', name: 'spring-kafka', version: '1.2.2.RELEASE'
}
output:
2018-02-02 23:38:51.506 INFO 14888 --- [ main] org.kafka.tests.App : Starting App on so-workstation with PID 14888 (/home/.../projects/custom/kafka-consumer-test/out/production/classes started by ... in /home/.../projects/custom/kafka-consumer-test)
2018-02-02 23:38:51.508 INFO 14888 --- [ main] org.kafka.tests.App : No active profile set, falling back to default profiles: default
2018-02-02 23:38:51.591 INFO 14888 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext#281e3708: startup date [Fri Feb 02 23:38:51 MSK 2018]; root of context hierarchy
2018-02-02 23:38:52.028 INFO 14888 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.kafka.annotation.KafkaBootstrapConfiguration' of type [org.springframework.kafka.annotation.KafkaBootstrapConfiguration$$EnhancerBySpringCGLIB$$2d249aa5] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-02-02 23:38:52.205 INFO 14888 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2018-02-02 23:38:52.207 INFO 14888 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 0
2018-02-02 23:38:52.221 INFO 14888 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [localhost:9092]
check.crcs = true
client.id =
connections.max.idle.ms = 540000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = local-test-consumer-group
heartbeat.interval.ms = 3000
interceptor.classes = null
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
2018-02-02 23:38:52.225 INFO 14888 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [localhost:9092]
check.crcs = true
client.id = consumer-1
connections.max.idle.ms = 540000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = local-test-consumer-group
heartbeat.interval.ms = 3000
interceptor.classes = null
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
2018-02-02 23:38:52.258 INFO 14888 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka version : 0.10.2.0
2018-02-02 23:38:52.258 INFO 14888 --- [ main] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : 576d93a8dc0cf421
2018-02-02 23:38:52.268 INFO 14888 --- [ main] org.kafka.tests.App : Started App in 1.056 seconds (JVM running for 1.337)
2018-02-02 23:38:52.308 INFO 14888 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : Discovered coordinator kafka:9092 (id: 2147483646 rack: null) for group local-test-consumer-group.
2018-02-02 23:38:52.312 INFO 14888 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Revoking previously assigned partitions [] for group local-test-consumer-group
2018-02-02 23:38:52.313 INFO 14888 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked:[]
2018-02-02 23:38:52.313 INFO 14888 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : (Re-)joining group local-test-consumer-group
2018-02-02 23:38:52.319 INFO 14888 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : Successfully joined group local-test-consumer-group with generation 1
2018-02-02 23:38:52.320 INFO 14888 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : Setting newly assigned partitions [local-test-topic-3, local-test-topic-2, local-test-topic-1, local-test-topic-0] for group local-test-consumer-group
2018-02-02 23:38:52.328 INFO 14888 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned:[local-test-topic-3, local-test-topic-2, local-test-topic-1, local-test-topic-0]
consumer received message:[...]
consumer received message:[...]
consumer received message:[...]
consumer received message:[...]
consumer received message:[...]
consumer received message:[...]
consumer received message:[...]
consumer received message:[...]
consumer received message:[...]
consumer received message:[...]
Does anybody know how to tune it properly?
Or maybe memory usage is optimized in some higher version?
Kafka server: 0.10.2.0
Kafka client: 0.10.2.0
See the images:
UPD:
For Kafka consumer 1.0.0 (Spring Kafka 2.1.2) the memory usage diagram looks a bit better. Now the consumption line is not growing as fast as before.
But now the RMI TCP Connection thread consumes even more memory than the Kafka consumer thread.
Moreover, it seems that the consumer's params do affect memory usage.
With the consumer params FETCH_MAX_BYTES_CONFIG = 1 MB and MAX_PARTITION_FETCH_BYTES_CONFIG = 256 KB, consumption gets lower.
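For reference, a sketch of those tuned settings as they would appear inside consumerProperties() above, mirroring the commented-out lines in the config and using the values from this experiment:
// Lower fetch sizes so the consumer buffers less data per poll
props.put(ConsumerConfig.FETCH_MAX_BYTES_CONFIG, 1000000);          // ~1 MB per fetch
props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 256000); // ~256 KB per partition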
One cause of this "sawtooth pattern" is the Java VisualVM application itself. It asks your JVM for information every second. The JVM then creates a lot of objects for this process, which become obsolete after being sent to VisualVM and can therefore easily be garbage collected.
Try to decrease the polling rate in the settings of VisualVM. It should minimize the effect.