KafkaStreams - recover stream after broker failure - java

I've implemented a KafkaStreams app with the following properties
application.id = KafkaStreams
application.server =
bootstrap.servers = [localhost:9092,localhost:9093]
buffered.records.per.partition = 1000
cache.max.bytes.buffering = 10485760
client.id =
commit.interval.ms = 30000
connections.max.idle.ms = 540000
default.key.serde = class org.apache.kafka.common.serialization.Serdes$StringSerde
default.timestamp.extractor = class org.apache.kafka.streams.processor.FailOnInvalidTimestamp
default.value.serde = class org.apache.kafka.common.serialization.Serdes$StringSerde
key.serde = null
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
num.standby.replicas = 0
num.stream.threads = 1
partition.grouper = class org.apache.kafka.streams.processor.DefaultPartitionGrouper
poll.ms = 100
processing.guarantee = at_least_once
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
replication.factor = 1
request.timeout.ms = 40000
retry.backoff.ms = 100
rocksdb.config.setter = null
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
state.cleanup.delay.ms = 600000
state.dir = /tmp/kafka-streams
timestamp.extractor = null
value.serde = null
windowstore.changelog.additional.retention.ms = 86400000
zookeeper.connect =
My Kafka version is 0.11.0.1. I launched two Kafka brokers on localhost:9092 and localhost:9093. In both brokers default.replication.factor=2 and num.partitions=4 (the rest of the configuration properties are defaults).
My app receives streaming data from a specific topic, makes some transformations and sends the data back to another topic. As soon as the second broker goes down, the app stops receiving data and prints the following:
INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Discovered coordinator localhost:9093 (id: 2147483646 rack: null) for group KafkaStreams.
[KafkaStreams-38259122-0ce7-41c3-8df6-7482626fec81-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Marking the coordinator localhost:9093 (id: 2147483646 rack: null) dead for group KafkaStreams
[KafkaStreams-38259122-0ce7-41c3-8df6-7482626fec81-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Discovered coordinator localhost:9093 (id: 2147483646 rack: null) for group KafkaStreams.
[KafkaStreams-38259122-0ce7-41c3-8df6-7482626fec81-StreamThread-1] WARN org.apache.kafka.clients.NetworkClient - Connection to node 2147483646 could not be established. Broker may not be available.
[kafka-coordinator-heartbeat-thread | KafkaStreams] WARN org.apache.kafka.clients.NetworkClient - Connection to node 1 could not be established. Broker may not be available.
For some reason it doesn't rebalance and reconnect to the first broker. Any suggestions as to why this is happening?
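For reference, here is a minimal sketch (against the 0.11 Streams API) of the kind of topology described above. The topic names and the mapValues step are placeholders I've assumed, not the actual app; the replication.factor comment only restates what the config dump above already shows.
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class StreamsSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "KafkaStreams");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092,localhost:9093");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        // replication.factor = 1, as in the config dump above, means Streams'
        // internal repartition/changelog topics get only a single replica.
        props.put(StreamsConfig.REPLICATION_FACTOR_CONFIG, 1);

        // "input-topic", "output-topic" and the transformation are placeholders.
        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, String> source = builder.stream("input-topic");
        source.mapValues(value -> value.toUpperCase()).to("output-topic");

        new KafkaStreams(builder, props).start();
    }
}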

Related

Apache camel kafka consumer performance problem

We have observed that our Apache Camel Kafka consumer is only able to process about 100 Kb/second, even though our Kafka producing rate is much higher. We have 6 Kafka partitions and 6 Apache Camel Kafka consumer instances.
We have tried the multithreading options - threads(300, 500) - but we could not see any improvement.
It would be really helpful if someone could help us understand the correct configuration to improve the Kafka consumer rate.
We are using all the default settings for the Apache Camel Kafka consumer.
I can see the following Kafka configuration at Spring Boot application start:
INFO 1 --- [ main] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [kafka1.xxx.net:9093, kafka2.xxx.net:9093, kafka3.xxx.net:9093, kafka4.xxx.net:9093, kafka5.xxx.net:9093, kafka6.xxx.net:9093]
buffer.memory = 33554432
client.dns.lookup = use_all_dns_ips
client.id = producer-1
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 104857600
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metric.reporters = [org.apache.kafka.common.metrics.JmxReporter]
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 65536000
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = SSL
security.providers = null
send.buffer.bytes = 131072
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = [hidden]
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = /tmp/certs/XXX.jks
ssl.keystore.password = [hidden]
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = /tmp/certsFolder/kafka.truststore.jks
ssl.truststore.password = [hidden]
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
Is it something related to the Kafka consumer properties?
spring-boot-starter-parent version - 2.6.8
org.apache.camel.springboot version - 3.14.1
Grafana Dashboard -
Producer Rate and Consumer Rates Overview:
Kafka Partition 1 – Producer and Consumer: the producer rate is a constant line, but the Apache Camel consumer's consumption rate goes up and down.
Kafka Partition 2 – Producer and Consumer: the producer rate is a constant line, but the Apache Camel consumer's consumption rate goes up and down.
Kafka Partition 3 – Producer and Consumer: the producer rate is a constant line, but the Apache Camel consumer's consumption rate goes up and down.
Kafka Partition 4 – Producer and Consumer: the producer rate is a constant line, but the Apache Camel consumer only has a consumption rate of 30 B/s.
Kafka Partition 5 – Producer and Consumer: the producer rate is a constant line, but the Apache Camel consumer only has a consumption rate of 30 B/s.
Kafka Partition 6 – Producer and Consumer: the producer rate is a constant line, but the Apache Camel consumer's consumption rate goes up and down.
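For what it's worth, here is a minimal sketch of a Camel Kafka consumer route with a few of the throughput-related endpoint options spelled out. The topic, broker, group id and option values are illustrative assumptions, not our actual route:
import org.apache.camel.builder.RouteBuilder;

public class KafkaConsumerRoute extends RouteBuilder {
    @Override
    public void configure() {
        // All endpoint values below are placeholders for illustration.
        from("kafka:my-topic"
                + "?brokers=kafka1.xxx.net:9093"
                + "&groupId=my-consumer-group"
                + "&consumersCount=6"      // one Camel consumer per partition
                + "&maxPollRecords=500"    // records returned per poll
                + "&fetchMinBytes=1")
            .to("log:consumed?groupInterval=10000");
    }
}
(With the SSL setup shown in the config dump, the endpoint would also need its securityProtocol and SSL options configured.)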

Kafka Consumer Coordinator connection issues, Kafka 0.11.0.3

I can't seem to get my Java Kafka client to work. Symptoms:
"Discovered coordinator" is seen in logs, then less than one second later, "Marking the coordinator ... dead" is seen. No more output appears after that.
Debugging the code shows that org.apache.kafka.clients.consumer.KafkaConsumer.poll() never returns. The code is stuck in this do-while loop in the ConsumerNetworkClient class:
public boolean awaitMetadataUpdate(long timeout) {
    long startMs = this.time.milliseconds();
    int version = this.metadata.requestUpdate();
    do {
        this.poll(timeout);
    } while (this.metadata.version() == version && this.time.milliseconds() - startMs < timeout);
    return this.metadata.version() > version;
}
The logs say:
2019-09-25 15:25:45.268 [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [localhost:9092]
check.crcs = true
client.id =
connections.max.idle.ms = 540000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = foo
heartbeat.interval.ms = 3000
interceptor.classes = null
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class com.mycompany.KafkaMessageJsonNodeDeserializer
2019-09-25 15:25:45.312 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 0.11.0.3
2019-09-25 15:25:45.312 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : 26ddb9e3197be39a
2019-09-25 15:25:47.700 [pool-2-thread-1] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Discovered coordinator ad0c03f60f39:9092 (id: 2147483647 rack: null) for group foo.
2019-09-25 15:25:47.705 [pool-2-thread-1] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Marking the coordinator ad0c03f60f39:9092 (id: 2147483647 rack: null) dead for group foo
If debug logging were turned on, the logs would also contain a message like:
Coordinator discovery failed for group foo, refreshing metadata
More details:
I'm running Kafka inside a Docker container. When running the console consumer within the Docker container, all is well: messages are received just fine by the console consumer. My app (where the issue occurs) is running outside the Docker container.
The docker run command includes -p 2181:2181 -p 9001:9001 -p 9092:9092.
The stack looks like this when the Kafka client gets stuck in the loop:
awaitMetadataUpdate:134, ConsumerNetworkClient (org.apache.kafka.clients.consumer.internals)
ensureCoordinatorReady:226, AbstractCoordinator (org.apache.kafka.clients.consumer.internals)
ensureCoordinatorReady:203, AbstractCoordinator (org.apache.kafka.clients.consumer.internals)
poll:286, ConsumerCoordinator (org.apache.kafka.clients.consumer.internals)
pollOnce:1078, KafkaConsumer (org.apache.kafka.clients.consumer)
poll:1043, KafkaConsumer (org.apache.kafka.clients.consumer)
It looks like your broker is advertising itself as ad0c03f60f39, and you seem to be running the client from your host machine, which cannot resolve ad0c03f60f39 for obvious reasons. You need to configure the broker to advertise itself as something that is resolvable from the host. Look for "advertised.listeners" in server.properties; you can set something like PLAINTEXT://localhost:9092.
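For example, a sketch of the relevant broker settings in server.properties, assuming the container's port 9092 is mapped to the host as in the docker run command above:
# server.properties (broker side) - illustrative values
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://localhost:9092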

Sending Json Object to kafka topic in java

I want to send a JSON object to my Kafka topic, but I am facing a problem.
I use a POJO with a single instance variable, fileName, where I set the file name and send it to the Kafka topic.
KafkaJsonSend objSend= new KafkaJsonSend();
objSend.setFileName(filename);
//Configure the Producer
Properties configProperties = new Properties();
configProperties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,"10.10.51.10:9092");
configProperties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
configProperties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,JsonSerializer.class);
Producer<String, JsonNode> producer = new KafkaProducer<String, JsonNode>(configProperties);
JsonNode jsonNode = objectMapper.valueToTree(objSend);
ProducerRecord<String, JsonNode> rec = new ProducerRecord<String, JsonNode>("BlueShifts", jsonNode);
producer.send(rec);
producer.close();
But when I run this code, I get an exception that is continuously logged to my console:
Error: Uncaught error in kafka producer I/O thread
IllegalStateException: No entry found for connection 0
I have also tried with Spring Kafka, but I got this in the console:
2019-02-04 16:43:13.938 INFO 4432 --- [nio-6020-exec-1] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
block.on.buffer.full = false
bootstrap.servers = [10.0.2.15:9092]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 540000
interceptor.classes = null
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.fetch.timeout.ms = 60000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 0
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
timeout.ms = 30000
value.serializer = class org.springframework.kafka.support.serializer.JsonSerializer
2019-02-04 16:43:14.158 INFO 4432 --- [nio-6020-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka version : 0.10.2.0
2019-02-04 16:43:14.160 INFO 4432 --- [nio-6020-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : 576d93a8dc0cf421
2019-02-04 16:44:14.206 ERROR 4432 --- [nio-6020-exec-1] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='KafkaJsonSend [fileName=a0a7caf336e8481fb6db2de70d39029e_1549278789987.mp3]' to topic BlueShifts:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
It worked: I added "10.10.51.10 kafka" to the hosts file at C:\Windows\System32\drivers\etc, and it worked both ways. Can you tell me why it is mapped to the hostname kafka, and why it couldn't be found via the IP?
You need to configure your Kafka brokers correctly for your network: https://rmoff.net/2018/08/02/kafka-listeners-explained/
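If the broker is advertising itself under the hostname kafka (which would explain why the hosts-file entry above fixes it), an alternative is to make it advertise an address the client can already resolve, for example (a sketch using the IP from the question):
# server.properties on the broker - illustrative value
advertised.listeners=PLAINTEXT://10.10.51.10:9092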

Java/Scala Kafka Producer does not send message to topic

I'm having a problem sending a serialized XML to my Kafka topic. Whenever I run my code, I don't get any exceptions or error messages, but I still can't see any of my messages in the Kafka topic.
My Kafka producer settings are:
def WartungsdbKafkaConnector(args: Array[String]): Unit = {
  val xmlFile = args(0)
  val record = getRecord(xmlFile)
  val kafkaProducer = getKafkaProducer
  kafkaProducer.send(record)
}

protected def getRecord(xmlFile: String): ProducerRecord[String, String] = {
  val lines = scala.io.Source.fromFile(xmlFile).mkString
  val xml = scala.xml.XML.loadString(lines)
  val paramPress = xml \ "PARAMETER" \ "PRESS"
  val databaseId = allCatch.opt {paramPress.\@("NUMBER")}
  val key = databaseId.get
  val topic = args(1)
  new ProducerRecord(topic, key, lines)
}

protected def getKafkaProducer: KafkaProducer[String, String] = {
  val props = new Properties
  props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
    "ec-x.eu-west-1.compute.amazonaws.com:9092," +
    "ec2-x.eu-west-1.compute.amazonaws.com:9092," +
    "ec2-x.eu-west-1.compute.amazonaws.com:9092")
  props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true")
  props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
  props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
  props.put(ProducerConfig.LINGER_MS_CONFIG, "100")
  props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy")
  props.put(ProducerConfig.RETRIES_CONFIG, "20")
  props.put(ProducerConfig.ACKS_CONFIG, "all")
  new KafkaProducer[String, String](props)
}
When I run the code, I get:
[main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
acks = all
batch.size = 16384
bootstrap.servers = [ec2-x.eu-west-1.compute.amazonaws.com:9092, ec2-x.eu-west-1.compute.amazonaws.com:9092, ec2-x.eu-west-1.compute.amazonaws.com:9092]
buffer.memory = 33554432
client.id =
compression.type = snappy
connections.max.idle.ms = 540000
enable.idempotence = true
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 100
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
[main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=producer-1] Instantiated an idempotent producer.
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 2.0.0
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : 3402a8361b734732
[kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.Metadata - Cluster ID: xeb6oWNpTgSQ_9FHctZ2ng
[kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=producer-1] ProducerId set to 150671 with epoch 0
Any idea how to make it work? Thanks in advance!
You're not flushing, waiting for, or closing the producer, so the app just exits without actually sending the data.
Producers batch data for a configurable amount of time and number of messages in order to reduce the number of send requests that actually go to the brokers.
Try
kafkaProducer.send(record) // optionally call get() on this to capture the result and potential errors
kafkaProducer.flush()
kafkaProducer.close()
Most importantly, never forget to close the producer (or consumer).
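As a side note, one way to guarantee the flush/close always happens is try-with-resources. This is a minimal Java sketch (the broker address and topic are placeholders); the same pattern applies to the Scala code above:
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // KafkaProducer implements Closeable, so close() runs even if send() throws;
        // close() also waits for batched/in-flight records to be delivered.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"))
                    .get(); // block on the Future to surface any send error
        }
    }
}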

Kafka Consumer stuck joining cluster

I'm using Kafka with the Consumer API (v. 0.10.0.0). Kafka is running in Docker, using the image from http://wurstmeister.github.io/kafka-docker/.
I'm also running this simple test:
@Test
public void test2() {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", RandomStringUtils.randomAlphabetic(8));
    props.put("auto.offset.reset.config", "earliest");
    props.put("enable.auto.commit", "false");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);

    Properties props1 = new Properties();
    props1.put("bootstrap.servers", "localhost:9092");
    props1.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props1.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    KafkaProducer<String, String> producer1 = new KafkaProducer<>(props1);
    KafkaProducer<String, String> producer = producer1;

    consumer.subscribe(asList(TEST_TOPIC));
    producer.send(new ProducerRecord<>(TEST_TOPIC, 0, "key", "value message"));
    producer.flush();

    boolean done = false;
    while (!done) {
        ConsumerRecords<String, String> msg = consumer.poll(1000);
        if (msg.count() > 0) {
            Iterator<ConsumerRecord<String, String>> msgIt = msg.iterator();
            while (msgIt.hasNext()) {
                ConsumerRecord<String, String> rec = msgIt.next();
                System.out.println(rec.value());
            }
            consumer.commitSync();
            done = true;
        }
    }
    consumer.close();
    producer.close();
}
The topic name and group id are randomly generated at each execution.
The behaviour is very erratic: sometimes it works, and sometimes it starts looping on .poll() with the following repeating output:
2017-04-20 12:01:46 DEBUG NetworkClient:476 - Completed connection to node 1003
2017-04-20 12:01:46 DEBUG NetworkClient:640 - Sending metadata request {topics=[ByjSIH]} to node 1003
2017-04-20 12:01:46 DEBUG Metadata:180 - Updated cluster metadata version 3 to Cluster(nodes = [192.168.100.80:9092 (id: 1003 rack: null)], partitions = [Partition(topic = ByjSIH, partition = 0, leader = 1003, replicas = [1003,], isr = [1003,]])
2017-04-20 12:01:46 DEBUG AbstractCoordinator:476 - Sending coordinator request for group RHAdpuiv to broker 192.168.100.80:9092 (id: 1003 rack: null)
2017-04-20 12:01:46 DEBUG AbstractCoordinator:489 - Received group coordinator response ClientResponse(receivedTimeMs=1492686106738, disconnected=false, request=ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler#2bea5ab4, request=RequestSend(header={api_key=10,api_version=0,correlation_id=3,client_id=consumer-1}, body={group_id=RHAdpuiv}), createdTimeMs=1492686106738, sendTimeMs=1492686106738), responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
2017-04-20 12:01:46 DEBUG NetworkClient:640 - Sending metadata request {topics=[ByjSIH]} to node 1003
2017-04-20 12:01:46 DEBUG Metadata:180 - Updated cluster metadata version 4 to Cluster(nodes = [192.168.100.80:9092 (id: 1003 rack: null)], partitions = [Partition(topic = ByjSIH, partition = 0, leader = 1003, replicas = [1003,], isr = [1003,]])
2017-04-20 12:01:46 DEBUG AbstractCoordinator:476 - Sending coordinator request for group RHAdpuiv to broker 192.168.100.80:9092 (id: 1003 rack: null)
2017-04-20 12:01:46 DEBUG AbstractCoordinator:489 - Received group coordinator response ClientResponse(receivedTimeMs=1492686106840, disconnected=false, request=ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler#3d8314f0, request=RequestSend(header={api_key=10,api_version=0,correlation_id=5,client_id=consumer-1}, body={group_id=RHAdpuiv}), createdTimeMs=1492686106839, sendTimeMs=1492686106839), responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
2017-04-20 12:01:46 DEBUG NetworkClient:640 - Sending metadata request {topics=[ByjSIH]} to node 1003
2017-04-20 12:01:46 DEBUG Metadata:180 - Updated cluster metadata version 5 to Cluster(nodes = [192.168.100.80:9092 (id: 1003 rack: null)], partitions = [Partition(topic = ByjSIH, partition = 0, leader = 1003, replicas = [1003,], isr = [1003,]])
2017-04-20 12:01:46 DEBUG AbstractCoordinator:476 - Sending coordinator request for group RHAdpuiv to broker 192.168.100.80:9092 (id: 1003 rack: null)
2017-04-20 12:01:46 DEBUG AbstractCoordinator:489 - Received group coordinator response ClientResponse(receivedTimeMs=1492686106941, disconnected=false, request=ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler#2df32bf7, request=RequestSend(header={api_key=10,api_version=0,correlation_id=7,client_id=consumer-1}, body={group_id=RHAdpuiv}), createdTimeMs=1492686106940, sendTimeMs=1492686106940), responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
2017-04-20 12:01:47 DEBUG NetworkClient:640 - Sending metadata request {topics=[ByjSIH]} to node 1003
2017-04-20 12:01:47 DEBUG Metadata:180 - Updated cluster metadata version 6 to Cluster(nodes = [192.168.100.80:9092 (id: 1003 rack: null)], partitions = [Partition(topic = ByjSIH, partition = 0, leader = 1003, replicas = [1003,], isr = [1003,]])
2017-04-20 12:01:47 DEBUG AbstractCoordinator:476 - Sending coordinator request for group RHAdpuiv to broker 192.168.100.80:9092 (id: 1003 rack: null)
2017-04-20 12:01:47 DEBUG AbstractCoordinator:489 - Received group coordinator response ClientResponse(receivedTimeMs=1492686107042, disconnected=false, request=ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler#530612ba, request=RequestSend(header={api_key=10,api_version=0,correlation_id=9,client_id=consumer-1}, body={group_id=RHAdpuiv}), createdTimeMs=1492686107041, sendTimeMs=1492686107041), responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
2017-04-20 12:01:47 DEBUG NetworkClient:640 - Sending metadata request {topics=[ByjSIH]} to node 1003
2017-04-20 12:01:47 DEBUG Metadata:180 - Updated cluster metadata version 7 to Cluster(nodes = [192.168.100.80:9092 (id: 1003 rack: null)], partitions = [Partition(topic = ByjSIH, partition = 0, leader = 1003, replicas = [1003,], isr = [1003,]])
2017-04-20 12:01:47 DEBUG AbstractCoordinator:476 - Sending coordinator request for group RHAdpuiv to broker 192.168.100.80:9092 (id: 1003 rack: null)
2017-04-20 12:01:47 DEBUG AbstractCoordinator:489 - Received group coordinator response ClientResponse(receivedTimeMs=1492686107144, disconnected=false, request=ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler#2a40cd94, request=RequestSend(header={api_key=10,api_version=0,correlation_id=11,client_id=consumer-1}, body={group_id=RHAdpuiv}), createdTimeMs=1492686107144, sendTimeMs=1492686107144), responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
2017-04-20 12:01:47 DEBUG NetworkClient:640 - Sending metadata request {topics=[ByjSIH]} to node 1003
Does anyone know what's going on? It seems a fairly simple setup/test to me...
I've found the reason myself. I was running the consumer on a topic with only one partition, and I was killing the consumer process, so there was no clean shutdown.
In this situation the broker keeps the spot for the consumer until the session expires. Trying to join with another consumer results in that error until the expiry.
To solve it one can:
- Change the group id
- Wait until the session expires
- Restart the broker (?)
If someone with more knowledge can explain this better, please do.
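To get the clean shutdown mentioned above, a common pattern (a sketch; props and TEST_TOPIC are the same as in the test) is to interrupt poll() from a shutdown hook with wakeup() and then close the consumer, so the broker sees the consumer leave the group immediately instead of holding its spot until the session times out:
// Sketch: props and TEST_TOPIC are assumed to be defined as in the test above.
final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
final Thread mainThread = Thread.currentThread();
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    consumer.wakeup();   // makes a blocked poll() throw WakeupException
    try { mainThread.join(); } catch (InterruptedException ignored) { }
}));
try {
    consumer.subscribe(Collections.singletonList(TEST_TOPIC));
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(1000);
        records.forEach(rec -> System.out.println(rec.value()));
    }
} catch (WakeupException e) {
    // expected on shutdown
} finally {
    consumer.close();    // leaves the group so a new consumer can join right away
}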
