I have data like {name: "abc", age: 20} being read from Kafka in a Flink job written in Java.
I am trying to sink this data back into Kafka from Flink:
KafkaSink<AllIncidentsDataPOJO> sink = KafkaSink.<AllIncidentsDataPOJO>builder()
        .setBootstrapServers(this.bootstrapServer)
        .setKafkaProducerConfig(kafkaProps)
        .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                .setTopic("flink-all-incidents")
                .setValueSerializationSchema(new IncidentsSerializationSchema())
                .build())
        .setDeliverGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
        .build();
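Side note: with DeliveryGuarantee.EXACTLY_ONCE the sink writes inside Kafka transactions, which need extra configuration. A minimal sketch, assuming Flink's KafkaSink API; the prefix string below is arbitrary, and the timeout must stay at or below the broker's transaction.max.timeout.ms, which defaults to 15 minutes:

Properties kafkaProps = new Properties();
// Must not exceed the broker's transaction.max.timeout.ms (15 min by default).
kafkaProps.setProperty(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, "600000");

KafkaSink<AllIncidentsDataPOJO> sink = KafkaSink.<AllIncidentsDataPOJO>builder()
        .setBootstrapServers(this.bootstrapServer)
        .setKafkaProducerConfig(kafkaProps)
        // Exactly-once sinks need a transactional id prefix unique per job;
        // the name here is an assumption for illustration.
        .setTransactionalIdPrefix("flink-all-incidents-sink")
        .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                .setTopic("flink-all-incidents")
                .setValueSerializationSchema(new IncidentsSerializationSchema())
                .build())
        .setDeliverGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
        .build();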
IncidentsSerializationSchema:
public class IncidentsSerializationSchema implements SerializationSchema<AllIncidentsDataPOJO> {
    static ObjectMapper objectMapper = new ObjectMapper();

    @Override
    public byte[] serialize(AllIncidentsDataPOJO element) {
        // Recreate and configure the mapper if it was never initialized.
        if (objectMapper == null) {
            objectMapper = new ObjectMapper();
            objectMapper.setVisibility(PropertyAccessor.FIELD, JsonAutoDetect.Visibility.ANY);
        }
        try {
            // System.out.println("Returned value: " + objectMapper.writeValueAsString(element).getBytes());
            return objectMapper.writeValueAsString(element).getBytes();
        } catch (org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonProcessingException e) {
            System.out.println("Exception: " + e);
        }
        return new byte[0];
    }
}
Now when I print within the above class, it gives me output like:
Returned value: [B@6f29a884
Returned value: [B@743145c5
Returned value: [B@6aa6b431
Returned value: [B@208c0151
Returned value: [B@1a3a0c6d
Returned value: [B@7972da35
Returned value: [B@1097059f
Returned value: [B@6013a87b
Returned value: [B@27b252dc
Returned value: [B@288478b7
Returned value: [B@54185041
Returned value: [B@970646b
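Side note: those [B@... strings are not the JSON itself; [B@<hex> is the default toString() of a Java byte array, which is what string concatenation with a byte[] prints. To see the actual serialized payload, decode the bytes before printing, e.g.:

// Decode the serialized bytes back to text before printing.
byte[] bytes = objectMapper.writeValueAsString(element).getBytes(StandardCharsets.UTF_8);
System.out.println("Returned value: " + new String(bytes, StandardCharsets.UTF_8));

So serialization itself is working; the question is about the disconnects below.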
But when I use data.sinkTo(sink) after initializing the KafkaSink, the producer starts disconnecting:
[kafka-producer-network-thread | producer-kafka-sink-0-1] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-kafka-sink-0-1, transactionalId=kafka-sink-0-1] Node 8 disconnected.
[kafka-producer-network-thread | producer-kafka-sink-0-1] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-kafka-sink-0-1, transactionalId=kafka-sink-0-1] Cancelled in-flight API_VERSIONS request with correlation id 35 due to node 8 being disconnected (elapsed time since creation: 292ms, elapsed time since send: 292ms, request timeout: 30000ms)
[kafka-producer-network-thread | producer-kafka-sink-0-1] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-kafka-sink-0-1, transactionalId=kafka-sink-0-1] Node 8 disconnected.
[kafka-producer-network-thread | producer-kafka-sink-0-1] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-kafka-sink-0-1, transactionalId=kafka-sink-0-1] Cancelled in-flight API_VERSIONS request with correlation id 37 due to node 8 being disconnected (elapsed time since creation: 639ms, elapsed time since send: 639ms, request timeout: 30000ms)
[kafka-producer-network-thread | producer-kafka-sink-0-1] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-kafka-sink-0-1, transactionalId=kafka-sink-0-1] Node 8 disconnected.
[kafka-producer-network-thread | producer-kafka-sink-0-1] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-kafka-sink-0-1, transactionalId=kafka-sink-0-1] Cancelled in-flight API_VERSIONS request with correlation id 38 due to node 8 being disconnected (elapsed time since creation: 875ms, elapsed time since send: 875ms, request timeout: 30000ms)
Can someone please help me here?
I've created a user-defined stored procedure and I am trying to produce messages to Kafka inside the procedure, and I am getting the following error:
[kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected
Here is my code; it works perfectly when I run it in a sample project:
class KafkaProducer(topic: String, message: String) {
    init {
        val properties = Properties()
        properties[ProducerConfig.BOOTSTRAP_SERVERS_CONFIG] = "localhost:9092"
        properties[ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG] = StringSerializer::class.java.name
        properties[ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG] = StringSerializer::class.java.name
        val producer = KafkaProducer<String, String>(properties)
        val record = ProducerRecord(topic, "key", message)
        producer.send(record)
        producer.flush()
        producer.close()
    }
}
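For reference, a minimal Java sketch of the same producer. The bootstrap address is made configurable here (the system property name is just an illustration) because a "Bootstrap broker ... disconnected" warning usually means nothing reachable is listening on localhost:9092 from the JVM the code actually runs in; a stored procedure executes inside the database server's process, where localhost is the DB host rather than your workstation:

Properties props = new Properties();
// "kafka.bootstrap" is a hypothetical property name used for illustration.
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
        System.getProperty("kafka.bootstrap", "localhost:9092"));
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    producer.send(new ProducerRecord<>(topic, "key", message));
    producer.flush();
}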
Read.java
public class Read {
    public static void main(String[] args) {
        String conn = "db_url";
        String username = "*****";
        String pwd = "*****";
        String sql = "INSERT INTO table (column) values (?)";
        List<String> listparam = new ArrayList<>(); // marker words that start a new message; populated elsewhere
        Properties props = new Properties();
        props.put("bootstrap.servers", "10.247.36.174:3306");
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        Producer<String, String> producer = null;
        try (Connection con = DriverManager.getConnection(conn, username, pwd);
                PreparedStatement ps = con.prepareStatement(sql);
                BufferedReader br = new BufferedReader(new FileReader("All.log"))) {
            String line = br.readLine(); // seed the first message with the first line
            processMessages(line, br, listparam, ps);
        } catch (Exception ex) {
            System.err.println("Error in \n" + ex);
        } finally {
            // Signal the consumer side once the log file has been loaded into the DB.
            producer = new KafkaProducer<>(props);
            String msg = "Done";
            producer.send(new ProducerRecord<String, String>("HelloKafka", msg));
            System.out.println("Sent: " + msg);
        }
        producer.close();
    }

    public static void processMessages(String line, BufferedReader br, List<String> listparam, PreparedStatement ps)
            throws Exception {
        StringBuilder message = new StringBuilder();
        message.append(line);
        while ((line = br.readLine()) != null) {
            String firstWord = line.split(" ", 2)[0];
            if (listparam.contains(firstWord)) {
                ps.setString(1, message.toString());
                ps.executeUpdate();
                message.setLength(0);
                message.append(line);
            } else {
                message.append("\n" + line);
            }
        }
        if (message.length() > 0) {
            ps.setString(1, message.toString());
            ps.executeUpdate();
        }
    }
}
Retrieve.java
public class Retrieve {
    public static void main(String[] args) {
        String conn = "db_url";
        String username = "****";
        String pwd = "****";
        String sql = "SELECT * from table1";
        Properties props = new Properties();
        props.put("bootstrap.servers", "ipaddress");
        props.put("group.id", "group-1");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("auto.offset.reset", "earliest");
        props.put("session.timeout.ms", "30000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(props);
        kafkaConsumer.subscribe(Arrays.asList("HelloKafka"));
        while (true) {
            ConsumerRecords<String, String> records = kafkaConsumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                if (record.value().equals("Done")) {
                    try (Connection con = ...;
                            PreparedStatement ps = ....;) {
                        ResultSet rs = .....;
                        while (rs.next()) {
                            String rawData = rs.getString("RawData");
                        }
                    } catch (Exception ex) {
                        System.err.println("Error in \n" + ex);
                    }
                }
            }
        }
    }
}
I am new to Kafka. Can someone tell me which part I did wrong? I don't know how to use Kafka in Java. In Read.java I want to read from the log file and insert it into the DB, and when that finishes I want to send a message to Retrieve.java so it can start; Retrieve.java runs but sits idle waiting for the message from Read.java. Is this a bad approach, or an old approach? Any suggestions or help on fixing this? I keep getting an error that it can't connect to the IP address.
Below is the error message:
14:23:00.482 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Node -1 disconnected.
14:23:00.482 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Give up sending metadata request since no node is available
(the "Give up sending metadata request" line repeats roughly every 50 ms until 14:23:01.490)
14:23:01.490 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Initialize connection to node 10.247.36.174:3306 (id: -1 rack: null) for sending metadata request
14:23:01.490 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Initiating connection to node 10.247.36.174:3306 (id: -1 rack: null)
14:23:02.012 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=producer-1] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1
14:23:02.012 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Completed connection to node -1. Fetching API versions.
14:23:02.012 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Initiating API versions fetch from node -1.
14:23:02.520 [kafka-producer-network-thread | producer-1] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=producer-1] Connection with /10.247.36.174 disconnected
java.io.EOFException: null
Summarizing: to use a Kafka consumer or producer, you first have to start ZooKeeper and a Kafka broker.
For testing or development purposes you can start them yourself using:
- documentation: https://kafka.apache.org/documentation/#quickstart_startserver
- Docker image: https://hub.docker.com/r/wurstmeister/kafka
Once your Kafka is ready you can start using it. Remember to set a proper value for bootstrap.servers (for local use it is usually localhost:9092).
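A minimal producer sketch under those assumptions. Note that in Read.java above, bootstrap.servers is set to 10.247.36.174:3306, and 3306 is the default MySQL port, not a Kafka broker; that would explain why the connection opens and then dies with java.io.EOFException as soon as the producer attempts its Kafka protocol handshake:

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092"); // must be a Kafka broker, not the database
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
try (Producer<String, String> producer = new KafkaProducer<>(props)) {
    producer.send(new ProducerRecord<>("HelloKafka", "Done"));
    producer.flush();
}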
I've implemented a KafkaStreams app with the following properties:
application.id = KafkaStreams
application.server =
bootstrap.servers = [localhost:9092,localhost:9093]
buffered.records.per.partition = 1000
cache.max.bytes.buffering = 10485760
client.id =
commit.interval.ms = 30000
connections.max.idle.ms = 540000
default.key.serde = class org.apache.kafka.common.serialization.Serdes$StringSerde
default.timestamp.extractor = class org.apache.kafka.streams.processor.FailOnInvalidTimestamp
default.value.serde = class org.apache.kafka.common.serialization.Serdes$StringSerde
key.serde = null
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
num.standby.replicas = 0
num.stream.threads = 1
partition.grouper = class org.apache.kafka.streams.processor.DefaultPartitionGrouper
poll.ms = 100
processing.guarantee = at_least_once
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
replication.factor = 1
request.timeout.ms = 40000
retry.backoff.ms = 100
rocksdb.config.setter = null
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
state.cleanup.delay.ms = 600000
state.dir = /tmp/kafka-streams
timestamp.extractor = null
value.serde = null
windowstore.changelog.additional.retention.ms = 86400000
zookeeper.connect =
My Kafka version is 0.11.0.1. I launched two Kafka brokers on localhost:9092 and 9093 respectively. In both brokers default.replication.factor=2 and num.partitions=4 (the rest of the configuration properties are defaults).
My app receives streaming data from a specific topic, applies some transformations, and sends the data back to another topic. As soon as the second broker goes down, the app stops receiving data and prints the following:
INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Discovered coordinator localhost:9093 (id: 2147483646 rack: null) for group KafkaStreams.
[KafkaStreams-38259122-0ce7-41c3-8df6-7482626fec81-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Marking the coordinator localhost:9093 (id: 2147483646 rack: null) dead for group KafkaStreams
[KafkaStreams-38259122-0ce7-41c3-8df6-7482626fec81-StreamThread-1] INFO org.apache.kafka.clients.consumer.internals.AbstractCoordinator - Discovered coordinator localhost:9093 (id: 2147483646 rack: null) for group KafkaStreams.
[KafkaStreams-38259122-0ce7-41c3-8df6-7482626fec81-StreamThread-1] WARN org.apache.kafka.clients.NetworkClient - Connection to node 2147483646 could not be established. Broker may not be available.
[kafka-coordinator-heartbeat-thread | KafkaStreams] WARN org.apache.kafka.clients.NetworkClient - Connection to node 1 could not be established. Broker may not be available.
For some reason it doesn't rebalance and reconnect to the first broker. Any suggestions as to why this is happening?
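One thing that stands out in the config dump above: replication.factor = 1 is the Streams setting for the app's internal repartition and changelog topics, independent of the brokers' default.replication.factor = 2. If an internal topic, or the __consumer_offsets partition hosting the group, has its single replica on the broker that died, the app has nothing to fail over to. A hedged sketch of the relevant setting, which only takes effect for internal topics created after it is raised:

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "KafkaStreams");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092,localhost:9093");
// Internal changelog/repartition topics default to a replication factor of 1;
// with two brokers, 2 lets the app survive the loss of either one.
props.put(StreamsConfig.REPLICATION_FACTOR_CONFIG, 2);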
I'm using Kafka with the Consumer API (v0.10.0.0). Kafka is running in Docker using the image from http://wurstmeister.github.io/kafka-docker/
I'm also running this simple test:
@Test
public void test2() {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", RandomStringUtils.randomAlphabetic(8));
    props.put("auto.offset.reset", "earliest"); // the key is "auto.offset.reset", not "auto.offset.reset.config"
    props.put("enable.auto.commit", "false");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);

    Properties props1 = new Properties();
    props1.put("bootstrap.servers", "localhost:9092");
    props1.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props1.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    KafkaProducer<String, String> producer = new KafkaProducer<>(props1);

    consumer.subscribe(asList(TEST_TOPIC));
    producer.send(new ProducerRecord<>(TEST_TOPIC, 0, "key", "value message"));
    producer.flush();

    boolean done = false;
    while (!done) {
        ConsumerRecords<String, String> msg = consumer.poll(1000);
        if (msg.count() > 0) {
            for (ConsumerRecord<String, String> rec : msg) {
                System.out.println(rec.value());
            }
            consumer.commitSync();
            done = true;
        }
    }
    consumer.close();
    producer.close();
}
The topic name and consumer group id are randomly generated on each execution.
The behaviour is very erratic: sometimes it works, sometimes it starts looping on poll() with the following repeating output:
2017-04-20 12:01:46 DEBUG NetworkClient:476 - Completed connection to node 1003
2017-04-20 12:01:46 DEBUG NetworkClient:640 - Sending metadata request {topics=[ByjSIH]} to node 1003
2017-04-20 12:01:46 DEBUG Metadata:180 - Updated cluster metadata version 3 to Cluster(nodes = [192.168.100.80:9092 (id: 1003 rack: null)], partitions = [Partition(topic = ByjSIH, partition = 0, leader = 1003, replicas = [1003,], isr = [1003,]])
2017-04-20 12:01:46 DEBUG AbstractCoordinator:476 - Sending coordinator request for group RHAdpuiv to broker 192.168.100.80:9092 (id: 1003 rack: null)
2017-04-20 12:01:46 DEBUG AbstractCoordinator:489 - Received group coordinator response ClientResponse(receivedTimeMs=1492686106738, disconnected=false, request=ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler@2bea5ab4, request=RequestSend(header={api_key=10,api_version=0,correlation_id=3,client_id=consumer-1}, body={group_id=RHAdpuiv}), createdTimeMs=1492686106738, sendTimeMs=1492686106738), responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}})
(the metadata-request / coordinator-request cycle above then repeats about every 100 ms, with error_code=15 and coordinator={node_id=-1,host=,port=-1} in every coordinator response)
Does anyone know what's going on? It seems a fairly simple setup/test to me...
I've found the reason myself. I was running the consumer against a topic with only one partition, and I was killing the consumer process, so there was no clean shutdown.
In this situation the broker keeps the consumer's spot in the group until the session expires, and trying to join with another consumer results in that error until the expiry.
To solve it, one can:
- change the group id
- wait until the session expires
- restart the broker (?)
If someone with more knowledge can explain this better, please do.
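A sketch of the clean-shutdown part of that advice, assuming the same test-style consumer as above: KafkaConsumer.wakeup() is the one thread-safe consumer method and makes a blocked poll() throw WakeupException, so the group coordinator sees the consumer leave immediately instead of holding its spot until session.timeout.ms expires:

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
// Interrupt poll() when the JVM is asked to stop (Ctrl+C, kill).
Runtime.getRuntime().addShutdownHook(new Thread(consumer::wakeup));
try {
    consumer.subscribe(Collections.singletonList(TEST_TOPIC));
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(1000);
        for (ConsumerRecord<String, String> rec : records) {
            System.out.println(rec.value());
        }
    }
} catch (WakeupException e) {
    // Expected during shutdown; fall through to close().
} finally {
    consumer.close(); // sends LeaveGroup so the group can rebalance at once
}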