Error while sending message in Kafka - java

I am trying to write a producer in Kafka.
My Maven pom.xml is:
<dependencies>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.10</artifactId>
<version>0.9.0.1</version>
</dependency>
</dependencies>
My producer code is:
import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
public class KafkaProducer {
public static void main(String[] args) {
Properties props = new Properties();
props.put("zk.connect", "serverIp:2181");
props.put("serializer.class", "kafka.serializer.StringEncoder");
ProducerConfig config = new ProducerConfig(props);
Producer<String, String> producer = new Producer<String, String>(config);
producer.send(new KeyedMessage<String, String>("TestTopic", "Message1"));
}
}
While executing this code, I am getting this error
Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: Missing required property 'metadata.broker.list'
at scala.Predef$.require(Predef.scala:233)
at kafka.utils.VerifiableProperties.getString(VerifiableProperties.scala:177)
at kafka.producer.ProducerConfig.<init>(ProducerConfig.scala:66)
at kafka.producer.ProducerConfig.<init>(ProducerConfig.scala:56)
at kafkaPOC.KafkaProducer.main(KafkaProducer.java:16)
Can anyone please help me solve this exception?
Thanks in advance!

You are missing the metadata.broker.list property:
props.put("metadata.broker.list", "<Kafka broker IP address>:9092");

Related

How to make kafka consumer stop retrying and crash when broker is not available?

Currently, when the broker is not available, I see that the Java consumer will try to reconnect to the broker for an infinite amount of time (or at least I was not able to wait until it stops).
What I would like is the ability to have an exception thrown from poll() when the broker is not available. Or maybe some other way, but I want my app to crash when the broker is not available. It seems like it should be very easy to configure, but I'm unable to find info anywhere.
Here is an example. It always returns 0 records from poll() and retries the connection to the broker forever.
package org.example;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.List;
import java.util.Properties;
public class Consumer {
public static void main(String[] args) {
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9094");
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-consumer-group");
try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
consumer.subscribe(List.of("game.journal"));
while (true) {
ConsumerRecords<String, String> recs = consumer.poll(Duration.ofMillis(100));
for (ConsumerRecord<String, String> rec : recs) {
System.out.printf("Recieved %s: %s", rec.key(), rec.value());
}
}
}
}
}
So how can I make consumer crash when broker is not available?
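One hedged approach (an assumption on my part, not something the consumer config offers out of the box): probe the cluster with AdminClient before entering the poll loop, and fail fast if no broker answers within a timeout. A minimal sketch:
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
public class BrokerProbe {
public static void main(String[] args) throws Exception {
Properties props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9094");
try (AdminClient admin = AdminClient.create(props)) {
// The future completes only once a broker responds; get(...) throws
// TimeoutException when nothing is reachable within 5 seconds.
admin.describeCluster().nodes().get(5, TimeUnit.SECONDS);
} catch (TimeoutException | ExecutionException e) {
throw new IllegalStateException("Kafka broker is not available", e);
}
// ...only start the regular poll loop after the probe succeeds.
}
}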

Error using Avrodeserializer in consumer code - apache kafka

I am using the Confluent open-source version of Kafka, with Avro for serialization and deserialization. My producer Java code is able to push the record to the broker, but when my Java consumer tries to read the data, I get the error below.
Exception in thread "main" org.apache.kafka.common.errors.RecordDeserializationException: Error deserializing key/value for partition clickRecordsEvents-0 at offset 1. If needed, please seek past the record to continue consumption.
at org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:1429)
at org.apache.kafka.clients.consumer.internals.Fetcher.access$3400(Fetcher.java:134)
at org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.fetchRecords(Fetcher.java:1652)
at org.apache.kafka.clients.consumer.internals.Fetcher$CompletedFetch.access$1800(Fetcher.java:1488)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:721)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:672)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1304)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1238)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211)
at com.ru.kafka.consumer.deserializer.avro.AvroConsumerDemo.main(AvroConsumerDemo.java:31)
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id 1
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:156)
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:79)
at io.confluent.kafka.serializers.KafkaAvroDeserializer.deserialize(KafkaAvroDeserializer.java:55)
at org.apache.kafka.common.serialization.Deserializer.deserialize(Deserializer.java:60)
at org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:1420)
... 9 more
Caused by: org.apache.kafka.common.errors.SerializationException: Could not find class ClickRecord specified in writer's schema whilst finding reader's schema for a SpecificRecord.
I am able to read the records with the console consumer from the command prompt.
Here's my consumer code:
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import com.ru.kafka.avro.pojo.ClickRecord; // assumed package, matching the producer's import
public class AvroConsumerDemo {
public static void main(String[] args) {
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "clicksCG");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "io.confluent.kafka.serializers.KafkaAvroDeserializer");
props.put("specific.avro.reader", "true");
props.put("schema.registry.url", "http://localhost:8083");
String topic = "clickRecordsEvents";
KafkaConsumer<String, ClickRecord> consumer = new KafkaConsumer<String, ClickRecord>(props);
consumer.subscribe(Collections.singletonList(topic));
System.out.println("Reading topic:" + topic);
while (true) {
ConsumerRecords<String, ClickRecord> records = consumer.poll(Duration.ofMillis(1000));
for (ConsumerRecord<String, ClickRecord> record : records) {
System.out.println("Current customer name is: " + record.value().getBrowser());
System.out.println("Session ID read " + record.value().getSessionId());
System.out.println("Browser read " + record.value().getBrowser());
System.out.println("Compaign read " + record.value().getCompaign());
System.out.println("IP read " + record.value().getIp());
System.out.println("Channel" + record.value().getChannel());
System.out.println("Refrrer" + record.value().getRefferer());
}
consumer.commitSync();
}
}
}
Note that I have generated the ClickRecord Java class using an Avro schema file and the Avro tools plugin; it's not a plain Java POJO.
Here's my producer code:
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import com.ru.kafka.avro.pojo.ClickRecord;
public class AvroProducerDemo {
public static void main(String[] args) {
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
props.put("schema.registry.url", "http://localhost:8083"); // URL points to the schema registry.
String topic = "clickRecordsEvents";
Producer<String, ClickRecord> producer = new KafkaProducer<String, ClickRecord>(props);
ClickRecord clickRecord = new ClickRecord();
clickRecord.setSessionId("ABC1245");
clickRecord.setBrowser("Chrome");
clickRecord.setIp("192.168.32.56");
clickRecord.setChannel("HomePage");
System.out.println("Generated clickRecord " + clickRecord.toString());
ProducerRecord<String, ClickRecord> record = new ProducerRecord<String, ClickRecord>(topic,
clickRecord.getSessionId().toString(), clickRecord);
try {
RecordMetadata metdata = producer.send(record).get();
System.out.println("Record written to partition" + metdata.partition());
} catch (InterruptedException e) {
e.printStackTrace();
} catch (ExecutionException e) {
e.printStackTrace();
} finally {
producer.close();
}
}
}
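The last line of the stack trace ("Could not find class ClickRecord specified in writer's schema") suggests the consumer cannot resolve the generated class under the name recorded in the writer's schema. As a hedged diagnostic (not part of the original post), reading the same topic with GenericRecord side-steps the SpecificRecord class lookup; if this works, the problem is most likely the class name/namespace in the writer's schema versus the generated class on the consumer's classpath:
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
public class AvroGenericConsumerCheck {
public static void main(String[] args) {
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "clicksCG-generic");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "io.confluent.kafka.serializers.KafkaAvroDeserializer");
// No specific.avro.reader here: values come back as GenericRecord.
props.put("schema.registry.url", "http://localhost:8083"); // same URL as in the question
try (KafkaConsumer<String, GenericRecord> consumer = new KafkaConsumer<>(props)) {
consumer.subscribe(Collections.singletonList("clickRecordsEvents"));
ConsumerRecords<String, GenericRecord> records = consumer.poll(Duration.ofSeconds(5));
for (ConsumerRecord<String, GenericRecord> record : records) {
// Field name is an assumption based on the getters shown in the question.
System.out.println("sessionId = " + record.value().get("sessionId"));
}
}
}
}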

Kafka sending Streams

Hello, I want to send a message from a producer and receive it in a consumer as capitalized, using Confluent and Kafka, with a build.gradle file and a StreamingApp.java containing this code:
StreamingApp.java:
package kafka_stream;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import java.util.Properties;
public class StreamingApp {
public static void main(String[] args) throws Exception {
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG,"streaming_app_id");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG,"localhost:9092");
StreamsConfig config = new StreamsConfig(props);
StreamsBuilder builder = new StreamsBuilder();
Topology topology = builder.build();
KafkaStreams streams = new KafkaStreams(topology,config);
KStream<String, String> simpleFirstStream = builder.stream("src-topic");
KStream<String, String> upperCasedStream = simpleFirstStream.mapValues(String::toUpperCase);
upperCasedStream.to("out-topic");
System.out.println("Streaming App Started");
streams.start();
Thread.sleep(30000);
System.out.println("shutting downl the streaming app");
streams.close();
}
}
So how can I get this solved?
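One likely fix (a sketch, not from the original post): in the code above, builder.build() is called before the stream is defined, so the KafkaStreams instance is started with an empty topology and nothing ever reaches out-topic. Defining the processing steps first and building the topology afterwards, plus default String serdes (assumed here, since the original does not show any), should get the uppercased messages flowing:
package kafka_stream;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
public class StreamingApp {
public static void main(String[] args) throws Exception {
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streaming_app_id");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// Assumed default serdes; the original post does not show any.
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> simpleFirstStream = builder.stream("src-topic");
KStream<String, String> upperCasedStream = simpleFirstStream.mapValues(v -> v.toUpperCase());
upperCasedStream.to("out-topic");
// Build the topology only after the processing steps are declared.
KafkaStreams streams = new KafkaStreams(builder.build(), props);
System.out.println("Streaming App Started");
streams.start();
Thread.sleep(30000);
System.out.println("shutting down the streaming app");
streams.close();
}
}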

Apache Spark Streaming with Java & Kafka

I'm trying to run the Spark Streaming example from the official Spark website.
These are the dependencies I use in my pom file:
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>2.3.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.11</artifactId>
<version>2.3.1</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
<version>2.2.0</version>
</dependency>
This is my Java code:
package com.myproject.spark;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;
import com.myproject.spark.serialization.JsonDeserializer;
import scala.Tuple2;
public class MainEntryPoint {
public static void main(String[] args) {
Map<String, Object> kafkaParams = new HashMap<String, Object>();
kafkaParams.put("bootstrap.servers", "localhost:9092,localhost:9093,localhost:9094");
kafkaParams.put("key.deserializer", StringDeserializer.class);
kafkaParams.put("value.deserializer",JsonDeserializer.class.getName());
kafkaParams.put("group.id", "ttk-event-listener");
kafkaParams.put("auto.offset.reset", "latest");
kafkaParams.put("enable.auto.commit", false);
Collection<String> topics = Arrays.asList("topic1", "topic2");
SparkConf conf = new SparkConf()
.setMaster("local[*]")
.setAppName("EMSStreamingApp");
JavaStreamingContext streamingContext =
new JavaStreamingContext(conf, Durations.seconds(1));
JavaInputDStream<ConsumerRecord<String, String>> stream =
KafkaUtils.createDirectStream(
streamingContext,
LocationStrategies.PreferConsistent(),
ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams)
);
stream.mapToPair(record -> new Tuple2<>(record.key(), record.value()));
streamingContext.start();
try {
streamingContext.awaitTermination();
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}
When I try to run it from Eclipse I get the following exception:
18/07/16 13:35:27 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.1.106, 51604, None)
18/07/16 13:35:27 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.1.106, 51604, None)
Exception in thread "main" java.lang.AbstractMethodError
at org.apache.spark.internal.Logging$class.initializeLogIfNecessary(Logging.scala:99)
at org.apache.spark.streaming.kafka010.KafkaUtils$.initializeLogIfNecessary(KafkaUtils.scala:39)
at org.apache.spark.internal.Logging$class.log(Logging.scala:46)
at org.apache.spark.streaming.kafka010.KafkaUtils$.log(KafkaUtils.scala:39)
at org.apache.spark.internal.Logging$class.logWarning(Logging.scala:66)
at org.apache.spark.streaming.kafka010.KafkaUtils$.logWarning(KafkaUtils.scala:39)
at org.apache.spark.streaming.kafka010.KafkaUtils$.fixKafkaParams(KafkaUtils.scala:201)
at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.<init>(DirectKafkaInputDStream.scala:63)
at org.apache.spark.streaming.kafka010.KafkaUtils$.createDirectStream(KafkaUtils.scala:147)
at org.apache.spark.streaming.kafka010.KafkaUtils$.createDirectStream(KafkaUtils.scala:124)
at org.apache.spark.streaming.kafka010.KafkaUtils$.createDirectStream(KafkaUtils.scala:168)
at org.apache.spark.streaming.kafka010.KafkaUtils.createDirectStream(KafkaUtils.scala)
at com.myproject.spark.MainEntryPoint.main(MainEntryPoint.java:47)
18/07/16 13:35:28 INFO SparkContext: Invoking stop() from shutdown hook
I run this from my IDE (Eclipse). Do I have to build and deploy the JAR into Spark to make it run? If anyone knows about this exception, please share your experience. Thanks in advance.
Try using 2.3.1 also for the spark-streaming-kafka dependency.
Also check the other related questions and their answers about java.lang.AbstractMethodError.
It usually means a version mismatch between the libraries in use and their interfaces/implementations.
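For example, aligning the Kafka integration artifact with the other two Spark dependencies would look roughly like this in the pom (version taken from the dependencies quoted above):
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
<version>2.3.1</version>
</dependency>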

Unable to push messages to apache kafka?

I am new to Kafka and trying to run a sample Apache Java producer to push data to Kafka. I am able to create new topics through Java, but while pushing I am getting an exception. Here is the code:
package kafkaTest;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.List;
import java.util.Properties;
import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.serialize.BytesPushThroughSerializer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
public class HelloKafkaProducer {
final static String TOPIC = "test_kafka1";
public static void main(String[] argv){
Properties properties = new Properties();
properties.put("metadata.broker.list", "172.25.37.66:9092");
ZkClient zkClient = new ZkClient("172.25.37.66:2181", 4000, 6000, new BytesPushThroughSerializer());
List<String> brokerList = zkClient.getChildren("/brokers/topics");
for(int i=0;i<brokerList.size();i++){
System.out.println(brokerList.get(i));
}
properties.put("zk.connect","172.25.37.66:2181");
properties.put("serializer.class","kafka.serializer.StringEncoder");
ProducerConfig producerConfig = new ProducerConfig(properties);
kafka.javaapi.producer.Producer<String,String> producer = new kafka.javaapi.producer.Producer<String, String>(producerConfig);
SimpleDateFormat sdf = new SimpleDateFormat();
KeyedMessage<String, String> message =new KeyedMessage<String, String>(TOPIC,"Test message from java program " + sdf.format(new Date()));
System.out.println(message);
producer.send(message);
/*Consumer consumerThread = new Consumer(TOPIC);
consumerThread.start();*/
}
}
And this is the stacktrace :
topic1
test_kafka1
topic11
test
test_kafka
KeyedMessage(test_kafka1,null,null,Test message from java program 4/5/15 1:30 PM)
Exception in thread "main" [2015-05-04 13:30:41,432] ERROR Failed to send requests for topics test_kafka1 with correlation ids in [0,12] (kafka.producer.async.DefaultEventHandler:97)
kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
at kafka.producer.Producer.send(Producer.scala:77)
at kafka.javaapi.producer.Producer.send(Producer.scala:33)
at kafkaTest.HelloKafkaProducer.main(HelloKafkaProducer.java:54)
On the broker console, I see [2015-05-04 18:55:29,959] INFO Closing socket connection to /172.17.70.73. (kafka.network.Processor) every time I run the program. I am able to push to and pull from the topics using the console tools.
All help would be appreciated.
Thanks.
You don't connect to ZooKeeper in the case of a Kafka producer; you have to connect to the broker. For that purpose use the following property:
props.put("metadata.broker.list", "localhost:9092,broker1:9092");
Here I have used localhost; in your case it will be 172.25.37.66.
