Not able to publish messages to Kafka from Lambda function - java

I am trying to publish some messages to Kafka from an AWS Lambda function. When I test this locally, the messages are not published and the function times out. I was able to connect to the local Kafka instance from the same Lambda function using a consumer and list the topics. Is there anything I'm missing here?
String bootstrapServer = "host.docker.internal:9092";
String topic = "test-topic";
Properties properties = new Properties();
properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
properties.setProperty(ProducerConfig.ACKS_CONFIG, "all");
properties.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
Properties props = new Properties();
props.put("bootstrap.servers", bootstrapServer);
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> simpleConsumer = new KafkaConsumer<>(props);
// THIS IS WORKING
simpleConsumer.listTopics().forEach((t, v) -> logger.log(t + "\n"));
try (KafkaProducer<String, String> kafkaProducer = new KafkaProducer<>(properties)) {
ProducerRecord<String, String> record = new ProducerRecord<>(topic, value);
logger.log("Publishing '" + value + "' to kafka topic: " + topic + "\n");
Future<RecordMetadata> future = kafkaProducer.send(record);
logger.log("Waiting for flushing data\n");
kafkaProducer.flush();
logger.log("Waiting for the response\n");
RecordMetadata recordMetadata = future.get();
logger.log("Published record to kafka topic: " + recordMetadata.topic() + " partition: " + recordMetadata.partition() + " offset: " + recordMetadata.offset() + "\n");
I'm triggering the function locally using the SAM CLI as follows:
sam local invoke "MyLambdaFunction" -e events/event.json
This is the timeout message I get.
Invoking helloworld.App::handleRequest (java11)
Skip pulling image and use local one: public.ecr.aws/sam/emulation-java11:rapid-1.59.0-x86_64.
Mounting .aws-sam/build/MyLambdaFunction as /var/task:ro,delegated inside runtime container
START RequestId: c07609a4-7cab-4b58-97fc-cdddf20afd5c Version: $LATEST
Picked up JAVA_TOOL_OPTIONS: -XX:+TieredCompilation -XX:TieredStopAtLevel=1
test-topic
__consumer_offsets
Publishing 'abc' to kafka topic: test-topic
Function 'MyLambdaFunction' timed out after 20 seconds
No response from invoke container for MyLambdaFunction

Related

Confluent Control Center - Consumer is not listed

I have the following code to connect to Kafka
Properties props = new Properties();
props.put("bootstrap.servers", "myconfluentkafkabroker:9092");
props.put("group.id","test");
props.put("enable.auto.commit","true");
props.put("auto.commit.interval.ms","1000");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "my_CG");
props.put("group.instance.id", "my_instance_CG_id");
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
props.put("key.deserializer", Class.forName("org.apache.kafka.common.serialization.StringDeserializer"));
props.put("value.deserializer", Class.forName("org.apache.kafka.common.serialization.StringDeserializer"));
KafkaConsumer<String,String> consumer = new KafkaConsumer<String,String>(props);
consumer.subscribe(Arrays.asList("MyTopic"));
try {
while (true) {
ConsumerRecords<String, String> records = consumer.poll(100);
for (ConsumerRecord<String, String> record : records)
{
log.debug("topic = %s, partition = %d, offset = %d,"
customer = %s, country = %s\n",
record.topic(), record.partition(), record.offset(),
record.key(), record.value());
int updatedCount = 1;
if (custCountryMap.containsKey(record.value())) {
updatedCount = custCountryMap.get(record.value()) + 1;
}
custCountryMap.put(record.value(), updatedCount);
JSONObject json = new JSONObject(custCountryMap);
System.out.println(json.toString(4));
}
}
} finally {
consumer.close();
}
The code didn't throw any errors, but I still don't see the consumer listed.
Would this be an issue?
props.put("group.instance.id", "my_instance_CG_id");
You should verify the information that you see with the built-in tools that Kafka provides, like kafka-consumer-groups.sh.
You'll also need to actually poll messages and commit offsets, not just subscribe, before you will see anything.
Otherwise, for that specific Control Center dashboard, it may require you to add the Monitoring Interceptors to your client.
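For the interceptor route, a minimal sketch, assuming the Confluent monitoring-interceptors jar is on the classpath (the interceptor class name comes from that artifact, not from the question):
// Register Confluent's monitoring interceptor so Control Center can track this consumer
props.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG,
        "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor");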

Inconsistent data output from Kafka consumer

I need to pull data from a Kafka consumer to pass on to my application. Below is the code I have written to access the consumer:
public class ConsumerGroup {
public static void main(String[] args) throws Exception {
String topic = "kafka_topic";
String group = "0";
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", group);
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("session.timeout.ms", "30000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("auto.offset.reset", "earliest");
KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
consumer.subscribe(Arrays.asList(topic));
System.out.println("Subscribed to topic: " + topic);
while (true) {
ConsumerRecords<String, String> records = consumer.poll(100);
for (ConsumerRecord<String, String> record : records)
System.out.printf("offset = %d, key = %s, value = %s\n", record.offset(), record.key(), record.value());
}
}
}
When I run this code, sometimes data is generated and sometimes no data is generated. Why is this behavior inconsistent? Is there any issue with my code?
Your code is OK. You have the autocommit option enabled, so after you read the records, the offsets are automatically committed to Kafka. Every time you run the code, you start from the last processed offset, which is stored in the __consumer_offsets topic, so you always read only the new records that have arrived in Kafka since the last run. To print data constantly in the consumer app, you have to keep putting new records into your topic.
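If you do want to re-read records that were already committed, a minimal sketch (assuming the same consumer and topic as above): either use a fresh group.id, or rewind explicitly once partitions are assigned:
// Join the group first so partitions get assigned, then rewind to the beginning
consumer.subscribe(Arrays.asList(topic));
consumer.poll(0); // triggers the group join and partition assignment
consumer.seekToBeginning(consumer.assignment());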

Consumer kafka java code is running without output

I am trying to run a Kafka consumer program in order to get messages from a topic named "test2".
I am using the Kafka 0.9 API.
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "test");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("session.timeout.ms", "30000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList("test2"));
while (true) {
ConsumerRecords<String, String> records = consumer.poll(100);
for (ConsumerRecord<String, String> record : records)
System.out.printf("offset = %d, key = %s, value = %s", record.offset(), record.key(), record.value());
}
}
This code is from the official documentation of the Kafka 0.9 Consumer API.
The screenshot below clarifies the situation:
[Eclipse console: the program runs, but nothing is printed]
Any suggestions for resolving this issue?
Thank you in advance.
Just add an slf4j binding jar (for example, slf4j-simple) to your build path and try again. Without a logging backend, the Kafka client's log output, including any connection errors, is silently dropped, so the program appears to run without doing anything.

NoSuchElementException kafka

I am new to Kafka. I read the documentation to get started, and now I am trying to get hands-on using embedded Kafka mode. I tried a sample program for the same.
public class EmbeddedKafka { // class name taken from the stack trace below
public static void main(String args[]) throws InterruptedException, IOException {
// setup Zookeeper
EmbeddedZookeeper zkServer = new EmbeddedZookeeper();
String zkConnect = ZKHOST + ":" + zkServer.port();
ZkClient zkClient = new ZkClient(zkConnect, 30000, 30000, ZKStringSerializer$.MODULE$);
ZkUtils zkUtils = ZkUtils.apply(zkClient, false);
// setup Broker
Properties brokerProps = new Properties();
brokerProps.setProperty("zookeeper.connect", zkConnect);
brokerProps.setProperty("broker.id", "0");
brokerProps.setProperty("log.dirs", Files.createTempDirectory("kafka-").toAbsolutePath().toString());
brokerProps.setProperty("listeners", "PLAINTEXT://" + BROKERHOST +":" + BROKERPORT);
KafkaConfig config = new KafkaConfig(brokerProps);
Time mock = new MockTime();
KafkaServer kafkaServer = TestUtils.createServer(config, mock);
// create topic
AdminUtils.createTopic(zkUtils, TOPIC, 1, 1, new Properties(), RackAwareMode.Disabled$.MODULE$);
// setup producer
Properties producerProps = new Properties();
producerProps.setProperty("bootstrap.servers", BROKERHOST + ":" + BROKERPORT);
producerProps.setProperty("key.serializer","org.apache.kafka.common.serialization.IntegerSerializer");
producerProps.setProperty("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
KafkaProducer<Integer, byte[]> producer = new KafkaProducer<Integer, byte[]>(producerProps);
List<PartitionInfo> partitionInfo = producer.partitionsFor("test");
System.out.println(partitionInfo);
// setup consumer
Properties consumerProps = new Properties();
consumerProps.setProperty("bootstrap.servers", BROKERHOST + ":" + BROKERPORT);
consumerProps.setProperty("group.id", "group0");
consumerProps.setProperty("client.id", "consumer0");
consumerProps.setProperty("key.deserializer","org.apache.kafka.common.serialization.IntegerDeserializer");
consumerProps.setProperty("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
consumerProps.put("auto.offset.reset", "earliest"); // to make sure the consumer starts from the beginning of the topic
KafkaConsumer<Integer, byte[]> consumer = new KafkaConsumer<>(consumerProps);
consumer.subscribe(Arrays.asList(TOPIC));
// send message
ProducerRecord<Integer, byte[]> data = new ProducerRecord<>(TOPIC, 42, "test-message".getBytes(StandardCharsets.UTF_8));
producer.send(data);
producer.close();
// starting consumer
ConsumerRecords<Integer, byte[]> records = consumer.poll(1000);
Iterator<ConsumerRecord<Integer, byte[]>> recordIterator = records.iterator();
ConsumerRecord<Integer, byte[]> record = recordIterator.next();
System.out.printf("offset = %d, key = %s, value = %s", record.offset(), record.key(), record.value());
kafkaServer.shutdown();
zkClient.close();
zkServer.shutdown();
}
}
But I am not able to fetch data for the topic. I am getting the following exception while executing the program:
java.util.NoSuchElementException
at org.apache.kafka.common.utils.AbstractIterator.next(AbstractIterator.java:52)
at com.nuwaza.evlauation.embedded.kafka.EmbeddedKafka.main(EmbeddedKafka.java:105)
Can anyone guide me?
UPDATED:
WARN [main] (Logging.scala#warn:83) - No meta.properties file under dir C:\Users\bhavanak\AppData\Local\Temp\kafka-1238324273778000675\meta.properties
WARN [main] (Logging.scala#warn:83) - No meta.properties file under dir C:\Users\bhavanak\AppData\Local\Temp\kafka-1238324273778000675\meta.properties
WARN [kafka-producer-network-thread | producer-1] (NetworkClient.java#handleResponse:600) - Error while fetching metadata with correlation id 0 : {test=LEADER_NOT_AVAILABLE}
WARN [kafka-producer-network-thread | producer-1] (NetworkClient.java#handleResponse:600) - Error while fetching metadata with correlation id 1 : {test=LEADER_NOT_AVAILABLE}
WARN [kafka-producer-network-thread | producer-1] (NetworkClient.java#handleResponse:600) - Error while fetching metadata with correlation id 2 : {test=LEADER_NOT_AVAILABLE}
[Partition(topic = test, partition = 0, leader = 0, replicas = [0,], isr = [0,]]
ERROR [main] (NIOServerCnxnFactory.java#uncaughtException:44) - Thread Thread[main,5,main] died
java.util.NoSuchElementException
at org.apache.kafka.common.utils.AbstractIterator.next(AbstractIterator.java:52)
at com.nuwaza.evlauation.embedded.kafka.EmbeddedKafka.main(EmbeddedKafka.java:105)
Try to invoke producer.flush() before reading messages, to ensure the produced messages have indeed been persisted to disk.
This error signifies that your consumer is trying to read the message even before it has been persisted to the Kafka log. Ideally you should run the producer and consumer as separate processes. I was facing the same issue, but that was due to another reason: iterator.next() was invoked twice by mistake. Just in case someone else is facing the same issue.
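A defensive sketch (assuming the same producer and consumer as in the question): flush the producer, then poll in a bounded loop instead of calling iterator.next() on a possibly empty result:
// Make sure the message has actually left the producer's buffer (before producer.close())
producer.flush();
// Poll until records arrive or the retry budget runs out
ConsumerRecords<Integer, byte[]> records = ConsumerRecords.empty();
for (int attempt = 0; attempt < 10 && records.isEmpty(); attempt++) {
    records = consumer.poll(1000);
}
for (ConsumerRecord<Integer, byte[]> record : records) {
    System.out.printf("offset = %d, key = %s, value = %s%n",
            record.offset(), record.key(), new String(record.value(), StandardCharsets.UTF_8));
}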

Kafka 0.10: what's wrong with my consumer?

I'm trying to do an easy demo in kafka-0.10.0.0.
My producer is OK, but the consumer may not be correct; the code is below.
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "group1");
props.put("enable.auto.commit", "false");
props.put("session.timeout.ms", "30000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList("topictest2"));
while (true) {
ConsumerRecords<String, String> records = consumer.poll(100);
for (TopicPartition partition : records.partitions()) {
List<ConsumerRecord<String, String>> partitionRecords = records.records(partition);
for (ConsumerRecord<String, String> record : partitionRecords) {
System.out.println("Thread = "+Thread.currentThread().getName()+" ");
System.out.printf("partition = %d, offset = %d, key = %s, value = %s",record.partition(), record.offset(), record.key(), record.value());
System.out.println("\n");
}
// consumer.commitSync();
long lastOffset = partitionRecords.get(partitionRecords.size() - 1).offset();
consumer.commitSync(Collections.singletonMap(partition, new OffsetAndMetadata(lastOffset + 1)));
}
}
But when I run this demo, there is no output! What is the problem in my code?
It looks valid. I think the program is just waiting for new messages, because the auto.offset.reset default is latest.
If you have some messages in that topic and want to read them, try to add
props.put("auto.offset.reset", "earliest");
to start reading the topic from the beginning, and reset your group.id to something unique to make sure it won't continue from a saved offset (or do not commit offsets at all). Once a committed offset exists for a group.id, auto.offset.reset is skipped.
props.put("group.id", "group."+UUID.randomUUID().toString());
