We have a Kafka cluster consisting of 6 nodes; 5 of the 6 nodes also run ZooKeeper.
A Spark Streaming job reads from a streaming server, does some processing, and sends the result to Kafka.
From time to time the Spark job gets stuck, no data is sent to Kafka, and the job is restarted.
The job keeps getting stuck and restarting until we manually restart the Kafka cluster; after restarting Kafka everything works smoothly.
Checking the Kafka logs, we found that this exception is thrown several times:
2017-03-10 05:12:14,177 ERROR state.change.logger: Controller 133 epoch 616 initiated state change for partition [live_stream_2,52] from OfflinePartition to OnlinePartition failed
kafka.common.NoReplicaOnlineException: No broker in ISR for partition [gnip_live_stream_2,52] is alive. Live brokers are: [Set(133, 137, 134, 135, 143)], ISR brokers are: [142]
at kafka.controller.OfflinePartitionLeaderSelector.selectLeader(PartitionLeaderSelector.scala:66)
at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:345)
at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:205)
at kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:120)
at kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:117)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:778)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:777)
at kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:117)
at kafka.controller.PartitionStateMachine.startup(PartitionStateMachine.scala:70)
at kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:333)
at kafka.controller.KafkaController$$anonfun$1.apply$mcV$sp(KafkaController.scala:164)
at kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:84)
at kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply$mcZ$sp(ZookeeperLeaderElector.scala:146)
at kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:141)
at kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:141)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:259)
at kafka.server.ZookeeperLeaderElector$LeaderChangeListener.handleDataDeleted(ZookeeperLeaderElector.scala:141)
at org.I0Itec.zkclient.ZkClient$9.run(ZkClient.java:823)
at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
The exception above is thrown for an unused topic (live_stream_2), but it is also thrown for a topic that is in use, with a slight difference.
Here is the exception for the topic that is in use:
2017-03-10 12:05:18,535 ERROR state.change.logger: Controller 133 epoch 620 initiated state change for partition [gnip_live_stream,3] from OfflinePartition to OnlinePartition failed
kafka.common.NoReplicaOnlineException: No broker in ISR for partition [live_stream,3] is alive. Live brokers are: [Set(133, 134, 135, 137)], ISR brokers are: [136]
at kafka.controller.OfflinePartitionLeaderSelector.selectLeader(PartitionLeaderSelector.scala:66)
at kafka.controller.PartitionStateMachine.electLeaderForPartition(PartitionStateMachine.scala:345)
at kafka.controller.PartitionStateMachine.kafka$controller$PartitionStateMachine$$handleStateChange(PartitionStateMachine.scala:205)
at kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:120)
at kafka.controller.PartitionStateMachine$$anonfun$triggerOnlinePartitionStateChange$3.apply(PartitionStateMachine.scala:117)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:778)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:777)
at kafka.controller.PartitionStateMachine.triggerOnlinePartitionStateChange(PartitionStateMachine.scala:117)
at kafka.controller.PartitionStateMachine.startup(PartitionStateMachine.scala:70)
at kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:333)
at kafka.controller.KafkaController$$anonfun$1.apply$mcV$sp(KafkaController.scala:164)
at kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:84)
at kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply$mcZ$sp(ZookeeperLeaderElector.scala:146)
at kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:141)
at kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:141)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:259)
at kafka.server.ZookeeperLeaderElector$LeaderChangeListener.handleDataDeleted(ZookeeperLeaderElector.scala:141)
at org.I0Itec.zkclient.ZkClient$9.run(ZkClient.java:823)
at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
In the first exception, it says the ISR list for partition 52 contains only the broker with ID 142, which is weird because the cluster has no broker with that ID.
In the second exception, it says the ISR list for partition 3 contains only the broker with ID 136, which does not appear in the list of live brokers.
I suspect there is stale data in ZooKeeper that causes the first exception, and that broker 136 was down at that specific time, which causes the second exception.
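To cross-check the stale-data theory, the leader and ISR that the cluster currently reports for the topic can be queried with the Kafka AdminClient. The class below is only an illustrative sketch, not part of our setup: the bootstrap address is a placeholder and a reasonably recent kafka-clients jar is assumed.
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class IsrCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            TopicDescription description = admin
                    .describeTopics(Collections.singleton("gnip_live_stream_2"))
                    .all().get().get("gnip_live_stream_2");
            for (TopicPartitionInfo p : description.partitions()) {
                // Compare the leader/ISR reported here with what the controller logs claim.
                System.out.printf("partition=%d leader=%s isr=%s%n",
                        p.partition(), p.leader(), p.isr());
            }
        }
    }
}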
My questions:
1. Could these exceptions be the reason Kafka (and consequently the Spark job) gets stuck?
2. How can this be solved?
Related
I'm using Kafka + Redis in my project.
I get messages from Kafka, process them, and save the result to Redis, but after my code has been running for some time it throws the error below:
io.smallrye.mutiny.TimeoutException
at io.smallrye.mutiny.operators.uni.UniBlockingAwait.await(UniBlockingAwait.java:64)
at io.smallrye.mutiny.groups.UniAwait.atMost(UniAwait.java:65)
at io.quarkus.redis.client.runtime.RedisClientImpl.await(RedisClientImpl.java:1046)
at io.quarkus.redis.client.runtime.RedisClientImpl.set(RedisClientImpl.java:687)
at worker.redis.process.implementation.ProductImplementation.refresh(ProductImplementation.java:34)
at worker.redis.Worker.refresh(Worker.java:51)
at kafka.InComingProductKafkaConsume.lambda$consume$0(InComingProductKafkaConsume.java:38)
at business.core.hpithead.ThreadStart.doRun(ThreadStart.java:34)
at business.core.hpithead.core.NotifyingThread.run(NotifyingThread.java:27)
at java.base/java.lang.Thread.run(Thread.java:833)
The record 51761 from topic-partition 'mer-outgoing-master-item-0' has waited for 153 seconds to be acknowledged. This waiting time is greater than the configured threshold (150000 ms). At the moment 2 messages from this partition are awaiting acknowledgement. The last committed offset for this partition was 51760. This error is due to a potential issue in the application which does not acknowledged the records in a timely fashion. The connector cannot commit as a record processing has not completed.
@Incoming("mer_product")
@Blocking
public CompletionStage<Void> consume2(Message<String> payload) {
    var objectDto = configThreadLocal.mapper.readValue(payload.getPayload(),
            new TypeReference<KafkaPayload<ItemKO>>() {});
    worker.refresh(objectDto.payload.castDto());
    return payload.ack();
}
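The error says the connector cannot commit because records are not acknowledged within the configured threshold, i.e. the blocking Kafka-to-Redis processing takes longer than the connector is willing to wait. If the processing time is legitimate, the threshold of the throttled commit strategy can usually be raised (the property should be mp.messaging.incoming.mer_product.throttled.unprocessed-record-max-age.ms, if I remember the key correctly). Another hypothetical workaround, not taken from the original code, is to acknowledge before the slow work, accepting at-most-once semantics if the Redis write later fails. A sketch using the same imports and helper objects as the snippet above:
@Incoming("mer_product")
@Blocking
public CompletionStage<Void> consume2(Message<String> payload) {
    // Hypothetical variant: acknowledge before the slow Redis work so the
    // connector can commit within its threshold. This trades the original
    // at-least-once behaviour for at-most-once: a failed write is not redelivered.
    CompletionStage<Void> acked = payload.ack();
    try {
        var objectDto = configThreadLocal.mapper.readValue(payload.getPayload(),
                new TypeReference<KafkaPayload<ItemKO>>() {});
        worker.refresh(objectDto.payload.castDto());
    } catch (Exception e) {
        // The record is already acknowledged, so log and move on (or route to a dead-letter topic).
        e.printStackTrace();
    }
    return acked;
}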
Can anyone please tell me about this exception?
ERROR [kafka-producer-network-thread | producer-2] c.o.p.a.s.CalculatorAdapter [CalculatorAdapter.java:285]
Cannot send outgoingDto with decision id = 46d1-9491-123ce9c7a916 in kafka:
org.springframework.kafka.core.KafkaProducerException: Failed to send;
nested exception is org.apache.kafka.common.errors.TimeoutException:
Expiring 1 record(s) for save-request-0:604351 ms has passed since batch creation
at org.springframework.kafka.core.KafkaTemplate.lambda$buildCallback$4(KafkaTemplate.java:602)
at org.springframework.kafka.core.DefaultKafkaProducerFactory$CloseSafeProducer$1.onCompletion(DefaultKafkaProducerFactory.java:871)
at org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1356)
at org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:231)
at org.apache.kafka.clients.producer.internals.ProducerBatch.done(ProducerBatch.java:197)
at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:676)
at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:380)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:323)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.kafka.common.errors.TimeoutException:
Expiring 1 record(s) for save-request-0:604351 ms has passed since batch creation
I have been fighting with this for two weeks now.
I have gone through a bunch of suggested fixes, but none of them helped.
My program sends messages of about 60 kilobytes, but they do not reach the Kafka server.
The entire Java application log is filled with exceptions of this kind.
My guess is that filling the batch takes longer than the transaction allows, so the messages are never sent.
// example
Properties props = new Properties();
...
props.put(ProducerConfig.BATCH_SIZE_CONFIG, 60000); // 60 KB
...
Producer producer = new KafkaProducer<>(props);
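If raising the batch size alone does not help, the delivery-related timeouts can be tuned as well. The sketch below is only illustrative: the broker address, serializers, and values are placeholders rather than settings from the question, and delivery.timeout.ms requires a reasonably recent client.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerTuningExample {
    public static Producer<String, String> buildProducer() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 65536);           // 64 KB, so a ~60 KB record fits in one batch
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);               // do not wait long for a batch to fill
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 30000);   // per-request timeout
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120000); // total budget before "Expiring N record(s)"
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1048576);   // must be at least the largest record size
        return new KafkaProducer<>(props);
    }
}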
Check out these articles:
Kafka Producer Batch
Kafka Producer batch size
Batch size configuration
http://cloudurable.com/blog/kafka-tutorial-kafka-producer-advanced-java-examples/index.html
https://kafka.apache.org/26/javadoc/org/apache/kafka/clients/producer/ProducerConfig.html
One of my Kafka Streams apps gives me the following error:
Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms
It is not likely a permission or connectivity issue, because the app consumes a batch of messages (maybe up to the last message available), then I see Informed to shut down in the logs, followed by State transition from RUNNING to PENDING_SHUTDOWN, and then I get the timeout error.
It is a simple single-threaded app with code as simple as this:
builder.stream("prod.company.scores",
        Consumed.with(ScoreSerde.getGenericKeySerde(), ScoreSerde.getEnvelopeSerde()))
    .filter((key, value) -> isRecordNew(value))
    .filter((key, value) -> isScoreNew(value))
    .filter((key, value) -> isScoreRedeem(value))
    .peek(foreachAction)
    .to("kafka-consumer.score.redeem");
The message INFO KafkaProducer: Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms does not by itself shut down the application; you need to check for other problems.
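To find out what actually triggers the shutdown, state-change and exception hooks can be registered on the KafkaStreams instance. The class below is only a sketch: the application id, bootstrap address, and class name are made up, and the topology from the question would go where indicated.
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class ScoreStreamDebug {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "score-redeem-app"); // assumed id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        StreamsBuilder builder = new StreamsBuilder();
        // ... the topology from the question goes here ...

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.setStateListener((newState, oldState) ->
                System.out.printf("State transition %s -> %s%n", oldState, newState));
        streams.setUncaughtExceptionHandler((thread, throwable) -> {
            // An uncaught exception in a stream thread is a common cause of
            // RUNNING -> PENDING_SHUTDOWN followed by the producer-close log line.
            throwable.printStackTrace();
        });
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}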
I'm trying to start a Spark Streaming session that consumes from a Kafka queue, and I'm using ZooKeeper for config management. However, when I try to start it, the following exception is thrown:
18/03/26 09:25:49 INFO ZookeeperConnection: Checking Kafka topic core-data-tickets does exists ...
18/03/26 09:25:49 INFO Broker: Kafka topic core-data-tickets exists
18/03/26 09:25:49 INFO Broker: Processing topic : core-data-tickets
18/03/26 09:25:49 WARN ZookeeperConnection: Resetting Topic Offset
org.I0Itec.zkclient.exception.ZkNoNodeException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /consumers/clt/offsets/core-data-tickets/4
at org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:47)
at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:685)
at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:766)
at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:761)
at kafka.utils.ZkUtils$.readData(ZkUtils.scala:443)
at kafka.utils.ZkUtils.readData(ZkUtils.scala)
at net.core.data.connection.ZookeeperConnection.readTopicPartitionOffset(ZookeeperConnection.java:145)
I have already created the relevant Kafka topic.
Any insights on this would be highly appreciated.
I'm using the following command to run the Spark job:
spark-submit --class net.core.data.compute.Broker --executor-memory 512M --total-executor-cores 2 --driver-java-options "-Dproperties.path=/ebs/tmp/continuous-loading-tool/continuous-loading-tool/src/main/resources/dev.properties" --conf spark.ui.port=4045 /ebs/tmp/dev/data/continuous-loading-tool/target/continuous-loading-tool-1.0-SNAPSHOT.jar
I guess that this error has to do with offsets retention. By default, offsets are stored for only 1440 minutes (i.e. 24 hours). Therefore, if the group has not committed offsets within a day, Kafka won't have information about it.
A possible workaround is to set the value of offsets.retention.minutes accordingly.
offsets.retention.minutes: Offsets older than this retention period will be discarded.
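offsets.retention.minutes is a broker-side setting (server.properties), so it is worth checking the current value before changing anything. A small sketch that reads it via the AdminClient; the bootstrap address and broker id are placeholders:
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class CheckOffsetsRetention {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (Admin admin = Admin.create(props)) {
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0"); // placeholder broker id
            Config config = admin.describeConfigs(Collections.singleton(broker)).all().get().get(broker);
            // Prints the effective retention for committed offsets on that broker.
            System.out.println(config.get("offsets.retention.minutes"));
        }
    }
}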
I got the exception below. I suspect the heap memory is full, so the GC exception was thrown. Kindly explain if there is any other perspective on the application solution described below.
2017:06:07 21:18:36.275 [loginputtaskexecutor-7] ERROR o.s.i.handler.LoggingHandler - org.springframework.messaging.MessageHandlingException: nested exception is java.lang.IllegalStateException: Cannot process message
at org.springframework.integration.handler.MethodInvokingMessageProcessor.processMessage(MethodInvokingMessageProcessor.java:96)
at org.springframework.integration.handler.ServiceActivatingHandler.handleRequestMessage(ServiceActivatingHandler.java:89)
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:109)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:148)
at org.springframework.integration.dispatcher.UnicastingDispatcher.access$000(UnicastingDispatcher.java:53)
at org.springframework.integration.dispatcher.UnicastingDispatcher$3.run(UnicastingDispatcher.java:129)
at org.springframework.integration.util.ErrorHandlingTaskExecutor$1.run(ErrorHandlingTaskExecutor.java:55)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: Cannot process message
at org.springframework.integration.util.MessagingMethodInvokerHelper.processInternal(MessagingMethodInvokerHelper.java:333)
at org.springframework.integration.util.MessagingMethodInvokerHelper.process(MessagingMethodInvokerHelper.java:155)
at org.springframework.integration.handler.MethodInvokingMessageProcessor.processMessage(MethodInvokingMessageProcessor.java:93)
... 11 more
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
**Application flow in detail:**
The Spring Integration application is built to listen for messages from ActiveMQ. After a message is consumed from ActiveMQ, it is handed over to an input channel (an executor channel) whose subscriber is a service activator. In the service activator the message is converted to JSON and then stored to Cassandra. A transaction is declared on the service activator method.
With the above solution, I thought of breaking the message transaction flow by introducing an executor channel: after the message is consumed it is handed over to the executor channel and the transaction ends; the threads of the executor channel then take care of performing parallel writes to Cassandra.
Is there a better way to write a large volume of data to Cassandra as fast as possible using Java Spring Integration?
If the data sink can't keep up, add a limit to the queue size in the TaskExecutor and use a CallerRunsPolicy or CallerBlocksPolicy when the queue is full.
That will naturally throttle the workload at the rate the sink can deal with.
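A minimal sketch of such a bounded executor for the executor channel; the bean name, pool sizes, and queue capacity are illustrative, not taken from the original application, and CallerBlocksPolicy from Spring Integration could be used in place of CallerRunsPolicy:
import java.util.concurrent.ThreadPoolExecutor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
public class ExecutorChannelConfig {

    @Bean
    public ThreadPoolTaskExecutor cassandraWriteExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(8);
        executor.setMaxPoolSize(8);
        executor.setQueueCapacity(1000); // bounded queue instead of the default unbounded one
        // When the queue is full, run the task on the caller's thread so the
        // upstream (ActiveMQ listener) is throttled to the rate Cassandra can absorb.
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
        executor.setThreadNamePrefix("cassandra-write-");
        return executor;
    }
}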