Kafka Streams error "TaskAssignmentException: unable to decode subscription data: version=4" - java

During a deployment where only the Kafka-Streams version was changed from 1.1.1 to 2.x.x (without changing application.id), we got exceptions on the app node with the older Kafka-Streams version; as a result, its Kafka Streams instance changed state to error and closed, while the app node with the new Kafka-Streams version consumed messages fine.
If we upgrade from 1.1.1 to 2.0.0, we get unable to decode subscription data: version=3; if from 1.1.1 to 2.3.0, we get unable to decode subscription data: version=4.
This can be really painful during a canary deployment: e.g. we have 3 app nodes on the previous Kafka-Streams version, and when we add one more node with the new version, all 3 existing nodes go into the error state. Error stack trace:
TaskAssignmentException: unable to decode subscription data: version=4
at org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfo.decode(SubscriptionInfo.java:128)
at org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.assign(StreamPartitionAssignor.java:297)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:358)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:520)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$1100(AbstractCoordinator.java:93)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:472)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:455)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:822)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:802)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:204)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:167)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:127)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:563)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:390)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:293)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:364)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:316)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:290)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1149)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:831)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:788)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:749)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:719)
The issue is reproducible on both Kafka broker versions 1.1.0 and 2.1.1, even with a simple Kafka-Streams DSL example:
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("default.key.serde", "org.apache.kafka.common.serialization.Serdes$StringSerde");
props.put("default.value.serde", "org.apache.kafka.common.serialization.Serdes$StringSerde");
props.put("application.id", "xxx");
StreamsBuilder streamsBuilder = new StreamsBuilder();
streamsBuilder.<String, String>stream("source")
        .mapValues(value -> value + value)
        .to("destination");
KafkaStreams kafkaStreams = new KafkaStreams(streamsBuilder.build(), props);
kafkaStreams.start();
Is this a bug in kafka-streams? Does any workaround exist to prevent such a failure?
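For reference, the Kafka Streams upgrade notes describe a two-phase rolling bounce for exactly this 1.1.x -> 2.x metadata change; below is a minimal sketch of that procedure, assuming the new nodes run Kafka Streams 2.x and the brokers stay untouched (the bootstrap address is a placeholder):
// Phase 1: deploy the 2.x build node by node with upgrade.from set, so the upgraded
// instances keep sending the old (1.1) subscription format that the remaining 1.1.1 nodes can decode.
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "xxx");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(StreamsConfig.UPGRADE_FROM_CONFIG, "1.1"); // we are coming from Kafka Streams 1.1.1
// Phase 2: once every node runs 2.x, remove upgrade.from and do a second rolling bounce
// so the group switches to the new subscription format.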

Related

Kafka Connect - No converter present due to unexpected object type: java.lang.Double

I have a Kafka Streams application which does some calculations based on values from a topic, using sliding windows. I read that the best practice for persisting the data would be to push the values to another topic, and use Kafka Connect to get the data from that topic and save it in the database.
I downloaded the confluent/kafka-connect image and extended it by installing the mongodb-kafka-connector using this Dockerfile:
FROM confluentinc/cp-kafka-connect:7.0.3
COPY target/components/packages/mongodb-kafka-connect-mongodb-1.7.0.zip /tmp/mongodb-kafka-connect-mongodb-1.7.0.zip
RUN confluent-hub install --no-prompt /tmp/mongodb-kafka-connect-mongodb-1.7.0.zip
I sent a request with the connector config to Kafka Connect:
curl -X PUT http://localhost:8083/connectors/sink-mongodb-users/config -H "Content-Type: application/json" -d ' {
"connector.class":"com.mongodb.kafka.connect.MongoSinkConnector",
"tasks.max":"1",
"topics":"MOVINGAVG",
"connection.uri":"mongodb://mongo:27017",
"database":"Temperature",
"collection":"MovingAverage",
"key.converter":"org.apache.kafka.connect.storage.StringConverter",
"key.converter.schemas.enable":true,
"value.converter":"org.apache.kafka.connect.converters.DoubleConverter",
"value.converter.schemas.enable":true
}'
The Kafka Streams app producing the records:
StreamsBuilder streamsBuilder = new StreamsBuilder();
KStream<String, Double> kafkaStreams = streamsBuilder.stream(topic, Consumed.with(Serdes.String(), Serdes.Double()));
Duration timeDifference = Duration.ofSeconds(30);
KTable table = kafkaStreams.groupByKey()
        .windowedBy(SlidingWindows.ofTimeDifferenceWithNoGrace(timeDifference))
        .aggregate(
                () -> generateTuple(logger), // initializer
                (key, value, aggregate) -> tempAggregator(key, value, aggregate, logger))
        .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
        .mapValues((ValueMapper<AggregationClass, Object>) tuple2 -> tuple2.getAverage());
table.toStream().peek((k,v) -> System.out.println("Value:" + v)).to(targetTopic, Produced.valueSerde(Serdes.Double()));
KafkaStreams streams = new KafkaStreams(streamsBuilder.build(), properties);
streams.cleanUp();
streams.start();
The messages are being put on the topic and the Mongo connector is reading them; however, an exception is thrown:
Caused by: org.apache.kafka.connect.errors.DataException: Could not convert value `2.1474836E7` into a BsonDocument.
at com.mongodb.kafka.connect.sink.converter.LazyBsonDocument.getUnwrapped(LazyBsonDocument.java:169)
at com.mongodb.kafka.connect.sink.converter.LazyBsonDocument.containsKey(LazyBsonDocument.java:83)
at com.mongodb.kafka.connect.sink.processor.DocumentIdAdder.shouldAppend(DocumentIdAdder.java:68)
at com.mongodb.kafka.connect.sink.processor.DocumentIdAdder.lambda$process$0(DocumentIdAdder.java:51)
at java.base/java.util.Optional.ifPresent(Optional.java:183)
at com.mongodb.kafka.connect.sink.processor.DocumentIdAdder.process(DocumentIdAdder.java:49)
at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.lambda$buildWriteModel$1(MongoProcessedSinkRecordData.java:90)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
at java.base/java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1085)
at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.lambda$buildWriteModel$2(MongoProcessedSinkRecordData.java:90)
at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.tryProcess(MongoProcessedSinkRecordData.java:105)
at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.buildWriteModel(MongoProcessedSinkRecordData.java:85)
at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.createWriteModel(MongoProcessedSinkRecordData.java:81)
at com.mongodb.kafka.connect.sink.MongoProcessedSinkRecordData.<init>(MongoProcessedSinkRecordData.java:51)
at com.mongodb.kafka.connect.sink.MongoSinkRecordProcessor.orderedGroupByTopicAndNamespace(MongoSinkRecordProcessor.java:45)
at com.mongodb.kafka.connect.sink.StartedMongoSinkTask.put(StartedMongoSinkTask.java:75)
at com.mongodb.kafka.connect.sink.MongoSinkTask.put(MongoSinkTask.java:90)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:601)
... 10 more
Caused by: org.apache.kafka.connect.errors.DataException: No converter present due to unexpected object type: java.lang.Double
at com.mongodb.kafka.connect.sink.converter.SinkConverter.getRecordConverter(SinkConverter.java:92)
at com.mongodb.kafka.connect.sink.converter.SinkConverter.lambda$convert$1(SinkConverter.java:60)
at com.mongodb.kafka.connect.sink.converter.LazyBsonDocument.getUnwrapped(LazyBsonDocument.java:166)
... 27 more
Why would such an exception be thrown?
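For what it's worth, a hedged sketch of one way around this: as far as I know the MongoDB sink can only build a BsonDocument from a document-shaped value (a Struct, a Map, or a JSON String), not from a bare java.lang.Double, so the average could be published as a small JSON string and the connector's value.converter switched to org.apache.kafka.connect.storage.StringConverter. The stream and source topic name below are placeholders, not the exact pipeline above:
StreamsBuilder builder = new StreamsBuilder();
KStream<String, Double> averages =
        builder.stream("averages", Consumed.with(Serdes.String(), Serdes.Double()));
averages
        // wrap the bare Double into a tiny JSON document the sink can parse
        .mapValues(avg -> String.format("{\"average\": %s}", avg))
        .to("MOVINGAVG", Produced.with(Serdes.String(), Serdes.String()));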

listTopics with kafka-clients-2.6.0 gives TimeoutException

I have a small piece of code to check whether a particular topic is already present in Kafka. It worked fine with kafka-clients-2.5.0, but after upgrading kafka-clients to 2.6.0 it started giving a TimeoutException.
This was my original code.
Properties adminProperties = new Properties();
adminProperties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
AdminClient adminClient = KafkaAdminClient.create(adminProperties);
boolean topicExists = adminClient.listTopics().names().get().contains("myDataTopic");
For troubleshooting, I split it up and tried extending some timeout values as below, but no use. It works fine with 2.5.1 but not with 2.6.0.
Properties adminProperties = new Properties();
adminProperties.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
adminProperties.put(AdminClientConfig.DEFAULT_API_TIMEOUT_MS_CONFIG, "900000");
AdminClient adminClient = KafkaAdminClient.create(adminProperties);
System.out.println("createKafkaTopic(): Listing Topics...");
ListTopicsResult listTopicsResult = adminClient.listTopics(new ListTopicsOptions().timeoutMs(900000));
System.out.println("createKafkaTopic(): Retrieve Topic names...");
KafkaFuture<Collection<TopicListing>> setKafkaFuture = listTopicsResult.listings();
System.out.println("createKafkaTopic(): Display existing Topics...");
while (!setKafkaFuture.isDone()) {
    System.out.println("Waiting...");
    Thread.sleep(10);
}
Collection<TopicListing> topicNames = setKafkaFuture.get(900, TimeUnit.SECONDS);
System.out.println(topicNames);
System.out.println("createKafkaTopic(): Check if Topic exists...");
boolean topicExists = topicNames.contains("myDataTopic");
Here is my output:
createKafkaTopic(): Listing Topics...
createKafkaTopic(): Retrieve Topic names...
createKafkaTopic(): Display existing Topics...
Waiting...
Waiting...
Waiting...
Waiting...
Exception in thread "main" java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Call(callName=listTopics, deadlineMs=1604903438670, tries=1, nextAllowedTryMs=-9223372036854775709) timed out at 9223372036854775807 after 1 attempt(s)
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:104)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:272)
at KafkaUtil.createKafkaTopic(KafkaUtil.java:45)
at KafkaUtil.main(KafkaUtil.java:21)
Caused by: org.apache.kafka.common.errors.TimeoutException: Call(callName=listTopics, deadlineMs=1604903438670, tries=1, nextAllowedTryMs=-9223372036854775709) timed out at 9223372036854775807 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: The AdminClient thread has exited.
I saw a similar issue here (How to display topics using Kafka Clients in Java?), but it seems it was resolved by adding some dependencies. I too tried adding all the dependencies to my pom.xml, with no luck.
Upgrading kafka-clients to version 2.6.3 worked for me.
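For completeness, a minimal sketch of the same check after the upgrade (the timeout value and topic name are just examples); it also closes the AdminClient, which the snippets above never do:
Properties adminProperties = new Properties();
adminProperties.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// try-with-resources ensures the AdminClient and its background thread are shut down
try (AdminClient adminClient = AdminClient.create(adminProperties)) {
    boolean topicExists = adminClient.listTopics()
            .names()
            .get(30, TimeUnit.SECONDS) // bounded wait instead of an unbounded get()
            .contains("myDataTopic");
    System.out.println("myDataTopic exists: " + topicExists);
}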

Kafka producer ERROR using "security.protocol"

I have a Kerberized cluster with 2 Kafka brokers and 3 Zookeeper nodes.
I have a Java application running locally (using kafka-clients 2.4.0) that must produce and read messages from topics on the Kafka brokers.
As a VM argument I passed:
-Djava.security.auth.login.config=/Users/mypath/kafka_client_jaas.conf
I also have a Schema Registry (Confluent) running locally, connected to the cluster.
To create a Producer I set these options:
Properties producerProps = new Properties();
producerProps.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
producerProps.put("sasl.kerberos.service.name", "kafka");
producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, Constants.KAFKA_BROKERS);
producerProps.put(ProducerConfig.CLIENT_ID_CONFIG, user);
producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class);
producerProps.put(KafkaAvroSerializerConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://localhost:8081");
error:
java.io.IOException: Configuration Error:
row 5: expected [option key], found [null]
at sun.security.provider.ConfigFile$Spi.<init>(ConfigFile.java:137)
at sun.security.provider.ConfigFile.<init>(ConfigFile.java:102)
The problem is caused by
producerProps.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
Without this line, the application runs, but I receive:
org.apache.kafka.common.errors.TimeoutException: Topic TEST not present in metadata after 60000 ms.
What should I do?
P.S. The kafka_client_jaas.conf should be right, since I also use it with the Schema Registry and it works fine.
kafka_client_jaas.conf:
KafkaClient { com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/Users/path/kafka.service.keytab" \
principal="kafka/kafka-broker01.domain.xx#DOMAIN.XX"; };
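In case it is useful, a hedged alternative is to pass the same Kerberos settings through the client property sasl.jaas.config instead of the external JAAS file; the keytab path and principal below are placeholders, not the real values:
producerProps.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
producerProps.put(SaslConfigs.SASL_KERBEROS_SERVICE_NAME, "kafka");
// a single JAAS entry, inline, ending with a semicolon
producerProps.put(SaslConfigs.SASL_JAAS_CONFIG,
        "com.sun.security.auth.module.Krb5LoginModule required "
        + "useKeyTab=true storeKey=true "
        + "keyTab=\"/path/to/kafka.service.keytab\" "
        + "principal=\"kafka/host.example.com@EXAMPLE.COM\";");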

Kafka Stream Subscription Error - Invalid Version

When attempting to connect to a topic from a Java Jetty microservice, I'm getting this Kafka internal version-mismatch error:
stream-thread [App-94d44dcd-f1d4-49a6-9dd3-8d4eee06f82a-StreamThread-1] Encountered the following error during processing:
java.lang.IllegalArgumentException: version must be between 1 and 3; was: 4
at org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfo.<init>(SubscriptionInfo.java:67)
at org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.subscription(StreamsPartitionAssignor.java:312)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.metadata(ConsumerCoordinator.java:176)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.sendJoinGroupRequest(AbstractCoordinator.java:515)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.initiateJoinGroup(AbstractCoordinator.java:466)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:412)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:352)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:337)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:333)
at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1218)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1175)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1154)
at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:861)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:814)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:767)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:736)
Any ideas on what could cause such an exception?
I had come across this error myself, and it is most likely because you have used a non-unique APPLICATION_ID_CONFIG and/or CLIENT_ID_CONFIG.
// Give the Streams application a unique name. The name must be unique in the Kafka cluster
// against which the application is run.
streamsConfiguration.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
streamsConfiguration.put(StreamsConfig.CLIENT_ID_CONFIG, "my-client");

kafka -> storm -> flink : unexpected block data

I'm moving a topology from Storm to Flink. The topology has been reduced to KafkaSpout -> Bolt. The bolt is just counting packets and not trying to decode them.
The compiled .jar is submitted to Flink via flink -c <entry point> <path to .jar> and hits the following error:
java.lang.Exception: Call to registerInputOutput() of invokable failed
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:529)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot instantiate user function.
at org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperator(StreamConfig.java:190)
at org.apache.flink.streaming.runtime.tasks.StreamTask.registerInputOutput(StreamTask.java:174)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:526)
... 1 more
Caused by: java.io.StreamCorruptedException: unexpected block data
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1365)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:294)
at org.apache.flink.util.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:255)
at org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperator(StreamConfig.java:175)
... 3 more
My question(s):
Did I miss a configuration step with regard to the KafkaSpout? This was working when used in vanilla Storm.
Are there specific versions of the Storm libraries that I need to use? I'm including 0.9.4 with my build.
Something else that I might have missed?
Should I be using the Storm KafkaSpout, or would I be better served by writing my own using the Flink KafkaSource?
EDIT:
Here are the relevant pieces of code:
Topology:
BrokerHosts brokerHosts = new ZkHosts(configuration.getString("kafka/zookeeper"));
SpoutConfig kafkaConfig = new SpoutConfig(brokerHosts, configuration.getString("kafka/topic"), "/storm_env_values", "storm_env_DEBUG");
FlinkTopologyBuilder builder = new FlinkTopologyBuilder();
builder.setSpout("environment", new KafkaSpout(kafkaConfig), 1);
builder.setBolt("decode_bytes", new EnvironmentBolt(), 1).shuffleGrouping("environment");
Init:
FlinkLocalCluster cluster = new FlinkLocalCluster(); // replaces: LocalCluster cluster = new LocalCluster();
cluster.submitTopology("env_topology", conf, buildTopology());
The bolt is based on BaseRichBolt. The execute() fn just logs the presence of any packet at debug level; there is no other code in there.
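For context, here is a hypothetical minimal reconstruction of the bolt described above (Storm 0.9.x API, backtype.storm packages); it only counts and logs tuples, matching the description, and is not the original code:
public class EnvironmentBolt extends BaseRichBolt {
    private OutputCollector collector;
    private long packetCount;

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        packetCount++; // just count packets, no decoding
        System.out.println("packets seen: " + packetCount);
        collector.ack(input);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // nothing is emitted downstream
    }
}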
I just had a look at this. There is one issue right now, but I got it working locally. You can apply this hot fix to your code and build the compatibility layer yourself.
KafkaSpout registers metrics. However, metrics are currently not supported by the compatibility layer. You need to remove the exception in FlinkTopologyContext.registerMetric(...) and just return null. (There is already an open PR that works on the integration of metrics, thus I don't want to push this hot fix into the master branch.)
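A hedged sketch of what that hot fix might look like inside FlinkTopologyContext (the registerMetric signature is the one from Storm's TopologyContext; the surrounding class is omitted):
// instead of throwing UnsupportedOperationException, silently ignore the registration
@Override
public <T extends IMetric> T registerMetric(String name, T metric, int timeBucketSizeInSecs) {
    return null; // metrics are not supported by the compatibility layer yet
}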
Furthermore, you need to add some configuration parameters to your topology manually:
I just made up some values here:
Config c = new Config();
List<String> zkServers = new ArrayList<String>();
zkServers.add("localhost");
c.put(Config.STORM_ZOOKEEPER_SERVERS, zkServers);
c.put(Config.STORM_ZOOKEEPER_PORT, 2181);
c.put(Config.STORM_ZOOKEEPER_SESSION_TIMEOUT, 30);
c.put(Config.STORM_ZOOKEEPER_CONNECTION_TIMEOUT, 30);
c.put(Config.STORM_ZOOKEEPER_RETRY_TIMES, 3);
c.put(Config.STORM_ZOOKEEPER_RETRY_INTERVAL, 5);
You also need to add some additional dependencies to your project.
In addition to flink-storm you need:
<dependency>
<groupId>org.apache.storm</groupId>
<artifactId>storm-kafka</artifactId>
<version>0.9.4</version>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka_2.10</artifactId>
<version>0.8.1.1</version>
</dependency>
This works for me, using Kafka_2.10-0.8.1.1 and FlinkLocalCluster executed within Eclipse.
It also works on a local Flink cluster started via bin/start-local-streaming.sh. For this, using the bin/flink run command, you need to use FlinkSubmitter instead of FlinkLocalCluster. Furthermore, you need the following dependencies in your jar:
<include>org.apache.storm:storm-kafka</include>
<include>org.apache.kafka:kafka_2.10</include>
<include>org.apache.curator:curator-client</include>
<include>org.apache.curator:curator-framework</include>
<include>com.google.guava:guava</include>
<include>com.yammer.metrics:metrics-core</include>
