Write Kafka stream output to multiple directories using Apache Beam - Java

I would like to persist data from a Kafka topic to Google Cloud Storage using Dataflow.
I have written some sample code that runs locally, and it works fine.
public static void main(String[] args) {
    PipelineOptions options = PipelineOptionsFactory.create();
    Pipeline p = Pipeline.create(options);

    p.apply(KafkaIO.<Long, String>read().withBootstrapServers("localhost:9092").withTopic("my-topic")
            .withKeyDeserializer(LongDeserializer.class).withValueDeserializer(StringDeserializer.class))
        .apply(Window.<KafkaRecord<Long, String>>into(FixedWindows.of(Duration.standardMinutes(1))))
        .apply(FlatMapElements.into(TypeDescriptors.strings())
            .via((KafkaRecord<Long, String> line) -> TextUtil.splitLine(line.getKV().getValue())))
        .apply(Filter.by((String word) -> StringUtils.isNotEmpty(word)))
        .apply(Count.perElement())
        .apply(MapElements.into(TypeDescriptors.strings())
            .via((KV<String, Long> lineCount) -> lineCount.getKey() + ": " + lineCount.getValue()))
        .apply(TextIO.write().withWindowedWrites().withNumShards(1)
            .to("resources/temp/wc-kafka-op/wc"));

    p.run().waitUntilFinish();
}
The code above works perfectly, but I would like to save the output of each window in a separate directory,
e.g. {BasePath}/{Window}/{prefix}{Suffix}.
I could not get this to work.

TextIO supports windowed writes, where you can specify how the file names are derived. See the JavaDoc.
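For a per-window directory layout, one way to do this is a custom filename policy. Below is a minimal sketch against the Beam 2.x FileBasedSink.FilenamePolicy API; the bucket paths and the PerWindowFilenamePolicy class name are illustrative, and details may differ between Beam versions.

import org.apache.beam.sdk.io.FileBasedSink;
import org.apache.beam.sdk.io.FileSystems;
import org.apache.beam.sdk.io.fs.ResolveOptions.StandardResolveOptions;
import org.apache.beam.sdk.io.fs.ResourceId;
import org.apache.beam.sdk.transforms.windowing.BoundedWindow;
import org.apache.beam.sdk.transforms.windowing.IntervalWindow;
import org.apache.beam.sdk.transforms.windowing.PaneInfo;

// Writes each window's shards under {basePath}/{windowStart}/{prefix}-{shard}-of-{numShards}
class PerWindowFilenamePolicy extends FileBasedSink.FilenamePolicy {
    private final ResourceId baseDir;
    private final String prefix;

    PerWindowFilenamePolicy(ResourceId baseDir, String prefix) {
        this.baseDir = baseDir;
        this.prefix = prefix;
    }

    @Override
    public ResourceId windowedFilename(int shardNumber, int numShards, BoundedWindow window,
            PaneInfo paneInfo, FileBasedSink.OutputFileHints outputFileHints) {
        IntervalWindow intervalWindow = (IntervalWindow) window;
        String windowDir = intervalWindow.start().toString(); // e.g. 2018-05-01T10:01:00.000Z
        String filename = String.format("%s-%05d-of-%05d", prefix, shardNumber, numShards);
        return baseDir
            .resolve(windowDir, StandardResolveOptions.RESOLVE_DIRECTORY)
            .resolve(filename, StandardResolveOptions.RESOLVE_FILE);
    }

    @Override
    public ResourceId unwindowedFilename(int shardNumber, int numShards,
            FileBasedSink.OutputFileHints outputFileHints) {
        throw new UnsupportedOperationException("This policy only supports windowed writes");
    }
}

It would then plug into the existing write roughly like this (gs://my-bucket is a placeholder, and a temp directory usually has to be set explicitly when passing a filename policy to TextIO):

.apply(TextIO.write()
    .withWindowedWrites()
    .withNumShards(1)
    .to(new PerWindowFilenamePolicy(
        FileSystems.matchNewResource("gs://my-bucket/wc-kafka-op", true), "wc"))
    .withTempDirectory(FileSystems.matchNewResource("gs://my-bucket/temp", true)));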


How to add another process after TextIO.write on dataflow pipeline

I created a simple Dataflow pipeline which consists of these steps:
Fetch/read data from BigQuery
Convert the output to CSV format
Create a CSV file on Google Cloud Storage
// TODO send the CSV file to a third party
pipeline.apply("ReadFromBigQuery",
        BigQueryIO.read(new MyCustomObject1(input))
            .fromQuery(myCustomQuery)
            .usingStandardSql())
    .apply("ConvertToCsv",
        ParDo.of(new myCustomObject2()))
    .apply("WriteToCSV",
        TextIO.write().to(fileLocation)
            .withSuffix(".csv")
            .withoutSharding()
            .withDelimiter(new char[] {'\r', '\n'})
            .withHeader(csvHeader));
But after step 3 (writing to GCS), I can't add another step to the pipeline.
How can I achieve this?
That is because TextIO.write() returns a PDone rather than a PCollection like the preceding PTransforms do.
One possible solution is to use a multi-output ParDo with tags in your step 2 and write to different locations.
final TupleTag<String> csvOutTag = new TupleTag<String>(){};
final TupleTag<TableRow> furtherProcessingTag = new TupleTag<TableRow>(){};

PCollectionTuple mixedCollection =
    bigQueryReadCollection.apply(ParDo
        .of(new DoFn<TableRow, String>() {
            @ProcessElement
            public void processElement(ProcessContext c) {
                // Emit the CSV line to the main output
                c.output(c.element().toString());
                // Emit the raw row to the output tagged furtherProcessing
                c.output(furtherProcessingTag, c.element());
            }
        })
        .withOutputTags(csvOutTag, TupleTagList.of(furtherProcessingTag)));

// Get the output with tag csvOutTag.
mixedCollection.get(csvOutTag).apply("WriteToCSV",
    TextIO.write().to(fileLocation)
        .withSuffix(".csv")
        .withoutSharding()
        .withDelimiter(new char[] {'\r', '\n'})
        .withHeader(csvHeader));

// Get the output with tag furtherProcessingTag.
mixedCollection.get(furtherProcessingTag).apply(...);
Please use the appropriate data types in the TupleTag declarations, based on the output you need for further processing.

Side input in global window as slowly changing cache questions

Context:
We have some schema files in Cloud Storage. In our Dataflow job, we need to refer to these schema files to transform our data. The schema files change on a daily/weekly basis. Our data source is Pub/Sub, and we window the Pub/Sub messages into a fixed window of 1 minute. The schema files we need fit well into memory; they are about 90 MB.
What I have tried:
Following this doc from Apache Beam, we created a side input that refreshes in a global window with GenerateSequence, like so:
// Creates a side input that refreshes the schema once a day
PCollectionView<Map<String, byte[]>> dataBlobView =
    pipeline.apply(GenerateSequence.from(0).withRate(1, Duration.standardDays(1L)))
        .apply(Window.<Long>into(new GlobalWindows())
            .triggering(Repeatedly.forever(AfterProcessingTime.pastFirstElementInPane()))
            .discardingFiredPanes())
        .apply(ParDo.of(new DoFn<Long, Map<String, byte[]>>() {
            @ProcessElement
            public void processElement(ProcessContext ctx) throws Exception {
                byte[] avroSchemaBlob = getAvroSchema();
                byte[] fileDescriptorSetBlob = getFileDescriptorSet();
                byte[] depsBlob = getFileDescriptorDeps();
                Map<String, byte[]> dataBlobs = ImmutableMap.of(
                    "version", Longs.toByteArray(ctx.element().byteValue()),
                    "avroSchemaBlob", avroSchemaBlob,
                    "fileDescriptorSetBlob", fileDescriptorSetBlob,
                    "depsBlob", depsBlob);
                ctx.output(dataBlobs);
            }
        }))
        .apply(View.asSingleton());
"getAvroSchema", "getFileDescriptorSet" and "getFileDescriptorDeps" read files as byte[] from Cloud Storage.
However, this approach failed from the exception:
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalArgumentException: PCollection with more than one element accessed as a singleton view.
I then tried writing my own global Combine function like so:
static class GetLatestVersion implements SerializableFunction<Iterable<Map<String, byte[]>>, Map<String, byte[]>> {
    @Override
    public Map<String, byte[]> apply(Iterable<Map<String, byte[]>> versions) {
        Map<String, byte[]> result = Maps.newHashMap();
        Long maxVersion = Long.MIN_VALUE;
        for (Map<String, byte[]> version : versions) {
            Long currentVersion = Longs.fromByteArray(version.get("version"));
            logger.info("Side input version: " + currentVersion);
            if (currentVersion > maxVersion) {
                result = version;
                maxVersion = currentVersion;
            }
        }
        return result;
    }
}
But it still triggers the same exception.
I then came across this and this Beam email archive thread, and it seems that what is suggested in the Beam doc does not work; I would have to use a MultiMap to avoid the exception I ran into above. With a MultiMap, I will also have to iterate through the values and apply my own logic to pick the desired (latest) value.
My questions:
Why do I still get the exception "PCollection with more than one element accessed as a singleton view" even after I globally combine everything into one result?
If I go with the MultiMap approach, wouldn't the job eventually run out of memory? Every day we would basically grow the MultiMap by 90 MB (the size of our data blobs), unless Dataflow has some smart MultiMap implementation behind the scenes.
What is the recommended way to do this?
Thanks
Use .apply(View.asMap()) instead of .apply(View.asSingleton());
This is the full example:
PCollectionView<Map<String, byte[]>> dataBlobView =
    pipeline.apply(GenerateSequence.from(0).withRate(1, Duration.standardDays(1L)))
        .apply(Window.<Long>into(new GlobalWindows())
            .triggering(Repeatedly.forever(AfterProcessingTime.pastFirstElementInPane()))
            .discardingFiredPanes())
        .apply(ParDo.of(new DoFn<Long, KV<String, byte[]>>() {
            @ProcessElement
            public void processElement(ProcessContext ctx) throws Exception {
                byte[] avroSchemaBlob = getAvroSchema();
                byte[] fileDescriptorSetBlob = getFileDescriptorSet();
                byte[] depsBlob = getFileDescriptorDeps();
                ctx.output(KV.of("version", Longs.toByteArray(ctx.element().byteValue())));
                ctx.output(KV.of("avroSchemaBlob", avroSchemaBlob));
                ctx.output(KV.of("fileDescriptorSetBlob", fileDescriptorSetBlob));
                ctx.output(KV.of("depsBlob", depsBlob));
            }
        }))
        .apply(View.asMap());
You can then use the map from the side input as described in the documentation.
Apache Beam version 2.34.0
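For reference, consuming that map side input in the main pipeline looks roughly like the sketch below; windowedMessages and its String element type are hypothetical placeholders for your windowed Pub/Sub collection.

PCollection<String> transformed = windowedMessages.apply("TransformWithSchemas",
    ParDo.of(new DoFn<String, String>() {
        @ProcessElement
        public void processElement(ProcessContext ctx) {
            // Look up the latest blobs published into the global-window side input
            Map<String, byte[]> blobs = ctx.sideInput(dataBlobView);
            byte[] avroSchemaBlob = blobs.get("avroSchemaBlob");
            byte[] fileDescriptorSetBlob = blobs.get("fileDescriptorSetBlob");
            // ... use the schema blobs to transform the element ...
            ctx.output(ctx.element());
        }
    }).withSideInputs(dataBlobView));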

Flatten fails to return any data in CoGroupByKey

I am currently using Apache Beam 2.4. My pipeline includes a CoGroupByKey step which does not return any data.
When I expand the step in the UI, the data does not flow beyond the internal Flatten.PCollection of CoGroupByKey.
My code is structured as presented in the javadoc example:
static final TupleTag<String> simpleEventTag1 = new TupleTag<String>(){};
static final TupleTag<String> simpleEventTag2 = new TupleTag<String>(){};

PCollection<KV<String, String>> pt1 = ...;
PCollection<KV<String, String>> pt2 = ...;

PCollection<KV<String, CoGbkResult>> coGbkResultCollection =
    KeyedPCollectionTuple.of(simpleEventTag1, pt1)
        .and(simpleEventTag2, pt2)
        .apply(CoGroupByKey.create());
My window is:
Window.<String>into(FixedWindows.of(Duration.standardMinutes(1)))
    .triggering(Repeatedly.forever(AfterProcessingTime
        .pastFirstElementInPane()
        .plusDelayOf(Duration.standardSeconds(20))))
    .withAllowedLateness(Duration.ZERO)
    .discardingFiredPanes()
pt1 and pt2 are both retrieved from a PCollectionTuple and are not empty. They share the same FixedWindow properties.
I don't know what I am doing wrong and could not find any way to debug this issue. Any ideas?
Screenshot: the expanded CoGroupByKey step in the Dataflow UI

Spark SQL failed in Spark Streaming (KafkaStream)

I use Spark SQL in a Spark Streaming job to look up a Hive table.
The Kafka streaming itself works fine, and hiveContext.runSqlHive(sqlQuery); also works fine when run outside directKafkaStream.foreachRDD. But I need the Hive table lookup inside the streaming job. Using JDBC (jdbc:hive2://) would work, but I want to use Spark SQL.
The relevant parts of my source code look as follows:
// set context
SparkConf sparkConf = new SparkConf().setAppName(appName).set("spark.driver.allowMultipleContexts", "true");
SparkContext sparkSqlContext = new SparkContext(sparkConf);
JavaStreamingContext streamingContext = new JavaStreamingContext(sparkConf, Durations.seconds(batchDuration));
HiveContext hiveContext = new HiveContext(sparkSqlContext);

// Initialize the direct Spark Kafka stream. Starts from top
JavaPairInputDStream<String, String> directKafkaStream =
    KafkaUtils.createDirectStream(streamingContext,
        String.class,
        String.class,
        StringDecoder.class,
        StringDecoder.class,
        kafkaParams,
        topicsSet);

// work on the stream
directKafkaStream.foreachRDD((Function<JavaPairRDD<String, String>, Void>) rdd -> {
    rdd.foreachPartition(tuple2Iterator -> {
        // get message
        Tuple2<String, String> item = tuple2Iterator.next();
        // lookup
        String sqlQuery = "SELECT something FROM somewhere";
        Seq<String> resultSequence = hiveContext.runSqlHive(sqlQuery);
        List<String> result = scala.collection.JavaConversions.seqAsJavaList(resultSequence);
    });
    return null;
});

// Start the computation
streamingContext.start();
streamingContext.awaitTermination();
I don't get any meaningful error, even if I surround the code with try-catch.
I hope someone can help. Thanks.
// edit:
The solution looks like this:
// work on the stream
directKafkaStream.foreachRDD((Function<JavaPairRDD<String, String>, Void>) rdd -> {
    // driver
    Map<String, String> lookupMap = getResult(hiveContext); // something with hiveContext.runSqlHive(sqlQuery);
    rdd.foreachPartition(tuple2Iterator -> {
        // worker
        while (tuple2Iterator != null && tuple2Iterator.hasNext()) {
            // get message
            Tuple2<String, String> item = tuple2Iterator.next();
            // lookup
            String result = lookupMap.get(item._2());
        }
    });
    return null;
});
Wanting to use Spark SQL does not make this possible. Spark's rule number one is: no nested actions, transformations, or distributed data structures.
If you can express your query, for example, as a join, you can push it one level higher into foreachRDD, and that pretty much exhausts your options for using Spark SQL here:
directKafkaStream.foreachRDD(rdd -> {
    hiveContext.runSqlHive(sqlQuery);
    rdd.foreachPartition(...);
});
Otherwise a direct JDBC connection can be a valid option.
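For completeness, that JDBC route would look roughly like the sketch below, run inside foreachPartition on the workers. The HiveServer2 host, port, credentials, and query are placeholders, and the Hive JDBC driver needs to be on the worker classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// inside directKafkaStream.foreachRDD(rdd -> { ... })
rdd.foreachPartition(tuple2Iterator -> {
    // one connection per partition, opened on the worker
    try (Connection conn = DriverManager.getConnection(
             "jdbc:hive2://hive-server:10000/default", "user", "password");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT something FROM somewhere")) {
        while (rs.next()) {
            String lookupValue = rs.getString(1);
            // ... combine lookupValue with the Kafka messages from tuple2Iterator ...
        }
    }
});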

Null value in spark streaming from Kafka

I have a simple program with which I'm trying to receive data from Kafka. When I start a Kafka producer and send data, for example "Hello", I get this when I print the message: (null, Hello). I don't know why this null appears. Is there any way to avoid it? I think it's due to the first parameter of Tuple2<String, String>, but I only want to print the second parameter. Another thing: when I print with System.out.println("inside map " + message); no message appears. Does anyone know why? Thanks.
public static void main(String[] args) {
    SparkConf sparkConf = new SparkConf().setAppName("org.kakfa.spark.ConsumerData").setMaster("local[4]");
    // Substitute 127.0.0.1 with the actual address of your Spark master (or use "local" to run in local mode)
    sparkConf.set("spark.cassandra.connection.host", "127.0.0.1");

    // Create the context with a 2-second batch size
    JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, new Duration(2000));

    Map<String, Integer> topicMap = new HashMap<>();
    String[] topics = KafkaProperties.TOPIC.split(",");
    for (String topic : topics) {
        topicMap.put(topic, KafkaProperties.NUM_THREADS);
    }

    /* connection to cassandra */
    CassandraConnector connector = CassandraConnector.apply(sparkConf);
    System.out.println("+++++++++++ cassandra connector created ++++++++++++++++++++++++++++");

    /* Receive kafka inputs */
    JavaPairReceiverInputDStream<String, String> messages =
        KafkaUtils.createStream(jssc, KafkaProperties.ZOOKEEPER, KafkaProperties.GROUP_CONSUMER, topicMap);
    System.out.println("+++++++++++++ streaming-kafka connection done +++++++++++++++++++++++++++");

    JavaDStream<String> lines = messages.map(
        new Function<Tuple2<String, String>, String>() {
            public String call(Tuple2<String, String> message) {
                System.out.println("inside map " + message);
                return message._2();
            }
        });

    messages.print();

    jssc.start();
    jssc.awaitTermination();
}
Q1) Null values:
Messages in Kafka are keyed, which means they all have a (key, value) structure.
You see (null, Hello) because the producer published a (null, "Hello") record to the topic.
If you want to omit the key in your processing, map the original DStream to drop it, e.g. kafkaDStream.map(new Function<Tuple2<String, String>, String>() {...}).
Q2) System.out.println("inside map " + message); does not print. A couple of classic reasons:
Transformations are applied in the executors, so when running on a cluster that output appears in the executor logs, not on the driver.
Operations are lazy, and DStreams need to be materialized by an output operation for transformations to run.
In this specific case, the JavaDStream<String> lines is never materialized, i.e. never used in an output operation, so the map is never executed.
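Putting both points together, a minimal sketch of the fix (keeping the rest of the question's program unchanged) is to print lines instead of messages, which both drops the key and gives the map an output operation to trigger it:

JavaDStream<String> lines = messages.map(
    new Function<Tuple2<String, String>, String>() {
        public String call(Tuple2<String, String> message) {
            System.out.println("inside map " + message); // visible in the executor logs on a cluster
            return message._2(); // keep only the value, dropping the (null) key
        }
    });
lines.print(); // output operation: materializes `lines`, so the map actually runs
jssc.start();
jssc.awaitTermination();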
