Spark DataFrame aggregation - Java

I have the following code:
public class IPCCodes {
public static class IPCCount implements Serializable {
public IPCCount(long permid, int year, int count, String ipc) {
this.permid = permid;
this.year = year;
this.count = count;
this.ipc = ipc;
}
public long permid;
public int year;
public int count;
public String ipc;
}
public static void main(String[] args) {
SparkConf sparkConf = new SparkConf().setAppName("IPC codes");
JavaSparkContext sc = new JavaSparkContext(sparkConf);
HiveContext sqlContext = new org.apache.spark.sql.hive.HiveContext(sc.sc());
DataFrame df = sqlContext.sql("SELECT * FROM test.some_table WHERE year>2004");
JavaRDD<Row> rdd = df.javaRDD();
JavaRDD<IPCCount> map = rdd.flatMap(new FlatMapFunction<Row, IPCCount>() {
@Override
public Iterable<IPCCount> call(Row row) throws Exception {
List<IPCCount> counts = new ArrayList<>();
try {
String codes = row.getString(7);
for (String s : codes.split(",")) {
if(s.length()>4){
counts.add(new IPCCount(row.getLong(4), row.getInt(6), 1, s.substring(0, 4)));
}
}
} catch (NumberFormatException e) {
System.out.println(e.getMessage());
}
return counts;
}
});
I created a DataFrame from a Hive table and applied a flatMap function to split the ipc codes (this field is an array of strings in the Hive table). After that I need to aggregate the codes with a count per permid and year; the result table should be permid/year/ipc/count.
What is the most efficient way to do it?

If you want a DataFrame as the output, there is no good reason to use an RDD and flatMap. As far as I can tell, everything can be handled with basic Spark SQL functions. Using Scala:
import org.apache.spark.sql.functions.{col, explode, length, split, substring}
val transformed = df
.select(col("permid"), col("year"),
// Split ipc and explode into multiple rows
explode(split(col("ipc"), ",")).alias("code"))
.where(length(col("code")).gt(4)) // filter
.withColumn("code", substring(col("code"), 0, 4))
transformed.groupBy(col("permid"), col("year"), col("code")).count
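If you would rather stay in Java, the same pipeline can be expressed with the Java DataFrame API. A rough sketch, assuming Spark 1.x (where sqlContext.sql returns a DataFrame) and the column names used above:

import static org.apache.spark.sql.functions.*;

DataFrame transformed = df
    .select(col("permid"), col("year"),
            // split ipc and explode into one row per code
            explode(split(col("ipc"), ",")).alias("code"))
    .where(length(col("code")).gt(4))                    // keep codes longer than 4 characters
    .withColumn("code", substring(col("code"), 0, 4));   // truncate to the 4-character prefix

DataFrame result = transformed
    .groupBy(col("permid"), col("year"), col("code"))
    .count();                                            // yields permid/year/code/count

Either way, the whole aggregation stays inside Spark SQL, so the optimizer can plan it instead of running opaque Java functions over an RDD.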

Related

Why does Spark write null to a Delta Lake table?

My Java application uses Spark Structured Streaming connected to a socket server, continuously receiving sensor (IoT) measurement records wrapped in an RDMessage object that carries the message type used for protocol control.
When messages arrive they are checked and converted into a Dataset using Encoder<RDMeasurement> measurementEncoder = Encoders.bean(RDMeasurement.class).
Although the stream is read correctly and the RDMeasurement objects are created correctly, the fields in the output are null or zero depending on the data type. I see this in the Delta table, and also in the console when I change the sink format (.format("console")).
What did I miss here? What is going wrong?
The most significant segments of the Java code are below:
public final class SocketRDMeasurement {
public static void main(String[] args) throws Exception {
SparkSession spark = SparkSession
.builder()
.appName("SSSocketRDMeasurement")
.master("local[*]")
.getOrCreate();
Encoder<StringArray> stringArrayEncoder = Encoders.bean(StringArray.class);
Encoder<RDMessage> messageEncoder = Encoders.bean(RDMessage.class);
Encoder<RDMeasurement> measurementEncoder = Encoders.bean(RDMeasurement.class);
Dataset<Row> records = spark
.readStream()
.format("socket")
.option("host", host)
.option("port", port)
.load();
Dataset<String> inputReceived = records.as(Encoders.STRING());
Dataset<StringArray> input = inputReceived.as(Encoders.STRING())
.map((MapFunction<String, StringArray>) x ->
new StringArray(x),
stringArrayEncoder);
Dataset<RDMessage> messages = input.map(
(MapFunction<StringArray, RDMessage>)
r -> new RDMessage(r), messageEncoder);
Dataset<RDMeasurement> measurements = messages
.map((MapFunction<RDMessage, RDMeasurement>) r ->
new RDMeasurement(), measurementEncoder);
// The code executes without warning or error but despite the
// objects being created correctly the output of dataset is
// is saved with nulls/nan
StreamingQuery query = measurements.writeStream()
.outputMode("append")
.format("delta")
.option("checkpointLocation",
"/opt/data/delta/_checkpoints/ss-socket-rd-measurement")
.start("/opt/data/delta/ss-socket-rd-measurement");
query.awaitTermination();
}
}
public class StringArray implements Serializable {
private String[] tokens;
public StringArray(String tokens) {
this.tokens = tokens.split(",");
}
// getters, setters and toString go here
}
public class RDMeasurement implements Serializable {
private String dataSourceName = null;
private double dt = 0.0f;
private double t0 = 0f;
private double endTimestamp = 0L;
private double[] valuesArray;
public RDMeasurement() { }
public RDMeasurement(String dataSourceName, double t0,
double dt, double endTimestamp, double[] valuesArray) {
this.dataSourceName = dataSourceName;
this.t0 = t0;
this.dt = dt;
this.endTimestamp = endTimestamp;
this.valuesArray = valuesArray;
}
// getters, setters and toString go here
}
public class RDMessage implements Serializable {
String type;
RDMeasurement rdMeasurement;
public RDMessage(String type, RDMeasurement rdMeasurement) {
this.type = type;
this.rdMeasurement = rdMeasurement;
}
public RDMessage(StringArray stringArray) {
this(stringArray.getTokens()[0] ,
new RDMeasurement(stringArray.getTokens()[1],
Double.parseDouble(stringArray.getTokens()[2]),
Double.parseDouble(stringArray.getTokens()[3]),
Double.parseDouble(stringArray.getTokens()[4]),
toDoubleArray(5, stringArray))
);
}
private static double[] toDoubleArray(int skip, StringArray stringArray) {
double[] ret = new double[stringArray.getTokens().length - 5];
for (int i = 0; i < stringArray.getTokens().length - 5; i++) {
ret[i] = Double.parseDouble(stringArray.getTokens()[i+skip]);
}
return ret;
}
// getters, setters and toString go here
}
Each line of input follows the format below:
V1_start_rd_0,ds_1,1642442598.266,1.0,1642442618.266,1.00,2.00,3.00,4.00,5.00,6.00,7.00,8.00,9.00,10.00,11.00,12.00,13.00,14.00,15.00,16.00,17.00,18.00,19.00,20.00
V1_rd_1,ds_2,1642442619.266,1.0,1642442639.266,1.00,2.00,3.00,4.00,5.00,6.00,7.00,8.00,9.00,10.00,11.00,12.00,13.00,14.00,15.00,16.00,17.00,18.00,19.00,20.00
V1_rd_2,ds_3,1642442640.266,1.0,1642442660.266,1.00,2.00,3.00,4.00,5.00,6.00,7.00,8.00,9.00,10.00,11.00,12.00,13.00,14.00,15.00,16.00,17.00,18.00,19.00,20.00
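For reference, given the RDMessage(StringArray) constructor above, the first sample line should decompose as follows (a worked breakdown of the expected parse, not additional code):

type           = "V1_start_rd_0"          (tokens[0])
dataSourceName = "ds_1"                   (tokens[1])
t0             = 1642442598.266           (tokens[2])
dt             = 1.0                      (tokens[3])
endTimestamp   = 1642442618.266           (tokens[4])
valuesArray    = [1.00, 2.00, ..., 20.00] (tokens[5] onward)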
After refactoring my Java code and adding a debug code segment, I was able to identify the error.
Here is the refactoring:
StreamingQuery query = dataStreamReader.load()
.as(Encoders.STRING())
.map((MapFunction<String, StringArray>) x -> new StringArray(x),
stringArrayEncoder)
.map((MapFunction<StringArray, RDMessage>)
r -> new RDMessage(r), messageEncoder)
.map((MapFunction<RDMessage, RDMeasurement>) e ->
e.getRdMeasurement(), measurementEncoder)
/*
.map((MapFunction<RDMeasurement, String>) e -> {
if (e.getDataSourceName() != null) {
System.out.println("•••> " + e);
}
return e.toString();
}, Encoders.STRING())
.map((MapFunction<String, RDMeasurement>) s -> new RDMeasurement(s),
measurementEncoder)
*/
.writeStream()
.outputMode("append")
.format("console")
.start();
query.awaitTermination();
The commented-out debug map above is what let me identify the problem: in my original code the last map step created a brand-new object with new RDMeasurement() instead of taking the measurement already parsed into the RDMessage, so every field kept its default value (null, 0 or NaN) in the output. Mapping with e.getRdMeasurement(), as in the refactored stream above, writes the expected values.

Why doesn't this sample code for updating a broadcast variable work?

I want to update a broadcast variable every minute, so I used the sample code given by Aastha in this question:
how can I update a broadcast variable in Spark streaming?
But it didn't work. The function updateAndGet() only runs when the streaming application starts. When I debugged my code, it never entered updateAndGet() a second time, so the broadcast variable is not updated every minute.
Why?
Here is my sample code.
public class BroadcastWrapper {
private Broadcast<List<String>> broadcastVar;
private Date lastUpdatedAt = Calendar.getInstance().getTime();
private static BroadcastWrapper obj = new BroadcastWrapper();
private BroadcastWrapper(){}
public static BroadcastWrapper getInstance() {
return obj;
}
public JavaSparkContext getSparkContext(SparkContext sc) {
JavaSparkContext jsc = JavaSparkContext.fromSparkContext(sc);
return jsc;
}
public Broadcast<List<String>> updateAndGet(JavaStreamingContext jsc) {
Date currentDate = Calendar.getInstance().getTime();
long diff = currentDate.getTime()-lastUpdatedAt.getTime();
if (broadcastVar == null || diff > 60000) { // Let's say we want to refresh every 1 min = 60000 ms
if (broadcastVar != null)
broadcastVar.unpersist();
lastUpdatedAt = new Date(System.currentTimeMillis());
// Your logic to refresh the data
// List<String> data = getRefData();
List<String> data = new ArrayList<String>();
data.add("tang");
data.add("xiao");
data.add(String.valueOf(System.currentTimeMillis()));
broadcastVar = jsc.sparkContext().broadcast(data);
}
return broadcastVar;
}
}
// Here is the computation code submitted to Spark Streaming.
lines.transform(new Function<JavaRDD<String>, JavaRDD<String>>() {
Broadcast<List<String>> blacklist =
BroadcastWrapper.getInstance().updateAndGet(jsc);
@Override
public JavaRDD<String> call(JavaRDD<String> rdd) {
JavaRDD<String> dd=rdd.filter(new Function<String, Boolean>() {
@Override
public Boolean call(String word) {
if (blacklist.getValue().contains(word)) {
return false;
} else {
return true;
}
}
});
return dd;
}});
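For what it's worth, the usual explanation for this behaviour is that the anonymous Function's field initializer (Broadcast<List<String>> blacklist = BroadcastWrapper.getInstance().updateAndGet(jsc);) is evaluated only once, when the DStream is defined, while only call() is re-executed for every batch. A minimal sketch of the common workaround, moving the updateAndGet call inside call() (this assumes jsc is effectively final and reachable from the closure; it is not a confirmed fix for this exact setup):

lines.transform(new Function<JavaRDD<String>, JavaRDD<String>>() {
    @Override
    public JavaRDD<String> call(JavaRDD<String> rdd) {
        // transform's function runs on the driver once per batch,
        // so updateAndGet can refresh the broadcast variable here
        final Broadcast<List<String>> blacklist =
                BroadcastWrapper.getInstance().updateAndGet(jsc);
        return rdd.filter(new Function<String, Boolean>() {
            @Override
            public Boolean call(String word) {
                // drop words that appear in the current blacklist
                return !blacklist.getValue().contains(word);
            }
        });
    }
});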

Issue displaying ImageMatrix data in Java

As it stands I have two classes, ImageMatrix and ImageMatrixDB, which should essentially load 8×8-pixel matrices into an array to be manipulated later. I am using some datasets to which I hope to apply some machine learning algorithms. (See the data description and sample dataset.) The Java files look like this:
ImageMatrix.java
public class ImageMatrix {
int[] data;
int classCode;
public ImageMatrix(int[] data, int classCode) {
assert data.length == 64;
this.data = data;
this.classCode = classCode;
}
public int[] getData() {
return data;
}
public int getClassCode() {
return classCode;
}
}
ImageMatrixDB.java
import java.io.*;
import java.util.*;
public class ImageMatrixDB implements Iterable<ImageMatrix> {
private List<ImageMatrix> list = new ArrayList<>();
public static ImageMatrixDB load(File f) throws IOException {
ImageMatrixDB result = new ImageMatrixDB();
try (FileReader fr = new FileReader(f);
BufferedReader br = new BufferedReader(fr)) {
for (String line; null != (line = br.readLine()); ) {
int lastComma = line.lastIndexOf(',');
int classCode = Integer.parseInt(line.substring(1 + lastComma));
int[] data = Arrays.stream(line.substring(0, lastComma).split(","))
.mapToInt(Integer::parseInt)
.toArray();
result.list.add(new ImageMatrix(data, classCode));
}
}
return result;
}
public Iterator<ImageMatrix> iterator() {
return this.list.iterator();
}
}
My issue stems from outputting the data in a readable format. Could someone walk me through what my main method should look like to output the data? I keep getting errors thrown when trying to load the file.
Thanks again.
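A minimal sketch of a main method that loads the file and prints each matrix, assuming the dataset path is passed as the first program argument (the 8x8 row-by-row printing is just one readable layout):

import java.io.File;
import java.io.IOException;

public class Main {
    public static void main(String[] args) throws IOException {
        // assumption: the path to the dataset file is passed as args[0]
        ImageMatrixDB db = ImageMatrixDB.load(new File(args[0]));
        for (ImageMatrix matrix : db) {
            int[] data = matrix.getData();
            // print the 64 values as an 8x8 grid
            for (int row = 0; row < 8; row++) {
                StringBuilder line = new StringBuilder();
                for (int col = 0; col < 8; col++) {
                    line.append(data[row * 8 + col]).append(' ');
                }
                System.out.println(line.toString().trim());
            }
            System.out.println("class: " + matrix.getClassCode());
            System.out.println();
        }
    }
}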

Spark streaming transform function

I am getting compilation errors in the transform function for Spark Streaming.
Specifically, I seem to be missing something when finalizing the DStream variable, or something similar. I copied this from the AMPLab tutorials, so I'm slightly confused...
Here is the code; the problem is in the transform function towards the end.
Here is the error:
[ERROR] /home/nipun/ngla-stable/online/src/main/java/org/necla/ngla/spark_streaming/Type4ViolationChecker.java:[120,63] error:
no suitable method found for transform(<anonymous Function<JavaPairRDD<Long,Integer>,JavaPairRDD<Long,Integer>>>)
[INFO] 1 error
Code:
public class Type4ViolationChecker {
private static final Pattern NEWSPACE = Pattern.compile("\n");
public static Long generateTSKey(String line) throws ParseException{
JSONObject obj = new JSONObject(line);
String time = obj.getString("mts");
DateFormat formatter = new SimpleDateFormat("yyyy / MM / dd HH : mm : ss");
Date date = (Date)formatter.parse(time);
long since = date.getTime();
long key = (long)(since/10000) * 10000;
return key;
}
public static void main(String[] args) {
Type4ViolationChecker obj = new Type4ViolationChecker();
SparkConf sparkConf = new SparkConf().setAppName("Type4ViolationChecker");
final JavaStreamingContext ssc = new JavaStreamingContext(sparkConf, new Duration(10000));
JavaReceiverInputDStream<String> lines = ssc.socketTextStream(args[0], Integer.parseInt(args[1]), StorageLevels.MEMORY_AND_DISK_SER);
JavaDStream<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
@Override
public Iterable<String> call(String x) {
return Lists.newArrayList(NEWSPACE.split(x));
}
});
words.persist();
JavaDStream<String> matched = words.filter(new Function<String, Boolean>() {
public Boolean call(String line) {
return line.contains("pattern");
}});
JavaPairDStream<Long, Integer> keyValStream = matched.mapToPair(
new PairFunction<String, Long, Integer>(){
/**
* Here we are converting the string to a key value tuple
* Key -> time bucket calculated using the 1970 GMT date as anchor, and dividing by the polling interval
* Value -> 1, a count for this message
*/
@Override
public Tuple2<Long, Integer> call(String arg0)
throws Exception {
// TODO Auto-generated method stub
return new Tuple2<Long,Integer>(generateTSKey(arg0),1);
}
});
JavaPairDStream<Long, Integer> tsStream = keyValStream.reduceByKey(
new Function2<Integer,Integer,Integer>(){
public Integer call(Integer i1, Integer i2){
return i1+ i2;
}});
JavaPairDStream<Long,Integer> sortedtsStream = tsStream.transform(
new Function<JavaPairRDD<Long, Integer>, JavaPairRDD<Long,Integer>>() {
@Override
public JavaPairRDD<Long, Integer> call(JavaPairRDD<Long, Integer> longIntegerJavaPairRDD) throws Exception {
return longIntegerJavaPairRDD.sortByKey(false);
}
});
//sortedtsStream.print();
ssc.start();
ssc.awaitTermination();
}
}
Thanks to @GaborBakos for providing the answer...
The following seems to work! I had to use transformToPair instead of transform: transform expects the function to return a plain JavaRDD (and yields a JavaDStream), while transformToPair expects it to return a JavaPairRDD (and yields a JavaPairDStream), which is what the sorted key/value stream here needs.
JavaPairDStream<Long,Integer> sortedtsStream = tsStream.transformToPair(
new Function<JavaPairRDD<Long, Integer>, JavaPairRDD<Long,Integer>>() {
@Override
public JavaPairRDD<Long, Integer> call(JavaPairRDD<Long, Integer> longIntegerJavaPairRDD) throws Exception {
return longIntegerJavaPairRDD.sortByKey(true);
}
});

Join with Hadoop in Java [closed]

I have been working with Hadoop for a short time and am trying to implement a join in Java. It doesn't matter whether it is map-side or reduce-side; I chose a reduce-side join since it is supposed to be easier to implement. I know that Java is not the best choice for joins, aggregations, etc., and that Hive or Pig would be a better pick, which I have already done. However, I'm working on a research project and have to use all three in order to deliver a comparison.
Anyway, I have two input files with different structures. One is key|value and the other is key|value1;value2;value3;value4. One record from each input file looks like the following:
Input1: 1;2010-01-10T00:00:01
Input2: 1;23;Blue;2010-01-11T00:00:01;9999-12-31T23:59:59
I followed the example in the Hadoop: The Definitive Guide book, but it didn't work for me. I'm posting my Java files here so you can see what's wrong.
public class LookupReducer extends Reducer<TextPair,Text,Text,Text> {
private String result = "";
private String msisdn;
private String attribute, product;
private long trans_dt_long, start_dt_long, end_dt_long;
private String trans_dt, start_dt, end_dt;
@Override
public void reduce(TextPair key, Iterable<Text> values, Context context)
throws IOException, InterruptedException {
context.progress();
//value without key to remember
Iterator<Text> iter = values.iterator();
for (Text val : values) {
Text recordNoKey = val; //new Text(iter.next());
String valSplitted[] = recordNoKey.toString().split(";");
//if the input is coming from CDR set corresponding values
if(key.getSecond().toString().equals(CDR.CDR_TAG))
{
trans_dt = recordNoKey.toString();
trans_dt_long = dateToLong(recordNoKey.toString());
}
//if the input is coming from Attributes set corresponding values
else if(key.getSecond().toString().equals(Attribute.ATT_TAG))
{
attribute = valSplitted[0];
product = valSplitted[1];
start_dt = valSplitted[2];
start_dt_long = dateToLong(valSplitted[2]);
end_dt = valSplitted[3];
end_dt_long = dateToLong(valSplitted[3]);
}
Text record = val; //iter.next();
//System.out.println("RECORD: " + record);
Text outValue = new Text(recordNoKey.toString() + ";" + record.toString());
if(start_dt_long < trans_dt_long && trans_dt_long < end_dt_long)
{
//concat output columns
result = attribute + ";" + product + ";" + trans_dt;
context.write(key.getFirst(), new Text(result));
System.out.println("KEY: " + key);
}
}
}
private static long dateToLong(String date){
DateFormat formatter = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
Date parsedDate = null;
try {
parsedDate = formatter.parse(date);
} catch (ParseException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
long dateInLong = parsedDate.getTime();
return dateInLong;
}
public static class TextPair implements WritableComparable<TextPair> {
private Text first;
private Text second;
public TextPair(){
set(new Text(), new Text());
}
public TextPair(String first, String second){
set(new Text(first), new Text(second));
}
public TextPair(Text first, Text second){
set(first, second);
}
public void set(Text first, Text second){
this.first = first;
this.second = second;
}
public Text getFirst() {
return first;
}
public void setFirst(Text first) {
this.first = first;
}
public Text getSecond() {
return second;
}
public void setSecond(Text second) {
this.second = second;
}
@Override
public void readFields(DataInput in) throws IOException {
// TODO Auto-generated method stub
first.readFields(in);
second.readFields(in);
}
@Override
public void write(DataOutput out) throws IOException {
// TODO Auto-generated method stub
first.write(out);
second.write(out);
}
@Override
public int hashCode(){
return first.hashCode() * 163 + second.hashCode();
}
@Override
public boolean equals(Object o){
if(o instanceof TextPair)
{
TextPair tp = (TextPair) o;
return first.equals(tp.first) && second.equals(tp.second);
}
return false;
}
@Override
public String toString(){
return first + ";" + second;
}
@Override
public int compareTo(TextPair tp) {
// TODO Auto-generated method stub
int cmp = first.compareTo(tp.first);
if(cmp != 0)
return cmp;
return second.compareTo(tp.second);
}
public static class FirstComparator extends WritableComparator {
protected FirstComparator(){
super(TextPair.class, true);
}
@Override
public int compare(WritableComparable comp1, WritableComparable comp2){
TextPair pair1 = (TextPair) comp1;
TextPair pair2 = (TextPair) comp2;
int cmp = pair1.getFirst().compareTo(pair2.getFirst());
if(cmp != 0)
return cmp;
return -pair1.getSecond().compareTo(pair2.getSecond());
}
}
public static class GroupComparator extends WritableComparator {
protected GroupComparator()
{
super(TextPair.class, true);
}
@Override
public int compare(WritableComparable comp1, WritableComparable comp2)
{
TextPair pair1 = (TextPair) comp1;
TextPair pair2 = (TextPair) comp2;
return pair1.compareTo(pair2);
}
}
}
}
public class Joiner extends Configured implements Tool {
public static final String DATA_SEPERATOR = ";"; //Define the symbol for separating the output data
public static final String NUMBER_OF_REDUCER = "1"; //Define the number of the used reducer jobs
public static final String COMPRESS_MAP_OUTPUT = "true"; //if the output from the mapping process should be compressed, set COMPRESS_MAP_OUTPUT = "true" (if not set it to "false")
public static final String
USED_COMPRESSION_CODEC = "org.apache.hadoop.io.compress.SnappyCodec"; //set the used codec for the data compression
public static final boolean JOB_RUNNING_LOCAL = true; //if you run the Hadoop job on your local machine, you have to set JOB_RUNNING_LOCAL = true
//if you run the Hadoop job on the Telefonica Cloud, you have to set JOB_RUNNING_LOCAL = false
public static final String OUTPUT_PATH = "/home/hduser"; //set the folder, where the output is saved. Only needed, if JOB_RUNNING_LOCAL = false
public static class KeyPartitioner extends Partitioner<TextPair, Text> {
@Override
public int getPartition(/*[*/TextPair key/*]*/, Text value, int numPartitions) {
System.out.println("numPartitions: " + numPartitions);
return (/*[*/key.getFirst().hashCode()/*]*/ & Integer.MAX_VALUE) % numPartitions;
}
}
private static Configuration hadoopconfig() {
Configuration conf = new Configuration();
conf.set("mapred.textoutputformat.separator", DATA_SEPERATOR);
conf.set("mapred.compress.map.output", COMPRESS_MAP_OUTPUT);
//conf.set("mapred.map.output.compression.codec", USED_COMPRESSION_CODEC);
conf.set("mapred.reduce.tasks", NUMBER_OF_REDUCER);
return conf;
}
@Override
public int run(String[] args) throws Exception {
// TODO Auto-generated method stub
if ((args.length != 3) && (JOB_RUNNING_LOCAL)) {
System.err.println("Usage: Lookup <CDR-inputPath> <Attribute-inputPath> <outputPath>");
System.exit(2);
}
//starting the Hadoop job
Configuration conf = hadoopconfig();
Job job = new Job(conf, "Join cdrs and attributes");
job.setJarByClass(Joiner.class);
MultipleInputs.addInputPath(job, new Path(args[0]), TextInputFormat.class, CDRMapper.class);
MultipleInputs.addInputPath(job, new Path(args[1]), TextInputFormat.class, AttributeMapper.class);
//FileInputFormat.addInputPath(job, new Path(otherArgs[0])); //expecting a folder instead of a file
if(JOB_RUNNING_LOCAL)
FileOutputFormat.setOutputPath(job, new Path(args[2]));
else
FileOutputFormat.setOutputPath(job, new Path(OUTPUT_PATH));
job.setPartitionerClass(KeyPartitioner.class);
job.setGroupingComparatorClass(TextPair.FirstComparator.class);
job.setReducerClass(LookupReducer.class);
job.setMapOutputKeyClass(TextPair.class);
job.setMapOutputValueClass(Text.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
return job.waitForCompletion(true) ? 0 : 1;
}
public static void main(String[] args) throws Exception {
int exitCode = ToolRunner.run(new Joiner(), args);
System.exit(exitCode);
}
}
public class Attribute {
public static final String ATT_TAG = "1";
public static class AttributeMapper
extends Mapper<LongWritable, Text, TextPair, Text>{
private static Text values = new Text();
//private Object output = new Text();
@Override
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
//partition the input line by the separator semicolon
String[] attributes = value.toString().split(";");
String valuesInString = "";
if(attributes.length != 5)
System.err.println("Input column number not correct. Expected 5, provided " + attributes.length
+ "\n" + "Check the input file");
if(attributes.length == 5)
{
//setting the values with the input values read above
valuesInString = attributes[1] + ";" + attributes[2] + ";" + attributes[3] + ";" + attributes[4];
values.set(valuesInString);
//writing out the key and value pair
context.write( new TextPair(new Text(String.valueOf(attributes[0])), new Text(ATT_TAG)), values);
}
}
}
}
public class CDR {
public static final String CDR_TAG = "0";
public static class CDRMapper
extends Mapper<LongWritable, Text, TextPair, Text>{
private static Text values = new Text();
private Object output = new Text();
@Override
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
//partition the input line by the separator semicolon
String[] cdr = value.toString().split(";");
//setting the values with the input values read above
values.set(cdr[1]);
//output = CDR_TAG + cdr[1];
//writing out the key and value pair
context.write( new TextPair(new Text(String.valueOf(cdr[0])), new Text(CDR_TAG)), values);
}
}
}
I would be glad if anyone could at least post a link to a tutorial or a simple example where such join functionality is implemented. I searched a lot, but either the code was not complete or there was not enough explanation.
To be honest, I have no idea what your code is trying to do, but that's probably because I'd do it in a different way and I'm not familiar with the APIs you're using.
I would start from scratch as follows:
Create a mapper to read one of the files. For each line read, write a key value pair to the context. The key is a Text created from the key and the value is another Text created by concatenating a "1" with the entire input record.
Create another mapper for the other file. This mapper acts just like the first mapper, but the value is a Text created by concatenating a "2" with the entire input record.
Write a reducer to do the join. The reduce() method will receive all records written for a specific key. You can tell which input file a value came from (and therefore the data format of the record) by checking whether it starts with a "1" or a "2". Once you know whether you have one, the other, or both record types, you can write whatever logic you need to merge the data from the one or two records (see the sketch after this list).
By the way, you use the MultipleInputs class to configure more than one mapper in your job/driver class.
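For completeness, here is a minimal sketch of the tagged-value reduce-side join described above. The class and field names are illustrative, not taken from your code, and it assumes both inputs are semicolon-separated text lines with the join key first and, as in your sample, at most one record per key per file:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class TaggedJoin {

    // Mapper for the first file: emit (key, "1" + record)
    public static class FirstFileMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        public void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            String record = line.toString();
            int sep = record.indexOf(';');
            if (sep < 0) return; // skip malformed lines
            context.write(new Text(record.substring(0, sep)), new Text("1" + record));
        }
    }

    // Mapper for the second file: emit (key, "2" + record)
    public static class SecondFileMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        public void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            String record = line.toString();
            int sep = record.indexOf(';');
            if (sep < 0) return; // skip malformed lines
            context.write(new Text(record.substring(0, sep)), new Text("2" + record));
        }
    }

    // Reducer: all values for one key arrive together; the leading tag
    // tells which file (and therefore which format) each value came from.
    public static class JoinReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            String fromFirst = null;
            String fromSecond = null;
            for (Text value : values) {
                String tagged = value.toString();
                if (tagged.startsWith("1")) {
                    fromFirst = tagged.substring(1);
                } else {
                    fromSecond = tagged.substring(1);
                }
            }
            // only emit when the key exists in both files (inner join)
            if (fromFirst != null && fromSecond != null) {
                context.write(key, new Text(fromFirst + ";" + fromSecond));
            }
        }
    }
}

The driver then wires FirstFileMapper and SecondFileMapper to their input paths with MultipleInputs.addInputPath, just as your Joiner class already does.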
