MapReduce: Reduce function is writing unexpected values to the output file - Java

My reduce function in Java is writing unexpected values to the output file. I inspected my code with breakpoints and saw that, for each context.write call I make, the key and the value being written are correct. Where am I going wrong?
What I'm trying to do is take input rows of the form date, customer, vendor, amount, each representing a transaction, and generate a dataset with rows of the form date, user, balance, where the balance is the sum of all transactions in which the user appeared as either customer or vendor.
Here is my code:
public class Transactions {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, Text> {

        public void map(Object key, Text value, Context context
                        ) throws IOException, InterruptedException {
            var splittedValues = value.toString().split(",");
            var date = splittedValues[0];
            var customer = splittedValues[1];
            var vendor = splittedValues[2];
            var amount = splittedValues[3];
            var reduceValue = new Text(customer + "," + vendor + "," + amount);
            context.write(new Text(date), reduceValue);
        }
    }

    public static class IntSumReducer
            extends Reducer<Text, Text, Text, Text> {

        public void reduce(Text key, Iterable<Text> values, Context context
                           ) throws IOException, InterruptedException {
            Map<String, Integer> balanceByUserId = new ConcurrentHashMap<>();
            values.forEach(transaction -> {
                var splittedTransaction = transaction.toString().split(",");
                var customer = splittedTransaction[0];
                var vendor = splittedTransaction[1];
                var amount = 0;
                if (splittedTransaction.length > 2) {
                    amount = Integer.parseInt(splittedTransaction[2]);
                }
                if (!balanceByUserId.containsKey(customer)) {
                    balanceByUserId.put(customer, 0);
                }
                if (!balanceByUserId.containsKey(vendor)) {
                    balanceByUserId.put(vendor, 0);
                }
                balanceByUserId.put(customer, balanceByUserId.get(customer) - amount);
                balanceByUserId.put(vendor, balanceByUserId.get(vendor) + amount);
            });
            balanceByUserId.entrySet().forEach(entry -> {
                var reducerValue = new Text(entry.getKey() + "," + entry.getValue().toString());
                try {
                    context.write(key, reducerValue);
                } catch (IOException e) {
                    e.printStackTrace();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            });
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "transactions");
        job.setJarByClass(Transactions.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

"where the balance is the sum of all transactions in which the user appeared as either customer or vendor"
balanceByUserId only exists per unique date, since your map key is the date.
If you want to aggregate by customer info (name / ID?), then the customer should be the key of the mapper output.
Once all the data for each customer has been grouped by the reducer, you can sort by date if needed, and aggregate the other details.
Also worth pointing out that this would be easier in Hive or Spark SQL than in MapReduce.
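For illustration, here is a minimal sketch of that suggestion, keyed by user rather than by date. The class names are hypothetical, the amounts are assumed to be integers as in the original code, and a composite "date,user" key could be used instead if balances must stay per date:
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class UserBalance {

    public static class UserBalanceMapper extends Mapper<Object, Text, Text, IntWritable> {
        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split(",");  // date, customer, vendor, amount
            int amount = Integer.parseInt(fields[3]);
            // The customer pays the amount, the vendor receives it.
            context.write(new Text(fields[1]), new IntWritable(-amount));
            context.write(new Text(fields[2]), new IntWritable(amount));
        }
    }

    public static class UserBalanceReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int balance = 0;
            for (IntWritable v : values) {
                balance += v.get();
            }
            context.write(key, new IntWritable(balance));
        }
    }
}
Unlike the original IntSumReducer, whose input and output value formats differ (which makes it unsafe as a combiner), this reducer consumes and produces the same types, so it could also be registered as the combiner.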

Related

MapReduce word count which finds a specific word in the data set

I'm working on a simple MapReduce program using the Kaggle dataset
https://www.kaggle.com/datasnaek/youtube-new
The dataset contains 40,950 video records with 16 variables such as video_id, trending_date, title, channel_title, category_id, publish_time, tags, views, likes, dislikes, comment_count, description, etc.
The purpose of my MapReduce program is to find all videos whose description contains "iPhoneX" and which have at least 10,000 likes. The final output should only contain (title, video count).
Driver class
package solution;

public class Driver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        if (args.length != 2) {
            System.out.printf("Usage: Driver <input dir> <output dir> \n");
            return -1;
        }

        Job job = new Job(getConf());
        job.setJarByClass(Driver.class);
        job.setJobName("iPhoneX");

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.setMapperClass(Mapper.class);
        job.setReducerClass(Reducer.class);

        // Specify Combiner as the combiner class
        job.setCombinerClass(Reducer.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        if (job.getCombinerClass() == null) {
            throw new Exception("Combiner not set");
        }

        boolean success = job.waitForCompletion(true);
        return success ? 0 : 1;
    }

    /* The main method calls the ToolRunner.run method,
     * which calls the options parser that interprets Hadoop terminal
     * options and puts them into a config object
     */
    public static void main(String[] args) throws Exception {
        int exitCode = ToolRunner.run(new Configuration(), new Driver(), args);
        System.exit(exitCode);
    }
}
Reducer class
package solution;

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class Reducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int video_count = 0;
        for (IntWritable value : values) {
            video_count += value.get();
        }
        context.write(key, new IntWritable(video_count));
    }
}
Mapper class
public class Mapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private Text description = new Text();
    private IntWritable likes = new IntWritable();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        String str[] = line.split("\t");

        if (str.length > 3) {
            description.set(str[8]);
        }

        // Testing how many times the iPhoneX word is located in the data set
        // StringTokenizer itr = new StringTokenizer(line);
        //
        // while (itr.hasMoreTokens()) {
        //     String token = itr.nextToken();
        //     if (token.contains("iPhoneX")) {
        //         word.set("iPhoneX Count");
        //         context.write(word, new IntWritable(1));
        //     }
        // }
    }
}
Your code looks fine, but you're going to need to uncomment the part of the mapper that actually outputs data. Your mapper key should just be "iPhoneX", and you probably want to tokenize the description, not the entire line.
You'll also want to extract the number of likes and keep only the records that meet the condition in the problem statement.
By the way, you need at least 9 elements to read that position, not just 3, so change the condition here:
if (str.length >= 9) {
    description.set(str[8]);
    likes = Integer.parseInt(str[...]);
    if (likes >= 10000) {
        // TODO: find when description string contains iPhoneX
        context.write("IPhoneX", count);
    }
} else {
    return; // skip line
}
Alternatively, rather than pre-aggregating in the mapper, you could just write out (token, 1) for every token that is "iPhoneX", then let the combiner and reducer do the summation for you
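For example, a mapper along those lines could look like the sketch below. The column indices LIKES_COL and DESCRIPTION_COL are assumptions and should be checked against the dataset header, the input is assumed to be tab-separated as in the question's code, and it emits one count per qualifying video (rather than per token) since the goal is a video count:
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class IPhoneXMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final int LIKES_COL = 8;        // assumed position of "likes"
    private static final int DESCRIPTION_COL = 15; // assumed position of "description"
    private static final IntWritable ONE = new IntWritable(1);
    private final Text outKey = new Text("iPhoneX");

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] str = value.toString().split("\t");
        if (str.length <= DESCRIPTION_COL) {
            return; // malformed or short line, skip it
        }
        int likes;
        try {
            likes = Integer.parseInt(str[LIKES_COL]);
        } catch (NumberFormatException e) {
            return; // header row or non-numeric value
        }
        // One (key, 1) pair per qualifying video; the combiner/reducer sums them.
        if (likes >= 10000 && str[DESCRIPTION_COL].contains("iPhoneX")) {
            context.write(outKey, ONE);
        }
    }
}
The existing sum reducer (also registered as the combiner) then produces the final video count.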

Map Reduce - How to group and aggregate multiple attributes in a single job

I am currently struggling a bit with MapReduce.
I have the following dataset:
1,John,Computer
2,Anne,Computer
3,John,Mobile
4,Julia,Mobile
5,Jack,Mobile
6,Jack,TV
7,John,Computer
8,Jack,TV
9,Jack,TV
10,Anne,Mobile
11,Anne,Computer
12,Julia,Mobile
Now I want to apply MapReduce with grouping and aggregation to this dataset, so that the output not only shows how many times each person bought something, but also which product that person ordered most often.
So the output should look like:
John 3 Computer
Anne 3 Mobile
Jack 4 TV
Julia 2 Mobile
My current implementation of the mapper and reducer is shown below; it correctly returns how many orders each individual made, but I'm unsure how to get the desired output.
static class CountMatchesMapper extends Mapper<Object, Text, Text, IntWritable> {

    @Override
    protected void map(Object key, Text value, Context ctx) throws IOException, InterruptedException {
        String row = value.toString();
        String[] row_part = row.split(",");
        try {
            ctx.write(new Text(row_part[1]), new IntWritable(1));
        } catch (IOException e) {
        } catch (InterruptedException e) {
        }
    }
}
static class CountMatchesReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context ctx) throws IOException, InterruptedException {
        int i = 0;
        for (IntWritable value : values) i += value.get();
        try {
            ctx.write(key, new IntWritable(i));
        } catch (IOException e) {
        } catch (InterruptedException e) {
        }
    }
}
I would really appreciate any efficient solution and help.
Thanks in advance!
If I understand correctly what you want, I think the 2nd output line should be:
Anne 3 Computer
based on the input. Anne has bought 3 products in total: 2 Computers and 1 Mobile.
I have here a very basic and simplistic approach, which doesn't take into account edge cases etc, but could give you some direction:
static class CountMatchesMapper extends Mapper<LongWritable, Text, Text, Text> {

    private Text outputKey = new Text();
    private Text outputValue = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context ctx) throws IOException, InterruptedException {
        String row = value.toString();
        String[] row_part = row.split(",");
        outputKey.set(row_part[1]);
        outputValue.set(row_part[2]);
        ctx.write(outputKey, outputValue);
    }
}

static class CountMatchesReducer extends Reducer<Text, Text, Text, NullWritable> {

    private Text output = new Text();

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context ctx) throws IOException, InterruptedException {
        HashMap<String, Integer> productCounts = new HashMap<>();
        int totalProductsBought = 0;
        for (Text value : values) {
            String productBought = value.toString();
            int count = 0;
            if (productCounts.containsKey(productBought)) {
                count = productCounts.get(productBought);
            }
            productCounts.put(productBought, count + 1);
            totalProductsBought += 1;
        }
        String topProduct = getTopProductForPerson(productCounts);
        output.set(key.toString() + " " + totalProductsBought + " " + topProduct);
        ctx.write(output, NullWritable.get());
    }

    private String getTopProductForPerson(Map<String, Integer> productCounts) {
        String topProduct = "";
        int maxCount = 0;
        for (Map.Entry<String, Integer> productCount : productCounts.entrySet()) {
            if (productCount.getValue() > maxCount) {
                maxCount = productCount.getValue();
                topProduct = productCount.getKey();
            }
        }
        return topProduct;
    }
}
The above will give the output that you described.
If you want a proper solution that scales, you probably need a composite key and a custom GroupComparator. That way you will also be able to add a combiner and make it much more efficient. However, the approach above should work for the average case.
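For reference, a driver wiring up that mapper and reducer might look roughly like this. The class name TopProductDriver is hypothetical, and it assumes CountMatchesMapper and CountMatchesReducer are visible to it (e.g. nested in the same class or package):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TopProductDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "top product per person");
        job.setJarByClass(TopProductDriver.class);
        job.setMapperClass(CountMatchesMapper.class);
        job.setReducerClass(CountMatchesReducer.class);
        // Map and reduce output types differ, so both pairs must be declared explicitly.
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        // No combiner: the reducer's output type differs from its input type.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}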

Load data via HFile into HBase not working

I wrote a mapper to load data from disk into HBase via HFiles. The program runs successfully, but no data ends up in my HBase table. Any ideas on this, please?
Here's my Java program:
protected void writeToHBaseViaHFile() throws Exception {
    try {
        System.out.println("In try...");

        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "XXXX");
        Connection connection = ConnectionFactory.createConnection(conf);
        System.out.println("got connection");

        String inputPath = "/tmp/nuggets_from_Hive/part-00000";
        String outputPath = "/tmp/mytemp" + new Random().nextInt(1000);
        final TableName tableName = TableName.valueOf("steve1");
        System.out.println("got table steve1, outputPath = " + outputPath);

        // tag::SETUP[]
        Table table = connection.getTable(tableName);

        Job job = Job.getInstance(conf, "ConvertToHFiles");
        System.out.println("job is setup...");

        HFileOutputFormat2.configureIncrementalLoad(job, table,
                connection.getRegionLocator(tableName)); // <1>
        System.out.println("done configuring incremental load...");

        job.setInputFormatClass(TextInputFormat.class); // <2>
        job.setJarByClass(Importer.class); // <3>
        job.setMapperClass(LoadDataMapper.class); // <4>
        job.setMapOutputKeyClass(ImmutableBytesWritable.class); // <5>
        job.setMapOutputValueClass(KeyValue.class); // <6>

        FileInputFormat.setInputPaths(job, inputPath);
        HFileOutputFormat2.setOutputPath(job, new org.apache.hadoop.fs.Path(outputPath));
        System.out.println("Setup complete...");
        // end::SETUP[]

        if (!job.waitForCompletion(true)) {
            System.out.println("Failure");
        } else {
            System.out.println("Success");
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Here's my mapper class:
public class LoadDataMapper extends Mapper<LongWritable, Text, ImmutableBytesWritable, Cell> {

    public static final byte[] FAMILY = Bytes.toBytes("pd");
    public static final byte[] COL = Bytes.toBytes("bf");
    public static final ImmutableBytesWritable rowKey = new ImmutableBytesWritable();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String[] line = value.toString().split("\t"); // <1>

        byte[] rowKeyBytes = Bytes.toBytes(line[0]);
        rowKey.set(rowKeyBytes);

        KeyValue kv = new KeyValue(rowKeyBytes, FAMILY, COL, Bytes.toBytes(line[1])); // <6>
        context.write(rowKey, kv); // <7>

        System.out.println("line[0] = " + line[0] + "\tline[1] = " + line[1]);
    }
}
I've created the table steve1 in my cluster, but got 0 rows after the program runs successfully:
hbase(main):007:0> count 'steve1'
0 row(s) in 0.0100 seconds
=> 0
What I've tried:
I tried adding print statements in the mapper class to see whether it actually reads the data, but the output never showed up in my console.
I'm at a loss as to how to debug this.
Any ideas are greatly appreciated!
This only creates the HFiles; you still need to load them into your table. For example, you need to do something like:
LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
loader.doBulkLoad(new Path(outputPath), admin, hTable, regionLocator);
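A minimal sketch of how that call could be wired up after the job completes, reusing the connection, conf, tableName and outputPath variables from the question's writeToHBaseViaHFile() method (LoadIncrementalHFiles is the bulk-load helper in the HBase 1.x API that the rest of the code appears to use):
// assumed imports: org.apache.hadoop.fs.Path,
// org.apache.hadoop.hbase.client.{Admin, RegionLocator, Table},
// org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles

if (job.waitForCompletion(true)) {
    System.out.println("Success, bulk-loading the generated HFiles...");
    try (Admin admin = connection.getAdmin();
         Table table = connection.getTable(tableName);
         RegionLocator regionLocator = connection.getRegionLocator(tableName)) {
        LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
        loader.doBulkLoad(new Path(outputPath), admin, table, regionLocator);
    }
} else {
    System.out.println("Failure");
}
After the load, count 'steve1' in the HBase shell should report the rows that were written into the HFiles.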

Map Reduce job generating empty output file

The program is generating an empty output file. Can anyone please suggest where I am going wrong?
Any help will be highly appreciated. I tried job.setNumReduceTasks(0) since I am not using a reducer, but the output file is still empty.
public static class PrizeDisMapper extends Mapper<LongWritable, Text, Text, Pair> {

    int rating = 0;
    Text CustID;
    IntWritable r;
    Text MovieID;

    public void map(LongWritable key, Text line, Context context
                    ) throws IOException, InterruptedException {
        String line1 = line.toString();
        String[] fields = line1.split(":");
        if (fields.length > 1) {
            String Movieid = fields[0];
            String line2 = fields[1];
            String[] splitline = line2.split(",");
            String Custid = splitline[0];
            int rate = Integer.parseInt(splitline[1]);
            r = new IntWritable(rate);
            CustID = new Text(Custid);
            MovieID = new Text(Movieid);
            Pair P = new Pair();
            context.write(MovieID, P);
        } else {
            return;
        }
    }
}

public static class IntSumReducer extends Reducer<Text, Pair, Text, Pair> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<Pair> values, Context context
                       ) throws IOException, InterruptedException {
        for (Pair val : values) {
            context.write(key, val);
        }
    }
}

public class Pair implements Writable {

    String key;
    int value;

    public void write(DataOutput out) throws IOException {
        out.writeInt(value);
        out.writeChars(key);
    }

    public void readFields(DataInput in) throws IOException {
        key = in.readUTF();
        value = in.readInt();
    }

    public void setVal(String aKey, int aValue) {
        key = aKey;
        value = aValue;
    }
}
Main class:
public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
        System.err.println("Usage: wordcount <in> <out>");
        System.exit(2);
    }
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setInputFormatClass(TextInputFormat.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Pair.class);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}
Thanks @Pathmanaban Palsamy and @Chris Gerken for your suggestions. I have modified the code as per your suggestions, but I'm still getting an empty output file. Can anyone suggest the configuration needed in my main class for input and output? Do I need to specify the Pair class as input to the mapper, and if so, how?
I'm guessing the reduce method should be declared as
public void reduce(Text key, Iterable<Pair> values, Context context)
        throws IOException, InterruptedException
You get passed an Iterable (an object from which you can get an Iterator) which you use to iterate over all of the values that were mapped to the given key.
Since no reducer is required, I suspect the lines below:
Pair P = new Pair();
context.write(MovieID, P);
Writing an empty Pair would be the issue.
Also, please check that your driver class sets the correct key class and value class, like:
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Pair.class);
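On top of that, the Pair class in the question serializes and deserializes its fields inconsistently: write() writes the int first and uses writeChars(), while readFields() reads a UTF string first and then the int. That breaks any job that shuffles Pair values. A minimal consistent version might look like this (a sketch, not the asker's exact class):
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

public class Pair implements Writable {

    private String key = "";
    private int value;

    public void setVal(String aKey, int aValue) {
        key = aKey;
        value = aValue;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        // Field order and encoding must match readFields() exactly.
        out.writeUTF(key);
        out.writeInt(value);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        key = in.readUTF();
        value = in.readInt();
    }

    @Override
    public String toString() {
        return key + "," + value; // what TextOutputFormat will print
    }
}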

Join with Hadoop in Java [closed]

I've been working with Hadoop for a short time and am trying to implement a join in Java. It doesn't matter whether it is map-side or reduce-side; I chose the reduce-side join since it was supposed to be easier to implement. I know that Java is not the best choice for joins and aggregations and that Hive or Pig, which I have already used, would be a better fit. However, I'm working on a research project and have to use all three languages in order to deliver a comparison.
Anyway, I have two input files with different structures. One is key|value and the other one is key|value1;value2;value3;value4. One record from each input file looks like the following:
Input1: 1;2010-01-10T00:00:01
Input2: 1;23;Blue;2010-01-11T00:00:01;9999-12-31T23:59:59
I followed the example in the Hadoop: The Definitive Guide book, but it didn't work for me. I'm posting my Java files here so you can see what's wrong.
public class LookupReducer extends Reducer<TextPair,Text,Text,Text> {
private String result = "";
private String msisdn;
private String attribute, product;
private long trans_dt_long, start_dt_long, end_dt_long;
private String trans_dt, start_dt, end_dt;
@Override
public void reduce(TextPair key, Iterable<Text> values, Context context)
throws IOException, InterruptedException {
context.progress();
//value without key to remember
Iterator<Text> iter = values.iterator();
for (Text val : values) {
Text recordNoKey = val; //new Text(iter.next());
String valSplitted[] = recordNoKey.toString().split(";");
//if the input is coming from CDR set corresponding values
if(key.getSecond().toString().equals(CDR.CDR_TAG))
{
trans_dt = recordNoKey.toString();
trans_dt_long = dateToLong(recordNoKey.toString());
}
//if the input is coming from Attributes set corresponding values
else if(key.getSecond().toString().equals(Attribute.ATT_TAG))
{
attribute = valSplitted[0];
product = valSplitted[1];
start_dt = valSplitted[2];
start_dt_long = dateToLong(valSplitted[2]);
end_dt = valSplitted[3];
end_dt_long = dateToLong(valSplitted[3]);
}
Text record = val; //iter.next();
//System.out.println("RECORD: " + record);
Text outValue = new Text(recordNoKey.toString() + ";" + record.toString());
if(start_dt_long < trans_dt_long && trans_dt_long < end_dt_long)
{
//concat output columns
result = attribute + ";" + product + ";" + trans_dt;
context.write(key.getFirst(), new Text(result));
System.out.println("KEY: " + key);
}
}
}
private static long dateToLong(String date){
DateFormat formatter = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
Date parsedDate = null;
try {
parsedDate = formatter.parse(date);
} catch (ParseException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
long dateInLong = parsedDate.getTime();
return dateInLong;
}
public static class TextPair implements WritableComparable<TextPair> {
private Text first;
private Text second;
public TextPair(){
set(new Text(), new Text());
}
public TextPair(String first, String second){
set(new Text(first), new Text(second));
}
public TextPair(Text first, Text second){
set(first, second);
}
public void set(Text first, Text second){
this.first = first;
this.second = second;
}
public Text getFirst() {
return first;
}
public void setFirst(Text first) {
this.first = first;
}
public Text getSecond() {
return second;
}
public void setSecond(Text second) {
this.second = second;
}
@Override
public void readFields(DataInput in) throws IOException {
// TODO Auto-generated method stub
first.readFields(in);
second.readFields(in);
}
@Override
public void write(DataOutput out) throws IOException {
// TODO Auto-generated method stub
first.write(out);
second.write(out);
}
@Override
public int hashCode(){
return first.hashCode() * 163 + second.hashCode();
}
@Override
public boolean equals(Object o){
if(o instanceof TextPair)
{
TextPair tp = (TextPair) o;
return first.equals(tp.first) && second.equals(tp.second);
}
return false;
}
@Override
public String toString(){
return first + ";" + second;
}
@Override
public int compareTo(TextPair tp) {
// TODO Auto-generated method stub
int cmp = first.compareTo(tp.first);
if(cmp != 0)
return cmp;
return second.compareTo(tp.second);
}
public static class FirstComparator extends WritableComparator {
protected FirstComparator(){
super(TextPair.class, true);
}
@Override
public int compare(WritableComparable comp1, WritableComparable comp2){
TextPair pair1 = (TextPair) comp1;
TextPair pair2 = (TextPair) comp2;
int cmp = pair1.getFirst().compareTo(pair2.getFirst());
if(cmp != 0)
return cmp;
return -pair1.getSecond().compareTo(pair2.getSecond());
}
}
public static class GroupComparator extends WritableComparator {
protected GroupComparator()
{
super(TextPair.class, true);
}
@Override
public int compare(WritableComparable comp1, WritableComparable comp2)
{
TextPair pair1 = (TextPair) comp1;
TextPair pair2 = (TextPair) comp2;
return pair1.compareTo(pair2);
}
}
}
}
public class Joiner extends Configured implements Tool {
public static final String DATA_SEPERATOR =";"; //Define the symbol for seperating the output data
public static final String NUMBER_OF_REDUCER = "1"; //Define the number of the used reducer jobs
public static final String COMPRESS_MAP_OUTPUT = "true"; //if the output from the mapping process should be compressed, set COMPRESS_MAP_OUTPUT = "true" (if not set it to "false")
public static final String
USED_COMPRESSION_CODEC = "org.apache.hadoop.io.compress.SnappyCodec"; //set the used codec for the data compression
public static final boolean JOB_RUNNING_LOCAL = true; //if you run the Hadoop job on your local machine, you have to set JOB_RUNNING_LOCAL = true
//if you run the Hadoop job on the Telefonica Cloud, you have to set JOB_RUNNING_LOCAL = false
public static final String OUTPUT_PATH = "/home/hduser"; //set the folder, where the output is saved. Only needed, if JOB_RUNNING_LOCAL = false
public static class KeyPartitioner extends Partitioner<TextPair, Text> {
@Override
public int getPartition(/*[*/TextPair key/*]*/, Text value, int numPartitions) {
System.out.println("numPartitions: " + numPartitions);
return (/*[*/key.getFirst().hashCode()/*]*/ & Integer.MAX_VALUE) % numPartitions;
}
}
private static Configuration hadoopconfig() {
Configuration conf = new Configuration();
conf.set("mapred.textoutputformat.separator", DATA_SEPERATOR);
conf.set("mapred.compress.map.output", COMPRESS_MAP_OUTPUT);
//conf.set("mapred.map.output.compression.codec", USED_COMPRESSION_CODEC);
conf.set("mapred.reduce.tasks", NUMBER_OF_REDUCER);
return conf;
}
@Override
public int run(String[] args) throws Exception {
// TODO Auto-generated method stub
if ((args.length != 3) && (JOB_RUNNING_LOCAL)) {
System.err.println("Usage: Lookup <CDR-inputPath> <Attribute-inputPath> <outputPath>");
System.exit(2);
}
//starting the Hadoop job
Configuration conf = hadoopconfig();
Job job = new Job(conf, "Join cdrs and attributes");
job.setJarByClass(Joiner.class);
MultipleInputs.addInputPath(job, new Path(args[0]), TextInputFormat.class, CDRMapper.class);
MultipleInputs.addInputPath(job, new Path(args[1]), TextInputFormat.class, AttributeMapper.class);
//FileInputFormat.addInputPath(job, new Path(otherArgs[0])); //expecting a folder instead of a file
if(JOB_RUNNING_LOCAL)
FileOutputFormat.setOutputPath(job, new Path(args[2]));
else
FileOutputFormat.setOutputPath(job, new Path(OUTPUT_PATH));
job.setPartitionerClass(KeyPartitioner.class);
job.setGroupingComparatorClass(TextPair.FirstComparator.class);
job.setReducerClass(LookupReducer.class);
job.setMapOutputKeyClass(TextPair.class);
job.setMapOutputValueClass(Text.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
return job.waitForCompletion(true) ? 0 : 1;
}
public static void main(String[] args) throws Exception {
int exitCode = ToolRunner.run(new Joiner(), args);
System.exit(exitCode);
}
}
public class Attribute {
public static final String ATT_TAG = "1";
public static class AttributeMapper
extends Mapper<LongWritable, Text, TextPair, Text>{
private static Text values = new Text();
//private Object output = new Text();
@Override
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
//partition the input line by the separator semicolon
String[] attributes = value.toString().split(";");
String valuesInString = "";
if(attributes.length != 5)
System.err.println("Input column number not correct. Expected 5, provided " + attributes.length
+ "\n" + "Check the input file");
if(attributes.length == 5)
{
//setting the values with the input values read above
valuesInString = attributes[1] + ";" + attributes[2] + ";" + attributes[3] + ";" + attributes[4];
values.set(valuesInString);
//writing out the key and value pair
context.write( new TextPair(new Text(String.valueOf(attributes[0])), new Text(ATT_TAG)), values);
}
}
}
}
public class CDR {
public static final String CDR_TAG = "0";
public static class CDRMapper
extends Mapper<LongWritable, Text, TextPair, Text>{
private static Text values = new Text();
private Object output = new Text();
@Override
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
//partition the input line by the separator semicolon
String[] cdr = value.toString().split(";");
//setting the values with the input values read above
values.set(cdr[1]);
//output = CDR_TAG + cdr[1];
//writing out the key and value pair
context.write( new TextPair(new Text(String.valueOf(cdr[0])), new Text(CDR_TAG)), values);
}
}
}
I would be glad if anyone could at least post a link to a tutorial or a simple example where such join functionality is implemented. I searched a lot, but either the code was not complete or there was not enough explanation.
To be honest, I have no idea what your code is trying to do, but that's probably because I'd do it a different way and I'm not familiar with the APIs you're using.
I would start from scratch as follows:
Create a mapper to read one of the files. For each line read, write a key value pair to the context. The key is a Text created from the key and the value is another Text created by concatenating a "1" with the entire input record.
Create another mapper for the other file. This mapper acts just like the first mapper, but the value is a Text created by concatenating a "2" with the entire input record.
Write a reducer to do the join. The reduce() method will get all records written for a specific key. You can tell which input file (and therefore the data format for the record) by looking to see whether the value starts with a "1" or a "2". Once you know whether or not you have one, the other or both record types, you can write whatever logic you need to merge the data from the one or two records.
By the way, you use the MultipleInputs class to configure more than one mapper in your job/driver class.
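A minimal sketch of that approach follows. The class names and the join output format are illustrative, the semicolon delimiter matches the sample records in the question, and each class would normally live in its own file (or be a nested static class in the driver):
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Tags each record from the first file (key;value) with "1".
public class FileOneMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        String joinKey = line.split(";", 2)[0];
        context.write(new Text(joinKey), new Text("1" + line));
    }
}

// Tags each record from the second file (key;value1;value2;value3;value4) with "2".
public class FileTwoMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        String joinKey = line.split(";", 2)[0];
        context.write(new Text(joinKey), new Text("2" + line));
    }
}

// Buffers both sides for a key and emits every pairing (a simple inner join).
public class JoinReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        List<String> fileOneRecords = new ArrayList<>();
        List<String> fileTwoRecords = new ArrayList<>();
        for (Text value : values) {
            String tagged = value.toString();
            if (tagged.startsWith("1")) {
                fileOneRecords.add(tagged.substring(1));
            } else {
                fileTwoRecords.add(tagged.substring(1));
            }
        }
        for (String left : fileOneRecords) {
            for (String right : fileTwoRecords) {
                context.write(key, new Text(left + ";" + right));
            }
        }
    }
}
In the driver, both mappers are registered with MultipleInputs.addInputPath(job, path, TextInputFormat.class, FileOneMapper.class) and likewise for FileTwoMapper, with Text as both the map output key and value class, much like the question's Joiner class already does.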
