Global variable's value doesn't change after for Loop - java

I am developing a Hadoop project. I want to find the customers of a certain day and then write those with the max consumption on that day. In my reducer class, for some reason, the global variable max doesn't change its value after the for loop.
EDIT: I want to find the customers with the max consumption on a certain day. I have managed to find the customers on the date I want, but I am facing a problem in my Reducer class. Here is the code:
EDIT #2: I already know that the values (consumption) are natural numbers. So in my output file I want only the customers, of a certain day, with the max consumption.
EDIT #3: My input file consists of many rows of data. It has three comma-separated columns: the customer's id, the timestamp (yyyy-MM-dd HH:mm:ss) and the consumption.
Driver class
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class alicanteDriver {
public static void main(String[] args) throws Exception {
long t_start = System.currentTimeMillis();
long t_end;
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "Alicante");
job.setJarByClass(alicanteDriver.class);
job.setMapperClass(alicanteMapperC.class);
//job.setCombinerClass(alicanteCombiner.class);
job.setPartitionerClass(alicantePartitioner.class);
job.setNumReduceTasks(2);
job.setReducerClass(alicanteReducerC.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
FileInputFormat.addInputPath(job, new Path("/alicante_1y.txt"));
FileOutputFormat.setOutputPath(job, new Path("/alicante_output"));
job.waitForCompletion(true);
t_end = System.currentTimeMillis();
System.out.println((t_end-t_start)/1000);
}
}
Mapper class
import java.io.IOException;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
public class alicanteMapperC extends
Mapper<LongWritable, Text, Text, IntWritable> {
String Customer = new String();
SimpleDateFormat ft = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
Date t = new Date();
IntWritable Consumption = new IntWritable();
int counter = 0;
// new vars
int max = 0;
@Override
public void map(LongWritable key, Text value, Context context)
throws IOException, InterruptedException {
Date d2 = null;
try {
d2 = ft.parse("2013-07-01 01:00:00");
} catch (ParseException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
if (counter > 0) {
String line = value.toString();
StringTokenizer itr = new StringTokenizer(line, ",");
while (itr.hasMoreTokens()) {
Customer = itr.nextToken();
try {
t = ft.parse(itr.nextToken());
} catch (ParseException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
Consumption.set(Integer.parseInt(itr.nextToken()));
//sort out as many values as possible
if(Consumption.get() > max) {
max = Consumption.get();
}
//find customers in a certain date
if (t.compareTo(d2) == 0 && Consumption.get() == max) {
context.write(new Text(Customer), Consumption);
}
}
}
counter++;
}
}
Reducer class
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import com.google.common.collect.Iterables;
public class alicanteReducerC extends
Reducer<Text, IntWritable, Text, IntWritable> {
public void reduce(Text key, Iterable<IntWritable> values, Context context)
throws IOException, InterruptedException {
int max = 0; //this var
// declaration of Lists
List<Text> l1 = new ArrayList<Text>();
List<IntWritable> l2 = new ArrayList<IntWritable>();
for (IntWritable val : values) {
if (val.get() > max) {
max = val.get();
}
l1.add(key);
l2.add(val);
}
for (int i = 0; i < l1.size(); i++) {
if (l2.get(i).get() == max) {
context.write(key, new IntWritable(max));
}
}
}
}
Some values of the Input file
C11FA586148,2013-07-01 01:00:00,3
C11FA586152,2015-09-01 15:22:22,3
C11FA586168,2015-02-01 15:22:22,1
C11FA586258,2013-07-01 01:00:00,5
C11FA586413,2013-07-01 01:00:00,5
C11UA487446,2013-09-01 15:22:22,3
C11UA487446,2013-07-01 01:00:00,3
C11FA586148,2013-07-01 01:00:00,4
Output should be
C11FA586258 5
C11FA586413 5
I have searched the forums for a couple of hours, and still can't find the issue. Any ideas?
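For clarity, here is the same selection prototyped in plain Java (no Hadoop), just to pin down the behavior I expect; the helper name maxForDate is mine, not part of the job:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MaxConsumption {
    // Returns "customer consumption" lines for rows whose timestamp equals
    // the filter date and whose consumption equals the maximum on that date.
    static List<String> maxForDate(List<String> rows, String date) {
        Map<String, Integer> matching = new LinkedHashMap<>();
        int max = 0; // consumptions are natural numbers, so 0 is a safe seed here
        for (String row : rows) {
            String[] f = row.split(",");
            if (!f[1].equals(date)) continue; // date filter
            int consumption = Integer.parseInt(f[2]);
            // keep the larger value per customer
            matching.merge(f[0], consumption, Math::max);
            if (consumption > max) max = consumption;
        }
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, Integer> e : matching.entrySet())
            if (e.getValue() == max)
                out.add(e.getKey() + " " + e.getValue());
        return out;
    }
}
```

With the sample rows above and the date "2013-07-01 01:00:00" this yields the two expected lines, "C11FA586258 5" and "C11FA586413 5".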

Here is the refactored code: you can pass/change the specific value for the date of consumption, so in this case you don't need the reducer to do the filtering. My first answer queried the max consumption from the input; this answer queries a user-provided date from the input. The setup method gets the user-provided value of mapper.maxConsumption.date and passes it to the map method. The cleanup method in the reducer scans all max-consumption customers and writes the final max (i.e., 5 in this case) - see the execution log for details:
run as:
hadoop jar maxConsumption.jar -Dmapper.maxConsumption.date="2013-07-01 01:00:00" Data/input.txt output/maxConsupmtion5
#input:
C11FA586148,2013-07-01 01:00:00,3
C11FA586152,2015-09-01 15:22:22,3
C11FA586168,2015-02-01 15:22:22,1
C11FA586258,2013-07-01 01:00:00,5
C11FA586413,2013-07-01 01:00:00,5
C11UA487446,2013-09-01 15:22:22,3
C11UA487446,2013-07-01 01:00:00,3
C11FA586148,2013-07-01 01:00:00,4
#output:
C11FA586258 5
C11FA586413 5
public class maxConsumption extends Configured implements Tool{
public static class DataMapper extends Mapper<Object, Text, Text, IntWritable> {
SimpleDateFormat ft = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
Date dateInFile, filterDate;
int lineno=0;
private final static Text customer = new Text();
private final static IntWritable consumption = new IntWritable();
private final static Text maxConsumptionDate = new Text();
public void setup(Context context) {
Configuration config = context.getConfiguration();
maxConsumptionDate.set(config.get("mapper.maxConsumption.date"));
}
public void map(Object key, Text value, Context context) throws IOException, InterruptedException{
try{
lineno++;
filterDate = ft.parse(maxConsumptionDate.toString());
//map data from line/file
String[] fields = value.toString().split(",");
customer.set(fields[0].trim());
dateInFile = ft.parse(fields[1].trim());
consumption.set(Integer.parseInt(fields[2].trim()));
if(dateInFile.equals(filterDate)) //only send to reducer if date filter matches....
context.write(new Text(customer), consumption);
}catch(Exception e){
System.err.println("Invalid data at line: " + lineno + " Error: " + e.getMessage());
}
}
}
public static class DataReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
LinkedHashMap<String, Integer> maxConsumption = new LinkedHashMap<String,Integer>();
@Override
public void reduce(Text key, Iterable<IntWritable> values, Context context)
throws IOException, InterruptedException {
int max=0;
System.out.print("reducer received: " + key + " [ ");
for(IntWritable value: values){
System.out.print( value.get() + " ");
if(value.get() > max)
max=value.get();
}
System.out.println( " ]");
System.out.println(key.toString() + " max is " + max);
maxConsumption.put(key.toString(), max);
}
@Override
protected void cleanup(Context context)
throws IOException, InterruptedException {
int max=0;
//first find the max from reducer
for (String key : maxConsumption.keySet()){
System.out.println("cleanup customer: " + key.toString() + " consumption: " + maxConsumption.get(key)
+ " max: " + max);
if(maxConsumption.get(key) > max)
max=maxConsumption.get(key);
}
System.out.println("final max is: " + max);
//write only the max value from map
for (String key : maxConsumption.keySet()){
if(maxConsumption.get(key) == max)
context.write(new Text(key), new IntWritable(maxConsumption.get(key)));
}
}
}
public static void main(String[] args) throws Exception {
int res = ToolRunner.run(new Configuration(), new maxConsumption(), args);
System.exit(res);
}
public int run(String[] args) throws Exception {
if (args.length != 2) {
System.err.println("Usage: -Dmapper.maxConsumption.date=\"2013-07-01 01:00:00\" <in> <out>");
System.exit(2);
}
Configuration conf = this.getConf();
Job job = Job.getInstance(conf, "get-max-consumption");
job.setJarByClass(maxConsumption.class);
job.setMapperClass(DataMapper.class);
job.setReducerClass(DataReducer.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
FileSystem fs = null;
Path dstFilePath = new Path(args[1]);
try {
fs = dstFilePath.getFileSystem(conf);
if (fs.exists(dstFilePath))
fs.delete(dstFilePath, true);
} catch (IOException e1) {
e1.printStackTrace();
}
return job.waitForCompletion(true) ? 0 : 1;
}
}

Probably all the values that reach your reducer are below 0. Try the minimum integer value to see whether your variable changes:
int max = Integer.MIN_VALUE;
Based on what you describe, the output should be either only 0s (the max value in the reducers stays 0) or no output at all (every value is less than 0). Also, look at this:
context.write(key, new IntWritable());
it should be
context.write(key, new IntWritable(max));
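To see why the seed matters, here is a minimal plain-Java check (illustrative only, not Hadoop code; the helper name maxFrom is mine): with only negative inputs, a max seeded with 0 never moves, while one seeded with Integer.MIN_VALUE tracks the true maximum.

```java
public class MaxSeed {
    // Maximum of the values, starting from the given seed.
    static int maxFrom(int seed, int[] values) {
        int max = seed;
        for (int v : values) {
            if (v > max) max = v; // same comparison the reducer performs
        }
        return max;
    }
}
```

maxFrom(0, new int[]{-3, -1, -7}) stays at 0, while maxFrom(Integer.MIN_VALUE, new int[]{-3, -1, -7}) returns -1.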
EDIT: I just saw your Mapper class; it has a lot of problems. The following code skips the first line in every mapper. Why?
if (counter > 0) {
I guess you are getting something like this, right? "customer, 2013-07-01 01:00:00, 2,..." If that is the case and you are already filtering values, you should declare your max variable as local, not at mapper scope, because at mapper scope it affects multiple customers.
There are a lot of open questions here; you could explain the input each mapper receives and what you want to do.
EDIT2: Based on your answer I would try this
import java.io.IOException;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
public class AlicanteMapperC extends Mapper<LongWritable, Text, Text, IntWritable> {
private final int max = 5;
private SimpleDateFormat ft = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
@Override
public void map(LongWritable key, Text value, Context context)
throws IOException, InterruptedException {
Date t = null;
Date filterDate = null;
String[] line = value.toString().split(",");
String customer = line[0];
try {
t = ft.parse(line[1]);
filterDate = ft.parse("2013-07-01 01:00:00");
} catch (ParseException e) {
throw new RuntimeException("something wrong with the date! " + line[1], e);
}
Integer consumption = Integer.parseInt(line[2]);
//find customers in a certain date
if (t.compareTo(filterDate) == 0 && consumption == max) {
context.write(new Text(customer), new IntWritable(consumption));
}
}
}
and the reducer is pretty simple - just emit 1 record per customer:
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import com.google.common.collect.Iterables;
public class AlicanteReducerC extends
Reducer<Text, IntWritable, Text, IntWritable> {
public void reduce(Text key, Iterable<IntWritable> values, Context context)
throws IOException, InterruptedException {
//We already know that it is 5
context.write(key, new IntWritable(5));
//If you want something different, for example to report customers with different values, you could iterate over the values like this
//for (IntWritable val : values) {
// context.write(key, val);
//}
}
}

Related

I am having trouble chaining two mapreduce jobs

The First map-reduce is
map ( key, line ):
read 2 long integers from the line into the variables key2 and value2
emit (key2,value2)
reduce ( key, nodes ):
count = 0
for n in nodes
count++
emit(key,count)
The second Map-Reduce is:
map ( node, count ):
emit(count,1)
reduce ( key, values ):
sum = 0
for v in values
sum += v
emit(key,sum)
The code i wrote for this is:
import java.io.IOException;
import java.util.Scanner;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.*;
import org.apache.hadoop.mapreduce.lib.output.*;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
public class Graph extends Configured implements Tool{
@Override
public int run( String[] args ) throws Exception {
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "job1");
job.setJobName("job1");
job.setJarByClass(Graph.class);
job.setOutputKeyClass(IntWritable.class);
job.setOutputValueClass(IntWritable.class);
job.setMapOutputKeyClass(IntWritable.class);
job.setMapOutputValueClass(IntWritable.class);
job.setMapperClass(MyMapper.class);
job.setReducerClass(MyReducer.class);
job.setInputFormatClass(TextInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
FileInputFormat.setInputPaths(job,new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path("job1"));
job.waitForCompletion(true);
Job job2 = Job.getInstance(conf, "job2");
job2.setJobName("job2");
job2.setOutputKeyClass(IntWritable.class);
job2.setOutputValueClass(IntWritable.class);
job2.setMapOutputKeyClass(IntWritable.class);
job2.setMapOutputValueClass(IntWritable.class);
job2.setMapperClass(MyMapper1.class);
job2.setReducerClass(MyReducer1.class);
job2.setInputFormatClass(TextInputFormat.class);
job2.setOutputFormatClass(TextOutputFormat.class);
FileInputFormat.setInputPaths(job2,new Path("job1"));
FileOutputFormat.setOutputPath(job2,new Path(args[1]));
job2.waitForCompletion(true);
return 0;
}
public static void main ( String[] args ) throws Exception {
ToolRunner.run(new Configuration(),new Graph(),args);
}
public static class MyMapper extends Mapper<Object,Text,IntWritable,IntWritable> {
@Override
public void map ( Object key, Text value, Context context )
throws IOException, InterruptedException {
Scanner s = new Scanner(value.toString()).useDelimiter(",");
int key2 = s.nextInt();
int value2 = s.nextInt();
context.write(new IntWritable(key2),new IntWritable(value2));
s.close();
}
}
public static class MyReducer extends Reducer<IntWritable,IntWritable,IntWritable,IntWritable> {
@Override
public void reduce ( IntWritable key, Iterable<IntWritable> values, Context context )
throws IOException, InterruptedException {
int count = 0;
for (IntWritable v: values) {
count++;
};
context.write(key,new IntWritable(count));
}
}
public static class MyMapper1 extends Mapper<IntWritable, IntWritable,IntWritable,IntWritable >{
@Override
public void map(IntWritable node, IntWritable count, Context context )
throws IOException, InterruptedException {
context.write(count, new IntWritable(1));
}
}
public static class MyReducer1 extends Reducer<IntWritable,IntWritable,IntWritable,IntWritable> {
@Override
public void reduce ( IntWritable key, Iterable<IntWritable> values, Context context )
throws IOException, InterruptedException {
int sum = 0;
for (IntWritable v: values) {
sum += v.get();
};
context.write(key,new IntWritable(sum));
//System.out.println("job 2"+sum);
}
}
}
I have tried to implement that pseudocode; args[0] is the input and args[1] is the output. When I run the code, I get the output of job1 and not that of job2.
What seems to be wrong?
I think I am not passing the output of job1 to job2 properly.
Instead of job1 in
FileOutputFormat.setOutputPath(job, new Path("job1"));
use this:
String temporary = "home/xxx/...."; //store the intermediate result here
FileOutputFormat.setOutputPath(job, new Path(temporary));

MAPREDUCE error: method write in interface TaskInputOutputContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT> cannot be applied to given types

package br.edu.ufam.anibrata;
import java.io.*;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.StringTokenizer;
import java.util.Arrays;
import java.util.HashSet;
import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Partitioner;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.MapFileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.apache.log4j.Logger;
import org.kohsuke.args4j.CmdLineException;
import org.kohsuke.args4j.CmdLineParser;
import org.kohsuke.args4j.Option;
import org.kohsuke.args4j.ParserProperties;
import tl.lin.data.array.ArrayListWritable;
import tl.lin.data.pair.PairOfStringInt;
import tl.lin.data.pair.PairOfWritables;
import br.edu.ufam.data.Dataset;
import com.google.gson.JsonSyntaxException;
public class BuildIndexWebTables extends Configured implements Tool {
private static final Logger LOG = Logger.getLogger(BuildIndexWebTables.class);
public static void main(String[] args) throws Exception
{
ToolRunner.run(new BuildIndexWebTables(), args);
}
@Override
public int run(String[] argv) throws Exception {
// Creates a new job configuration for this Hadoop job.
Args args = new Args();
CmdLineParser parser = new CmdLineParser(args, ParserProperties.defaults().withUsageWidth(100));
try
{
parser.parseArgument(argv);
}
catch (CmdLineException e)
{
System.err.println(e.getMessage());
parser.printUsage(System.err);
return -1;
}
Configuration conf = getConf();
conf.setBoolean("mapreduce.map.output.compress", true);
conf.setBoolean("mapreduce.map.output.compress", true);
conf.set("mapreduce.map.failures.maxpercent", "10");
conf.set("mapreduce.max.map.failures.percent", "10");
conf.set("mapred.max.map.failures.percent", "10");
conf.set("mapred.map.failures.maxpercent", "10");
conf.setBoolean("mapred.compress.map.output", true);
conf.set("mapred.map.output.compression.codec", "org.apache.hadoop.io.compress.SnappyCodec");
conf.setBoolean("mapreduce.map.output.compress", true);
/*String inputPrefixes = args[0];
String outputFile = args[1];*/
Job job = Job.getInstance(conf);
/*FileInputFormat.addInputPath(job, new Path(inputPrefixes));
FileOutputFormat.setOutputPath(job, new Path(outputFile));*/
FileInputFormat.setInputPaths(job, new Path(args.input));
FileOutputFormat.setOutputPath(job, new Path(args.output));
FileOutputFormat.setCompressOutput(job, true);
FileOutputFormat.setOutputCompressorClass(job,org.apache.hadoop.io.compress.GzipCodec.class);
job.setMapperClass(BuildIndexWebTablesMapper.class);
job.setReducerClass(BuildIndexWebTablesReducer.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(PairOfWritables.class);
//job.setOutputFormatClass(MapFileOutputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
/*job.setOutputFormatClass(TextOutputFormat.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);
job.setOutputKeyClass(NullWritable.class);
job.setOutputValueClass(Text.class);*/
job.setJarByClass(BuildIndexWebTables.class);
job.setNumReduceTasks(args.numReducers);
//job.setNumReduceTasks(500);
FileInputFormat.setInputPaths(job, new Path(args.input));
FileOutputFormat.setOutputPath(job, new Path(args.output));
System.out.println(Arrays.deepToString(FileInputFormat.getInputPaths(job)));
// Delete the output directory if it exists already.
Path outputDir = new Path(args.output);
FileSystem.get(getConf()).delete(outputDir, true);
long startTime = System.currentTimeMillis();
job.waitForCompletion(true);
System.out.println("Job Finished in " + (System.currentTimeMillis() - startTime) / 1000.0 + " seconds");
return 0;
}
private BuildIndexWebTables() {}
public static class Args
{
@Option(name = "-input", metaVar = "[path]", required = true, usage = "input path")
public String input;
@Option(name = "-output", metaVar = "[path]", required = true, usage = "output path")
public String output;
@Option(name = "-reducers", metaVar = "[num]", required = false, usage = "number of reducers")
public int numReducers = 1;
}
public static class BuildIndexWebTablesMapper extends Mapper<LongWritable, Text, Text, Text> {
//public static final Log log = LogFactory.getLog(BuildIndexWebTablesMapper.class);
private static final Text WORD = new Text();
private static final Text OPVAL = new Text();
@Override
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
// Log to stdout file
System.out.println("Map key : TEST");
//log to the syslog file
//log.info("Map key "+ key);
/*if(log.isDebugEanbled()){
log.debug("Map key "+ key);
}*/
Dataset ds;
String pgTitle; // Table page title
List<String> tokens = new ArrayList<String>(); // terms for frequency and other data
ds = Dataset.fromJson(value.toString()); // Get all text values from the json corpus
String[][] rel = ds.getRelation(); // Extract relation from the first json
int numCols = rel.length; // Number of columns in the relation
String[] attributes = new String[numCols]; // To store attributes for the relation
for (int j = 0; j < numCols; j++) { // Attributes of the relation
attributes[j] = rel[j][0];
}
int numRows = rel[0].length; //Number of rows of the relation
//dsTabNum = ds.getTableNum(); // Gets the table number from json
// Reads terms from relation and stores in tokens
for (int i = 0; i < numRows; i++ ){
for (int j = 0; j < numCols; j++ ){
String w = rel[i][j].toLowerCase().replaceAll("(^[^a-z]+|[^a-z]+$)", "");
if (w.length() == 0)
continue;
else {
w = w + "|" + pgTitle + "." + j + "|" + i; // Concatenate the term/PageTitle.Column number/row number in term
tokens.add(w);
}
}
}
// Emit postings.
for (String token : tokens){
String[] tokenPart = token.split("|", -2); // Split based on "|", -2(any negative) to split multiple times.
String newkey = tokenPart[0] + "|" + tokenPart[1];
WORD.set(newkey); // Emit term as key
//String valstr = Arrays.toString(Arrays.copyOfRange(tokenPart, 2, tokenPart.length)); // Emit rest of the string as value
String valstr = tokenPart[2];
OPVAL.set(valstr);
context.write(WORD,OPVAL);
}
}
}
public static class BuildIndexWebTablesReducer extends Reducer<Text, Text, Text, Text> {
private static final Text TERM = new Text();
private static final IntWritable TF = new IntWritable();
private String PrevTerm = null;
private int termFrequency = 0;
@Override
protected void reduce(Text key, Iterable<Text> textval, Context context) throws IOException, InterruptedException {
Iterator<Text> iter = textval.iterator();
IntWritable tnum = new IntWritable();
ArrayListWritable<IntWritable> postings = new ArrayListWritable<IntWritable>();
PairOfStringInt relColInfo = new PairOfStringInt();
PairOfWritables keyVal = new PairOfWritables<PairOfStringInt, ArrayListWritable<IntWritable>>();
if((!key.toString().equals(PrevTerm)) && (PrevTerm != null)) {
String[] parseKey = PrevTerm.split("|", -2);
TERM.set(parseKey[0]);
relColInfo.set(parseKey[1],termFrequency);
keyVal.set(relColInfo, postings);
context.write(TERM, keyVal);
termFrequency = 0;
postings.clear();
}
PrevTerm = key.toString();
while (iter.hasNext()) {
int tupleset = Integer.parseInt(iter.next().toString());
tnum.set(tupleset);
postings.add(tnum);
termFrequency++;
}
}
}
}
I am getting the below mentioned error while compilation.
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-compiler-plugin:2.3.2:compile
(default-compile) on project projeto-final: Compilation failure
[ERROR]
/home/cloudera/topicosBD-pis/topicosBD-pis/projeto-final/src/main/java/br/edu/ufam/anibrata/BuildIndexWebTables.java:[278,11]
error: method write in interface
TaskInputOutputContext cannot be
applied to given types;
The exact line where this occurs is "context.write(TERM, keyVal);". This code has some dependencies that are based on my local machine, though. I am stuck at this error since I am not getting any idea about it anywhere. I would appreciate it if someone could help me understand the origin of the issue and how it can be tackled. I am pretty new to hadoop / mapreduce.
I have tried toggling the output format class between job.setOutputFormatClass(MapFileOutputFormat.class); and job.setOutputFormatClass(TextOutputFormat.class); both of them throw the same error. I am using "mvn clean package" to compile.
Any help is appreciated very much.
Thanks in advance.
As I can see, you are trying to write to the context a key (TERM) of type Text and a value (keyVal) of type PairOfWritables, but your reducer class extends Reducer with VALUEOUT (the last type parameter) of type Text. You should change VALUEOUT to the proper type.
In your case:
public static class BuildIndexWebTablesReducer extends Reducer<Text, Text, Text, PairOfWritables>

hadoop mapreduce to generate substrings of different lengths

Using Hadoop MapReduce I am writing code to get substrings of different lengths. For example, given the string "ZYXCBA" and length 3 (using a text file I give the input as "3 ZYXCBA"), my code has to return all possible substrings of length 3 ("ZYX","YXC","XCB","CBA"), length 4 ("ZYXC","YXCB","XCBA") and finally length 5 ("ZYXCB","YXCBA").
In map phase I did the following:
key = length of substrings I want
value = "ZYXCBA".
So mapper output is
3,"ZYXCBA"
4,"ZYXCBA"
5,"ZYXCBA"
In reduce I take the string ("ZYXCBA") and key 3 to get all substrings of length 3. The same occurs for 4 and 5. The results are concatenated into a string, so the output of reduce should be:
3 "ZYX YXC XCB CBA"
4 "ZYXC YXCB XCBA"
5 "ZYXCB YXCBA"
I am running my code with the following command:
hduser#Ganesh:~/Documents$ hadoop jar Saishingles.jar hadoopshingles.Saishingles Behara/Shingles/input Behara/Shingles/output
My code is as shown below:
package hadoopshingles;
import java.io.IOException;
//import java.util.ArrayList;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
public class Saishingles{
public static class shinglesmapper extends Mapper<Object, Text, IntWritable, Text>{
public void map(Object key, Text value, Context context
) throws IOException, InterruptedException {
String str = new String(value.toString());
String[] list = str.split(" ");
int x = Integer.parseInt(list[0]);
String val = list[1];
int M = val.length();
int X = M-1;
for(int z = x; z <= X; z++)
{
context.write(new IntWritable(z), new Text(val));
}
}
}
public static class shinglesreducer extends Reducer<IntWritable,Text,IntWritable,Text> {
public void reduce(IntWritable key, Text value, Context context
) throws IOException, InterruptedException {
int z = key.get();
String str = new String(value.toString());
int M = str.length();
int Tz = M - z;
String newvalue = "";
for(int position = 0; position <= Tz; position++)
{
newvalue = newvalue + " " + str.substring(position,position + z);
}
context.write(new IntWritable(z),new Text(newvalue));
}
}
public static void main(String[] args) throws Exception {
GenericOptionsParser parser = new GenericOptionsParser(args);
Configuration conf = parser.getConfiguration();
String[] otherArgs = parser.getRemainingArgs();
if (otherArgs.length != 2)
{
System.err.println("Usage: Saishingles <inputFile> <outputDir>");
System.exit(2);
}
Job job = Job.getInstance(conf, "Saishingles");
job.setJarByClass(hadoopshingles.Saishingles.class);
job.setMapperClass(shinglesmapper.class);
//job.setCombinerClass(shinglesreducer.class);
job.setReducerClass(shinglesreducer.class);
//job.setMapOutputKeyClass(IntWritable.class);
//job.setMapOutputValueClass(Text.class);
job.setOutputKeyClass(IntWritable.class);
job.setOutputValueClass(Text.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}
Output of reduce instead of returning
3 "ZYX YXC XCB CBA"
4 "ZYXC YXCB XCBA"
5 "ZYXCB YXCBA"
it's returning
3 "ZYXCBA"
4 "ZYXCBA"
5 "ZYXCBA"
i.e., it's giving the same output as the mapper. I don't know why this is happening. Please help me resolve this; thanks in advance for helping.
You can achieve this without even running a reducer; your map/reduce logic is wrong - the transformation should be done in the Mapper.
Reduce - In this phase the reduce(WritableComparable, Iterator, OutputCollector, Reporter) method is called for each <key, (list of values)> pair in the grouped inputs.
in your reduce signature: public void reduce(IntWritable key, Text value, Context context)
should be public void reduce(IntWritable key, Iterable<Text> values, Context context)
Also, change the last line of the reduce method from context.write(new IntWritable(z),new Text(newvalue)); to context.write(key,new Text(newvalue)); - you already have the IntWritable key from the mapper, so I wouldn't create a new one.
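The substring loop itself can be checked in plain Java, independent of Hadoop (a minimal sketch; the helper name substringsOfLength is illustrative, not part of the job):

```java
import java.util.ArrayList;
import java.util.List;

public class Shingles {
    // All contiguous substrings of the given length, in left-to-right order.
    static List<String> substringsOfLength(String s, int len) {
        List<String> out = new ArrayList<>();
        // stop when the window would run past the end of the string
        for (int i = 0; i + len <= s.length(); i++) {
            out.add(s.substring(i, i + len));
        }
        return out;
    }
}
```

For "ZYXCBA" and length 3 this produces ZYX, YXC, XCB, CBA; for length 5 it produces ZYXCB, YXCBA, matching the expected reducer output above.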
with given input:
3 "ZYXCBA"
4 "ZYXCBA"
5 "ZYXCBA"
Mapper job will output:
3 "XCB YXC ZYX"
4 "XCBA YXCB ZYXC"
5 "YXCBA ZYXCB"
MapReduceJob:
import java.io.IOException;
import java.util.ArrayList;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.Reducer.Context;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class SubStrings{
public static class SubStringsMapper extends Mapper<Object, Text, IntWritable, Text> {
@Override
public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
String [] values = value.toString().split(" ");
int len = Integer.parseInt(values[0].trim());
String str = values[1].replaceAll("\"", "").trim();
int endindex=len;
for(int i = 0; i < len; i++)
{
endindex=i+len;
if(endindex <= str.length())
context.write(new IntWritable(len), new Text(str.substring(i, endindex)));
}
}
}
public static class SubStringsReducer extends Reducer<IntWritable, Text, IntWritable, Text> {
public void reduce(IntWritable key, Iterable<Text> values, Context context)
throws IOException, InterruptedException {
String str="\""; //adding starting quotes
for(Text value: values)
str += " " + value;
str=str.replace("\" ", "\"") + "\""; //adding ending quotes
context.write(key, new Text(str));
}
}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "get-possible-strings-by-length");
job.setJarByClass(SubStrings.class);
job.setMapperClass(SubStringsMapper.class);
job.setReducerClass(SubStringsReducer.class);
job.setMapOutputKeyClass(IntWritable.class);
job.setMapOutputValueClass(Text.class);
job.setOutputKeyClass(IntWritable.class);
job.setOutputValueClass(Text.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
FileSystem fs = null;
Path dstFilePath = new Path(args[1]);
try {
fs = dstFilePath.getFileSystem(conf);
if (fs.exists(dstFilePath))
fs.delete(dstFilePath, true);
} catch (IOException e1) {
e1.printStackTrace();
}
job.waitForCompletion(true);
}
}
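As a plain-Java sanity check of the mapper's loop (outside Hadoop), the substring enumeration can be sketched as follows; the class and method names here are illustrative, not part of the job above:

```java
import java.util.ArrayList;
import java.util.List;

public class SubStringDemo {

    // Emits the first `len` substrings of length `len`, mirroring the mapper's loop:
    // for i in [0, len), take str.substring(i, i + len) when it fits.
    static List<String> subStrings(String str, int len) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < len; i++) {
            int end = i + len;
            if (end <= str.length())
                out.add(str.substring(i, end));
        }
        return out;
    }

    public static void main(String[] args) {
        // For input "ZYXCBA" and len 3, the mapper emits ZYX, YXC, XCB
        System.out.println(subStrings("ZYXCBA", 3));
    }
}
```

This matches the sample output shown above: for length 3 the emitted values are XCB, YXC, ZYX (in some order), since only the first `len` starting positions are considered.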

When writing to context in the Reducer.reduce method, why is the toString method invoked and not the write method?

I'm writing a map-reduce batch job, which consists of 3-4 chained jobs. In the second job I'm using a custom class as the output value class when writing to context via context.write().
When studying the behavior of the code, I noticed that the toString method of this custom class is invoked, rather than the write method. Why does this happen if the class implements the Writable interface and I implemented the write method?
The custom class's code:
import org.apache.hadoop.io.Writable;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
public class WritableLongPair implements Writable {
private long l1;
private long l2;
public WritableLongPair() {
l1 = 0;
l2 = 0;
}
public WritableLongPair(long l1, long l2) {
this.l1 = l1;
this.l2 = l2;
}
@Override
public void write(DataOutput dataOutput) throws IOException {
dataOutput.writeLong(l1);
dataOutput.writeLong(l2);
}
@Override
public void readFields(DataInput dataInput) throws IOException {
l1 = dataInput.readLong();
l2 = dataInput.readLong();
}
@Override
public String toString() {
return l1 + " " + l2;
}
}
The second job's code:
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Partitioner;
import org.apache.hadoop.mapreduce.Reducer;
import java.io.IOException;
public class Phase2 {
private static final int ASCII_OFFSET = 97;
public static class Mapper2
extends Mapper<Object, Text, Text, LongWritable>{
@Override
public void map(Object key, Text value, Context context
) throws IOException, InterruptedException {
String[] valueAsStrings = value.toString().split("\t");
String actualKey = valueAsStrings[0];
LongWritable actualValue = new LongWritable(Long.parseLong(valueAsStrings[1]));
String[] components = actualKey.split("[$]");
if (!components[1].equals("*")) {
context.write(new Text(components[1] + "$" + components[0]), actualValue);
context.write(new Text(components[1] + "$*"), actualValue);
}
context.write(new Text(actualKey), actualValue);
}
}
public static class Partitioner2 extends Partitioner<Text, LongWritable> {
@Override
public int getPartition(Text text, LongWritable longWritable, int i) {
return (int)(text.toString().charAt(0)) - ASCII_OFFSET;
}
}
public static class Reducer2
extends Reducer<Text, LongWritable, Text, WritableLongPair> {
private Text currentKey;
private long sum;
@Override
public void setup(Context context) {
currentKey = new Text();
currentKey.set("");
sum = 0L;
}
private String textContent(String w1, String w2) {
if (w2.equals("*"))
return w1 + "$*";
if (w1.compareTo(w2) < 0)
return w1 + "$" + w2;
else
return w2 + "$" + w1;
}
public void reduce(Text key, Iterable<LongWritable> counts,
Context context
) throws IOException, InterruptedException {
long sumPair = 0L;
String[] components = key.toString().split("[$]");
for (LongWritable count : counts) {
if (currentKey.equals(components[0])) {
if (components[1].equals("*"))
sum += count.get();
else
sumPair += count.get();
}
else {
sum = count.get();
currentKey.set(components[0]);
}
}
if (!components[1].equals("*"))
context.write(new Text(textContent(components[0], components[1])), new WritableLongPair(sumPair, sum));
}
}
public static class Comparator2 extends WritableComparator {
@Override
public int compare(WritableComparable o1, WritableComparable o2) {
String[] components1 = o1.toString().split("[$]");
String[] components2 = o2.toString().split("[$]");
if (components1[1].equals("*") && components2[1].equals("*"))
return components1[0].compareTo(components2[0]);
if (components1[1].equals("*")) {
if (components1[0].equals(components2[0]))
return -1;
else
return components1[0].compareTo(components2[0]);
}
if (components2[1].equals("*")) {
if (components1[0].equals(components2[0]))
return 1;
else
return components1[0].compareTo(components2[0]);
}
return components1[0].compareTo(components2[0]);
}
}
}
...and how I define my jobs:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Counter;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class Manager {
public static void main(String[] args) throws Exception {
Configuration conf1 = new Configuration();
if (args.length != 2) {
System.err.println("Usage: Manager <in> <out>");
System.exit(1);
}
Job job1 = Job.getInstance(conf1, "Phase 1");
job1.setJarByClass(Phase1.class);
job1.setMapperClass(Phase1.Mapper1.class);
job1.setPartitionerClass(Phase1.Partitioner1.class);
// job1.setCombinerClass(Phase1.Combiner1.class);
job1.setReducerClass(Phase1.Reducer1.class);
job1.setInputFormatClass(SequenceFileInputFormat.class);
// job1.setOutputFormatClass(FileOutputFormat.class);
job1.setOutputKeyClass(Text.class);
job1.setOutputValueClass(LongWritable.class);
job1.setNumReduceTasks(12);
FileInputFormat.addInputPath(job1, new Path(args[0]));
Path output1 = new Path(args[1]);
FileOutputFormat.setOutputPath(job1, output1);
boolean result = job1.waitForCompletion(true);
Counter counter = job1.getCounters().findCounter("org.apache.hadoop.mapreduce.TaskCounter", "REDUCE_INPUT_RECORDS");
System.out.println("Num of pairs sent to reducers in phase 1: " + counter.getValue());
Configuration conf2 = new Configuration();
Job job2 = Job.getInstance(conf2, "Phase 2");
job2.setJarByClass(Phase2.class);
job2.setMapperClass(Phase2.Mapper2.class);
job2.setPartitionerClass(Phase2.Partitioner2.class);
// job2.setCombinerClass(Phase2.Combiner2.class);
job2.setReducerClass(Phase2.Reducer2.class);
job2.setMapOutputKeyClass(Text.class);
job2.setMapOutputValueClass(LongWritable.class);
job2.setOutputKeyClass(Text.class);
job2.setOutputValueClass(WritableLongPair.class);
job2.setNumReduceTasks(26);
// job2.setGroupingComparatorClass(Phase2.Comparator2.class);
FileInputFormat.addInputPath(job2, output1);
Path output2 = new Path(args[1] + "2");
FileOutputFormat.setOutputPath(job2, output2);
result = job2.waitForCompletion(true);
counter = job2.getCounters().findCounter("org.apache.hadoop.mapreduce.TaskCounter", "REDUCE_INPUT_RECORDS");
System.out.println("Num of pairs sent to reducers in phase 2: " + counter.getValue());
// System.exit(job1.waitForCompletion(true) ? 0 : 1);
}
}
If you use the default output format (TextOutputFormat), Hadoop will call the toString() method on the object when it writes it to disk. This is expected behavior: context.write() is being called, but it's the output format that controls how the data appears on disk.
If you're chaining jobs together, you would typically use SequenceFileOutputFormat and SequenceFileInputFormat for all of the jobs, since that makes reading the output of one job as the input of the next easy.
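A minimal sketch of wiring two chained jobs through sequence files; `conf`, the paths, and the Phase1/Phase2 classes are placeholders for your own:

```java
// Sketch only: Phase1/Phase2 and the paths stand in for your own classes and locations.
Path intermediate = new Path("/tmp/phase1-out");

Job job1 = Job.getInstance(conf, "phase-1");
job1.setJarByClass(Phase1.class);
// Write binary key/value pairs instead of text, so no toString() is involved.
job1.setOutputFormatClass(SequenceFileOutputFormat.class);
SequenceFileOutputFormat.setOutputPath(job1, intermediate);
job1.waitForCompletion(true);

Job job2 = Job.getInstance(conf, "phase-2");
job2.setJarByClass(Phase2.class);
// Read job1's Writable keys/values back directly, no parsing needed.
job2.setInputFormatClass(SequenceFileInputFormat.class);
SequenceFileInputFormat.addInputPath(job2, intermediate);
FileOutputFormat.setOutputPath(job2, new Path("/tmp/phase2-out"));
job2.waitForCompletion(true);
```

With sequence files, job2's mapper receives the same Writable types job1's reducer emitted, so your write()/readFields() implementations are exercised instead of toString().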

Reading file in Java Hadoop

I am trying to follow a Hadoop tutorial on a website. I am trying to implement it in Java. The file that is provided is a file containing data about a forum. I want to parse that file and use the data.
The code to set my configurations is as follows:
public class ForumAnalyser extends Configured implements Tool{
public static void main(String[] args) {
int exitCode = 0;
try {
exitCode = ToolRunner.run(new ForumAnalyser(), args);
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
finally {
System.exit(exitCode);
}
}
@Override
public int run(String[] args) throws Exception {
JobConf conf = new JobConf(ForumAnalyser.class);
setStudentHourPostJob(conf);
JobClient.runJob(conf);
return 0;
}
public static void setStudentHourPostJob(JobConf conf) {
FileInputFormat.setInputPaths(conf, new Path("input2"));
FileOutputFormat.setOutputPath(conf, new Path("output_forum_post"));
conf.setJarByClass(ForumAnalyser.class);
conf.setMapperClass(StudentHourPostMapper.class);
conf.setOutputKeyClass(LongWritable.class);
conf.setMapOutputKeyClass(LongWritable.class);
conf.setReducerClass(StudentHourPostReducer.class);
conf.setOutputValueClass(IntWritable.class);
conf.setMapOutputValueClass(IntWritable.class);
}
}
Each record in the file is separated by a "\n". So in the mapper class, each record is mostly correctly returned. Each column in every record is separated by tabs. The problem occurs for a specific column "posts". This column has the "posts" written by people and hence also contains "\n". So the mapper incorrectly reads a certain line under the "posts" column as a new record. Also, the "posts" column is specifically in double quotes in the file. The question that I have is:
1. How can I tell the mapper to differentiate each record correctly? Can I somehow tell it to read each column by tab? (I know how many columns each record has)?
Thanks in advance for the help.
By default, MapReduce uses TextInputFormat, in which each record is a line of input (it assumes records are delimited by a newline ("\n")).
To achieve your requirement, you need to write your own InputFormat and RecordReader classes. For example, Mahout has an XmlInputFormat for reading an entire XML file as one record. Check the code here: https://github.com/apache/mahout/blob/master/integration/src/main/java/org/apache/mahout/text/wikipedia/XmlInputFormat.java
I took the code for XmlInputFormat and modified it to match your requirement. Here is the code (I call them MultiLineInputFormat and MultiLineRecordReader):
package com.myorg.hadooptests;
import com.google.common.io.Closeables;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DataOutputBuffer;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.IOException;
/**
* Reads records that may span multiple lines, treating newlines inside
* double-quoted fields as part of the record rather than as record delimiters.
*/
public class MultiLineInputFormat extends TextInputFormat {
private static final Logger log = LoggerFactory.getLogger(MultiLineInputFormat.class);
@Override
public RecordReader<LongWritable, Text> createRecordReader(InputSplit split, TaskAttemptContext context) {
try {
return new MultiLineRecordReader((FileSplit) split, context.getConfiguration());
} catch (IOException ioe) {
log.warn("Error while creating MultiLineRecordReader", ioe);
return null;
}
}
/**
* MultiLineRecordReader class to read through a given text document to output records containing multiple
* lines as a single line
*
*/
public static class MultiLineRecordReader extends RecordReader<LongWritable, Text> {
private final long start;
private final long end;
private final FSDataInputStream fsin;
private final DataOutputBuffer buffer = new DataOutputBuffer();
private LongWritable currentKey;
private Text currentValue;
private static final Logger log = LoggerFactory.getLogger(MultiLineRecordReader.class);
public MultiLineRecordReader(FileSplit split, Configuration conf) throws IOException {
// open the file and seek to the start of the split
start = split.getStart();
end = start + split.getLength();
Path file = split.getPath();
FileSystem fs = file.getFileSystem(conf);
fsin = fs.open(split.getPath());
fsin.seek(start);
log.info("start: " + Long.toString(start) + " end: " + Long.toString(end));
}
private boolean next(LongWritable key, Text value) throws IOException {
if (fsin.getPos() < end) {
try {
log.info("Started reading");
if(readUntilEnd()) {
key.set(fsin.getPos());
value.set(buffer.getData(), 0, buffer.getLength());
return true;
}
} finally {
buffer.reset();
}
}
return false;
}
@Override
public void close() throws IOException {
Closeables.closeQuietly(fsin);
}
@Override
public float getProgress() throws IOException {
return (fsin.getPos() - start) / (float) (end - start);
}
private boolean readUntilEnd() throws IOException {
boolean insideColumn = false;
byte[] delimiterBytes = new String("\"").getBytes("utf-8");
byte[] newLineBytes = new String("\n").getBytes("utf-8");
while (true) {
int b = fsin.read();
// end of file:
if (b == -1) return false;
log.info("Read: " + b);
// We encountered a double quote: toggle whether we are inside a quoted column
if(b == delimiterBytes[0])
insideColumn = !insideColumn;
// If we encounter a new line and we are not inside a column, it means end of record.
if(b == newLineBytes[0] && !insideColumn) return true;
// save to buffer:
buffer.write(b);
// see if we've passed the stop point:
if (fsin.getPos() >= end)
return buffer.getLength() > 0; // a record only if the buffer holds some data
}
}
@Override
public LongWritable getCurrentKey() throws IOException, InterruptedException {
return currentKey;
}
@Override
public Text getCurrentValue() throws IOException, InterruptedException {
return currentValue;
}
@Override
public void initialize(InputSplit split, TaskAttemptContext context) throws IOException, InterruptedException {
}
@Override
public boolean nextKeyValue() throws IOException, InterruptedException {
currentKey = new LongWritable();
currentValue = new Text();
return next(currentKey, currentValue);
}
}
}
Logic:
I have assumed that the fields containing new lines ("\n") are delimited by double quotes (").
The record reading logic is in readUntilEnd() method.
In this method, if a new line appears and we are in the middle of reading a field (which is delimited by double quotes), we do not consider it as one record.
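The same quote-tracking idea can be sketched in plain Java, independent of Hadoop; `splitRecords` is an illustrative helper, not one of the classes above:

```java
import java.util.ArrayList;
import java.util.List;

public class QuoteAwareSplit {

    // Splits text into records on newlines, but ignores newlines that fall
    // inside a double-quoted field, mirroring readUntilEnd().
    static List<String> splitRecords(String data) {
        List<String> records = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean insideColumn = false;
        for (int i = 0; i < data.length(); i++) {
            char c = data.charAt(i);
            if (c == '"')
                insideColumn = !insideColumn;    // toggle on every quote
            if (c == '\n' && !insideColumn) {
                records.add(current.toString()); // unquoted newline = end of record
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        if (current.length() > 0)
            records.add(current.toString());     // last record, no trailing newline
        return records;
    }

    public static void main(String[] args) {
        List<String> recs = splitRecords("1\t\"post1\npost1\"\t3\n2\t\"post2\"\t4");
        System.out.println(recs.size()); // 2 records despite 2 newlines in the input
    }
}
```

Note this simple toggle assumes quotes are balanced and never escaped inside a field, the same assumption the RecordReader above makes.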
To test this, I wrote an identity mapper (which writes the input as-is to the output). In the driver, you explicitly specify your custom input format.
For e.g., I have specified the input format as:
job.setInputFormatClass(MultiLineInputFormat.class); // This is my custom class for InputFormat and RecordReader
Following is the code:
package com.myorg.hadooptests;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import java.io.IOException;
public class MultiLineDemo {
public static class MultiLineMapper
extends Mapper<LongWritable, Text , Text, NullWritable> {
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
context.write(value, NullWritable.get());
}
}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "MultiLineMapper");
job.setInputFormatClass(MultiLineInputFormat.class);
job.setJarByClass(MultiLineDemo.class);
job.setMapperClass(MultiLineMapper.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(NullWritable.class);
FileInputFormat.addInputPath(job, new Path("/in/in8.txt"));
FileOutputFormat.setOutputPath(job, new Path("/out/"));
job.waitForCompletion(true);
}
}
I ran this on the following input. The input records match the output records exactly. You can see that the 2nd field in each record contains new lines ("\n"), yet the entire record is returned in the output.
E:\HadoopTests\target>hadoop fs -cat /in/in8.txt
1 "post1 \n" 3
1 "post2 \n post2 \n" 3
4 "post3 \n post3 \n post3 \n" 6
1 "post4 \n post4 \n post4 \n post4 \n" 6
E:\HadoopTests\target>hadoop fs -cat /out/*
1 "post1 \n" 3
1 "post2 \n post2 \n" 3
1 "post4 \n post4 \n post4 \n post4 \n" 6
4 "post3 \n post3 \n post3 \n" 6
Note: I wrote this code for demo purposes. You need to handle corner cases (if any) and optimize the code (if there is scope for optimization).
