Say, we take two input files:
file1 content: a b c d
file2 content: b c d e
I want TokenizerMapper.class to produce the result below, with each word from file1 paired with each word from file2:
ab ac ad ae bb bc bd be cb cc cd ce db dc dd de
I store the file1 content in one array and the file2 content in another, then run a for loop to build the pairs.
The problem is: how can I use MultipleInputs to combine the two files' content in the same class?
The code below uses MultipleInputs:
public class WordCount {
public static class TokenizerMapper
extends Mapper<Object, Text, Text, IntWritable>{
private final static IntWritable one = new IntWritable(1);
private Text word = new Text();
public void map(Object key, Text value, Context context
) throws IOException, InterruptedException {
StringTokenizer itr = new StringTokenizer(value.toString());
while (itr.hasMoreTokens()) {
word.set(itr.nextToken());
context.write(word, one);
}
}
}
public static class IntSumReducer
extends Reducer<Text,IntWritable,Text,IntWritable> {
private IntWritable result = new IntWritable();
public void reduce(Text key, Iterable<IntWritable> values,
Context context
) throws IOException, InterruptedException {
int sum = 0;
for (IntWritable val : values) {
sum += val.get();
}
result.set(sum);
context.write(key, result);
}
}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "word count");
job.setJarByClass(WordCount.class);
job.setMapperClass(TokenizerMapper.class);
job.setCombinerClass(IntSumReducer.class);
job.setReducerClass(IntSumReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
//FileInputFormat.addInputPath(job, new Path(args[0]));
MultipleInputs.addInputPath(job, new Path(args[0]), TextInputFormat.class, TokenizerMapper.class);
MultipleInputs.addInputPath(job, new Path(args[1]), TextInputFormat.class, TokenizerMapper.class);
FileOutputFormat.setOutputPath(job, new Path(args[2]));
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}
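One way to get the pairing (a sketch, untested; the class names and the constant "all" key are mine, not from the original post): register a different mapper class for each input path so the job can tell the two files apart, tag every word with its source, and build the cross product in a single reducer. A constant key funnels all words into one reduce() call, which is only reasonable for tiny inputs like the example; the setCombinerClass(IntSumReducer.class) line would also have to go, since the types no longer match.
// Needs java.util.List, java.util.ArrayList and org.apache.hadoop.io.NullWritable.
public static class File1Mapper extends Mapper<Object, Text, Text, Text> {
    @Override
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            // tag 1: = word came from the first input path
            context.write(new Text("all"), new Text("1:" + itr.nextToken()));
        }
    }
}
public static class File2Mapper extends Mapper<Object, Text, Text, Text> {
    @Override
    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            // tag 2: = word came from the second input path
            context.write(new Text("all"), new Text("2:" + itr.nextToken()));
        }
    }
}
public static class CrossProductReducer extends Reducer<Text, Text, Text, NullWritable> {
    @Override
    public void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        List<String> file1Words = new ArrayList<>();
        List<String> file2Words = new ArrayList<>();
        for (Text v : values) {
            String s = v.toString();
            if (s.startsWith("1:")) file1Words.add(s.substring(2));
            else file2Words.add(s.substring(2));
        }
        // every file1 word paired with every file2 word: ab ac ad ae bb bc ...
        for (String a : file1Words) {
            for (String b : file2Words) {
                context.write(new Text(a + b), NullWritable.get());
            }
        }
    }
}
The driver would then register each mapper against its own path and adjust the output classes:
job.setReducerClass(CrossProductReducer.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(NullWritable.class);
MultipleInputs.addInputPath(job, new Path(args[0]), TextInputFormat.class, File1Mapper.class);
MultipleInputs.addInputPath(job, new Path(args[1]), TextInputFormat.class, File2Mapper.class);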
I'm new to Hadoop and I don't know how to join two CSV files together.
Here are my two CSV files:
order_dataset.csv
order_id order_approved_at order_delivered_customer_date
---------- ------------------- -------------------------------
1 2017-10-02 19:55:00 2017-10-04 04:39:00
2 2017-01-26 14:16:31 2017-02-02 14:08:10
3 2018-06-09 03:13:12 2018-06-19 12:05:52
order_review_dataset.csv
order_id customer_id review_score
---------- ------------- --------------
1 12 3
2 23 4
3 93 5
I would like the result file to look like this:
delivery_time in day avg_review_score
---------------------- ------------------
1 3.03
2 4.5
3 3.76
For now, I count the number of orders per delivery time. I don't know how to use the second CSV file to add the review_score. Here is my code:
public class Question {
public static void main(String[] args) throws Exception
{
if (args.length != 1) {
System.err.println("Usage: Question <input path>");
System.exit(-1);
}
Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "Job Title");
job.setJarByClass(Question.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
Path outputPath = new Path("./output/question3");
FileOutputFormat.setOutputPath(job, outputPath);
outputPath.getFileSystem(conf).delete(outputPath,true);
job.setMapperClass(QuestionMapper.class);
job.setReducerClass(QuestionReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
}
public class QuestionMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
private Text jourDelaiLivraison = new Text();
@Override
public void map(LongWritable key, Text value, Context context)
throws IOException, InterruptedException
{
if (key.get() == 0)
return;
String ligne = value.toString();
String[] tokens = ligne.split(",");
DateFormat formatter = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
try {
Date order_approved_at = formatter.parse(tokens[4]);
Date order_delivered_customer_date = formatter.parse(tokens[6]);
//The order must have been delivered
//and the dates must be consistent
if (tokens[2].equals("delivered") && order_approved_at.compareTo(order_delivered_customer_date) < 0) {
long diff = order_delivered_customer_date.getTime() - order_approved_at.getTime();
String delai = String.valueOf(TimeUnit.DAYS.convert(diff, TimeUnit.MILLISECONDS));
jourDelaiLivraison.set(delai);
// write only rows that pass the filter; previously this write sat outside the
// if block, so every other row emitted a stale or empty key
context.write(jourDelaiLivraison, new IntWritable(1));
}
} catch (ParseException e) {
e.printStackTrace();
}
}
}
public class QuestionReducer extends Reducer<Text, IntWritable, Text, IntWritable>{
private Map<Text, Integer> delaiNoteSatisfaction;
@Override
public void setup(Context context) {
this.delaiNoteSatisfaction = new HashMap<>();
}
@Override
public void reduce(Text key, Iterable<IntWritable> values, Context context)
{
int count = StreamSupport.stream(values.spliterator(), false)
.mapToInt(IntWritable::get)
.sum();
delaiNoteSatisfaction.put(new Text(key), count);
}
@Override
public void cleanup(Context context){
List<Text> keyList = new ArrayList<>(delaiNoteSatisfaction.keySet());
keyList.sort(Comparator.comparingInt((Text t) -> Integer.valueOf(t.toString())));
keyList.forEach(key -> {
try {
context.write(key, new IntWritable(delaiNoteSatisfaction.get(key)));
} catch (IOException | InterruptedException e) {
e.printStackTrace();
}
});
}
}
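For the join itself, a common pattern is a reduce-side join (a sketch under my own class names, not code from the post; the column indexes follow the simplified tables above, not the real dataset): give each CSV its own mapper via MultipleInputs, key both by order_id, tag every value with its source, and match the two sides in the reducer.
public static class OrderMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        if (key.get() == 0) return; // skip the header line
        String[] tokens = value.toString().split(",");
        // tag O = row from order_dataset.csv; keep the two date columns
        context.write(new Text(tokens[0]), new Text("O," + tokens[1] + "," + tokens[2]));
    }
}
public static class ReviewMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        if (key.get() == 0) return; // skip the header line
        String[] tokens = value.toString().split(",");
        // tag R = row from order_review_dataset.csv; keep only review_score
        context.write(new Text(tokens[0]), new Text("R," + tokens[2]));
    }
}
public static class JoinReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    public void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        String orderFields = null;
        String reviewScore = null;
        for (Text v : values) {
            String s = v.toString();
            if (s.startsWith("O,")) orderFields = s.substring(2);
            else if (s.startsWith("R,")) reviewScore = s.substring(2);
        }
        if (orderFields != null && reviewScore != null) {
            // both sides present for this order_id: emit the joined record
            context.write(key, new Text(orderFields + "," + reviewScore));
        }
    }
}
In the driver, MultipleInputs then replaces the single FileInputFormat.addInputPath call:
MultipleInputs.addInputPath(job, new Path(args[0]), TextInputFormat.class, OrderMapper.class);
MultipleInputs.addInputPath(job, new Path(args[1]), TextInputFormat.class, ReviewMapper.class);
Computing the delivery time and averaging review_score per delivery time can then happen in this reducer or in a second job over the joined records.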
I am writing a simple extension of a MapReduce program and found that my code only displays output from map(). The MapReduce job runs in Eclipse without any errors but does not invoke reduce().
Here is my map():
public static class KVMapper
extends Mapper<Text, Text, IntWritable, Text>{
// extends Mapper<Text, Text, Text, IntWritable>{
private final static IntWritable one = new IntWritable(1);
private String word;// = new Text();
private IntWritable iw;
private final LongWritable val = new LongWritable();
public void map(Text key, Text value , Context context
) throws IOException, InterruptedException {
iw = new IntWritable(Integer.parseInt(value.toString()));
System.out.println(value +" hello , world " +key );
context.write(iw, key);
}
}
Reduce()
public static class KVReducer
extends Reducer<IntWritable,Text,IntWritable, Text> {
KVReducer(){
System.out.println("Inside reducer");
}
public void reduce(IntWritable key, Text value,
Context context
) throws IOException, InterruptedException {
System.out.println(value +" hello2 , world " +key );
context.write(key, value);
}
}
main()
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
conf.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator", "\t");
//conf.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator",",");
String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
if (otherArgs.length < 2) {
System.err.println("Usage: wordcount <in> [<in>...] <out>");
System.exit(2);
}
Job job = new Job(conf, "word desc");
job.setInputFormatClass(KeyValueTextInputFormat.class);
job.setJarByClass(WordDesc.class);
job.setMapperClass(KVMapper.class);
job.setCombinerClass(KVReducer.class);
job.setReducerClass(KVReducer.class);
job.setMapOutputKeyClass(IntWritable.class);
job.setMapOutputValueClass(Text.class);
job.setOutputKeyClass(IntWritable.class);
job.setOutputValueClass(Text.class);
for (int i = 0; i < otherArgs.length - 1; ++i) {
FileInputFormat.addInputPath(job, new Path(otherArgs[i]));
}
FileOutputFormat.setOutputPath(job,
new Path(otherArgs[otherArgs.length - 1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
Sample of the input:
1500s 1
1960s 1
Aldus 1
Sample output from the program, whereas I expected the mapper to swap the key and value pairs:
1500s 1
1960s 1
Aldus 1
Not sure why the reduce() is not being invoked in the above code
You are not overriding the reduce() method of the Reducer class: your version takes a single Text value rather than an Iterable<Text>, so it is treated as an unrelated overload and the default identity reduce() runs instead. (Annotating the method with @Override would have turned this into a compile-time error.)
For your case the signature should be public void reduce(IntWritable key, Iterable<Text> values, Context context).
Here is the updated KVReducer:
public static class KVReducer
extends Reducer<IntWritable,Text,IntWritable, Text> {
KVReducer(){
System.out.println("Inside reducer");
}
@Override
public void reduce(IntWritable key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
    for (Text value : values) {
        System.out.println(value + " hello2 , world " + key);
        context.write(key, value);
    }
}
}
I have a MapReduce program whose reduce method outputs a Text as the key and a FloatArrayWritable as the value. However, the output shows the object's address instead of the values from the toString() method.
The output I am getting is:
IYE marketDataPackage.MarketData@69204998
IYE marketDataPackage.MarketData@69204998
The output should be:
IYE 38.89, 38.50, etc.
Could someone please point out the error in my code? Thanks.
public static class Map extends Mapper<LongWritable, Text, Text, MarketData> {
private Text symbol = new Text();
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
String line = value.toString();
StringTokenizer tokenizer = new StringTokenizer(line);
while (tokenizer.hasMoreTokens()) {
StringTokenizer tokenizer2 = new StringTokenizer(tokenizer.nextToken().toString(), ",");
symbol.set(tokenizer2.nextToken());
context.write(symbol, new MarketData(tokenizer2.nextToken(), Float.parseFloat(tokenizer2.nextToken())));
}
}
}
public static class Reduce extends Reducer<Text, FloatWritable, Text, FloatArrayWritable> {
public void reduce(Text key, Iterable<MarketData> values, Context context) throws IOException, InterruptedException, ParseException {
Calendar today = Calendar.getInstance();
today.add(Calendar.DAY_OF_MONTH, -45);
Calendar testDate = Calendar.getInstance();
SimpleDateFormat sdf = new SimpleDateFormat("yyyy/M/d"); // M = month; lowercase m would be minutes
List<FloatWritable> prices = new ArrayList<FloatWritable>();
for (MarketData m : values) {
testDate.setTime(sdf.parse(m.getTradeDate()));
if (testDate.after(today)) {
prices.add(new FloatWritable(m.getPrice()));
}
}
context.write(key, new FloatArrayWritable(prices.toArray(new FloatWritable[prices.size()])));
}
}
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
Job job = new Job(conf, "Security_Closing_Prices");
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(MarketData.class);
job.setMapperClass(Map.class);
job.setReducerClass(Reduce.class);
job.setInputFormatClass(TextInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
job.waitForCompletion(true);
}
FloatArrayWritable class:
public class FloatArrayWritable extends ArrayWritable {
public FloatArrayWritable() {
super(FloatWritable.class);
}
public FloatArrayWritable(FloatWritable[] values) {
super(FloatWritable.class, values);
}
@Override
public FloatWritable[] get() {
return (FloatWritable[]) super.get();
}
@Override
public String toString() {
FloatWritable[] values = get();
String prices = "";
for (FloatWritable f : values) {
prices = prices + f.toString() + ", ";
}
if (prices != null && !prices.isEmpty()) {
prices = prices.substring(0, prices.length() - 2);
}
return prices;
}
}
The MarketData class should override toString(). You don't provide code for that class, but I suspect it doesn't: className@hexHashCode is exactly what Object's default toString() prints.
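The code for MarketData isn't shown, so the getters below are a guess taken from the reducer's calls to m.getTradeDate() and m.getPrice(), but the override might look like:
// Hypothetical sketch; adjust to MarketData's actual getters.
@Override
public String toString() {
    return getTradeDate() + ", " + getPrice();
}
Without such an override, Java falls back to Object.toString(), whose className@hexHashCode format is exactly the output shown above.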
The program generates an empty output file. Can anyone please suggest where I am going wrong?
Any help will be highly appreciated. I tried job.setNumReduceTasks(0) since I am not using a reducer, but the output file is still empty.
public static class PrizeDisMapper extends Mapper<LongWritable, Text, Text, Pair>{
int rating = 0;
Text CustID;
IntWritable r;
Text MovieID;
public void map(LongWritable key, Text line, Context context
) throws IOException, InterruptedException {
String line1 = line.toString();
String [] fields = line1.split(":");
if(fields.length > 1)
{
String Movieid = fields[0];
String line2 = fields[1];
String [] splitline = line2.split(",");
String Custid = splitline[0];
int rate = Integer.parseInt(splitline[1]);
r = new IntWritable(rate);
CustID = new Text(Custid);
MovieID = new Text(Movieid);
Pair P = new Pair();
context.write(MovieID,P);
}
else
{
return;
}
}
}
public static class IntSumReducer extends Reducer<Text,Pair,Text,Pair> {
private IntWritable result = new IntWritable();
public void reduce(Text key, Iterable<Pair> values,
Context context
) throws IOException, InterruptedException {
for (Pair val : values) {
context.write(key, val);
}
}
}
public class Pair implements Writable
{
String key;
int value;
public void write(DataOutput out) throws IOException {
out.writeInt(value);
out.writeChars(key);
}
public void readFields(DataInput in) throws IOException {
key = in.readUTF();
value = in.readInt();
}
public void setVal(String aKey, int aValue)
{
key = aKey;
value = aValue;
}
}
Main class:
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
if (otherArgs.length != 2) {
System.err.println("Usage: wordcount <in> <out>");
System.exit(2);
}
Job job = new Job(conf, "word count");
job.setJarByClass(WordCount.class);
job.setMapperClass(TokenizerMapper.class);
job.setCombinerClass(IntSumReducer.class);
job.setReducerClass(IntSumReducer.class);
job.setInputFormatClass (TextInputFormat.class);
FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Pair.class);
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
Thanks @Pathmanaban Palsamy and @Chris Gerken for your suggestions. I have modified the code as per your suggestions but am still getting an empty output file. Can anyone suggest the input and output configuration in my main class? Do I need to specify the Pair class as the mapper's input, and how?
I'm guessing the reduce method should be declared as
public void reduce(Text key, Iterable<Pair> values,
Context context
) throws IOException, InterruptedException
You get passed an Iterable (an object from which you can get an Iterator) which you use to iterate over all of the values that were mapped to the given key.
Since no reducer is required, I suspect the lines below:
Pair P = new Pair();
context.write(MovieID,P);
Writing an empty Pair would be the issue.
Also, please check that your driver class sets the correct key class and value class, like:
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Pair.class);
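One more thing worth checking, although it is not what the answers above flag: as posted, Pair's write() and readFields() do not mirror each other — write() emits value then key with writeChars(), while readFields() reads key with readUTF() then value — so deserialization will fail or produce garbage. A sketch of a symmetric version (same fields as the original), with the mapper actually populating it:
public static class Pair implements Writable {
    String key = "";
    int value;
    // Writable also needs a no-arg constructor; the implicit default one suffices.
    public void setVal(String aKey, int aValue) {
        key = aKey;
        value = aValue;
    }
    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(key);   // writeUTF pairs with readUTF on the read side
        out.writeInt(value); // and the field order matches readFields()
    }
    @Override
    public void readFields(DataInput in) throws IOException {
        key = in.readUTF();
        value = in.readInt();
    }
    @Override
    public String toString() {
        return key + "\t" + value; // what TextOutputFormat writes to the output file
    }
}
And in the mapper, the Pair has to be filled in before it is written:
Pair P = new Pair();
P.setVal(Custid, rate); // an unpopulated Pair is why the output looked empty
context.write(MovieID, P);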
I need to load data from a text file into MapReduce. I have searched the web but did not find the right solution for my task.
Is there any method or class that reads a text/CSV file from the filesystem and stores the data into an HBASE table?
To read from a text file, the file first has to be in HDFS.
You need to specify the input format and output format for the job:
Job job = new Job(conf, "example");
FileInputFormat.addInputPath(job, new Path("PATH to text file"));
job.setInputFormatClass(TextInputFormat.class);
job.setMapperClass(YourMapper.class);
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);
TableMapReduceUtil.initTableReducerJob("hbase_table_name", YourReducer.class, job);
job.waitForCompletion(true);
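One assumption the snippet leaves implicit: conf should carry the HBase connection settings, which usually means building it with HBaseConfiguration.create() rather than a plain new Configuration():
// Pulls in hbase-site.xml so the job can reach the HBase cluster.
Configuration conf = HBaseConfiguration.create();
Job job = new Job(conf, "example");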
YourReducer should extend org.apache.hadoop.hbase.mapreduce.TableReducer<Text, Text, Text>.
Sample reducer code
public class YourReducer extends TableReducer<Text, Text, Text> {
private byte[] rawUpdateColumnFamily = Bytes.toBytes("colName");
/**
* Called once at the beginning of the task.
*/
@Override
protected void setup(Context context) throws IOException, InterruptedException {
// something that need to be done at start of reducer
}
@Override
public void reduce(Text keyin, Iterable<Text> values, Context context) throws IOException, InterruptedException {
// aggregate counts
int valuesCount = 0;
for (Text val : values) {
valuesCount += 1;
// put date in table
Put put = new Put(keyin.toString().getBytes());
long explicitTimeInMs = new Date().getTime();
put.add(rawUpdateColumnFamily, Bytes.toBytes("colName"), explicitTimeInMs,val.toString().getBytes());
context.write(keyin, put);
}
}
}
Sample mapper class
public static class YourMapper extends Mapper<LongWritable, Text, Text, Text> {
// the output value type is Text so it matches the driver's
// setMapOutputValueClass(Text.class) and the Iterable<Text> YourReducer expects
private final static Text one = new Text("1");
private Text word = new Text();
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
String line = value.toString();
StringTokenizer tokenizer = new StringTokenizer(line);
while (tokenizer.hasMoreTokens()) {
word.set(tokenizer.nextToken());
context.write(word, one);
}
}
}