I am very confused why this is happening. I have been working on this for some time and I just don't understand.
My map code works; I can verify its output in the directory it writes to.
This is the method:
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
    String stateKeyword = value.toString();
    String[] pieces = new String[] {stateKeyword};
    for (String element : pieces) {
        String name = element.split(":")[0].trim();
        String id = element.split(":")[1].trim();
        Integer rank = Integer.parseInt(element.split(":")[2].trim());
        context.write(new Text(name), new Text(id + ":" + rank));
    }
}
So my output has the concatenation of the id and rank fields, and I can see it in the output file if I print the value as-is.
However, any split I try to perform on it throws an ArrayIndexOutOfBoundsException and I can't understand why. I even check whether the value contains a ":"; the value prints, but it won't split. And when I don't make that check, I get the exception.
Here is my reduce:
public void reduce(Text key, Iterable values, Context context) throws IOException, InterruptedException {
    List<String> elements = new ArrayList<String>();
    Text word = new Text();
    for (Text val : values) {
        if (val.toString().contains(":")) {
            String state = val.toString().split(":")[0];
            word.set(state);
        }
        context.write(key, word);
    }
}
My output in my file looks like this:
Name id:rank
Name id:rank
Name id:rank
...
...
...
But why can't I split off the id and rank?
To avoid the ArrayIndexOutOfBoundsException, check the array length before reading values out of the array. Something like this would be more appropriate:
String[] temp = element.split(":");
if (temp.length == 2) {
    String name = temp[0].trim();
    String id = temp[1].trim();
}
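Applied to your reduce, the same guard would look roughly like this (a sketch, assuming a value that arrives without a ":" should simply be skipped rather than written out):

for (Text val : values) {
    String[] parts = val.toString().split(":");
    // only use the pieces when the delimiter was actually present
    if (parts.length == 2) {
        word.set(parts[0]);
        context.write(key, word);
    }
}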
I am having trouble with a MapReduce job. My map function runs and produces the desired output. However, the reduce function does not seem to run; it looks like it never gets called. I am using Text as keys and Text as values, but I don't think that is the cause of the problem.
The input file is formatted as follows:
2015-06-06,2015-06-06,40.80239868164062,-73.93379211425781,40.72591781616211,-73.98358154296875,7.71,35.72
2015-06-06,2015-06-06,40.71020126342773,-73.96302032470703,40.72967529296875,-74.00226593017578,3.11,2.19
2015-06-05,2015-06-05,40.68404388427734,-73.97597503662109,40.67932510375977,-73.95581817626953,1.13,1.29
...
I want to extract the second date of a line as Text and use it as key for the reduce. The value for the key will be a combination of the last two float values in the same line.
i.e.: 2015-06-06 7.71 35.72
2015-06-06 9.71 66.72
So that the value part can be viewed as two columns separated by a blank.
That actually works, and I get an output file with many identical keys but different values.
Now I want to sum up both of the float columns for each key, so that after the reduce I get a date as the key with the summed-up columns as the value.
Problem: reduce does not run.
See the code below:
Mapper
public class Aggregate {
    public static class EarnDistMapper extends Mapper<Object, Text, Text, Text> {
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            String[] splitResult = value.toString().split(",");
            String dropOffDate = "";
            String compEarningDist = "";
            //dropoffDate at pos 1 as key
            dropOffDate = splitResult[1];
            //distance at pos length-2 and earnings at pos length-1 as values separated by space
            compEarningDist = splitResult[splitResult.length - 2] + " " + splitResult[splitResult.length - 1];
            context.write(new Text(dropOffDate), new Text(compEarningDist));
        }
    }
Reducer
    public static class EarnDistReducer extends Reducer<Text, Text, Text, Text> {
        public void reduce(Text key, Iterator<Text> values, Context context) throws IOException, InterruptedException {
            float sumDistance = 0;
            float sumEarnings = 0;
            String[] splitArray;
            while (values.hasNext()) {
                splitArray = values.next().toString().split("\\s+");
                //distance first
                sumDistance += Float.parseFloat(splitArray[0]);
                sumEarnings += Float.parseFloat(splitArray[1]);
            }
            //combine result to text
            context.write(key, new Text(Float.toString(sumDistance) + " " + Float.toString(sumEarnings)));
        }
    }
Job
    public static void main(String[] args) throws Exception {
        // TODO
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "Taxi dropoff");
        job.setJarByClass(Aggregate.class);
        job.setMapperClass(EarnDistMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        job.setCombinerClass(EarnDistReducer.class);
        job.setReducerClass(EarnDistReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Thank you for your help!!
You have the signature of the reduce method wrong. You have:
public void reduce(Text key, Iterator<Text> values, Context context) {
It should be:
public void reduce(Text key, Iterable<Text> values, Context context) {
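Because your method does not match Reducer.reduce(KEYIN, Iterable<VALUEIN>, Context), it never overrides the base class method, so the framework falls back to the inherited default reduce, which just passes every (key, value) pair straight through; that is why your output looks like the map output. Annotating the method with @Override makes the compiler catch this kind of mismatch. A minimal sketch of the corrected reducer, keeping your summing logic but iterating the Iterable:

public static class EarnDistReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        float sumDistance = 0;
        float sumEarnings = 0;
        for (Text value : values) {
            // each value is "distance earnings", e.g. "7.71 35.72"
            String[] parts = value.toString().split("\\s+");
            sumDistance += Float.parseFloat(parts[0]);
            sumEarnings += Float.parseFloat(parts[1]);
        }
        context.write(key, new Text(sumDistance + " " + sumEarnings));
    }
}

The same class still works as the combiner, since its input and output types are both (Text, Text).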
I'm new to Hadoop, and I'm trying to write a MapReduce program to count the top two occurrences of letters by date (grouped by month). So my input is of this kind:
2017-06-01 , A, B, A, C, B, E, F
2017-06-02 , Q, B, Q, F, K, E, F
2017-06-03 , A, B, A, R, T, E, E
2017-07-01 , A, B, A, C, B, E, F
2017-07-05 , A, B, A, G, B, G, G
So I'm expecting, as the result of this MapReduce program, something like:
2017-06, A:4, E:4
2017-07, A:4, B:4
public class ArrayGiulioTest {
    public static Logger logger = Logger.getLogger(ArrayGiulioTest.class);

    public static class CustomMap extends Mapper<LongWritable, Text, Text, TextWritable> {
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            TextWritable array = new TextWritable();
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line, ",");
            String dataAttuale = tokenizer.nextToken().substring(0, line.lastIndexOf("-"));
            Text tmp = null;
            Text[] tmpArray = new Text[tokenizer.countTokens()];
            int i = 0;
            while (tokenizer.hasMoreTokens()) {
                String prod = tokenizer.nextToken(",");
                word.set(dataAttuale);
                tmp = new Text(prod);
                tmpArray[i] = tmp;
                i++;
            }
            array.set(tmpArray);
            context.write(word, array);
        }
    }
    public static class CustomReduce extends Reducer<Text, TextWritable, Text, Text> {
        public void reduce(Text key, Iterator<TextWritable> values, Context context) throws IOException, InterruptedException {
            MapWritable map = new MapWritable();
            Text txt = new Text();
            while (values.hasNext()) {
                TextWritable array = values.next();
                Text[] tmpArray = (Text[]) array.toArray();
                for (Text t : tmpArray) {
                    if (map.get(t) != null) {
                        IntWritable val = (IntWritable) map.get(t);
                        map.put(t, new IntWritable(val.get() + 1));
                    } else {
                        map.put(t, new IntWritable(1));
                    }
                }
            }
            Set<Writable> set = map.keySet();
            StringBuffer str = new StringBuffer();
            for (Writable k : set) {
                str.append("key: " + k.toString() + " value: " + map.get(k) + "**");
            }
            txt.set(str.toString());
            context.write(key, txt);
        }
    }
    public static void main(String[] args) throws Exception {
        long inizio = System.currentTimeMillis();
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "countProduct");
        job.setJarByClass(ArrayGiulioTest.class);
        job.setMapperClass(CustomMap.class);
        //job.setCombinerClass(CustomReduce.class);
        job.setReducerClass(CustomReduce.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(TextWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
        long fine = System.currentTimeMillis();
        logger.info("**************************************End " + (fine - inizio));
        System.exit(1);
    }
}
and I've implemented my custom TextWritable this way:
public class TextWritable extends ArrayWritable {
    public TextWritable() {
        super(Text.class);
    }
}
So when I run my MapReduce program I obtain a result of this kind:
2017-6 wordcount.TextWritable@3e960865
2017-6 wordcount.TextWritable@3e960865
It's obvious that my reducer doesn't work; it looks like I am getting my mapper's output instead.
Any ideas? Can someone say whether this is the right path to a solution?
Here is the console log (just for information, my input file has 6 rows instead of 5).
I obtain the same result whether I start the MapReduce job inside Eclipse (single JVM) or run it on Hadoop with HDFS:
File System Counters
FILE: Number of bytes read=1216
FILE: Number of bytes written=431465
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
Map-Reduce Framework
Map input records=6
Map output records=6
Map output bytes=214
Map output materialized bytes=232
Input split bytes=97
Combine input records=0
Combine output records=0
Reduce input groups=3
Reduce shuffle bytes=232
Reduce input records=6
Reduce output records=6
Spilled Records=12
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=0
Total committed heap usage (bytes)=394264576
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=208
File Output Format Counters
Bytes Written=1813
I think you're trying to do too much work in the Mapper. You only need to group the dates (which, judging by your expected output, you aren't formatting correctly anyway).
The following approach is going to turn these lines, for example
2017-07-01 , A, B, A, C, B, E, F
2017-07-05 , A, B, A, G, B, G, G
Into this pair for the reducer
2017-07 , ("A,B,A,C,B,E,F", "A,B,A,G,B,G,G")
In other words, you gain no real benefit by using an ArrayWritable, just keep it as text.
So, the Mapper would look like this
class CustomMap extends Mapper<LongWritable, Text, Text, Text> {
    private final Text key = new Text();
    private final Text output = new Text();

    @Override
    protected void map(LongWritable offset, Text value, Context context) throws IOException, InterruptedException {
        int separatorIndex = value.find(",");
        final String valueStr = value.toString();
        if (separatorIndex < 0) {
            System.err.printf("mapper: not enough records for %s", valueStr);
            return;
        }
        String dateKey = valueStr.substring(0, separatorIndex).trim();
        String tokens = valueStr.substring(1 + separatorIndex).trim().replaceAll("\\p{Space}", "");
        SimpleDateFormat fmtFrom = new SimpleDateFormat("yyyy-MM-dd");
        SimpleDateFormat fmtTo = new SimpleDateFormat("yyyy-MM");
        try {
            dateKey = fmtTo.format(fmtFrom.parse(dateKey));
            key.set(dateKey);
        } catch (ParseException ex) {
            System.err.printf("mapper: invalid key format %s", dateKey);
            return;
        }
        output.set(tokens);
        context.write(key, output);
    }
}
And then the reducer can build a Map that collects and counts the values from the value strings. Again, writing out only Text.
class CustomReduce extends Reducer<Text, Text, Text, Text> {
    private final Text output = new Text();

    @Override
    protected void reduce(Text date, Iterable<Text> values, Context context) throws IOException, InterruptedException {
        Map<String, Integer> keyMap = new TreeMap<>();
        for (Text v : values) {
            String[] keys = v.toString().trim().split(",");
            for (String key : keys) {
                if (!keyMap.containsKey(key)) {
                    keyMap.put(key, 0);
                }
                keyMap.put(key, 1 + keyMap.get(key));
            }
        }
        output.set(mapToString(keyMap));
        context.write(date, output);
    }

    private String mapToString(Map<String, Integer> map) {
        StringBuilder sb = new StringBuilder();
        String delimiter = ", ";
        for (Map.Entry<String, Integer> entry : map.entrySet()) {
            sb.append(
                String.format("%s:%d", entry.getKey(), entry.getValue())
            ).append(delimiter);
        }
        sb.setLength(sb.length() - delimiter.length());
        return sb.toString();
    }
}
Given your input, I got this
2017-06 A:4, B:4, C:1, E:4, F:3, K:1, Q:2, R:1, T:1
2017-07 A:4, B:4, C:1, E:1, F:1, G:3
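One more thing to watch if you adopt this Mapper: the driver in your question still declares TextWritable as the map output value class, and Hadoop will reject the Text values the new mapper emits with a type-mismatch error, so that line of main() needs to change as well (only this line; the rest can stay as posted):

// in main(), since the mapper now writes Text values
job.setMapOutputValueClass(Text.class);   // was TextWritable.class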
The main problem was the signature of the reduce method:
I was writing: public void reduce(Text key, Iterator<TextWritable> values, Context context)
instead of:
public void reduce(Text key, Iterable<TextWritable> values, Context context)
This is why I was getting my map output instead of my reduce output.
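As a side note, the wordcount.TextWritable@3e960865 lines you were seeing are simply Object.toString(): ArrayWritable does not override toString(), so TextOutputFormat prints the class name and hash code. If you do keep the custom TextWritable, a small sketch of a toString() override (assuming Java 8+ for String.join) makes the raw map output readable when debugging:

public class TextWritable extends ArrayWritable {
    public TextWritable() {
        super(Text.class);
    }

    // join the wrapped Text elements so TextOutputFormat prints something useful
    @Override
    public String toString() {
        return String.join(",", toStrings());
    }
}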
I am trying to convert Text to String in my reduce function, but it's not working. I tried the same logic in my map function and it worked perfectly, but when I apply it in my reduce function it gives the error: java.lang.ArrayIndexOutOfBoundsException: 1
My Map code is like this
public static class OutDegreeMapper2 extends Mapper<Object, Text, Text, Text> {
    private Text word = new Text();
    private Text word2 = new Text();

    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        String oneLine = value.toString();
        String[] parts = oneLine.split("\t");
        word.set(parts[0]);
        String join = parts[1] + ",from2";
        word2.set(join);
        context.write(word, word2);
    }
}
My reduce function is like this
public static class OutDegreeReducer extends Reducer<Text, Text, Text, Text> {
    private Text word = new Text();
    String merge = "";

    public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
        for (Text val : values) {
            String[] x = val.toString().split(",");
            if (x[1].contains("from2")) {
                merge += x[0];
            }
        }
        word.set(merge);
        context.write(key, word);
    }
}
Kindly tell me why the split works in the map function but not in the reducer.
Very likely here
String[] parts = oneLine.split("\t");
word.set(parts[0]);
String join = parts[1]+",from2";
or here
String[] x = val.toString().split(",");
if (x[1].contains("from2")) {
    merge += x[0];
}
reading x[1] or parts[1] throws the ArrayIndexOutOfBoundsException, because there is no "," or "\t" inside the string.
I suggest checking the length of the array before accessing element 1.
Looking at the stack trace, you should be able to see where the exception is thrown.
Instead of:
if (x[1].contains("from2")) {
    merge += x[0];
}
do this:
if (x.length > 1 && x[1].contains("from2")) {
    merge += x[0];
}
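The same guard applies in the mapper, where any line without a tab makes parts[1] fail in the same way. A sketch of the guarded map body (assuming malformed lines should simply be skipped):

String[] parts = oneLine.split("\t");
// skip lines that do not contain a tab at all
if (parts.length > 1) {
    word.set(parts[0]);
    word2.set(parts[1] + ",from2");
    context.write(word, word2);
}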
I have the following Mapper
private Text sentiment = new Text();

public void map(LongWritable key, Text value, OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
    String allPages = value.toString();
    String[] tokens = allPages.split(":::");
    for (int i = 0; i < (tokens.length - 1); i++) {
        String articleID = "";
        sentiment.set(tokens[i].trim());
        articleID = tokens[0].trim();
        System.out.println("articleID " + articleID);
        Text articleIDValue = new Text(articleID);
        output.collect(sentiment, articleIDValue);
    }
    String line = "";
    for (int j = 1; j < tokens.length; j++) {
        line = line + " " + tokens[j];
        System.out.println("line.... " + line);
    }
    Text lineText = new Text(line.trim());
    output.collect(new Text(tokens[0]), lineText);
}
Sample Input: abc ::: In a market that's awash in tech IPOs, this one is different.
This should store the key-value pair as (abc, In a market that's awash in tech IPOs, this one is different.)
Right now it is stored as (abc, abc). Where am I going wrong?
I suspect you're seeing the result of the first collect() call in which both key and value are set from tokens[0] ("abc").
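If the pair you actually want is just (articleID, remaining text), a minimal sketch that drops the first loop entirely (an assumption about your intent, since with only two tokens that loop can only ever emit tokens[0] against itself) would be:

public void map(LongWritable key, Text value, OutputCollector<Text, Text> output, Reporter reporter)
        throws IOException {
    String[] tokens = value.toString().split(":::");
    if (tokens.length < 2) {
        return; // nothing after the ::: delimiter
    }
    String articleID = tokens[0].trim();
    StringBuilder line = new StringBuilder();
    for (int j = 1; j < tokens.length; j++) {
        line.append(" ").append(tokens[j]);
    }
    output.collect(new Text(articleID), new Text(line.toString().trim()));
}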