Hadoop: Type mismatch in key from map: expected org.apache.hadoop.io.Text, received org.apache.hadoop.io.LongWritable

+2 votes

WordCount.java

package counter;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount extends Configured implements Tool {

public int run(String[] arg0) throws Exception {
    Configuration conf = new Configuration();

    Job job = new Job(conf, "wordcount");

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    job.setMapperClass(WordCountMapper.class);
    job.setReducerClass(WordCountReducer.class);

    job.setInputFormatClass(TextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);

    FileInputFormat.addInputPath(job, new Path("counterinput"));
    // Erase previous run output (if any)
    FileSystem.get(conf).delete(new Path("counteroutput"), true);
    FileOutputFormat.setOutputPath(job, new Path("counteroutput"));

    job.waitForCompletion(true);
    return 0;
}   

public static void main(String[] args) throws Exception {
    int res = ToolRunner.run(new Configuration(), new WordCount(), args);
    System.exit(res);

    }
}

WordCountMapper.java

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException, InterruptedException {
        System.out.println("hi");
        String line = value.toString();
        StringTokenizer tokenizer = new StringTokenizer(line);
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            output.collect(word, one);
        }
    }
}

WordCountReducer.java

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
            OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException, InterruptedException {
        System.out.println("hello");
        int sum = 0;
        while (values.hasNext()) {
            sum += values.next().get();
        }
        output.collect(key, new IntWritable(sum));
    }
}

I am getting the following error:

13/06/23 23:13:25 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=

13/06/23 23:13:25 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/06/23 23:13:26 INFO input.FileInputFormat: Total input paths to process : 1
13/06/23 23:13:26 INFO mapred.JobClient: Running job: job_local_0001
13/06/23 23:13:26 INFO input.FileInputFormat: Total input paths to process : 1
13/06/23 23:13:26 INFO mapred.MapTask: io.sort.mb = 100
13/06/23 23:13:26 INFO mapred.MapTask: data buffer = 79691776/99614720
13/06/23 23:13:26 INFO mapred.MapTask: record buffer = 262144/327680
13/06/23 23:13:26 WARN mapred.LocalJobRunner: job_local_0001
java.io.IOException: Type mismatch in key from map: expected org.apache.hadoop.io.Text, recieved org.apache.hadoop.io.LongWritable
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:845)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:541)
at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
at org.apache.hadoop.mapreduce.Mapper.map(Mapper.java:124)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
13/06/23 23:13:27 INFO mapred.JobClient:  map 0% reduce 0%
13/06/23 23:13:27 INFO mapred.JobClient: Job complete: job_local_0001
13/06/23 23:13:27 INFO mapred.JobClient: Counters: 0
Nov 15, 2018 in Big Data Hadoop by digger

1 answer to this question.

0 votes

Add these two lines to your driver code:

job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);

This should solve the error. These calls declare the mapper's intermediate output types; if they are missing, the framework assumes the map output types match the job's final output types set with setOutputKeyClass()/setOutputValueClass().
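
For context, a minimal sketch of where these lines would go in the run() method of your driver (the surrounding calls are taken from your own code):

Job job = new Job(conf, "wordcount");

// Final (reducer) output types.
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);

// Intermediate (mapper) output types; without these the framework
// falls back to the job output types declared above.
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);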

answered Nov 15, 2018 by Omkar
It didn't work for me.

Can you please help me?

public static void main(String[] args) throws IllegalArgumentException, IOException, Exception {
    Configuration config = new Configuration();
    Job j = Job.getInstance(config, "MaxAnnualRainFall");
    FileInputFormat.addInputPath(j, new Path(args[0]));
    FileOutputFormat.setOutputPath(j, new Path(args[1]));
    j.setJarByClass(AnnualRainMain.class);
    j.setMapperClass(AnnualRainMapper.class);
    j.setReducerClass(AnnualRainReducer.class);
    j.setInputFormatClass(TextInputFormat.class);
    j.setOutputFormatClass(TextOutputFormat.class);
    j.setOutputKeyClass(Text.class);
    j.setOutputValueClass(FloatWritable.class);
    j.setMapOutputKeyClass(Text.class);
    j.setMapOutputValueClass(FloatWritable.class);
    System.exit(j.waitForCompletion(true) ? 0 : 1);
}

public void Map(LongWritable in, Text l, Context con) throws NumberFormatException, IOException, InterruptedException {
    String key = l.toString().split("|")[0];
    String val = l.toString().split("|")[14];
    con.write(new Text(key), new FloatWritable(Float.parseFloat(val)));
}

public void Reduce(Text key, Iterator<FloatWritable> val, Context con) throws IOException, InterruptedException {
    float max = 0;
    while (val.hasNext()) {
        FloatWritable res = val.next();
        if (res.get() > max) {
            max = res.get();
        }
        con.write(key, new FloatWritable(max));
    }
}
Hi,

Try changing the Map() method signature: it starts with a capital letter (public void Map(...)), so it does not override Mapper's map() method. Java is case-sensitive, which means the framework falls back to the default identity mapper, and the identity mapper emits the LongWritable input key unchanged, producing exactly this type mismatch. Use public void map(...) instead, and the same goes for Reduce() vs. reduce(). Let us know if it works.
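
As a minimal sketch, using the AnnualRainMapper name from your driver and assuming it is declared as Mapper<LongWritable, Text, Text, FloatWritable>, the fixed method would look like this. The @Override annotation makes the compiler reject any signature that does not actually override map(). Note also that split("|") needs an escaped pipe, split("\\|"): a bare | is regex alternation and splits between every character.

public static class AnnualRainMapper extends Mapper<LongWritable, Text, Text, FloatWritable> {
    @Override  // compile error if this doesn't really override Mapper.map()
    public void map(LongWritable in, Text l, Context con)
            throws IOException, InterruptedException {
        String[] fields = l.toString().split("\\|");  // escaped pipe delimiter
        con.write(new Text(fields[0]), new FloatWritable(Float.parseFloat(fields[14])));
    }
}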

Alternatively, I suggest you try the code below:

package Edureka;

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
public class WordCount {
    public static void main(String[] args) throws Exception {
        Configuration c = new Configuration();
        String[] files = new GenericOptionsParser(c, args).getRemainingArgs();
        Path input = new Path(files[0]);
        Path output = new Path(files[1]);
        Job j = new Job(c, "wordcount");
        j.setJarByClass(WordCount.class);
        j.setMapperClass(MapForWordCount.class);
        j.setReducerClass(ReduceForWordCount.class);
        j.setOutputKeyClass(Text.class);
        j.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(j, input);
        FileOutputFormat.setOutputPath(j, output);
        System.exit(j.waitForCompletion(true) ? 0 : 1);
    }

    public static class MapForWordCount extends Mapper<LongWritable, Text, Text, IntWritable> {
        public void map(LongWritable key, Text value, Context con) throws IOException, InterruptedException {
            String line = value.toString();
            String[] words = line.split(",");
            for (String word : words) {
                Text outputKey = new Text(word.toUpperCase().trim());
                IntWritable outputValue = new IntWritable(1);
                con.write(outputKey, outputValue);
            }
        }
    }

    public static class ReduceForWordCount extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text word, Iterable<IntWritable> values, Context con) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            con.write(word, new IntWritable(sum));
        }
    }
}

After that, make sure you have the jar files listed below on your build path. If not, right-click on the project -> Build Path -> Add External JARs and include these jar files:

  1. /usr/lib/hadoop-0.20/hadoop-core.jar

  2. /usr/lib/hadoop-0.20/lib/commons-cli-1.2.jar

After that, export the project as a jar file, create a word count text file, and add a few words to it. Then move the text file into an HDFS location and run the jar from your terminal.
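
For example, assuming the project was exported as wordcount.jar and the input file is wordcountfile.txt (the file names and HDFS paths here are just placeholders):

hadoop fs -mkdir -p /user/edureka/wordcountinput
hadoop fs -put wordcountfile.txt /user/edureka/wordcountinput
hadoop jar wordcount.jar Edureka.WordCount /user/edureka/wordcountinput/wordcountfile.txt /user/edureka/wordcountoutput
hadoop fs -cat /user/edureka/wordcountoutput/part-r-00000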

Hope this helps you :)
Let me know if you face any more errors.

I changed it to a lowercase letter and it worked fine. Thank you :)
