MapReduce to count alphabets

0 votes
Can someone help me write an alphabet count program? Please share the MapReduce code to do so.
May 23, 2019 in Big Data Hadoop by Quill
3,726 views

1 answer to this question.

0 votes

Please find the code below for the alphabet count. For every word in the input, the mapper emits the word's length (its number of letters) as the key, and the reducer counts how many words have each length:

package in.edureka.mapreduce;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class AlphabetWordCount {

 public static class AlphabetWordCountMapper extends Mapper<LongWritable, Text, IntWritable, IntWritable> {

  // Constant value 1, reused for every record instead of creating a new writable each time
  public static final IntWritable ONE = new IntWritable(1);

  @Override
  public void map(LongWritable key, Text value, Context cont) throws IOException, InterruptedException {
   String line = value.toString();
   // Emit (word length, 1) for every word in the line; skip empty tokens from repeated spaces
   for (String word : line.split(" ")) {
    if (!word.isEmpty()) {
     cont.write(new IntWritable(word.length()), ONE);
    }
   }
  }
 }

 public static class AlphabetWordCountReducer extends Reducer<IntWritable, IntWritable, IntWritable, IntWritable> {

  @Override
  public void reduce(IntWritable key, Iterable<IntWritable> values, Context cont) throws IOException, InterruptedException {
   // Add up all the 1s emitted for this word length
   int count = 0;
   for (IntWritable value : values) {
    count += value.get();
   }
   cont.write(key, new IntWritable(count));
  }
 }

 public static void main(String[] args) throws ClassNotFoundException, IOException, InterruptedException {
  Configuration conf = new Configuration();
  Job job = Job.getInstance(conf, "AlphabetWordCount");
  job.setJarByClass(AlphabetWordCount.class);
  job.setMapperClass(AlphabetWordCountMapper.class);
  // The reducer only sums values, so it can safely double as a combiner
  job.setCombinerClass(AlphabetWordCountReducer.class);
  job.setReducerClass(AlphabetWordCountReducer.class);
  job.setMapOutputKeyClass(IntWritable.class);
  job.setMapOutputValueClass(IntWritable.class);
  job.setOutputKeyClass(IntWritable.class);
  job.setOutputValueClass(IntWritable.class);
  FileInputFormat.addInputPath(job, new Path(args[0]));
  FileOutputFormat.setOutputPath(job, new Path(args[1]));
  System.exit(job.waitForCompletion(true) ? 0 : 1);
 }
}
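
To run the job, compile the class, package it into a jar, and submit it with the hadoop command. The jar name and HDFS paths below are only placeholders, so replace them with your own:

hadoop jar alphabetcount.jar in.edureka.mapreduce.AlphabetWordCount /user/edureka/input /user/edureka/output
hdfs dfs -cat /user/edureka/output/part-r-00000

For an input line such as "hello hadoop world", the output would contain "5 2" and "6 1", i.e. two words of five letters and one word of six.

If by "alphabet count" you instead mean the number of occurrences of each individual letter, the mapper can emit a (letter, 1) pair per character. Here is a rough sketch that reuses the imports above; the class names are only illustrative, and the driver would then need Text.class as the map output key class and output key class:

 public static class LetterCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

  private static final IntWritable ONE = new IntWritable(1);
  private final Text letter = new Text();

  @Override
  public void map(LongWritable key, Text value, Context cont) throws IOException, InterruptedException {
   // Emit (letter, 1) for every alphabetic character in the line
   for (char c : value.toString().toLowerCase().toCharArray()) {
    if (Character.isLetter(c)) {
     letter.set(String.valueOf(c));
     cont.write(letter, ONE);
    }
   }
  }
 }

 public static class LetterCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

  @Override
  public void reduce(Text key, Iterable<IntWritable> values, Context cont) throws IOException, InterruptedException {
   // Add up the 1s emitted for this letter
   int count = 0;
   for (IntWritable value : values) {
    count += value.get();
   }
   cont.write(key, new IntWritable(count));
  }
 }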
answered May 23, 2019 by Firoz
