MapReduce to count alphabets

0 votes
Can someone help me write an alphabet count program? Please share the MapReduce code for it.
May 23 in Big Data Hadoop by Quill
18 views

1 answer to this question.

0 votes

Please find the code below for the alphabet count. The mapper emits (word length, 1) for every word, and the reducer sums these counts, so the output tells you how many words contain a given number of alphabets (letters):

package in.edureka.mapreduce;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class AlphabetWordCount {

    public static class AlphabetWordCountMapper
            extends Mapper<LongWritable, Text, IntWritable, IntWritable> {

        // Reusable constant emitted once per word
        private static final IntWritable ONE = new IntWritable(1);

        @Override
        public void map(LongWritable key, Text value, Context cont)
                throws IOException, InterruptedException {
            String line = value.toString();
            // Emit (word length, 1) for every word on the line
            for (String word : line.split("\\s+")) {
                if (!word.isEmpty()) {
                    cont.write(new IntWritable(word.length()), ONE);
                }
            }
        }
    }

    public static class AlphabetWordCountReducer
            extends Reducer<IntWritable, IntWritable, IntWritable, IntWritable> {

        @Override
        public void reduce(IntWritable key, Iterable<IntWritable> values, Context cont)
                throws IOException, InterruptedException {
            // Sum the counts for each word length
            int count = 0;
            for (IntWritable value : values) {
                count += value.get();
            }
            cont.write(key, new IntWritable(count));
        }
    }

    public static void main(String[] args)
            throws ClassNotFoundException, IOException, InterruptedException {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "AlphabetCount");
        job.setJarByClass(AlphabetWordCount.class);
        job.setMapperClass(AlphabetWordCountMapper.class);
        job.setCombinerClass(AlphabetWordCountReducer.class);
        job.setReducerClass(AlphabetWordCountReducer.class);
        job.setMapOutputKeyClass(IntWritable.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(IntWritable.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
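To see what the job produces without a cluster, here is a minimal plain-Java sketch of the same map/reduce logic applied to a single sample line (the class name `AlphabetCountSketch` and the sample input are only for illustration):

```java
import java.util.Map;
import java.util.TreeMap;

public class AlphabetCountSketch {

    // Same logic as the mapper + reducer above:
    // key = word length, value = number of words of that length
    static Map<Integer, Integer> countByLength(String line) {
        Map<Integer, Integer> counts = new TreeMap<>();
        for (String word : line.split("\\s+")) {
            if (!word.isEmpty()) {
                counts.merge(word.length(), 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        // "hello" has 5 letters, "big" has 3, "data" has 4
        System.out.println(countByLength("hello big data"));
        // prints {3=1, 4=1, 5=1}
    }
}
```

Each key in the real job's output file corresponds to a word length, and the value is how many words of that length occurred across the whole input.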
answered May 23 by Firoz

