Implement and test the WordCount example in Java on the MapReduce computing framework
Here is an example of WordCount implemented in Java on the MapReduce framework:
Mapper implementation:
```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

    // Reusable output key holding the current word
    private final Text word = new Text();
    // Reusable output value: each occurrence counts as 1
    private final LongWritable one = new LongWritable(1);

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Convert the input line to a String
        String line = value.toString();
        // Split the line into words on runs of whitespace
        // (splitting on a single space would produce empty tokens)
        String[] words = line.split("\\s+");
        // Emit <word, 1> for every word on the line
        for (String w : words) {
            if (!w.isEmpty()) {
                word.set(w);
                context.write(word, one);
            }
        }
    }
}
```
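For example, for the input line `hello world hello`, this Mapper emits `(hello, 1)`, `(world, 1)`, and `(hello, 1)`; the framework then sorts and groups these pairs by key before they reach the Reducer.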
Reducer implementation:
```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, LongWritable, Text, LongWritable> {

    // Reusable output value holding the total count for a word
    private final LongWritable result = new LongWritable();

    @Override
    public void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum the partial counts for this word
        long count = 0;
        for (LongWritable val : values) {
            count += val.get();
        }
        // Emit <word, total count>; the input key can be written directly,
        // so no extra Text copy is needed
        result.set(count);
        context.write(key, result);
    }
}
```
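Continuing the example, after the shuffle this Reducer receives `hello -> [1, 1]` and `world -> [1]` and emits `(hello, 2)` and `(world, 1)`. Because this reduce function is associative and commutative, the same class could optionally be registered as a combiner in the driver (`job.setCombinerClass(WordCountReducer.class)`) to pre-aggregate counts on the map side and reduce shuffle traffic; note that this line is not included in the driver as written below.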
Driver implementation:
```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        // Create a configuration object
        Configuration conf = new Configuration();
        // Create the job
        Job job = Job.getInstance(conf, "word count");
        // Set the jar by locating this driver class
        job.setJarByClass(WordCountDriver.class);
        // Set the Mapper and Reducer classes
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        // Set the Mapper's output key/value types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);
        // Set the job's final output key/value types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        // Input and output paths come from the command line
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // Submit the job and wait for completion
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```
This code must run on a Hadoop cluster; a single-node pseudo-distributed setup is sufficient for testing. Create a text file as input, upload it to HDFS, and pass its HDFS path as the first argument to the Driver's main() function, with the output directory as the second. The output is written to that HDFS directory, from which you can download it with Hadoop commands for inspection.
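As a minimal sketch of that procedure, assuming the three classes are compiled and packaged into a jar named `wordcount.jar` (a hypothetical name) and the input is a local file `input.txt`, with the HDFS paths below as placeholders:
```bash
# Upload the input file to HDFS (paths are placeholders)
hdfs dfs -mkdir -p /user/hadoop/wordcount/input
hdfs dfs -put input.txt /user/hadoop/wordcount/input

# Run the job; wordcount.jar is an assumed jar containing the three classes above
hadoop jar wordcount.jar WordCountDriver \
    /user/hadoop/wordcount/input /user/hadoop/wordcount/output

# Inspect the result (one part-r-* file per reducer)
hdfs dfs -cat /user/hadoop/wordcount/output/part-r-00000
```
Note that the output directory must not already exist before the run, or FileOutputFormat will fail the job.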
I hope this example helps you understand how to implement WordCount in Java on the MapReduce framework.