The Hadoop Distributed File System:
Architecture and Design
by Dhruba Borthakur
Table of contents
1 Introduction .......................................................................................................................3
2 Assumptions and Goals .....................................................................................................3
2.1 Hardware Failure ..........................................................................................................3
2.2 Streaming Data Access .................................................................................................3
2.3 Large Data Sets .............................................................................................................3
2.4 Simple Coherency Model .............................................................................................4
2.5 “Moving Computation is Cheaper than Moving Data” ................................................4
2.6 Portability Across Heterogeneous Hardware and Software Platforms .........................4
3 NameNode and DataNodes ...............................................................................................4
4 The File System Namespace .............................................................................................5
5 Data Replication ................................................................................................................6
5.1 Replica Placement: The First Baby Steps .................................................................... 7
5.2 Replica Selection ..........................................................................................................8
5.3 Safemode ......................................................................................................................8
6 The Persistence of File System Metadata ......................................................................... 8
7 The Communication Protocols ......................................................................................... 9
8 Robustness ........................................................................................................................ 9
8.1 Data Disk Failure, Heartbeats and Re-Replication .....................................................10
8.2 Cluster Rebalancing ....................................................................................................10
8.3 Data Integrity ..............................................................................................................10
8.4 Metadata Disk Failure ................................................................................................ 10
8.5 Snapshots ....................................................................................................................11
9 Data Organization ........................................................................................................... 11
9.1 Data Blocks ................................................................................................................ 11
9.2 Staging ........................................................................................................................11
9.3 Replication Pipelining ................................................................................................ 12
10 Accessibility ..................................................................................................................12
10.1 FS Shell .....................................................................................................................12
10.2 DFSAdmin ................................................................................................................13
10.3 Browser Interface ......................................................................................................13
11 Space Reclamation ........................................................................................................ 13
11.1 File Deletes and Undeletes ....................................................................................... 13
11.2 Decrease Replication Factor .....................................................................................14
12 References ..................................................................................................................... 14
Copyright © 2007 The Apache Software Foundation. All rights reserved.
1. Introduction
The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on
commodity hardware. It has many similarities with existing distributed file systems.
However, the differences from other distributed file systems are significant. HDFS is highly
fault-tolerant and is designed to be deployed on low-cost hardware. HDFS provides high
throughput access to application data and is suitable for applications that have large data sets.
HDFS relaxes a few POSIX requirements to enable streaming access to file system data.
HDFS was originally built as infrastructure for the Apache Nutch web search engine project.
HDFS is part of the Apache Hadoop Core project. The project URL is
http://hadoop.apache.org/core/.
2. Assumptions and Goals
2.1. Hardware Failure
Hardware failure is the norm rather than the exception. An HDFS instance may consist of
hundreds or thousands of server machines, each storing part of the file system’s data. The
fact that there are a huge number of components and that each component has a non-trivial
probability of failure means that some component of HDFS is always non-functional.
Therefore, detection of faults and quick, automatic recovery from them is a core architectural
goal of HDFS.
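The argument above is essentially probabilistic: even if each individual machine is quite reliable, the chance that *some* machine is down grows quickly with cluster size. The sketch below illustrates this with a simple independence assumption; the cluster sizes and per-machine failure probability are illustrative numbers, not figures from this document.

```python
# Back-of-the-envelope model: probability that at least one machine in a
# cluster of n is non-functional, assuming machines fail independently.
# The inputs below are illustrative assumptions, not from the HDFS document.

def p_any_failure(n_machines: int, p_single: float) -> float:
    """P(at least one of n machines is failed) = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p_single) ** n_machines

# Even a 0.1% chance that any given machine is down at a given moment
# makes some failure near-certain across thousands of machines.
print(p_any_failure(100, 0.001))   # small cluster: failure is occasional
print(p_any_failure(5000, 0.001))  # large cluster: failure is near-certain
```

This is why the document treats failure detection and automatic recovery as an architectural goal rather than an exceptional code path.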
2.2. Streaming Data Access
Applications that run on HDFS need streaming access to their data sets. They are not general
purpose applications that typically run on general purpose file systems. HDFS is designed
for batch processing rather than for interactive use by users. The emphasis is on high
throughput of data access rather than low latency. POSIX imposes many hard
requirements that are not needed for applications targeted at HDFS, and POSIX
semantics in a few key areas have been traded to increase data throughput rates.
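The access pattern described here can be sketched in code: open a file once, read it front to back in large chunks, and never seek. The sketch below uses only the Python standard library against an in-memory stream; a real client would read through the HDFS API rather than a local file object, and the chunk size is an illustrative assumption.

```python
# Sketch of the streaming access pattern HDFS is optimized for:
# large sequential reads, favoring aggregate throughput over per-read latency.
# Chunk size is an illustrative assumption, not an HDFS parameter.
import io

CHUNK = 4 * 1024 * 1024  # read in large chunks; no random seeks

def stream_records(f: io.BufferedIOBase) -> int:
    """Consume a stream front-to-back in fixed-size chunks; return bytes read."""
    total = 0
    while True:
        chunk = f.read(CHUNK)
        if not chunk:
            break
        total += len(chunk)  # a real batch job would process the chunk here
    return total
```

Contrast this with an interactive workload, which issues many small reads at arbitrary offsets; HDFS deliberately does not optimize for that case.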
2.3. Large Data Sets
Applications that run on HDFS have large data sets. A typical file in HDFS is gigabytes to
terabytes in size. Thus, HDFS is tuned to support large files. It should provide high aggregate
data bandwidth and scale to hundreds of nodes in a single cluster. It should support tens of
millions of files in a single instance.
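A quick calculation shows why large files are tractable: a file is split into fixed-size blocks, so even a terabyte-scale file yields a moderate number of blocks for the system to track. The 64 MB block size below matches a typical HDFS value of this era (data blocks are discussed in section 9.1), and replication factor 3 is the usual default; treat both figures as assumptions here.

```python
# Rough arithmetic behind "tuned to support large files": how many blocks
# (and replicated copies) a single large file decomposes into.
# Block size and replication factor are assumed typical values, not
# figures stated in this section.
BLOCK_SIZE = 64 * 1024 * 1024  # 64 MB, a typical HDFS block size
REPLICATION = 3                # common default replication factor

def block_count(file_bytes: int, block_size: int = BLOCK_SIZE) -> int:
    """Number of blocks needed to hold a file (the last block may be partial)."""
    return -(-file_bytes // block_size)  # ceiling division

one_tb = 1024 ** 4
print(block_count(one_tb))                # blocks in a 1 TB file
print(block_count(one_tb) * REPLICATION)  # block replicas stored cluster-wide
```

Tens of thousands of blocks per terabyte-scale file is a manageable amount of metadata, which is part of why HDFS can scale to tens of millions of files per instance.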