3. Use the Java API to perform the following HDFS operations: ① create a directory (named after your own initials); ② upload a file into that directory; ③ view the file information in the directory; ④ rename the file; ⑤ download the file from HDFS to the local filesystem; ⑥ delete the uploaded file from HDFS.
Posted: 2024-10-27 22:10:37
To perform these operations with the Apache Hadoop HDFS Java API, first add the Hadoop client dependency to your project. Example code for each step follows:
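A minimal Maven dependency sketch (the version number is an assumption; use the one matching your cluster):

```xml
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <!-- assumed version; match your Hadoop installation -->
    <version>3.3.6</version>
</dependency>
```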
1. **Create a directory** (using "Zhang" as an example surname initial):
```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCreateDirectory {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path dirPath = new Path("/user/Zhang/myFolder");
        // Only create the directory if it does not already exist
        if (!fs.exists(dirPath)) {
            fs.mkdirs(dirPath);
        }
        fs.close();
    }
}
```
2. **Upload a file** into the new directory:
```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsUploadFile {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // fs.create returns an FSDataOutputStream, not a java.io.FileOutputStream
        try (FSDataOutputStream out = fs.create(new Path("/user/Zhang/myFolder/myFile.txt"))) {
            out.write("Hello from Zhang".getBytes());
        }
        // To upload an existing local file instead:
        // fs.copyFromLocalFile(new Path("localFile.txt"), new Path("/user/Zhang/myFolder/"));
        fs.close();
    }
}
```
3. **View the directory contents**:
```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsListFiles {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        FileStatus[] fileStatuses = fs.listStatus(new Path("/user/Zhang/myFolder"));
        for (FileStatus status : fileStatuses) {
            // Print the name, size in bytes, and permissions of each entry
            System.out.println(status.getPath().getName() + "\t"
                    + status.getLen() + "\t" + status.getPermission());
        }
        fs.close();
    }
}
```
4. **Rename the file**:
```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsRenameFile {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path oldFilePath = new Path("/user/Zhang/myFolder/myFile.txt");
        Path newFilePath = new Path("/user/Zhang/myFolder/newFileName.txt");
        // rename returns false on failure instead of throwing
        if (!fs.rename(oldFilePath, newFilePath)) {
            System.err.println("Rename failed");
        }
        fs.close();
    }
}
```
5. **Download the file** to the local filesystem:
```java
import java.io.FileOutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsDownloadFile {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path srcPath = new Path("/user/Zhang/myFolder/newFileName.txt");
        // Copy the HDFS stream to a local file in 1 KB chunks
        try (FSDataInputStream in = fs.open(srcPath);
             FileOutputStream out = new FileOutputStream("newFileName.txt")) {
            byte[] buffer = new byte[1024];
            int bytesRead;
            while ((bytesRead = in.read(buffer)) != -1) {
                out.write(buffer, 0, bytesRead);
            }
        }
        // Alternatively: fs.copyToLocalFile(srcPath, new Path("newFileName.txt"));
        fs.close();
    }
}
```
6. **Delete the file**:
```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsDeleteFile {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path filePath = new Path("/user/Zhang/myFolder/newFileName.txt");
        fs.delete(filePath, true); // second argument: delete recursively if the path is a directory
        fs.close();
    }
}
```
Note: when running these examples, make sure the Hadoop environment variables are set, that the `Configuration` can reach your cluster (e.g. via `fs.defaultFS`), and handle the `IOException`s that HDFS calls may throw.
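One way to compile and run a single class against the cluster's jars (a sketch; the class and file names are taken from the examples above, and your paths may differ):

```shell
# 'hadoop classpath' prints the jars shipped with your Hadoop installation
javac -cp "$(hadoop classpath)" HdfsCreateDirectory.java
java -cp ".:$(hadoop classpath)" HdfsCreateDirectory
```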