Implementing Large File Chunked Upload with a Decoupled Front End and Back End (Java Implementation)
Date: 2023-11-26 11:06:00
The typical flow for chunked upload of large files with a decoupled front end and back end is:
Front end:
1. Split the large file into multiple small chunks, each of a fixed size or sized dynamically according to network conditions.
2. Wrap each chunk in a FormData object and send it to the back end via an AJAX request.
3. Along with each request, send the current chunk's index and the total number of chunks so the back end can merge the file.
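Conceptually, step 1 is just computing fixed-size byte ranges over the file; note that the last chunk is usually smaller than the rest, which matters later when merging. A minimal standalone Java sketch (the `ChunkPlan` class and `ranges` helper are illustrative, not part of the implementation below):

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkPlan {
    // Compute [start, end) byte ranges covering a file of the given size
    static List<long[]> ranges(long fileSize, long chunkSize) {
        List<long[]> out = new ArrayList<>();
        for (long start = 0; start < fileSize; start += chunkSize) {
            out.add(new long[]{start, Math.min(start + chunkSize, fileSize)});
        }
        return out;
    }

    public static void main(String[] args) {
        // A 2.5 MB file with 1 MB chunks yields 3 chunks; the last one is only 0.5 MB
        List<long[]> r = ranges(2_621_440, 1_048_576);
        System.out.println(r.size());                  // 3
        System.out.println(r.get(2)[1] - r.get(2)[0]); // 524288
    }
}
```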
Back end:
1. Receive each chunk sent by the front end and store it.
2. After each chunk arrives, check whether all chunks have been received; if so, merge them into the final file.
3. When merging, Java NIO's MappedByteBuffer can be used to stitch the chunks together efficiently.
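The completeness check in step 2 has to tolerate chunks arriving in any order, so it is usually a per-file set of received indices. A minimal sketch (the `ChunkTracker` class name is illustrative):

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ChunkTracker {
    // One concurrent set of received chunk indices per file
    private final Map<String, Set<Integer>> received = new ConcurrentHashMap<>();

    // Record one chunk; returns true once every chunk has been seen
    boolean record(String filename, int chunkIndex, int totalChunks) {
        Set<Integer> set = received.computeIfAbsent(filename,
                k -> ConcurrentHashMap.newKeySet());
        set.add(chunkIndex);
        return set.size() == totalChunks;
    }

    public static void main(String[] args) {
        ChunkTracker t = new ChunkTracker();
        System.out.println(t.record("a.bin", 1, 3)); // false
        System.out.println(t.record("a.bin", 0, 3)); // false
        System.out.println(t.record("a.bin", 2, 3)); // true
    }
}
```

Because the set ignores duplicates, retried uploads of the same chunk do not trip the completeness check early.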
Below is a simple code implementation:
Front end:
```javascript
// Split the file into fixed-size chunks
function sliceFile(file, chunkSize) {
  const chunks = []
  let start = 0
  while (start < file.size) {
    // slice() clamps the end offset, so the last chunk may be smaller
    chunks.push(file.slice(start, start + chunkSize))
    start += chunkSize
  }
  return chunks
}

// Upload a single chunk
function uploadChunk(url, formData, onProgress) {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest()
    xhr.open('POST', url)
    xhr.upload.onprogress = onProgress
    xhr.onload = () => resolve(xhr.responseText)
    xhr.onerror = () => reject(xhr.statusText)
    xhr.send(formData)
  })
}

const file = document.getElementById('file').files[0]
const chunkSize = 1024 * 1024 // 1MB
const chunks = sliceFile(file, chunkSize)
const totalChunks = chunks.length
let uploadedBytes = 0

for (let i = 0; i < totalChunks; i++) {
  const formData = new FormData()
  formData.append('chunk', chunks[i])
  formData.append('filename', file.name)
  formData.append('chunkIndex', i)
  formData.append('totalChunks', totalChunks)
  uploadChunk('/upload', formData, e => {
    // Approximate overall progress in bytes (chunks upload concurrently)
    const progress = (uploadedBytes + e.loaded) / file.size * 100
    console.log(`Upload progress: ${progress.toFixed(2)}%`)
  }).then(() => {
    uploadedBytes += chunks[i].size
    if (uploadedBytes >= file.size) {
      console.log('Upload complete')
    }
  })
}
```
Back end:
```java
import java.io.*;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.multipart.MultipartFile;

@RestController
public class UploadController {

    // Tracks which chunk indices have been received for each file
    private final Map<String, Set<Integer>> receivedChunks = new ConcurrentHashMap<>();

    @PostMapping("/upload")
    public ResponseEntity<String> uploadChunk(@RequestParam("chunk") MultipartFile chunk,
                                              @RequestParam("filename") String filename,
                                              @RequestParam("chunkIndex") int chunkIndex,
                                              @RequestParam("totalChunks") int totalChunks) throws IOException {
        // Store each chunk in its own part file; chunks may arrive in any order
        chunk.transferTo(new File(filename + ".part" + chunkIndex));

        Set<Integer> received = receivedChunks.computeIfAbsent(filename,
                k -> ConcurrentHashMap.newKeySet());
        received.add(chunkIndex);

        // Merge once every chunk has arrived, not merely when the last index arrives
        if (received.size() == totalChunks) {
            mergeChunks(filename, totalChunks);
            receivedChunks.remove(filename);
        }
        return ResponseEntity.ok("Upload success");
    }

    private void mergeChunks(String filename, int totalChunks) throws IOException {
        try (FileChannel out = new FileOutputStream(filename).getChannel()) {
            for (int i = 0; i < totalChunks; i++) {
                File partFile = new File(filename + ".part" + i);
                try (FileChannel in = new FileInputStream(partFile).getChannel()) {
                    // Map each part read-only and append it to the target file
                    MappedByteBuffer buffer = in.map(FileChannel.MapMode.READ_ONLY, 0, partFile.length());
                    out.write(buffer);
                }
                partFile.delete();
            }
        }
    }
}
```
Here a ConcurrentHashMap is used to track which chunk indices have arrived for each file, so the merge runs only once every chunk has been received; since chunks may reach the server in any order, checking the chunk index alone is not enough. Storing each chunk as its own part file also avoids offset errors when the last chunk is smaller than the others. During the merge, each part file's MappedByteBuffer is written to the target file in index order. Note the try-with-resources statements, which ensure the channels are closed correctly.
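To see the merge technique end to end without a running server, here is a self-contained sketch (the `MergeDemo` class and `demo.*` file names are illustrative) that writes three uneven part files and stitches them together with MappedByteBuffer:

```java
import java.io.*;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Paths;

public class MergeDemo {
    // Merge part files named base + ".part" + i into one target file
    static void merge(String base, int parts, String target) throws IOException {
        try (FileChannel out = new FileOutputStream(target).getChannel()) {
            for (int i = 0; i < parts; i++) {
                File part = new File(base + ".part" + i);
                try (FileChannel in = new FileInputStream(part).getChannel()) {
                    // Map the whole part read-only and append it to the output
                    MappedByteBuffer buf = in.map(FileChannel.MapMode.READ_ONLY, 0, part.length());
                    out.write(buf);
                }
                part.delete();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Simulate three received chunks of uneven size, then merge them in order
        Files.write(Paths.get("demo.part0"), "hello ".getBytes());
        Files.write(Paths.get("demo.part1"), "big ".getBytes());
        Files.write(Paths.get("demo.part2"), "file".getBytes());
        merge("demo", 3, "demo.out");
        System.out.println(new String(Files.readAllBytes(Paths.get("demo.out")))); // hello big file
    }
}
```

Mapping each part with its own length is what makes a short final chunk safe; the original single-temp-file approach seeks by the current chunk's size, which breaks when that chunk is the smaller last one.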