Changing torch.cuda.set_device(device) to single-machine multi-GPU training
To use multiple GPUs on a single machine, PyTorch provides two modules: `DataParallel` and `DistributedDataParallel`. `DataParallel` splits each input batch across the GPUs, runs the computation on each of them in parallel, and gathers the results back on the primary device. `DistributedDataParallel` is more involved: it runs one process per GPU and requires you to set up the process group and communication yourself, but it delivers better performance.
With `DataParallel`, the code can be modified like this:
```python
import torch
import torch.nn as nn
import torch.optim as optim

device_ids = [0, 1]                              # GPU indices to use
device = torch.device(f'cuda:{device_ids[0]}')   # primary device that holds the parameters

model = model.to(device)                         # move the model to the primary GPU first
model = nn.DataParallel(model, device_ids=device_ids)  # replicate across the GPUs at each forward pass

# Loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001)

# Build the DataLoader once, outside the epoch loop
train_loader = torch.utils.data.DataLoader(
    train_dataset, batch_size=batch_size, shuffle=True,
    num_workers=4, pin_memory=True)

# Training loop
for epoch in range(num_epochs):
    for inputs, labels in train_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
```
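No special launcher is needed for `DataParallel`: the script runs as a single ordinary Python process, each batch is split along dimension 0 across `device_ids`, and the outputs are gathered back on the primary GPU. `batch_size` here is therefore the total batch size, not the per-GPU size.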
With `DistributedDataParallel`, the code can be modified like this:
```python
import torch
import torch.nn as nn
import torch.optim as optim
import torch.distributed as dist
import torch.utils.data.distributed as distributed

# Initialize the process group (one process per GPU, started by a launcher)
dist.init_process_group(backend='nccl', init_method='env://')

# Distributed bookkeeping: total number of processes and this process's rank
world_size = dist.get_world_size()
rank = dist.get_rank()
torch.cuda.set_device(rank)               # bind this process to its own GPU

# Build the model on this process's GPU, then wrap it with DistributedDataParallel
model = MyModel().to(rank)
model = nn.parallel.DistributedDataParallel(model, device_ids=[rank])

# Loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001)

# DistributedSampler gives each process a disjoint shard of the dataset,
# so shuffling is handled by the sampler rather than the DataLoader
train_sampler = distributed.DistributedSampler(train_dataset)
train_loader = torch.utils.data.DataLoader(
    train_dataset, batch_size=batch_size, shuffle=False,
    num_workers=4, pin_memory=True, sampler=train_sampler)

# Training loop
for epoch in range(num_epochs):
    train_sampler.set_epoch(epoch)        # reshuffle the shards each epoch
    for inputs, labels in train_loader:
        inputs, labels = inputs.to(rank), labels.to(rank)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
```
Note that `DistributedDataParallel` requires initializing the process group first, setting the distributed parameters (world size, rank, device), and using a `DistributedSampler` so that each process trains on its own shard of the data.
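The per-GPU processes also have to be started by a launcher. The usual way is `torchrun --nproc_per_node=2 train.py` (or the older `python -m torch.distributed.launch`), which sets the `env://` rendezvous variables for each process. As a minimal self-contained alternative, the sketch below spawns the workers from inside the script; the `main_worker` name and port `29500` are illustrative assumptions, not part of the code above.
```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def main_worker(rank, world_size):
    # Rendezvous variables expected by init_method='env://'
    os.environ.setdefault('MASTER_ADDR', 'localhost')
    os.environ.setdefault('MASTER_PORT', '29500')
    os.environ['RANK'] = str(rank)
    os.environ['WORLD_SIZE'] = str(world_size)
    dist.init_process_group(backend='nccl', init_method='env://')
    torch.cuda.set_device(rank)
    # ... build the model, sampler and DataLoader, then run the training loop shown above ...
    dist.destroy_process_group()

if __name__ == '__main__':
    world_size = torch.cuda.device_count()   # number of GPUs on this machine
    mp.spawn(main_worker, args=(world_size,), nprocs=world_size)
```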