How do I set up PyTorch to use two GPUs at the same time?
In PyTorch, you can use the `torch.nn.DataParallel` module to run a model on multiple GPUs at once. Below is example code that uses two GPUs:
```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

# Define the model
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(64, 128, kernel_size=3, padding=1)
        # The linear layer assumes 32x32 inputs: two 2x2 max-pools reduce them to 8x8
        self.fc1 = nn.Linear(128 * 8 * 8, 512)
        self.fc2 = nn.Linear(512, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = nn.functional.relu(x)
        x = nn.functional.max_pool2d(x, 2)
        x = self.conv2(x)
        x = nn.functional.relu(x)
        x = nn.functional.max_pool2d(x, 2)
        x = x.view(-1, 128 * 8 * 8)  # flatten for the fully connected layers
        x = self.fc1(x)
        x = nn.functional.relu(x)
        x = self.fc2(x)
        return x

model = MyModel()

# Define the dataset and data loader
train_dataset = ...  # replace with your dataset
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)

# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Check that CUDA is available and at least two GPUs are present
if torch.cuda.is_available() and torch.cuda.device_count() >= 2:
    # A device string names a single device; "cuda:0" is the primary GPU
    # onto which DataParallel gathers outputs (there is no "cuda:0,1")
    device = torch.device("cuda:0")
    model.to(device)
    model = nn.DataParallel(model, device_ids=[0, 1])  # replicate across GPUs 0 and 1
else:
    device = torch.device("cpu")

# Train the model
for epoch in range(10):
    for i, (inputs, labels) in enumerate(train_loader):
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
```
In the code above, we first check that CUDA is available and that at least two GPUs are present, then use `torch.device` to move the model to the primary GPU (`cuda:0`; a device string always names a single device). `nn.DataParallel` then sets up parallel computation, with the `device_ids` argument listing the GPU indices to use: during the forward pass it splits each batch across those GPUs and gathers the outputs back on the primary device. Finally, at each training step the inputs and labels are moved to the primary device.
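One practical caveat: `nn.DataParallel` wraps the original model, which then lives under the wrapper's `module` attribute. A minimal sketch of saving a checkpoint that loads cleanly later without the wrapper (the file name `model.pt` is just a placeholder):

```python
# The wrapped model exposes the original network as model.module;
# saving that state_dict keeps keys free of the "module." prefix,
# so the checkpoint also loads on a single GPU or on CPU.
state_dict = (model.module.state_dict()
              if isinstance(model, nn.DataParallel)
              else model.state_dict())
torch.save(state_dict, "model.pt")  # "model.pt" is a placeholder path
```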
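For completeness: the PyTorch documentation recommends `torch.nn.parallel.DistributedDataParallel` (DDP) over `DataParallel` for multi-GPU training, since it runs one process per GPU and avoids the Python GIL bottleneck. Below is a minimal sketch, assuming two GPUs on one machine, the `MyModel` class from above, and arbitrarily chosen `MASTER_ADDR`/`MASTER_PORT` rendezvous values:

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank, world_size):
    # Each process drives one GPU; ranks 0 and 1 map to cuda:0 and cuda:1
    os.environ["MASTER_ADDR"] = "localhost"  # assumed single-machine setup
    os.environ["MASTER_PORT"] = "29500"      # arbitrary free port
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = MyModel().to(rank)             # MyModel as defined earlier
    model = DDP(model, device_ids=[rank])
    # ... build a DataLoader with a DistributedSampler and run the same
    # training loop as in the DataParallel example above ...
    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 2  # two GPUs
    mp.spawn(train, args=(world_size,), nprocs=world_size)
```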