Using PyTorch, write code that batch-tunes the hyperparameters of an MLP model on a GPU
Sure, this can be done. Below is an example of PyTorch code that batch-tunes the hyperparameters of an MLP model on the GPU:
```python
import torch

# Use the GPU if one is available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Hyperparameter grid to search over
params = {'learning_rate': [0.001, 0.01, 0.1], 'hidden_size': [16, 32, 64]}

# Define a simple two-layer MLP
class MLP(torch.nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(MLP, self).__init__()
        self.fc1 = torch.nn.Linear(input_size, hidden_size)
        self.fc2 = torch.nn.Linear(hidden_size, output_size)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return x

# Generate toy data and move it to the same device as the model
input_size = 10
output_size = 2
train_data = torch.randn(1000, input_size).to(device)
train_labels = torch.randint(output_size, (1000,)).to(device)

# Loop over every hyperparameter combination, training one model per combination on the GPU
for lr in params['learning_rate']:
    for hidden_size in params['hidden_size']:
        mlp = MLP(input_size, hidden_size, output_size).to(device)
        criterion = torch.nn.CrossEntropyLoss()
        optimizer = torch.optim.SGD(mlp.parameters(), lr=lr)
        for epoch in range(50):
            optimizer.zero_grad()
            outputs = mlp(train_data)
            loss = criterion(outputs, train_labels)
            loss.backward()
            optimizer.step()
        # Report the final training loss for this combination
        print('Learning rate:', lr, 'Hidden size:', hidden_size, 'Loss:', loss.item())
```
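If you also want the search to report which combination worked best, one common pattern is to flatten the grid with `itertools.product` and keep the configuration with the lowest final loss. The sketch below is a minimal illustration that reuses `params`, `MLP`, the toy data, and `device` from the code above; the `best_loss` and `best_config` names are introduced just for this example:

```python
import itertools

# Minimal sketch: track the best (learning_rate, hidden_size) combination.
# Assumes params, MLP, train_data, train_labels, and device from the code above.
best_loss, best_config = float('inf'), None
for lr, hidden_size in itertools.product(params['learning_rate'], params['hidden_size']):
    mlp = MLP(input_size, hidden_size, output_size).to(device)
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(mlp.parameters(), lr=lr)
    for epoch in range(50):
        optimizer.zero_grad()
        loss = criterion(mlp(train_data), train_labels)
        loss.backward()
        optimizer.step()
    # Keep whichever combination ends with the lowest training loss
    if loss.item() < best_loss:
        best_loss, best_config = loss.item(), (lr, hidden_size)

print('Best (learning_rate, hidden_size):', best_config, 'final loss:', best_loss)
```

In real tuning you would compare configurations on a held-out validation set rather than the training loss, since the training loss alone rewards whichever model overfits fastest.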
Hopefully these examples help you batch-tune the hyperparameters of your MLP model.