Implementing a Single-Layer Perceptron in PyTorch
Posted: 2023-12-19 07:05:45 · Views: 165
Unfortunately, it has long been proven that a single-layer perceptron cannot fit the XOR function: its decision boundary is linear, and XOR is not linearly separable. For this reason single-layer perceptrons are rarely used on their own; the standard remedy is to add a hidden layer, so this answer presents a multilayer perceptron (MLP) in PyTorch instead.
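The non-separability claim is easy to check empirically. The sketch below (an illustrative addition, not part of the original answer; the seed and hyperparameters are assumptions) trains a single `nn.Linear` layer on the four XOR points. Because its argmax decision boundary is a straight line, it can never classify more than three of the four points correctly:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # for reproducibility (assumed, not in the original)

# The four XOR input/label pairs
x = torch.tensor([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=torch.float32)
y = torch.tensor([0, 1, 1, 0], dtype=torch.long)

# A single-layer perceptron: one linear map, no hidden layer
model = nn.Linear(2, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(5000):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    predicted = model(x).argmax(dim=1)
    accuracy = (predicted == y).float().mean().item()

# A linear decision boundary gets at most 3 of the 4 XOR points right
print('Single-layer accuracy on XOR:', accuracy)
```

No matter how long this model trains, the accuracy stays at or below 0.75, which is exactly why a hidden layer is needed.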
Below is a simple PyTorch implementation of a multilayer perceptron, trained on the XOR data:
```python
import torch
import torch.nn as nn

# Define the multilayer perceptron model
class MLP(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(MLP, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

# Model hyperparameters
input_size = 2
hidden_size = 5
output_size = 2

# Instantiate the model
model = MLP(input_size, hidden_size, output_size)

# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Training data: the four XOR input/label pairs
x_train = torch.tensor([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=torch.float32)
y_train = torch.tensor([0, 1, 1, 0], dtype=torch.long)

# Train the model
num_epochs = 10000
for epoch in range(num_epochs):
    # Forward pass
    outputs = model(x_train)
    loss = criterion(outputs, y_train)

    # Backward pass and optimization
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Print the loss every 1000 iterations
    if (epoch + 1) % 1000 == 0:
        print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, num_epochs, loss.item()))

# Test the model
with torch.no_grad():
    outputs = model(x_train)
    _, predicted = torch.max(outputs.data, 1)
    print('Predicted:', predicted)
```
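Beyond printing the raw predictions, it can be useful to summarize the result as an accuracy score by comparing `predicted` against `y_train`. A minimal self-contained sketch follows; the tensor contents below are stand-in values for illustration, not the actual output of the model above:

```python
import torch

# Hypothetical predictions for the four XOR inputs (stand-ins for the
# `predicted` tensor produced by the trained model) and the true labels
predicted = torch.tensor([0, 1, 1, 0])
y_train = torch.tensor([0, 1, 1, 0])

# Fraction of inputs classified correctly
accuracy = (predicted == y_train).float().mean().item()
print('Accuracy: {:.2f}'.format(accuracy))  # → 1.00 for these stand-in values
```

An accuracy of 1.0 on all four input patterns indicates the network has learned XOR.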