Write an MLP example in Python
Here is a simple MLP example using Python's NumPy and PyTorch libraries:
```python
import numpy as np
import torch

# Dataset: the four XOR input/output pairs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
Y = np.array([[0], [1], [1], [0]], dtype=np.float32)

# Hyperparameters
input_size = 2
hidden_size = 4
output_size = 1
learning_rate = 0.01
num_epochs = 10000

# Define the model: a 2-4-1 fully connected network with ReLU
class MLP(torch.nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(MLP, self).__init__()
        self.fc1 = torch.nn.Linear(input_size, hidden_size)
        self.fc2 = torch.nn.Linear(hidden_size, output_size)
        self.relu = torch.nn.ReLU()

    def forward(self, X):
        hidden = self.fc1(X)
        hidden = self.relu(hidden)
        output = self.fc2(hidden)
        return output

model = MLP(input_size, hidden_size, output_size)

# Loss function and optimizer
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

# Training loop (tensors are created once, outside the loop)
inputs = torch.from_numpy(X)
targets = torch.from_numpy(Y)
for epoch in range(num_epochs):
    outputs = model(inputs)
    loss = criterion(outputs, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if (epoch + 1) % 1000 == 0:
        print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, num_epochs, loss.item()))

# Evaluate the model
model.eval()
with torch.no_grad():
    outputs = model(inputs)
    predicted = (outputs > 0.5).float()
    print('Predicted Labels: ', predicted.numpy())
```
This example uses an MLP with 2 inputs, 4 hidden units, and 1 output to solve the XOR problem. Training uses a mean squared error loss and a stochastic gradient descent optimizer. After training, the model is evaluated on the same four XOR inputs (there is no separate test set for this toy problem), and the raw outputs are thresholded at 0.5 to produce the predicted labels.
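To make the training mechanics explicit, the same 2-4-1 network can be sketched in pure NumPy with manual backpropagation. This is a minimal illustration, not part of the original answer; the tanh hidden activation, sigmoid output, learning rate, and epoch count here are illustrative choices rather than anything prescribed above:

```python
import numpy as np

# XOR dataset, same as in the PyTorch example
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
Y = np.array([[0], [1], [1], [0]], dtype=np.float32)

rng = np.random.default_rng(0)
# Weights for a 2-4-1 network: tanh hidden layer, sigmoid output
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros((1, 1))
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for epoch in range(5000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)        # hidden activations, shape (4, 4)
    out = sigmoid(h @ W2 + b2)      # predictions, shape (4, 1)
    losses.append(np.mean((out - Y) ** 2))
    # Backward pass: gradient of the mean squared error
    d_out = (out - Y) * out * (1 - out)    # through the sigmoid
    d_h = (d_out @ W2.T) * (1 - h ** 2)    # through the tanh
    # Gradient descent updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0, keepdims=True)

pred = (out > 0.5).astype(np.float32)
print('initial loss: {:.4f}, final loss: {:.4f}'.format(losses[0], losses[-1]))
print('predicted labels:', pred.ravel())  # usually recovers [0, 1, 1, 0]
```

The update rules are exactly what `loss.backward()` and `optimizer.step()` do automatically in the PyTorch version; writing them out shows where each gradient term comes from.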