Implementing a Multilayer Perceptron in Python Without Deep-Learning Libraries
The multilayer perceptron (MLP) is a widely used artificial neural network model for classification and regression problems. Below is an example implementation of a multilayer perceptron in Python that relies only on NumPy, with no deep-learning frameworks:
```python
import numpy as np

class MLP:
    def __init__(self, num_inputs, num_hidden, num_outputs):
        self.num_inputs = num_inputs
        self.num_hidden = num_hidden
        self.num_outputs = num_outputs
        # Initialize weights and biases
        self.weights1 = np.random.randn(self.num_inputs, self.num_hidden)
        self.bias1 = np.zeros((1, self.num_hidden))
        self.weights2 = np.random.randn(self.num_hidden, self.num_outputs)
        self.bias2 = np.zeros((1, self.num_outputs))

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def forward(self, X):
        # Input layer to hidden layer
        self.hidden_output = np.dot(X, self.weights1) + self.bias1
        self.hidden_activation = self.sigmoid(self.hidden_output)
        # Hidden layer to output layer
        self.output = np.dot(self.hidden_activation, self.weights2) + self.bias2
        return self.sigmoid(self.output)

    def backward(self, X, y, learning_rate):
        # Backpropagation; the error is defined as (target - prediction),
        # so the parameter updates below use += (gradient descent on squared error)
        output_error = y - self.forward(X)
        output_delta = output_error * (self.sigmoid(self.output) * (1 - self.sigmoid(self.output)))
        hidden_error = np.dot(output_delta, self.weights2.T)
        hidden_delta = hidden_error * (self.sigmoid(self.hidden_output) * (1 - self.sigmoid(self.hidden_output)))
        # Update weights and biases
        self.weights2 += learning_rate * np.dot(self.hidden_activation.T, output_delta)
        self.bias2 += learning_rate * np.sum(output_delta, axis=0, keepdims=True)
        self.weights1 += learning_rate * np.dot(X.T, hidden_delta)
        self.bias1 += learning_rate * np.sum(hidden_delta, axis=0, keepdims=True)

    def train(self, X, y, num_epochs, learning_rate):
        for epoch in range(num_epochs):
            self.forward(X)
            self.backward(X, y, learning_rate)

    def predict(self, X):
        # Threshold the sigmoid output at 0.5
        return np.round(self.forward(X))

# Example usage: learn the XOR function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

mlp = MLP(2, 4, 1)
mlp.train(X, y, num_epochs=10000, learning_rate=0.1)
print(mlp.predict(X))
```
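Because the backward pass is written by hand, a quick sanity check is to compare its analytic gradient against a numerical estimate. The sketch below is only an illustrative check, not part of the original class: the helper names `loss` and `numerical_grad_check` are hypothetical, and the loss is the sum-of-squared-errors objective implied by the update rule above.
```python
import numpy as np

def loss(mlp, X, y):
    # 0.5 * sum of squared errors, the objective implied by backward()'s update rule
    out = mlp.forward(X)
    return 0.5 * np.sum((y - out) ** 2)

def numerical_grad_check(mlp, X, y, eps=1e-5):
    # Analytic gradient of the loss w.r.t. one entry of weights2,
    # reusing the same delta formula as backward()
    out = mlp.forward(X)
    output_delta = (y - out) * out * (1 - out)
    analytic = -np.dot(mlp.hidden_activation.T, output_delta)  # minus sign: backward() uses (y - out)

    # Numerical gradient via central differences on entry (0, 0)
    i, j = 0, 0
    orig = mlp.weights2[i, j]
    mlp.weights2[i, j] = orig + eps
    plus = loss(mlp, X, y)
    mlp.weights2[i, j] = orig - eps
    minus = loss(mlp, X, y)
    mlp.weights2[i, j] = orig  # restore the original weight
    numeric = (plus - minus) / (2 * eps)
    return analytic[i, j], numeric

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])
mlp = MLP(2, 4, 1)
print(numerical_grad_check(mlp, X, y))  # the two values should agree closely
```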
The code above uses NumPy to implement a simple multilayer perceptron, including the forward pass, backpropagation, and the training loop. You can adjust the number of layers, the number of neurons per layer, the number of training epochs, the learning rate, and other parameters as needed (one way to monitor training while tuning is sketched below). This is only a simple example; real applications usually require a more complex network architecture and more careful tuning.
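For example, a simple way to judge whether `num_epochs` and `learning_rate` are reasonable is to log the mean squared error during training. The helper below, `train_with_logging`, is a hypothetical variant of `MLP.train` added here purely for illustration.
```python
import numpy as np

def train_with_logging(mlp, X, y, num_epochs, learning_rate, log_every=1000):
    """Same loop as MLP.train, but periodically prints the mean squared error."""
    for epoch in range(num_epochs):
        out = mlp.forward(X)                 # prediction before this epoch's update
        mlp.backward(X, y, learning_rate)    # one gradient step
        if (epoch + 1) % log_every == 0:
            mse = np.mean((y - out) ** 2)
            print(f"epoch {epoch + 1}: MSE = {mse:.6f}")

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])
mlp = MLP(2, 4, 1)
train_with_logging(mlp, X, y, num_epochs=10000, learning_rate=0.1)
print(mlp.predict(X))
```
If the printed MSE stops decreasing well before training ends, fewer epochs suffice; if it is still falling at the end, more epochs or a larger learning rate may help.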