Approximating a Continuous Function with a Feedforward Neural Network in Python
Below is sample code that implements a feedforward neural network in Python to approximate a continuous function:
```python
import numpy as np
import matplotlib.pyplot as plt

# Define the input data: 1000 evenly spaced points and their sine values
x = np.linspace(-10, 10, 1000)
y = np.sin(x)

# A two-layer feedforward network: 1 input -> 10 tanh units -> 1 linear output
class NeuralNetwork:
    def __init__(self):
        # Randomly initialize weights and biases
        self.weights1 = np.random.randn(1, 10)
        self.bias1 = np.random.randn(1, 10)
        self.weights2 = np.random.randn(10, 1)
        self.bias2 = np.random.randn(1, 1)

    def forward(self, x):
        # Forward pass: affine -> tanh -> affine (linear output for regression)
        self.layer1 = np.dot(x, self.weights1) + self.bias1
        self.activation1 = np.tanh(self.layer1)
        self.layer2 = np.dot(self.activation1, self.weights2) + self.bias2
        self.activation2 = self.layer2
        return self.activation2

    def backward(self, x, y, output, learning_rate):
        # Backward pass: gradients of the mean-squared-error loss.
        # Dividing by the batch size keeps the update magnitude independent
        # of the number of samples; without it the weights blow up.
        n = x.shape[0]
        d_layer2 = 2 * (output - y) / n  # dL/d(output) for L = mean((output - y)**2)
        d_weights2 = np.dot(self.activation1.T, d_layer2)
        d_bias2 = np.sum(d_layer2, axis=0, keepdims=True)
        d_activation1 = np.dot(d_layer2, self.weights2.T)
        d_layer1 = d_activation1 * (1 - np.power(self.activation1, 2))  # tanh'(z) = 1 - tanh(z)**2
        d_weights1 = np.dot(x.T, d_layer1)
        d_bias1 = np.sum(d_layer1, axis=0, keepdims=True)
        # Gradient-descent update of weights and biases
        self.weights1 -= learning_rate * d_weights1
        self.bias1 -= learning_rate * d_bias1
        self.weights2 -= learning_rate * d_weights2
        self.bias2 -= learning_rate * d_bias2

    def train(self, x, y, learning_rate=0.1, epochs=1000):
        # Full-batch gradient descent
        for i in range(epochs):
            output = self.forward(x)
            self.backward(x, y, output, learning_rate)
            if i % 100 == 0:
                loss = np.mean(np.square(output - y))
                print("Epoch:", i, "Loss:", loss)

# Create a network instance and train it (inputs reshaped into column vectors)
nn = NeuralNetwork()
nn.train(x.reshape(-1, 1), y.reshape(-1, 1))

# Plot the original data against the network's predictions
plt.plot(x, y, label="Ground Truth")
plt.plot(x, nn.forward(x.reshape(-1, 1)).flatten(), label="Neural Network")
plt.legend()
plt.show()
```
The code first defines the input data `x` and `y`, where `y` is the sine of `x`. It then defines a `NeuralNetwork` class with forward-pass, backward-pass, and training methods. The forward pass maps the input through a tanh hidden layer to a linear output; the backward pass computes the gradients of the mean-squared-error loss by hand (note the division by the batch size, without which the gradient-descent updates diverge) and applies them with the learning rate. The `train` method loops over epochs, printing the loss every 100 iterations, and finally the original data and the network's predictions are plotted for comparison. With only 10 hidden units and 1000 epochs the fit is fairly coarse; increasing either improves the approximation.
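When backpropagation is written out by hand like this, it is easy for a gradient term to be subtly wrong. A standard sanity check is to compare the analytic gradient against a centered finite-difference estimate of the loss. The helper below is a minimal sketch (not part of the original example) that assumes the `NeuralNetwork` class defined above and checks a single entry of `weights2`:

```python
def grad_check(nn, x, y, eps=1e-5, i=0, j=0):
    # Finite-difference check for one entry of nn.weights2.
    def loss():
        return np.mean(np.square(nn.forward(x) - y))

    # Analytic gradient, computed the same way as in backward()
    output = nn.forward(x)
    d_layer2 = 2 * (output - y) / x.shape[0]
    analytic = np.dot(nn.activation1.T, d_layer2)[i, j]

    # Centered finite difference: perturb the entry by +/- eps
    old = nn.weights2[i, j]
    nn.weights2[i, j] = old + eps
    loss_plus = loss()
    nn.weights2[i, j] = old - eps
    loss_minus = loss()
    nn.weights2[i, j] = old  # restore the original value
    numeric = (loss_plus - loss_minus) / (2 * eps)

    print("analytic:", analytic, "numeric:", numeric)

grad_check(nn, x.reshape(-1, 1), y.reshape(-1, 1))
```

If the two printed numbers agree to several decimal places, the hand-derived gradient for that parameter is almost certainly correct.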
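Since the class is not tied to the sine function, the same code can be pointed at any other continuous target. The snippet below is a rough sketch with an illustrative target, f(x) = x * cos(x); the inputs and targets are rescaled to roughly unit range first, which keeps the tanh units out of their saturated region and tends to speed up training for networks initialized this way:

```python
# Approximate a different continuous function with the same class.
x2 = np.linspace(-10, 10, 1000)
y2 = x2 * np.cos(x2)

# Rescale inputs to [-1, 1] and targets to roughly [-1, 1]
y2_scale = np.max(np.abs(y2))
x2_in = (x2 / 10.0).reshape(-1, 1)
y2_in = (y2 / y2_scale).reshape(-1, 1)

nn2 = NeuralNetwork()
nn2.train(x2_in, y2_in, epochs=5000)

# Undo the target scaling before plotting the predictions
pred = nn2.forward(x2_in).flatten() * y2_scale
plt.plot(x2, y2, label="x * cos(x)")
plt.plot(x2, pred, label="Neural Network")
plt.legend()
plt.show()
```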