Many-to-one data prediction with a Python neural network, without TensorFlow
Posted: 2023-12-03 18:04:53
Below is an example of many-to-one data prediction with a Python neural network that does not use TensorFlow:
```python
import numpy as np

# Build the data: each row of four consecutive values predicts the next value
data = np.array([[0, 1, 2, 3], [1, 2, 3, 4], [2, 3, 4, 5],
                 [3, 4, 5, 6], [4, 5, 6, 7]])
target = np.array([4, 5, 6, 7, 8])

# Network dimensions and learning rate
input_size = 4
hidden_size = 10
output_size = 1
lr = 0.1
weights_input_hidden = np.random.uniform(size=(input_size, hidden_size))
weights_hidden_output = np.random.uniform(size=(hidden_size, output_size))

# Train the network
for i in range(10000):
    # Forward pass
    hidden_layer = np.dot(data, weights_input_hidden)
    hidden_layer_activation = 1 / (1 + np.exp(-hidden_layer))  # sigmoid
    output_layer = np.dot(hidden_layer_activation, weights_hidden_output)

    # Compute the mean-squared-error loss
    error = output_layer - target.reshape(-1, 1)
    loss = np.mean(np.square(error))

    # Backward pass
    output_layer_gradient = 2 * error / len(data)
    hidden_layer_activation_gradient = (
        np.dot(output_layer_gradient, weights_hidden_output.T)
        * hidden_layer_activation * (1 - hidden_layer_activation)
    )
    weights_hidden_output_gradient = np.dot(hidden_layer_activation.T, output_layer_gradient)
    weights_input_hidden_gradient = np.dot(data.T, hidden_layer_activation_gradient)

    # Update the weights
    weights_hidden_output -= lr * weights_hidden_output_gradient
    weights_input_hidden -= lr * weights_input_hidden_gradient

    # Report the loss periodically
    if i % 1000 == 0:
        print("Loss:", loss)

# Predict on the training inputs with the trained weights
hidden_layer = np.dot(data, weights_input_hidden)
hidden_layer_activation = 1 / (1 + np.exp(-hidden_layer))
output_layer = np.dot(hidden_layer_activation, weights_hidden_output)
print("Predictions:", output_layer.flatten())
```
This code uses NumPy to build a multilayer perceptron and trains it with the backpropagation algorithm. After training, the trained model is used to make predictions. In this example the input is a four-dimensional vector and the output is a single value, so it is a many-to-one prediction model.
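One caveat with the code above: the raw inputs reach values around 7, which pushes the sigmoid hidden units into their saturated region and makes training fragile. The sketch below, under two assumptions not in the original (a fixed random seed for reproducibility, and dividing inputs and targets by their maximum to keep the sigmoid in its responsive range), shows the same network being trained and then applied to a window it never saw; the window `[5, 6, 7, 8]` is a hypothetical example, and extrapolation quality is not guaranteed:

```python
import numpy as np

np.random.seed(0)  # assumption: fixed seed, only for reproducibility

data = np.array([[0, 1, 2, 3], [1, 2, 3, 4], [2, 3, 4, 5],
                 [3, 4, 5, 6], [4, 5, 6, 7]], dtype=float)
target = np.array([4, 5, 6, 7, 8], dtype=float)

# Assumption: scale by the maximum so sigmoid inputs stay near its linear region
scale = data.max()
X, y = data / scale, target / scale

W1 = np.random.uniform(size=(4, 10))
W2 = np.random.uniform(size=(10, 1))
lr = 0.1

for _ in range(10000):
    # Forward pass: sigmoid hidden layer, linear output
    h = 1 / (1 + np.exp(-X @ W1))
    out = h @ W2
    # Backward pass: gradients of mean squared error
    err = out - y.reshape(-1, 1)
    g_out = 2 * err / len(X)
    g_h = (g_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ g_out
    W1 -= lr * X.T @ g_h

# Apply the trained weights to an unseen window (same scaling on the way in and out)
new_window = np.array([[5, 6, 7, 8]], dtype=float) / scale
h_new = 1 / (1 + np.exp(-new_window @ W1))
pred = (h_new @ W2).flatten() * scale
print("Prediction for [5, 6, 7, 8]:", pred)
```

Rescaling the prediction by the same factor on the way out keeps the model's inputs and outputs in the original units of the series.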