Write a neural network regression prediction example in Python
Posted: 2023-02-20 12:23:12 · Views: 202
A Python-based regression prediction model
Sure — here is a sample piece of Python code that implements neural network regression prediction with TensorFlow:
```python
import numpy as np
import tensorflow as tf

# Set hyperparameters
learning_rate = 0.01
num_steps = 1000
batch_size = 256
display_step = 100

# Load the Boston housing dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(test_split=0.2, seed=123)

# Standardize features using training-set statistics only
mean, std = x_train.mean(axis=0), x_train.std(axis=0)
x_train = (x_train - mean) / std
x_test = (x_test - mean) / std

# Build a tf.data input pipeline
train_data = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_data = train_data.batch(batch_size).repeat()

# Define the model: two hidden layers, one linear output for regression
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(x_train.shape[1],)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1)
])

# Define the loss function and optimizer
loss_fn = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.Adam(learning_rate)

# Custom training loop
for step, (x_batch, y_batch) in enumerate(train_data.take(num_steps), 1):
    with tf.GradientTape() as tape:
        preds = model(x_batch, training=True)
        loss_value = loss_fn(y_batch, preds)
    grads = tape.gradient(loss_value, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    if step % display_step == 0:
        print("Training step: %i, loss: %f" % (step, loss_value.numpy()))

# Evaluate on the test set
y_pred = model.predict(x_test).flatten()
mae = tf.keras.metrics.MeanAbsoluteError()
print("Mean Absolute Error: %f" % mae(y_test, y_pred).numpy())
```
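If TensorFlow is not available, the mechanics of the training loop above — forward pass, mean-squared-error loss, gradient step — can be sketched in plain NumPy with a single linear layer. This is a minimal illustration on synthetic data (the data, layer, and learning rate here are made up for the sketch), not a substitute for the model above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: y = X @ w_true + b_true + noise
X = rng.normal(size=(200, 3))
w_true, b_true = np.array([2.0, -1.0, 0.5]), 3.0
y = X @ w_true + b_true + 0.1 * rng.normal(size=200)

# Parameters of one linear layer, trained by gradient descent
w = np.zeros(3)
b = 0.0
lr = 0.1

for step in range(500):
    preds = X @ w + b              # forward pass
    err = preds - y
    loss = (err ** 2).mean()       # MSE loss
    # Analytic gradients of the MSE w.r.t. w and b
    grad_w = 2 * X.T @ err / len(y)
    grad_b = 2 * err.mean()
    w -= lr * grad_w               # gradient-descent update
    b -= lr * grad_b

print("final loss:", loss)
print("learned w:", w.round(2), "b:", round(b, 2))
```

The `GradientTape` loop in the TensorFlow code does the same thing, except the gradients are computed by automatic differentiation instead of by hand.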
Hope this helps!