Use TensorFlow to implement a multivariable linear regression model with three weights of 9.0, 2.0, and 8.0 and a bias of 1.0, and plot the result with matplotlib
The following code implements the multivariable linear regression model and plots the result:
```python
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
# Training data: the targets are generated from the true parameters
# w1=9.0, w2=2.0, w3=8.0, b=1.0, so the data is consistent with the model
x1 = np.array([1, 2, 3, 4, 5], dtype=np.float32)
x2 = np.array([0.2, 0.4, 0.6, 0.8, 1.0], dtype=np.float32)
x3 = np.array([3, 6, 9, 12, 15], dtype=np.float32)
y = 9.0 * x1 + 2.0 * x2 + 8.0 * x3 + 1.0

# Model parameters, initialized at the specified values
w1 = tf.Variable(9.0)
w2 = tf.Variable(2.0)
w3 = tf.Variable(8.0)
b = tf.Variable(1.0)

# Linear model: y = w1*x1 + w2*x2 + w3*x3 + b
def model(x1, x2, x3):
    return x1 * w1 + x2 * w2 + x3 * w3 + b

# Mean-squared-error loss
def loss(predicted_y, target_y):
    return tf.reduce_mean(tf.square(predicted_y - target_y))

# SGD optimizer; a small learning rate keeps the updates stable
# given the scale of x3
optimizer = tf.optimizers.SGD(learning_rate=0.001)

# Training loop: compute gradients under a GradientTape and apply SGD updates
for i in range(1000):
    with tf.GradientTape() as tape:
        predicted_y = model(x1, x2, x3)
        current_loss = loss(predicted_y, y)
    gradients = tape.gradient(current_loss, [w1, w2, w3, b])
    optimizer.apply_gradients(zip(gradients, [w1, w2, w3, b]))

# Print the learned parameters
print("w1: {:.2f}, w2: {:.2f}, w3: {:.2f}, b: {:.2f}".format(
    w1.numpy(), w2.numpy(), w3.numpy(), b.numpy()))

# Plot the data points and the fitted surface; the model has three inputs,
# so x3 is fixed at 6 to show the prediction as a surface over x1 and x2
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x1, x2, y, c='r', marker='o')
x1_grid, x2_grid = np.meshgrid(np.linspace(0, 6, 30), np.linspace(0, 1.2, 30))
x3_grid = np.full_like(x1_grid, 6)
y_grid = w1.numpy() * x1_grid + w2.numpy() * x2_grid + w3.numpy() * x3_grid + b.numpy()
ax.plot_surface(x1_grid, x2_grid, y_grid, alpha=0.3)
ax.set_xlabel('x1')
ax.set_ylabel('x2')
ax.set_zlabel('y')
plt.show()
```
Running the script should print parameters equal (to two decimal places) to the values used to generate the data:
```
w1: 9.00, w2: 2.00, w3: 8.00, b: 1.00
```
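To confirm the fit numerically, a quick check can be appended after the training loop — a minimal sketch that reuses the `model` and `loss` functions defined above:
```python
# Sanity check: the mean-squared error on the training data should be
# essentially zero, since y was generated from these exact parameters.
final_loss = loss(model(x1, x2, x3), y)
print("final MSE: {:.6f}".format(final_loss.numpy()))
```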
The script also opens a figure window, as shown below:
![image-20210910154054576](https://i.loli.net/2021/09/10/5xLzOpiRZ36XtJI.png)
The red points are the data points, and the translucent blue surface is the model's prediction over x1 and x2 with x3 held at 6. Because the actual x3 values range from 3 to 15, only the sample whose x3 equals 6 lies exactly on this slice; the surface is there to visualize the fitted linear relationship rather than to overlay every data point.
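With the variables trained, the same `model` function can also be used to predict new samples. The inputs below are arbitrary illustration values, not part of the data above — a minimal usage sketch:
```python
# Predict y for a hypothetical new sample; the inputs (6.0, 1.2, 18.0) are
# arbitrary values chosen only for illustration.
x1_new, x2_new, x3_new = 6.0, 1.2, 18.0
y_new = model(x1_new, x2_new, x3_new)
print("predicted y for (6.0, 1.2, 18.0):", float(y_new))
```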