x = np.array([[10, 20, 90], [9, 18, 70], [4, 12, 55], [3, 10, 50]])
y = np.array([[1], [1], [0], [0]])
Given a new set of x values, how can gradient descent be used to predict the corresponding y? Please provide Python code.
This is a binary classification problem, so a logistic regression model is a natural fit. The Python code below trains a logistic regression model with gradient descent and then makes a prediction.
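For reference, the model computes $h^{(i)} = \sigma(w^\top x^{(i)})$ with the sigmoid $\sigma(z) = 1/(1+e^{-z})$, and gradient descent minimizes the standard cross-entropy loss; the `loss` and `gradient` functions below implement exactly these two formulas:

$$J(w) = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log h^{(i)} + \bigl(1-y^{(i)}\bigr)\log\bigl(1-h^{(i)}\bigr)\right], \qquad \nabla_w J = \frac{1}{m}X^\top(h - y)$$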
```python
import numpy as np

# Sigmoid activation: maps any real value into (0, 1)
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Cross-entropy loss for logistic regression
def loss(X, y, w):
    m = X.shape[0]
    h = sigmoid(X.dot(w))
    # Clip probabilities away from 0 and 1 to avoid log(0)
    h = np.clip(h, 1e-10, 1 - 1e-10)
    J = -1 / m * (y.T.dot(np.log(h)) + (1 - y).T.dot(np.log(1 - h)))
    return J.item()

# Gradient of the loss with respect to the weights
def gradient(X, y, w):
    m = X.shape[0]
    h = sigmoid(X.dot(w))
    grad = 1 / m * X.T.dot(h - y)
    return grad

# Batch gradient descent
def gradient_descent(X, y, w_init, lr, num_iter):
    w = w_init.copy()  # copy so the initial weights are not modified in place
    for i in range(num_iter):
        grad = gradient(X, y, w)
        w -= lr * grad
        if i % 100 == 0:  # log the loss periodically
            print("Iteration %d | Loss: %f" % (i, loss(X, y, w)))
    return w

# Predict class labels by thresholding the probability at 0.5
def predict(X, w):
    h = sigmoid(X.dot(w))
    y_pred = np.round(h)
    return y_pred

# Training data and labels
X_train = np.array([[10, 20, 90], [9, 18, 70], [4, 12, 55], [3, 10, 50]])
y_train = np.array([[1], [1], [0], [0]])

# Prepend a bias column of ones
X_train = np.hstack((np.ones((X_train.shape[0], 1)), X_train))

# Initialize the weight vector to zeros
w_init = np.zeros((X_train.shape[1], 1))

# Learning rate and number of iterations
lr = 0.01
num_iter = 1000

# Train the model
w = gradient_descent(X_train, y_train, w_init, lr, num_iter)

# Predict on a new data point
X_new = np.array([[1, 5, 30]])
X_new = np.hstack((np.ones((X_new.shape[0], 1)), X_new))
y_pred = predict(X_new, w)
print("Predicted label for X_new: %d" % int(y_pred.item()))
```
Here, `X_train` is the training data and `y_train` the labels. In `gradient_descent`, `w_init` is the initial weight vector, `lr` the learning rate, and `num_iter` the number of iterations. In `predict`, `X_new` is the new data point and `y_pred` the predicted label.
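One caveat: the raw features span very different ranges (roughly 3–10 versus 50–90), so a fixed learning rate like 0.01 can make plain gradient descent oscillate on this data. A common remedy is to standardize each feature before training. The following is a minimal sketch of that idea, reusing `gradient_descent`, `predict`, and `y_train` from above (the names `X_mean`, `X_std`, and the learning rate 0.1 are illustrative choices, not part of the original solution):

```python
# Standardize features to zero mean and unit variance before training
X_raw = np.array([[10, 20, 90], [9, 18, 70], [4, 12, 55], [3, 10, 50]], dtype=float)
X_mean = X_raw.mean(axis=0)
X_std = X_raw.std(axis=0)
X_scaled = (X_raw - X_mean) / X_std

# Add the bias column after scaling, then train as before
X_scaled = np.hstack((np.ones((X_scaled.shape[0], 1)), X_scaled))
w_scaled = gradient_descent(X_scaled, y_train,
                            np.zeros((X_scaled.shape[1], 1)), 0.1, 1000)

# New inputs must be scaled with the *training* mean and std
X_new_raw = np.array([[1, 5, 30]], dtype=float)
X_new_scaled = (X_new_raw - X_mean) / X_std
X_new_scaled = np.hstack((np.ones((X_new_scaled.shape[0], 1)), X_new_scaled))
print(predict(X_new_scaled, w_scaled))
```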
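As a sanity check, the same classifier can also be fit with scikit-learn (assuming it is installed); this is an alternative for comparison, not the gradient-descent solution itself:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X_train = np.array([[10, 20, 90], [9, 18, 70], [4, 12, 55], [3, 10, 50]])
y_train = np.array([1, 1, 0, 0])  # sklearn expects a 1-D label array

clf = LogisticRegression()
clf.fit(X_train, y_train)
# The new point lies below all class-0 examples, so [0] is the likely output
print(clf.predict(np.array([[1, 5, 30]])))
```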