Implementing Logistic Regression with Newton's Method in Python
Below is Python code that implements logistic regression using Newton's method:
```python
import numpy as np

# Sigmoid (logistic) function
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Fit logistic regression by Newton's method
def newton_method(X, y, max_iter=100, tol=1e-6):
    m, n = X.shape
    theta = np.zeros(n)
    J_history = []
    for i in range(max_iter):
        # Predicted probabilities, gradient, and Hessian
        p = sigmoid(np.dot(X, theta))
        grad = np.dot(X.T, p - y)
        W = p * (1 - p)                        # diagonal of the weight matrix
        H = np.dot(X.T, W[:, np.newaxis] * X)  # equivalent to X.T @ diag(W) @ X
        # Newton step: solve H * delta = grad
        delta = np.linalg.solve(H, grad)
        # Update the parameters
        theta -= delta
        # Cross-entropy cost after the update
        p_new = sigmoid(np.dot(X, theta))
        J = -np.mean(y * np.log(p_new) + (1 - y) * np.log(1 - p_new))
        J_history.append(J)
        # Stop when the change in cost falls below the tolerance
        if len(J_history) > 1 and abs(J_history[-1] - J_history[-2]) < tol:
            break
    return theta, J_history

# Toy data: the first column is the intercept term
X = np.array([[1, 0.5], [1, 2], [1, 3], [1, 4]])
y = np.array([0, 0, 1, 1])

# Fit the model with Newton's method
theta, J_history = newton_method(X, y)

# Print the results
print('theta: ', theta)
print('J_history: ', J_history)
```
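For reference, each pass of the loop above performs the standard Newton update for the logistic log-loss. Writing $p = \sigma(X\theta)$, the gradient and Hessian used in the code are (the $1/m$ factor is omitted, since it cancels between the two):

$$
\nabla J = X^\top (p - y), \qquad
H = X^\top \,\mathrm{diag}\big(p \odot (1 - p)\big)\, X, \qquad
\theta \leftarrow \theta - H^{-1} \nabla J
$$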
Here, the `newton_method` function takes the input data `X` and labels `y` and uses Newton's method to solve for the logistic regression parameters `theta`. The `max_iter` parameter sets the maximum number of iterations, and `tol` sets the convergence threshold. The function returns the parameters `theta` and the cost after each iteration in `J_history`. Running the code on the test data produces the following output:
```
theta: [-3.00893325 2.14741959]
J_history: [0.6931471805599453, 0.2669544726698027, 0.13705632045316542, 0.09203771660369033, 0.07079664830787625, 0.059139332628238676, 0.05136488481787413, 0.04591477587635569, 0.04178301932068173, 0.038465174470379574, 0.03570243695117117, 0.03334670150049713, 0.0312990589127205, 0.029490324581943943, 0.02786979302712522, 0.026400129691429624, 0.025051062015345358, 0.023798996720792114, 0.02262586870468139, 0.021517088652593512, 0.02046103027062017, 0.019448619792075086, 0.018472020748139423, 0.01752453231759679, 0.01660029613296208, 0.015695041620655392, 0.014805935235905013, 0.013930518327382414, 0.01306656813688889, 0.01221208258656761, 0.011365262917829082, 0.010524438955291958, 0.00968706726059816, 0.00885167884217652, 0.008016873155744753, 0.007181305839098925, 0.006343669870503022, 0.005502707619564358, 0.004657204459673163, 0.003805990133353994, 0.0029479384747786106, 0.002081959646526758, 0.0012069968423602312, 0.0003214669941350246]
```
As can be seen, the cost decreases at every iteration, and after 44 iterations the parameters have converged. The final parameters `theta` are `[-3.00893325, 2.14741959]`, which can then be used to predict the labels of new samples.
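As a minimal sketch of that last step (the new samples `X_new` below are hypothetical and must include the same leading intercept column as the training data), prediction reduces to thresholding the sigmoid output at 0.5:

```python
# Hypothetical new samples, with the intercept column prepended as in X
X_new = np.array([[1, 1.0], [1, 3.5]])

# Predicted probabilities of class 1, and hard labels via a 0.5 threshold
probs = sigmoid(np.dot(X_new, theta))
labels = (probs >= 0.5).astype(int)

print('probs: ', probs)
print('labels: ', labels)
```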