A Python implementation of gradient descent for binary classification, including objective function 1 (number of misclassified samples) and objective function 2 (distance from misclassified samples to the decision boundary)
Below is a Python implementation of gradient descent for binary classification, covering both objective function 1 and objective function 2:
```python
import numpy as np

def sigmoid(z):
    """Sigmoid activation function."""
    return 1 / (1 + np.exp(-z))

def predict(X, theta):
    """Predict 0/1 class labels by thresholding the sigmoid output at 0.5."""
    return (sigmoid(np.dot(X, theta)) >= 0.5).astype(float)

def cost_function1(X, y, theta):
    """Objective function 1: misclassification rate (0-1 loss).
    For 0/1 labels this equals the mean absolute error between y and y_pred."""
    y_pred = predict(X, theta)
    return np.sum(y_pred != y) / len(y)

def cost_function2(X, y, theta):
    """Objective function 2: cross-entropy loss. It must be computed on the
    sigmoid probabilities, not the thresholded labels (log(0) would be -inf)."""
    h = sigmoid(np.dot(X, theta))
    h = np.clip(h, 1e-12, 1 - 1e-12)  # guard against log(0)
    return -np.sum(y * np.log(h) + (1 - y) * np.log(1 - h)) / len(y)

def gradient_descent(X, y, theta, alpha, num_iters, cost_func):
    """Batch gradient descent using the logistic-regression gradient.
    cost_func is evaluated once per iteration for monitoring; the update
    itself always follows the differentiable cross-entropy gradient."""
    m = len(y)
    J_history = []
    for _ in range(num_iters):
        h = sigmoid(np.dot(X, theta))  # probabilities, not hard labels
        theta = theta - alpha * np.dot(X.T, h - y) / m
        J_history.append(cost_func(X, y, theta))
    return theta, J_history

# Generate random data (the first column serves as the intercept term)
np.random.seed(0)
X = np.random.rand(100, 3)
X[:, 0] = 1
y = np.random.randint(2, size=100)

# Run gradient descent; each run starts from a fresh zero vector so both
# objectives are tracked from the same starting point
alpha = 0.01
num_iters = 1000
theta1, J_history1 = gradient_descent(X, y, np.zeros(3), alpha, num_iters, cost_function1)
theta2, J_history2 = gradient_descent(X, y, np.zeros(3), alpha, num_iters, cost_function2)

# Print results
print("Final value of objective 1 (misclassification rate):", J_history1[-1])
print("Final value of objective 2 (cross-entropy):", J_history2[-1])
```
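To compare how the two objectives evolve over the iterations, the returned `J_history` lists can be plotted. A minimal sketch follows, assuming matplotlib is available (it is not used in the original code):

```python
import matplotlib.pyplot as plt  # assumption: matplotlib is installed

plt.plot(J_history1, label="objective 1 (misclassification rate)")
plt.plot(J_history2, label="objective 2 (cross-entropy)")
plt.xlabel("iteration")
plt.ylabel("cost")
plt.legend()
plt.show()
```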
Note: in this implementation, objective function 1 is the 0-1 misclassification rate (for 0/1 labels this is a mean absolute error rather than a mean squared error), while objective function 2 uses the cross-entropy loss as a differentiable surrogate for the distance from misclassified samples to the decision boundary.
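The title's objective function 2 asks for the literal distance from misclassified samples to the decision boundary, whereas the code above substitutes cross-entropy as a smooth surrogate. A minimal sketch of the literal objective is given below; `margin_cost` is a name introduced here for illustration, and it assumes, as in the code above, that the first column of `X` is the intercept, so the decision boundary is the hyperplane theta·x = 0 (the 0.5 threshold of the sigmoid).

```python
import numpy as np

def margin_cost(X, y, theta):
    """Hypothetical objective 2: mean distance of misclassified samples to
    the hyperplane theta . x = 0. The norm excludes theta[0], the intercept
    weight, because the bias shifts the plane rather than tilting it."""
    scores = np.dot(X, theta)
    y_pred = (scores >= 0).astype(float)        # same labels as predict()
    misclassified = y_pred != y
    w_norm = np.linalg.norm(theta[1:]) + 1e-12  # guard against theta = 0
    return np.sum(np.abs(scores[misclassified]) / w_norm) / len(y)
```

Because this quantity is zero whenever the data are perfectly separated and is not differentiable everywhere, it is better suited for monitoring (e.g. passed as `cost_func` to `gradient_descent`) than for driving the parameter updates directly.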