Simplify and explain each line of this code:

```
X_train, y_train = load_data("data/ex2data2.txt")

plot_data(X_train, y_train[:], pos_label="Accepted", neg_label="Rejected")
plt.ylabel('Microchip Test 2')
plt.xlabel('Microchip Test 1')
plt.legend(loc="upper right")
plt.show()

mapped_X = map_feature(X_train[:, 0], X_train[:, 1])

def compute_cost_reg(X, y, w, b, lambda_=1):
    m = X.shape[0]
    cost = 0
    f = sigmoid(np.dot(X, w) + b)
    reg = (lambda_/(2*m)) * np.sum(np.square(w))
    cost = (1/m)*np.sum(-y*np.log(f) - (1-y)*np.log(1-f)) + reg
    return cost

def compute_gradient_reg(X, y, w, b, lambda_=1):
    m = X.shape[0]
    dw = np.zeros_like(w)
    f = sigmoid(np.dot(X, w) + b)
    err = (f - y)
    dw = (1/m)*np.dot(X.T, err)
    dw += (lambda_/m) * w
    db = (1/m) * np.sum(err)
    return db, dw

X_mapped = map_feature(X_train[:, 0], X_train[:, 1])
np.random.seed(1)
initial_w = np.random.rand(X_mapped.shape[1]) - 0.5
initial_b = 0.5
lambda_ = 0.5
dj_db, dj_dw = compute_gradient_reg(X_mapped, y_train, initial_w, initial_b, lambda_)

np.random.seed(1)
initial_w = np.random.rand(X_mapped.shape[1]) - 0.5
initial_b = 1.
lambda_ = 0.01
iterations = 10000
alpha = 0.01
w, b, J_history, _ = gradient_descent(X_mapped, y_train, initial_w, initial_b,
                                       compute_cost_reg, compute_gradient_reg,
                                       alpha, iterations, lambda_)

plot_decision_boundary(w, b, X_mapped, y_train)
p = predict(X_mapped, w, b)
print('Train Accuracy: %f' % (np.mean(p == y_train) * 100))
```
This code trains a regularized logistic regression model for a binary classification problem and then uses it for prediction. An explanation of each part follows:
```
X_train, y_train = load_data("data/ex2data2.txt")
```
Reads the training data from the file, storing the features in X_train and the labels in y_train.
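load_data is a helper provided by the lab code and is not shown in the question. A minimal sketch, assuming the file is comma-separated with the two test scores in the first two columns and the 0/1 label in the third:
```
import numpy as np

def load_data(filename):
    # Assumed format: each row is "test1,test2,label", where label is 0 or 1
    data = np.loadtxt(filename, delimiter=',')
    X = data[:, :2]   # the two microchip test scores
    y = data[:, 2]    # the accepted/rejected label
    return X, y
```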
```
plot_data(X_train, y_train[:], pos_label="Accepted", neg_label="Rejected")
plt.ylabel('Microchip Test 2')
plt.xlabel('Microchip Test 1')
plt.legend(loc="upper right")
plt.show()
```
Plots a scatter plot of the training data, labeling positive examples as "Accepted" and negative examples as "Rejected", with Microchip Test 1 on the x-axis and Microchip Test 2 on the y-axis.
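plot_data is likewise a lab helper. A minimal sketch of what it might do, using boolean masks to draw the two classes with different markers:
```
import matplotlib.pyplot as plt

def plot_data(X, y, pos_label="y=1", neg_label="y=0"):
    pos = y == 1                                            # mask for positive examples
    neg = y == 0                                            # mask for negative examples
    plt.plot(X[pos, 0], X[pos, 1], 'k+', label=pos_label)   # positives as black crosses
    plt.plot(X[neg, 0], X[neg, 1], 'yo', label=neg_label)   # negatives as yellow circles
```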
```
mapped_X = map_feature(X_train[:, 0], X_train[:, 1])
```
Maps the two original features to a higher-dimensional polynomial feature set so the model can fit a non-linear decision boundary. (Note that this result, mapped_X, is recomputed later as X_mapped.)
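map_feature is another lab helper. In the classic version of this exercise it generates all polynomial terms of x1 and x2 up to degree 6 (27 features); the degree used here is an assumption. A minimal sketch:
```
import numpy as np

def map_feature(x1, x2, degree=6):
    # All terms x1^(i-j) * x2^j for i = 1..degree, j = 0..i
    x1, x2 = np.atleast_1d(x1), np.atleast_1d(x2)
    terms = []
    for i in range(1, degree + 1):
        for j in range(i + 1):
            terms.append((x1 ** (i - j)) * (x2 ** j))
    return np.stack(terms, axis=1)   # shape (m, 27) when degree = 6
```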
```
def compute_cost_reg(X, y, w, b, lambda_=1):
    m = X.shape[0]
    f = sigmoid(np.dot(X, w) + b)                      # predicted probabilities
    reg = (lambda_ / (2 * m)) * np.sum(np.square(w))   # L2 penalty on the weights only
    cost = (1 / m) * np.sum(-y * np.log(f) - (1 - y) * np.log(1 - f)) + reg
    return cost
```
Computes the regularized logistic regression cost, where X is the feature matrix, y the labels, w the weights, b the bias, and lambda_ the regularization parameter. Only the weights are regularized, not the bias.
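Written out, the cost this function computes is the cross-entropy loss plus an L2 penalty on the weights:
```
J(\mathbf{w}, b) = \frac{1}{m}\sum_{i=1}^{m}\Big[-y^{(i)}\log f_{\mathbf{w},b}(\mathbf{x}^{(i)}) - (1 - y^{(i)})\log\big(1 - f_{\mathbf{w},b}(\mathbf{x}^{(i)})\big)\Big] + \frac{\lambda}{2m}\sum_{j=1}^{n} w_j^2
```
where f_{w,b}(x) = sigmoid(w · x + b).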
```
def compute_gradient_reg(X, y, w, b, lambda_=1):
    m = X.shape[0]
    f = sigmoid(np.dot(X, w) + b)      # predicted probabilities
    err = f - y                        # prediction error
    dw = (1 / m) * np.dot(X.T, err)    # gradient w.r.t. the weights
    dw += (lambda_ / m) * w            # regularization term (weights only)
    db = (1 / m) * np.sum(err)         # gradient w.r.t. the bias
    return db, dw
```
Computes the gradient of the regularized logistic regression cost, where X is the feature matrix, y the labels, w the weights, b the bias, and lambda_ the regularization parameter. It returns the bias gradient db and the weight gradient dw; the regularization term is added only to dw.
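The corresponding gradients that this function returns are:
```
\frac{\partial J}{\partial w_j} = \frac{1}{m}\sum_{i=1}^{m}\big(f_{\mathbf{w},b}(\mathbf{x}^{(i)}) - y^{(i)}\big)\,x_j^{(i)} + \frac{\lambda}{m} w_j
\qquad
\frac{\partial J}{\partial b} = \frac{1}{m}\sum_{i=1}^{m}\big(f_{\mathbf{w},b}(\mathbf{x}^{(i)}) - y^{(i)}\big)
```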
```
X_mapped = map_feature(X_train[:, 0], X_train[:, 1])
np.random.seed(1)
initial_w = np.random.rand(X_mapped.shape[1]) - 0.5
initial_b = 0.5
lambda_ = 0.5
dj_db, dj_dw = compute_gradient_reg(X_mapped, y_train, initial_w, initial_b, lambda_)
```
Initializes random weights and a bias, then passes the mapped features, the parameters, and the regularization parameter to the gradient function to compute the partial derivatives of the cost with respect to the weights and the bias.
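To inspect the result you could, for example, print the bias gradient and the first few entries of the weight gradient:
```
print(f"dj_db: {dj_db}")           # scalar: derivative of the cost w.r.t. b
print(f"dj_dw[:4]: {dj_dw[:4]}")   # first few derivatives w.r.t. the weights
```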
```
np.random.seed(1)
initial_w = np.random.rand(X_mapped.shape[1])-0.5
initial_b = 1.
lambda_ = 0.01
iterations = 10000
alpha = 0.01

w, b, J_history, _ = gradient_descent(X_mapped, y_train, initial_w, initial_b,
                                       compute_cost_reg, compute_gradient_reg,
                                       alpha, iterations, lambda_)
```
Runs gradient descent to minimize the regularized cost and obtain the trained weights w and bias b; lambda_ is the regularization parameter, iterations the number of iterations, and alpha the learning rate. J_history records the cost at each step.
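gradient_descent itself is not shown in the question. A minimal sketch that is consistent with the call above; the exact contents of the fourth return value (matched by `_` in the caller) are an assumption:
```
import copy

def gradient_descent(X, y, w_in, b_in, cost_function, gradient_function,
                     alpha, num_iters, lambda_):
    w = copy.deepcopy(w_in)   # work on a copy so the caller's array is untouched
    b = b_in
    J_history = []
    for i in range(num_iters):
        db, dw = gradient_function(X, y, w, b, lambda_)   # gradients at current parameters
        w = w - alpha * dw                                # simultaneous update of w and b
        b = b - alpha * db
        J_history.append(cost_function(X, y, w, b, lambda_))
    return w, b, J_history, None   # the last slot corresponds to the "_" in the caller
```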
```
plot_decision_boundary(w, b, X_mapped, y_train)
```
Plots the learned non-linear decision boundary on top of the training data.
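plot_decision_boundary is also a helper. For the mapped-feature case the usual approach is to evaluate w · map_feature(u, v) + b on a grid and draw the zero contour; the grid range and resolution below are assumptions:
```
import numpy as np
import matplotlib.pyplot as plt

def plot_decision_boundary(w, b, X, y):
    plot_data(X[:, :2], y)            # scatter of the original two features
    u = np.linspace(-1.0, 1.5, 50)
    v = np.linspace(-1.0, 1.5, 50)
    z = np.zeros((len(u), len(v)))
    for i in range(len(u)):
        for j in range(len(v)):
            # model output at one grid point after the same polynomial mapping
            z[i, j] = np.dot(map_feature(u[i], v[j]), w)[0] + b
    plt.contour(u, v, z.T, levels=[0], colors='g')   # boundary where w·x + b = 0
    plt.show()
```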
```
p = predict(X_mapped, w, b)
print('Train Accuracy: %f'%(np.mean(p == y_train) * 100))
```
Uses the trained parameters to predict labels for the training set and prints the training accuracy.
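predict is the final helper. A minimal sketch that thresholds the sigmoid output at 0.5 (the sigmoid definition is included here for completeness and is assumed to match the lab's):
```
import numpy as np

def sigmoid(z):
    # Logistic function mapping any real value into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def predict(X, w, b):
    # Probability of class 1, thresholded at 0.5 to give a hard 0/1 prediction
    f = sigmoid(np.dot(X, w) + b)
    return (f >= 0.5).astype(float)
```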