Write a function that implements mini-batch gradient descent and a function that computes the mini-batch error
Sure. Here is a simple Python example showing how to write a function that implements a mini-batch gradient descent step, along with a helper function that computes the error on a mini-batch:
```python
import numpy as np

def calculate_batch_gradient(weights, X_batch, y_batch, loss_function):
    """
    Compute the mini-batch gradient with respect to the weights.
    weights: current model weights
    X_batch: mini-batch of input data
    y_batch: target values for the mini-batch
    loss_function: loss object, e.g. mean squared error or cross-entropy,
                   whose gradient() method returns dLoss/dPredictions
    """
    predictions = predict(X_batch, weights)
    # Chain rule for a linear model: dLoss/dWeights = X^T @ dLoss/dPredictions
    return X_batch.T @ loss_function.gradient(predictions, y_batch)

def update_weights(weights, gradient, learning_rate):
    """
    Update the model weights using the mini-batch gradient.
    weights: current weights
    gradient: computed mini-batch gradient
    learning_rate: learning rate
    """
    return weights - learning_rate * gradient

def predict(X, weights):
    # Assume a linear regression model
    return np.dot(X, weights)

def calculate_batch_error(X_batch, y_batch, model_weights, loss_function):
    """
    Compute the error (loss) on a mini-batch.
    X_batch: input data
    y_batch: target values
    model_weights: current model weights
    loss_function: loss object
    """
    predicted_values = predict(X_batch, model_weights)
    return loss_function.loss(predicted_values, y_batch)

# Usage example
X_train, y_train = ...  # your training data
loss_function = ...     # an object exposing loss() and gradient(); see the sketch below
learning_rate = 0.01
batch_size = 32
num_iterations = 100
model_weights = np.zeros(X_train.shape[1])  # initialize the weights

for iteration in range(num_iterations):
    # Randomly sample a mini-batch (with replacement)
    idxs = np.random.randint(0, len(X_train), batch_size)
    X_batch, y_batch = X_train[idxs], y_train[idxs]
    # Compute the gradient and update the weights
    gradient = calculate_batch_gradient(model_weights, X_batch, y_batch, loss_function)
    model_weights = update_weights(model_weights, gradient, learning_rate)
    # Compute and print the mini-batch error at each step
    batch_error = calculate_batch_error(X_batch, y_batch, model_weights, loss_function)
    print(f"Batch error after iteration {iteration:3d}: {batch_error:.4f}")
```
This example assumes you already have a `loss_function` object that can compute the loss between predictions and targets. Note that in practice you would usually reach for an optimization library such as TensorFlow or PyTorch, which provide built-in optimizers and loss functions.
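For completeness, here is a minimal sketch of what such a `loss_function` object might look like for mean squared error. The `loss()`/`gradient()` method names are simply the interface assumed in the example above, not a standard API:

```python
class MSELoss:
    """Mean squared error with the loss()/gradient() interface assumed above."""

    def loss(self, predictions, targets):
        # Average squared difference over the mini-batch
        return np.mean((predictions - targets) ** 2)

    def gradient(self, predictions, targets):
        # dLoss/dPredictions, already averaged over the batch,
        # so X_batch.T @ gradient yields the mean weight gradient
        return 2.0 * (predictions - targets) / len(targets)

loss_function = MSELoss()
```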
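And as a rough illustration of the PyTorch route mentioned above, the same mini-batch loop with a built-in loss and optimizer might look like the sketch below; the data shapes and hyperparameters here are placeholder assumptions:

```python
import torch

# Placeholder tensors standing in for X_train / y_train above
X = torch.randn(1000, 10)
y = torch.randn(1000, 1)

model = torch.nn.Linear(10, 1)          # linear model, like predict() above
criterion = torch.nn.MSELoss()          # built-in mean squared error
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for iteration in range(100):
    idxs = torch.randint(0, len(X), (32,))  # random mini-batch, as above
    predictions = model(X[idxs])
    loss = criterion(predictions, y[idxs])
    optimizer.zero_grad()                # clear gradients from the previous step
    loss.backward()                      # autograd computes the mini-batch gradient
    optimizer.step()                     # weight update: w -= lr * grad
    print(f"Batch error after iteration {iteration:3d}: {loss.item():.4f}")
```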