Use Python to plot and compare the loss-function convergence of (1) gradient descent, (2) stochastic gradient descent, and (3) mini-batch stochastic gradient descent.
Date: 2024-12-11 13:19:22
In Python, we can use the matplotlib library to visualize and compare how different optimization algorithms (gradient descent, stochastic gradient descent, and mini-batch stochastic gradient descent) drive down the loss function. Here are the steps:
1. First, import the necessary libraries:
```python
import numpy as np
import matplotlib.pyplot as plt
```
2. Define a loss function. Here we use the mean squared error (MSE) between a scalar parameter `w` and the data samples `x`; its minimizer is the sample mean:
```python
def loss_function(w, x):
    return np.mean((w - x) ** 2)
```
3. Create some sample data and set the initial parameter. All three methods share one learning rate so that the comparison is fair (different learning rates do not represent different gradient-descent variants; what differs is how many samples each update uses):
```python
x = np.linspace(-1, 1, 100)  # data samples
iterations = 100
lr = 0.1           # same learning rate for all three methods
w0 = 2.0           # initial parameter value
batch_size = 10    # mini-batch size
```
4. For each optimization method, iteratively update the parameter and record the loss. The gradient of the loss with respect to `w` is `2 * (w - x_i)` for a single sample; each method averages it over a different subset of the data:
```python
rng = np.random.default_rng(0)

# (1) Gradient descent: the full gradient over all samples at every step
w = w0
losses_gd = []
for _ in range(iterations):
    grad = 2 * (w - x).mean()
    w -= lr * grad
    losses_gd.append(loss_function(w, x))

# (2) Stochastic gradient descent: one randomly chosen sample per step
w = w0
losses_sgd = []
for _ in range(iterations):
    i = rng.integers(len(x))
    grad = 2 * (w - x[i])
    w -= lr * grad
    losses_sgd.append(loss_function(w, x))

# (3) Mini-batch SGD: a random batch of `batch_size` samples per step
w = w0
losses_mb = []
for _ in range(iterations):
    idx = rng.choice(len(x), batch_size, replace=False)
    grad = 2 * (w - x[idx]).mean()
    w -= lr * grad
    losses_mb.append(loss_function(w, x))

plt.figure(figsize=(10, 6))
plt.plot(losses_gd, label='Gradient descent')
plt.plot(losses_sgd, label='Stochastic gradient descent')
plt.plot(losses_mb, label=f'Mini-batch SGD (batch size={batch_size})')
plt.title('Loss convergence under different optimization algorithms')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.legend()
plt.show()
```
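One way to see why the mini-batch curve tends to be smoother than the pure-SGD curve is to measure the spread of the gradient estimates directly. The following self-contained sketch (same quadratic MSE setup as above, with an illustrative fixed parameter value `w = 2.0`) resamples batches of different sizes and prints the standard deviation of the resulting gradient estimate; larger batches give lower-variance estimates, and the full batch gives a deterministic gradient:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 100)   # data samples
w = 2.0                       # a fixed parameter value for illustration

# Per-sample gradients of the loss mean((w - x_i)**2)
per_sample = 2 * (w - x)

# Standard deviation of the batch-averaged gradient estimate
# for several batch sizes, over 2000 resampled batches each
grad_std = {}
for b in (1, 10, len(x)):
    estimates = [per_sample[rng.choice(len(x), b, replace=False)].mean()
                 for _ in range(2000)]
    grad_std[b] = np.std(estimates)
    print(f"batch size {b:3d}: std of gradient estimate = {grad_std[b]:.4f}")
```

This is the same trade-off visible in the convergence plot: single-sample updates are cheap but noisy, the full batch is exact but costs a pass over all the data per step, and mini-batches sit in between.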