Gradient Descent: Python Implementation
Implementing gradient descent in Python
Sure, here is a simple Python implementation of gradient descent.
First, we define the loss function and its gradient, along with some training data, the initial parameter values, and hyperparameters such as the learning rate and the number of iterations:
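Concretely, with inputs x and targets y, the mean squared error is L(w) = (1/n)‖xw − y‖², and its gradient is ∇L(w) = (2/n)xᵀ(xw − y), which is exactly what `gradient_function` below computes.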
```python
import numpy as np

# Mean squared error loss
def loss_function(x, y, w):
    y_pred = np.dot(x, w)
    return np.mean((y_pred - y) ** 2)

# Gradient of the MSE loss with respect to w
def gradient_function(x, y, w):
    y_pred = np.dot(x, w)
    return 2 * np.dot(x.T, y_pred - y) / len(y)

# Synthetic training data (same as in the full example below)
x = np.random.rand(100, 10)
y = np.random.rand(100, 1)

# Initial parameters and hyperparameters
w = np.zeros((x.shape[1], 1))
learning_rate = 0.01
num_iterations = 1000
```
Then we can run the gradient descent loop:
```python
# Gradient descent loop
for i in range(num_iterations):
    gradient = gradient_function(x, y, w)
    w -= learning_rate * gradient
    loss = loss_function(x, y, w)
    if i % 100 == 0:
        print(f"Iteration {i}, loss = {loss}")
```
In each iteration, we first compute the gradient of the loss, then update the parameters by stepping in the negative-gradient direction scaled by the learning rate, and finally evaluate the loss and print it every 100 iterations.
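As a quick way to convince yourself the analytic gradient is correct, you can compare it against a central-difference estimate. This is a minimal sketch, not part of the original answer; it reuses `loss_function` and `gradient_function` from above, and the coordinate index `j` and seed are arbitrary choices:
```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility
x = rng.random((100, 10))
y = rng.random((100, 1))
w = rng.random((10, 1))
eps = 1e-6                      # finite-difference step size

analytic = gradient_function(x, y, w)

# Central-difference estimate of dL/dw_j for one arbitrary coordinate j
j = 3
w_plus, w_minus = w.copy(), w.copy()
w_plus[j] += eps
w_minus[j] -= eps
numeric = (loss_function(x, y, w_plus) - loss_function(x, y, w_minus)) / (2 * eps)

print(abs(analytic[j, 0] - numeric))  # should be ~0, up to floating-point error
```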
The complete code:
```python
import numpy as np

# Mean squared error loss
def loss_function(x, y, w):
    y_pred = np.dot(x, w)
    return np.mean((y_pred - y) ** 2)

# Gradient of the MSE loss with respect to w
def gradient_function(x, y, w):
    y_pred = np.dot(x, w)
    return 2 * np.dot(x.T, y_pred - y) / len(y)

# Synthetic training data
x = np.random.rand(100, 10)
y = np.random.rand(100, 1)

# Initial parameters and hyperparameters
w = np.zeros((x.shape[1], 1))
learning_rate = 0.01
num_iterations = 1000

# Gradient descent loop
for i in range(num_iterations):
    gradient = gradient_function(x, y, w)
    w -= learning_rate * gradient
    loss = loss_function(x, y, w)
    if i % 100 == 0:
        print(f"Iteration {i}, loss = {loss}")
```
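Since this objective is just ordinary least squares, one optional sanity check (my addition, not part of the original answer) is to compare the result of gradient descent against NumPy's closed-form solution; after enough iterations the two losses should be close:
```python
# Closed-form least-squares solution on the same data
w_exact, *_ = np.linalg.lstsq(x, y, rcond=None)

print("gradient descent loss:", loss_function(x, y, w))
print("closed-form loss:     ", loss_function(x, y, w_exact))
```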