def gradientDescent(X,y,theta,alpha,num_iters,Lambda):
This is a Python function that performs gradient descent with L2 regularization on a given dataset. Its parameters are:
- X: Input feature matrix of size (m, n+1) where m is the number of training examples and n is the number of features. The first column of X is usually all ones for the bias term.
- y: Output vector of size (m, 1) containing the target values for each training example.
- theta: Parameter vector of size (n+1, 1) containing the initial values for the model parameters.
- alpha: Learning rate for the gradient descent algorithm.
- num_iters: Number of iterations to run the gradient descent algorithm.
- Lambda: Regularization parameter for controlling the trade-off between fitting the training data well and avoiding overfitting.
The function returns the optimized parameter vector theta after running the gradient descent algorithm.
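In symbols, each iteration applies the standard regularized update, where the bias term $\theta_0$ is left unpenalized:

$$\theta_0 := \theta_0 - \frac{\alpha}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)x_0^{(i)}$$

$$\theta_j := \theta_j - \alpha\left[\frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x^{(i)}) - y^{(i)}\big)x_j^{(i)} + \frac{\lambda}{m}\,\theta_j\right],\qquad j \ge 1$$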
Here's the code:
```python
import numpy as np

def gradientDescent(X, y, theta, alpha, num_iters, Lambda):
    m = len(y)
    for _ in range(num_iters):
        h = X.dot(theta)                    # predictions, shape (m, 1)
        error = h - y
        grad = (1 / m) * X.T.dot(error)     # unregularized gradient
        # L2 regularization adds (Lambda/m) * theta_j to the gradient of
        # every parameter EXCEPT the bias term theta[0]
        grad[1:] += (Lambda / m) * theta[1:]
        theta = theta - alpha * grad
    return theta
```
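As a quick sanity check, the function can be run on a small synthetic linear dataset (the data and hyperparameter values below are illustrative, not from the original post; with no noise and `Lambda=0` the recovered parameters should approach the true ones):

```python
import numpy as np

def gradientDescent(X, y, theta, alpha, num_iters, Lambda):
    m = len(y)
    for _ in range(num_iters):
        error = X.dot(theta) - y
        grad = (1 / m) * X.T.dot(error)
        grad[1:] += (Lambda / m) * theta[1:]  # skip the bias term
        theta = theta - alpha * grad
    return theta

# Synthetic data: y = 1 + 2*x, with a bias column of ones prepended to X
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=(50, 1))
X = np.hstack([np.ones((50, 1)), x])
y = 1 + 2 * x

theta = np.zeros((2, 1))
theta = gradientDescent(X, y, theta, alpha=0.5, num_iters=2000, Lambda=0.0)
print(theta.ravel())  # should be close to [1, 2]
```

Increasing `Lambda` shrinks `theta[1:]` toward zero while leaving the bias untouched, which is the intended overfitting control.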