MATLAB gradient descent code
Date: 2023-09-28 15:10:22 · Views: 98
Sure! Here's example code implementing the gradient descent algorithm in MATLAB:
```
% Define the cost function
cost_function = @(x) (x^2 + 5*x + 6);

% Define the derivative of the cost function
gradient = @(x) (2*x + 5);

% Set the learning rate and maximum number of iterations
learning_rate = 0.1;
max_iterations = 100;

% Initialize the starting point
x = 0;

% Perform gradient descent
for i = 1:max_iterations
    % Compute the gradient at the current point
    grad = gradient(x);

    % Update the position
    x = x - learning_rate * grad;

    % Print the current position and cost
    fprintf('Iteration %d: x = %.4f, Cost = %.4f\n', i, x, cost_function(x));

    % Check for convergence
    if abs(grad) < 1e-6
        break;
    end
end

% Print the final result
fprintf('Final Result: x = %.4f, Cost = %.4f\n', x, cost_function(x));
```
In this code, we first define the cost function that we want to minimize (`cost_function`) and its derivative (`gradient`). We then set the learning rate and maximum number of iterations. We initialize the starting point `x` and perform the gradient descent by updating `x` using the learning rate and the gradient of the cost function. We print the current position and cost at each iteration and check for convergence by comparing the absolute value of the gradient with a small threshold (`1e-6` in this case). Finally, we print the final result.
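For this particular cost function you can also verify the answer analytically: setting the derivative `2*x + 5` to zero gives `x = -2.5`, with a minimum cost of `(-2.5)^2 + 5*(-2.5) + 6 = -0.25`. As a quick sanity check, you can compare against MATLAB's built-in one-dimensional minimizer `fminbnd` (the bracketing interval `[-10, 10]` below is an arbitrary choice for illustration):

```
% Sanity check: compare gradient descent against MATLAB's fminbnd.
f = @(x) x^2 + 5*x + 6;
x_min = fminbnd(f, -10, 10);   % search interval chosen arbitrarily
fprintf('fminbnd minimum: x = %.4f, Cost = %.4f\n', x_min, f(x_min));
% x_min should be close to -2.5 and the cost close to -0.25
```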
Note that this is a simple example and can be adapted to more complex cost functions or gradient calculations as needed.
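As one sketch of such an adaptation (using an arbitrary two-variable quadratic chosen purely for illustration), the same loop carries over to a vector-valued `x`: the scalar derivative becomes a gradient vector, and the convergence check uses `norm` instead of `abs`:

```
% Gradient descent on a two-variable quadratic (illustrative example):
% f(x) = (x1 - 1)^2 + 2*(x2 + 3)^2, which is minimized at [1; -3].
cost_function = @(x) (x(1) - 1)^2 + 2*(x(2) + 3)^2;
gradient = @(x) [2*(x(1) - 1); 4*(x(2) + 3)];

learning_rate = 0.1;
max_iterations = 1000;
x = [0; 0];                      % starting point (column vector)

for i = 1:max_iterations
    grad = gradient(x);
    % Update all coordinates at once with a vector step
    x = x - learning_rate * grad;
    % Vector analogue of abs(grad) < tolerance
    if norm(grad) < 1e-6
        break;
    end
end

fprintf('Minimum near x = [%.4f, %.4f], Cost = %.4f\n', ...
        x(1), x(2), cost_function(x));
```

The only structural changes from the scalar version are the column-vector state and the `norm`-based stopping test; everything else, including the learning-rate update, is unchanged.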