Three-Layer Neural Network Backpropagation Algorithm in MATLAB
Below is MATLAB code implementing the backpropagation algorithm for a three-layer neural network:
```matlab
% Set the number of neurons in the input, hidden, and output layers
input_layer_size = 2;   % input-layer neurons
hidden_layer_size = 4;  % hidden-layer neurons
num_labels = 1;         % output-layer neurons

% Randomly initialize the weight matrices in [-1, 1]
Theta1 = rand(hidden_layer_size, input_layer_size + 1) * 2 - 1; % hidden-layer weights
Theta2 = rand(num_labels, hidden_layer_size + 1) * 2 - 1;       % output-layer weights

% Load the training data (expects matrices X and y in data.mat)
load('data.mat');

% Train the network. Note: fmincg is not built into MATLAB; it ships with
% the Coursera Machine Learning exercises. fminunc (Optimization Toolbox)
% can be substituted using optimset('MaxIter', 1000, 'GradObj', 'on').
options = optimset('MaxIter', 1000);
initial_params = [Theta1(:); Theta2(:)];
costFunc = @(p) nnCostFunction(p, input_layer_size, hidden_layer_size, ...
                               num_labels, X, y);
[nn_params, cost] = fmincg(costFunc, initial_params, options);

% Roll the optimized parameter vector back into weight matrices
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, input_layer_size + 1);
Theta2 = reshape(nn_params(hidden_layer_size * (input_layer_size + 1) + 1:end), ...
                 num_labels, hidden_layer_size + 1);

% Predict the output for a new sample
x_new = [1.5, 2.0];        % input values of the new sample
a1_new = [1 x_new];        % add the bias unit
z2_new = a1_new * Theta1';
a2_new = [1 sigmoid(z2_new)];
z3_new = a2_new * Theta2';
h_new = sigmoid(z3_new);   % predicted probability

% Cost function with gradients (forward + backward pass). fmincg/fminunc
% require J and grad from a single function. In a script file, local
% functions must appear at the end (MATLAB R2016b or later).
function [J, grad] = nnCostFunction(nn_params, input_layer_size, ...
                                    hidden_layer_size, num_labels, X, y)
    % Recover the weight matrices from the unrolled parameter vector
    Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                     hidden_layer_size, input_layer_size + 1);
    Theta2 = reshape(nn_params(hidden_layer_size * (input_layer_size + 1) + 1:end), ...
                     num_labels, hidden_layer_size + 1);

    % Forward propagation
    m = size(X, 1);
    a1 = [ones(m, 1) X];            % add bias column
    z2 = a1 * Theta1';
    a2 = [ones(m, 1) sigmoid(z2)];
    z3 = a2 * Theta2';
    h = sigmoid(z3);

    % Cross-entropy cost
    J = sum(sum(-y .* log(h) - (1 - y) .* log(1 - h))) / m;

    % Backpropagation: output-layer error, then hidden-layer error
    % (drop the bias column of Theta2 rather than padding z2)
    delta3 = h - y;
    delta2 = (delta3 * Theta2(:, 2:end)) .* sigmoidGradient(z2);

    % Gradients, unrolled to match nn_params
    Theta1_grad = delta2' * a1 / m;
    Theta2_grad = delta3' * a2 / m;
    grad = [Theta1_grad(:); Theta2_grad(:)];
end

% Sigmoid activation
function g = sigmoid(z)
    g = 1 ./ (1 + exp(-z));
end

% Derivative of the sigmoid
function g = sigmoidGradient(z)
    g = sigmoid(z) .* (1 - sigmoid(z));
end
```
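The backpropagation formulas used above (output error `delta3 = h - y`, hidden error propagated through the weights, gradients `delta' * a / m`) can be sanity-checked numerically. The sketch below re-implements the same forward/backward pass in NumPy (an illustrative translation, not part of the original MATLAB code) and compares the analytic gradient of `Theta1` against central finite differences:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_and_grads(Theta1, Theta2, X, y):
    """Forward pass, cross-entropy cost, and backprop gradients,
    mirroring the MATLAB formulas above."""
    m = X.shape[0]
    a1 = np.hstack([np.ones((m, 1)), X])           # add bias column
    z2 = a1 @ Theta1.T
    a2 = np.hstack([np.ones((m, 1)), sigmoid(z2)])
    z3 = a2 @ Theta2.T
    h = sigmoid(z3)
    J = np.sum(-y * np.log(h) - (1 - y) * np.log(1 - h)) / m
    delta3 = h - y                                 # output-layer error
    delta2 = (delta3 @ Theta2[:, 1:]) * sigmoid(z2) * (1 - sigmoid(z2))
    Theta1_grad = delta2.T @ a1 / m
    Theta2_grad = delta3.T @ a2 / m
    return J, Theta1_grad, Theta2_grad

# Tiny random problem: 2 inputs, 4 hidden units, 1 output, 5 samples
rng = np.random.default_rng(0)
Theta1 = rng.uniform(-1, 1, (4, 3))
Theta2 = rng.uniform(-1, 1, (1, 5))
X = rng.uniform(-1, 1, (5, 2))
y = rng.integers(0, 2, (5, 1)).astype(float)

J, g1, g2 = cost_and_grads(Theta1, Theta2, X, y)

# Central-difference check of the Theta1 gradient
eps = 1e-6
num_g1 = np.zeros_like(Theta1)
for i in range(Theta1.shape[0]):
    for j in range(Theta1.shape[1]):
        Tp = Theta1.copy(); Tp[i, j] += eps
        Tm = Theta1.copy(); Tm[i, j] -= eps
        num_g1[i, j] = (cost_and_grads(Tp, Theta2, X, y)[0]
                        - cost_and_grads(Tm, Theta2, X, y)[0]) / (2 * eps)

print(np.max(np.abs(g1 - num_g1)))  # analytic vs numerical gradient gap
```

If the backprop derivation is correct, the printed gap is on the order of floating-point noise; a large gap indicates a sign or indexing error in the delta terms.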
Note: this code implements the backpropagation algorithm for a simple three-layer network and only handles binary classification as written. To apply it to a multi-class problem, some modifications are needed (a multi-unit output layer and one-hot encoded labels).
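The main multi-class change is to set `num_labels` to the number of classes K and encode each label as a one-hot row, so the network has one sigmoid output per class; the prediction is then the index of the largest output activation. A minimal sketch of the label encoding (in NumPy, purely for illustration):

```python
import numpy as np

def one_hot(labels, num_labels):
    """Turn integer class labels (0..K-1) into one-hot rows,
    one sigmoid output unit per class."""
    Y = np.zeros((labels.size, num_labels))
    Y[np.arange(labels.size), labels] = 1.0
    return Y

labels = np.array([0, 2, 1, 2])
Y = one_hot(labels, 3)
print(Y)
# The prediction step then becomes an argmax over the K outputs:
# pred = np.argmax(h, axis=1)
```

With this encoding, the cross-entropy cost and gradient formulas in the code above carry over unchanged, summed over all K output units.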