BP Neural Network Regression Example in MATLAB
Below is a simple example of BP (back-propagation) neural network regression implemented in MATLAB.
Data preparation:
For the demonstration we use a simple dataset with two variables, X and Y. The BP network is trained on X to predict the value of Y.
First, we load the dataset, split it into a training set and a test set, and normalize it.
```
% Load data (assumes data.mat contains an N-by-2 matrix named "data")
load('data.mat');
% Split data into training and testing sets
train_data = data(1:80,:);
test_data = data(81:end,:);
% Normalize data (zero mean, unit variance per column; in practice the test
% set should be scaled with the training-set statistics)
train_data = normalize(train_data);
test_data = normalize(test_data);
```
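If you do not have a data.mat file at hand, a minimal sketch that fabricates a synthetic X/Y dataset (the noisy sine relationship below is purely an assumption for illustration) can replace the load call so the rest of the example runs end to end:
```
% Synthetic stand-in for data.mat (assumed shape: N-by-2 matrix [X, Y])
rng(0);                          % reproducible random numbers
N = 100;
X = linspace(-3, 3, N)';         % single input variable
Y = sin(X) + 0.1*randn(N, 1);    % noisy target; purely illustrative
data = [X, Y];
% The split/normalization code above can then be used unchanged.
```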
Network setup:
Next, we set the structure and parameters of the neural network. We define a three-layer BP network with one neuron in the input layer, two neurons in the hidden layer, and one neuron in the output layer. The hidden layer uses the sigmoid activation function, the output layer is linear (as in the code below), and the mean squared error is used as the loss function.
```
% Set network parameters
input_layer_size = 1;
hidden_layer_size = 2;
output_layer_size = 1;
% Initialize network weights and biases
W1 = randn(hidden_layer_size, input_layer_size);
b1 = randn(hidden_layer_size, 1);
W2 = randn(output_layer_size, hidden_layer_size);
b2 = randn(output_layer_size, 1);
% Set learning rate and number of epochs
alpha = 0.01;
epochs = 1000;
% Set activation function and loss function
sigmoid = @(x) 1./(1+exp(-x));
mse = @(y, t) mean((y - t).^2);
```
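In equations, the model defined by the code above is (a sketch using the same symbols as the code):

$$
a_1 = \sigma(W_1 x + b_1), \qquad \hat{y} = W_2 a_1 + b_2, \qquad \sigma(z) = \frac{1}{1 + e^{-z}},
$$

and the loss over the $m$ training samples is

$$
\mathrm{MSE} = \frac{1}{m} \sum_{i=1}^{m} \left(\hat{y}^{(i)} - y^{(i)}\right)^2 .
$$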
Training the network:
Now we can train the network. In each epoch we run a forward pass to compute the network output, then a backward pass to update the weights and biases by gradient descent.
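For reference, the quantities computed in the backward pass below follow from the chain rule (a sketch using the code's variable names; the constant factor from the MSE derivative is absorbed into the learning rate, and the sums over samples are averaged in the update step):

$$
\delta_2 = \hat{y} - y, \qquad \delta_1 = \left(W_2^{\top} \delta_2\right) \odot a_1 \odot (1 - a_1),
$$

$$
\frac{\partial L}{\partial W_2} \propto \sum_i \delta_2^{(i)} \, a_1^{(i)\top}, \quad
\frac{\partial L}{\partial b_2} \propto \sum_i \delta_2^{(i)}, \quad
\frac{\partial L}{\partial W_1} \propto \sum_i \delta_1^{(i)} \, x^{(i)}, \quad
\frac{\partial L}{\partial b_1} \propto \sum_i \delta_1^{(i)} .
$$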
```
% Train network (batch gradient descent)
m = size(train_data, 1);               % number of training samples
for i = 1:epochs
    % Forward propagation
    z1 = W1 * train_data(:,1)' + b1;   % implicit expansion of b1 (R2016b+)
    a1 = sigmoid(z1);
    z2 = W2 * a1 + b2;
    y = z2;                            % linear output layer
    % Backward propagation
    delta2 = y - train_data(:,2)';
    delta1 = (W2' * delta2) .* a1 .* (1 - a1);
    % Update weights and biases (gradients averaged over the batch)
    W2 = W2 - alpha * (delta2 * a1') / m;
    b2 = b2 - alpha * sum(delta2, 2) / m;
    W1 = W1 - alpha * (delta1 * train_data(:,1)) / m;
    b1 = b1 - alpha * sum(delta1, 2) / m;
    % Calculate training error
    train_error = mse(y, train_data(:,2)');
    % Display progress
    if mod(i, 100) == 0
        fprintf('Epoch %d, Training error: %f\n', i, train_error);
    end
end
```
Testing the network:
After training, we evaluate the network's performance on the test set: run a forward pass to compute the outputs and compute the test error.
```
% Test network
z1 = W1 * test_data(:,1)' + b1;
a1 = sigmoid(z1);
z2 = W2 * a1 + b2;
y = z2;
% Calculate testing error
test_error = mse(y, test_data(:,2)');
% Display results
fprintf('Testing error: %f\n', test_error);
```
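A quick visual check of the fit can be added after the test block (a sketch; it assumes the variables from the code above are still in the workspace):
```
% Plot test targets against network predictions (normalized scale)
figure;
plot(test_data(:,1), test_data(:,2), 'bo'); hold on;
plot(test_data(:,1), y', 'r.');
legend('Target', 'Prediction');
xlabel('X (normalized)'); ylabel('Y (normalized)');
title('BP network regression on the test set');
```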
Complete code:
```
% Load data (assumes data.mat contains an N-by-2 matrix named "data")
load('data.mat');
% Split data into training and testing sets
train_data = data(1:80,:);
test_data = data(81:end,:);
% Normalize data (in practice, scale the test set with the training-set statistics)
train_data = normalize(train_data);
test_data = normalize(test_data);
% Set network parameters
input_layer_size = 1;
hidden_layer_size = 2;
output_layer_size = 1;
% Initialize network weights and biases
W1 = randn(hidden_layer_size, input_layer_size);
b1 = randn(hidden_layer_size, 1);
W2 = randn(output_layer_size, hidden_layer_size);
b2 = randn(output_layer_size, 1);
% Set learning rate and number of epochs
alpha = 0.01;
epochs = 1000;
% Set activation function and loss function
sigmoid = @(x) 1./(1+exp(-x));
mse = @(y, t) mean((y - t).^2);
% Train network (batch gradient descent)
m = size(train_data, 1);               % number of training samples
for i = 1:epochs
    % Forward propagation
    z1 = W1 * train_data(:,1)' + b1;   % implicit expansion of b1 (R2016b+)
    a1 = sigmoid(z1);
    z2 = W2 * a1 + b2;
    y = z2;                            % linear output layer
    % Backward propagation
    delta2 = y - train_data(:,2)';
    delta1 = (W2' * delta2) .* a1 .* (1 - a1);
    % Update weights and biases (gradients averaged over the batch)
    W2 = W2 - alpha * (delta2 * a1') / m;
    b2 = b2 - alpha * sum(delta2, 2) / m;
    W1 = W1 - alpha * (delta1 * train_data(:,1)) / m;
    b1 = b1 - alpha * sum(delta1, 2) / m;
    % Calculate training error
    train_error = mse(y, train_data(:,2)');
    % Display progress
    if mod(i, 100) == 0
        fprintf('Epoch %d, Training error: %f\n', i, train_error);
    end
end
% Test network
z1 = W1 * test_data(:,1)' + b1;
a1 = sigmoid(z1);
z2 = W2 * a1 + b2;
y = z2;
% Calculate testing error
test_error = mse(y, test_data(:,2)');
% Display results
fprintf('Testing error: %f\n', test_error);
```
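For comparison, the same regression can also be done with MATLAB's Deep Learning Toolbox (formerly Neural Network Toolbox). A minimal sketch, assuming the N-by-2 matrix `data` from above is available:
```
% fitnet expects inputs/targets as (features x samples) matrices
x = data(:,1)';              % 1-by-N inputs
t = data(:,2)';              % 1-by-N targets
net = fitnet(2);             % one hidden layer with 2 neurons
net = train(net, x, t);      % Levenberg-Marquardt training by default
y_hat = net(x);              % network predictions
fprintf('Toolbox MSE: %f\n', mean((y_hat - t).^2));
```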