The data is an Excel file containing one column of LS values that change over time. We want to use an LSTM model in PyTorch to do regression forecasting on LS, using the previous 50 data points to predict the following 10. Please give Python code that reports evaluation metrics and produces plots; no normalization is needed.
Sure. For your problem, here is an implementation that uses the LSTM model in PyTorch for regression forecasting; the evaluation metrics include MSE and MAE:
```python
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Read the data
data = pd.read_excel('data.xlsx', sheet_name='Sheet1')
ls_data = data['LS'].values
# Build the dataset: each sample uses the previous 50 values to predict the next 10
def create_dataset(data, look_back=50, look_forward=10):
    data_x, data_y = [], []
    for i in range(len(data) - look_back - look_forward):
        data_x.append(data[i:(i + look_back)])
        data_y.append(data[(i + look_back):(i + look_back + look_forward)])
    return np.array(data_x), np.array(data_y)
x, y = create_dataset(ls_data)
# Split into training and test sets
train_size = int(len(x) * 0.7)
test_size = len(x) - train_size
train_x, train_y = x[:train_size], y[:train_size]
test_x, test_y = x[train_size:], y[train_size:]
# Define the model
class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, num_layers=1):
        super(LSTM, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        # batch_first=True so inputs have shape (batch, seq_len, input_size)
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device)
        out, _ = self.lstm(x, (h0, c0))
        # Map the hidden state of the last time step to the 10 forecast values
        out = self.fc(out[:, -1, :])
        return out
# Hyperparameters
input_size = 1
hidden_size = 64
output_size = 10
num_layers = 2
lr = 0.001
num_epochs = 50
# Initialize the model, loss function and optimizer
lstm = LSTM(input_size, hidden_size, output_size, num_layers)
criterion = nn.MSELoss()
optimizer = optim.Adam(lstm.parameters(), lr=lr)
# Train the model
train_loss = []
for epoch in range(num_epochs):
    inputs = torch.from_numpy(train_x).float().unsqueeze(-1)  # (batch, 50, 1)
    targets = torch.from_numpy(train_y).float()               # (batch, 10)
    outputs = lstm(inputs)
    loss = criterion(outputs, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    train_loss.append(loss.item())
    print('Epoch [{}/{}], Train Loss: {:.4f}'.format(epoch + 1, num_epochs, loss.item()))
# Evaluate on the test set
lstm.eval()
with torch.no_grad():
    test_inputs = torch.from_numpy(test_x).float().unsqueeze(-1)
    test_targets = torch.from_numpy(test_y).float()
    test_outputs = lstm(test_inputs)
    test_loss = criterion(test_outputs, test_targets)
    print('Test Loss: {:.4f}'.format(test_loss.item()))
# Plot the training loss curve
plt.plot(train_loss)
plt.title('Train Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.show()
# Plot the prediction results (1-step-ahead prediction for every window)
with torch.no_grad():
    all_pred = lstm(torch.from_numpy(x).float().unsqueeze(-1)).numpy()  # (n_samples, 10)
plt.plot(ls_data, label='True Data')
# Each window's first predicted value lands 50 steps after the window start
plt.plot(np.arange(50, 50 + len(all_pred)), all_pred[:, 0], label='Predictions')
plt.axvline(x=train_size + 50, c='r', linestyle='--', alpha=0.5, label='Train/Test Split')
plt.legend()
plt.show()
# Evaluation metrics on the test set
from sklearn.metrics import mean_squared_error, mean_absolute_error
test_pred = test_outputs.numpy()
mse = mean_squared_error(test_y, test_pred)
mae = mean_absolute_error(test_y, test_pred)
print('MSE: {:.4f}'.format(mse))
print('MAE: {:.4f}'.format(mae))
```
In this code we first read the data file, then use `create_dataset()` to turn the series into input/output windows for the LSTM (50 input steps, 10 output steps), split the samples into training and test sets, and define a two-layer LSTM model. The model is trained with MSE as the loss function and Adam as the optimizer, and at the end we plot the training loss and the predictions and print the MSE and MAE of the test-set predictions.
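If you also want to forecast the 10 values that follow the very end of the series (rather than only evaluating on held-out windows), a minimal sketch is shown here; it assumes the trained `lstm` model and `ls_data` from the code above are still in scope:

```python
# Minimal sketch: forecast the 10 values after the end of the series,
# assuming `lstm` and `ls_data` are defined as in the code above.
lstm.eval()
with torch.no_grad():
    last_window = torch.from_numpy(ls_data[-50:]).float().view(1, 50, 1)  # (batch, seq_len, features)
    future = lstm(last_window).squeeze(0).numpy()                         # 10 predicted values
print(future)
```

Because the final linear layer outputs all 10 steps at once, this is a direct multi-step forecast; no iterative feeding of predictions back into the model is needed.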
Note that, as requested, this example does not normalize the data. Keep in mind, though, that LSTMs are not inherently insensitive to the magnitude and range of their inputs: if the LS values are large or span a wide range, scaling them (for example with min-max normalization) usually makes training converge faster and more stably.
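If you later decide that scaling would help, a minimal min-max normalization sketch follows; the helper names (`ls_scaled`, `inverse_scale`) are illustrative only, and the statistics are fitted on the training portion to avoid leaking test information:

```python
# Minimal sketch: optional min-max scaling, fitted on the training portion only.
# `ls_scaled` and `inverse_scale` are illustrative names, not part of the code above.
train_part = ls_data[:int(len(ls_data) * 0.7)]
ls_min, ls_max = train_part.min(), train_part.max()
ls_scaled = (ls_data - ls_min) / (ls_max - ls_min + 1e-8)  # build windows from this instead of ls_data

def inverse_scale(pred):
    # Map scaled predictions back to the original LS units before computing metrics
    return pred * (ls_max - ls_min + 1e-8) + ls_min
```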