Building an LSTM + attention model in PyTorch for multivariate-input, single-output time-series prediction, with training, prediction, validation, and visualization of the results
Date: 2024-05-09 19:20:30
Below is example code for building a multivariate-input, single-output time-series prediction model in PyTorch using an LSTM with an attention mechanism:
```python
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt

# Define the LSTM + attention model
class LSTMAttentionModel(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers=1, dropout=0.0):
        super(LSTMAttentionModel, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.dropout = dropout
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers,
                            batch_first=True, dropout=dropout)
        self.attention = nn.Linear(hidden_size, 1)
        self.fc = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # LSTM layer
        output, hidden = self.lstm(x)
        # Compute attention weights over the time dimension
        attention_weights = torch.softmax(self.attention(output), dim=1)
        # Weighted average of the hidden states
        context_vector = torch.sum(attention_weights * output, dim=1)
        # Output layer
        output = self.fc(context_vector)
        return output

# Train the model
def train(model, train_data, train_labels, validation_data, validation_labels,
          optimizer, criterion, num_epochs):
    train_loss = []
    validation_loss = []
    for epoch in range(num_epochs):
        # Training step
        model.train()
        optimizer.zero_grad()
        train_output = model(train_data)
        loss = criterion(train_output, train_labels)
        loss.backward()
        optimizer.step()
        train_loss.append(loss.item())
        # Validation step
        model.eval()
        with torch.no_grad():
            validation_output = model(validation_data)
            loss = criterion(validation_output, validation_labels)
            validation_loss.append(loss.item())
        # Report progress
        print("Epoch [{}/{}], Train Loss: {:.4f}, Validation Loss: {:.4f}"
              .format(epoch + 1, num_epochs, train_loss[-1], validation_loss[-1]))
    return train_loss, validation_loss

# Predict
def predict(model, data):
    model.eval()
    with torch.no_grad():
        output = model(data)
    return output.numpy()

# Generate synthetic data
def generate_data(num_samples, seq_length):
    x = np.zeros((num_samples, seq_length, 2))
    y = np.zeros((num_samples, 1))
    for i in range(num_samples):
        # Generate a random sequence
        seq = np.random.randn(seq_length, 2)
        # Compute the label (mean of the first variable)
        label = np.mean(seq[:, 0])
        # Add noise
        seq += np.random.randn(seq_length, 2) * 0.1
        # Store the sequence and its label
        x[i, :, :] = seq
        y[i, 0] = label
    return x, y

# Hyperparameters
input_size = 2
hidden_size = 16
num_layers = 1
dropout = 0.0
batch_size = 32
num_epochs = 50
learning_rate = 0.01
seq_length = 10
num_samples = 1000

# Build the dataset
x, y = generate_data(num_samples, seq_length)
train_data = torch.from_numpy(x[:800, :, :]).float()
train_labels = torch.from_numpy(y[:800, :]).float()
validation_data = torch.from_numpy(x[800:, :, :]).float()
validation_labels = torch.from_numpy(y[800:, :]).float()

# Create the model, optimizer, and loss function
model = LSTMAttentionModel(input_size, hidden_size, num_layers, dropout)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
criterion = nn.MSELoss()

# Train the model
train_loss, validation_loss = train(model, train_data, train_labels,
                                    validation_data, validation_labels,
                                    optimizer, criterion, num_epochs)

# Predict (note: these 10 samples come from the training split, for illustration only)
test_data = torch.from_numpy(x[:10, :, :]).float()
test_labels = torch.from_numpy(y[:10, :]).float()
predicted_labels = predict(model, test_data)

# Visualize the results
plt.plot(train_loss, label="Train Loss")
plt.plot(validation_loss, label="Validation Loss")
plt.legend()
plt.show()

plt.plot(test_labels.numpy(), label="True Labels")
plt.plot(predicted_labels, label="Predicted Labels")
plt.legend()
plt.show()
```
In the example above, we first define an `LSTMAttentionModel` class containing an LSTM layer and an attention layer. In the `forward` method, the input sequence is first encoded by the LSTM; the attention layer then scores each time step, and a softmax over the time dimension turns the scores into weights. A weighted average of the encoded sequence produces a context vector, which a fully connected layer maps to a single scalar output.
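To make the attention step concrete, the short sketch below reproduces just that computation on a random tensor standing in for the LSTM output (the shapes match the model above), and checks that the softmax weights sum to 1 over the time dimension:

```python
import torch

# Simulated LSTM output: (batch, seq_len, hidden_size)
batch, seq_len, hidden_size = 4, 10, 16
lstm_output = torch.randn(batch, seq_len, hidden_size)

# Attention scoring layer, as in the model above
attention = torch.nn.Linear(hidden_size, 1)
scores = attention(lstm_output)             # (batch, seq_len, 1)
weights = torch.softmax(scores, dim=1)      # normalize over time steps

# Each sample's weights sum to 1 across its time steps
print(weights.sum(dim=1).squeeze(-1))       # values all ~1.0, shape (batch,)

# Weighted average over time yields the context vector
context = torch.sum(weights * lstm_output, dim=1)  # (batch, hidden_size)
print(context.shape)                               # torch.Size([4, 16])
```

Because the weights broadcast over the hidden dimension, the context vector is a convex combination of the per-step hidden states.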
Next, the `train` function trains the model. In each epoch, we put the model in training mode and run a forward pass, backward pass, and optimizer step on the training data; we then switch to evaluation mode and compute the loss on the validation data. Finally, both losses are printed.
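Note that the `train` function above does full-batch gradient descent: every epoch uses the entire training set in one forward/backward pass. For larger datasets, mini-batch training via `DataLoader` is more typical. The following is a minimal sketch of that pattern, using toy tensors and a simple linear model as a stand-in (not the `LSTMAttentionModel` itself):

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

# Toy data standing in for the (samples, features) tensors above
x = torch.randn(100, 8)
y = torch.randn(100, 1)
loader = DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)

model = nn.Linear(8, 1)  # stand-in for a real sequence model
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

for epoch in range(3):
    epoch_loss = 0.0
    for xb, yb in loader:                 # iterate over mini-batches
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item() * xb.size(0)  # accumulate sample-weighted loss
    print(f"epoch {epoch + 1}: loss {epoch_loss / len(loader.dataset):.4f}")
```

The same loop structure would apply to the LSTM model by swapping in the 3-D `(samples, seq_length, features)` tensors and the `LSTMAttentionModel`.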
After training, the `predict` function generates predictions for the test data, and the results are visualized: we plot the training and validation loss curves, followed by a comparison of the true and predicted labels for the first 10 samples.
Note that, since this is a time-series model, the synthetic data uses random sequences as input and the mean of the first variable as the label. This gives the generated data a sequential structure while ensuring some correlation between each input sequence and its label. In a real application, you would build the dataset from the data of your specific problem instead.
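For a real multivariate series, a common way to build such a dataset is to slice the series into fixed-length overlapping windows and use the next value of the target variable as each window's label. The sketch below uses a hypothetical `make_windows` helper (not part of the original code) to illustrate the idea:

```python
import numpy as np

def make_windows(series, seq_length):
    """Slice a (T, num_features) series into overlapping windows.

    Returns x of shape (T - seq_length, seq_length, num_features) and
    y of shape (T - seq_length, 1): the next value of feature 0.
    """
    xs, ys = [], []
    for i in range(len(series) - seq_length):
        xs.append(series[i:i + seq_length])       # input window
        ys.append(series[i + seq_length, 0])      # one-step-ahead target
    return np.stack(xs), np.array(ys).reshape(-1, 1)

# Example: 100 time steps, 2 variables, windows of length 10
series = np.random.randn(100, 2)
x, y = make_windows(series, seq_length=10)
print(x.shape, y.shape)  # (90, 10, 2) (90, 1)
```

The resulting arrays have the same `(samples, seq_length, features)` layout the model expects, so they can be converted with `torch.from_numpy(...).float()` exactly as in the main example. For time-series data, the train/validation split should be chronological rather than random to avoid leakage.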