LSTM code in PyTorch for predicting future data
LSTM (Long Short-Term Memory) is a variant of the recurrent neural network (RNN) commonly used for sequence data such as time-series forecasting. In PyTorch, an LSTM model can be built with the `nn.LSTM` module and trained on historical data to predict future values. The example below defines a basic LSTM model, trains it, and then rolls it forward autoregressively to predict future data:
```python
import torch
import torch.nn as nn
# Define the LSTM model
class LSTMModel(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, output_size):
        super(LSTMModel, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x, hidden=None):
        # x shape: (batch_size, sequence_length, input_size)
        # hidden is an optional (h0, c0) tuple; passing None lets PyTorch
        # initialize both states to zeros
        out, hidden = self.lstm(x, hidden)
        # out shape: (batch_size, sequence_length, hidden_size)
        out = self.fc(out[:, -1, :])  # feed the last time step's hidden state to the linear layer
        return out, hidden
# Initialize model hyperparameters
input_size = ...   # number of input features per time step
hidden_size = ...  # size of the LSTM hidden state
num_layers = ...   # number of stacked LSTM layers
output_size = ...  # dimension of the predicted value
model = LSTMModel(input_size, hidden_size, num_layers, output_size)

# Assume X_train has shape (num_samples, seq_len, input_size) and
# y_train has shape (num_samples, output_size), e.g. built with a sliding window
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
num_epochs = 100

for epoch in range(num_epochs):
    optimizer.zero_grad()
    outputs, _ = model(X_train)        # hidden state defaults to zeros
    loss = criterion(outputs, y_train)
    loss.backward()
    optimizer.step()
# Autoregressive prediction of future values
def predict_future_data(model, x_last, num_steps):
    # x_last: the most recent observed window, shape (1, seq_len, input_size)
    # Assumes output_size == input_size so each prediction can be fed back in
    model.eval()
    predictions = []
    with torch.no_grad():
        pred, hidden = model(x_last)   # warm up the hidden state on the known window
        predictions.append(pred)
        for _ in range(num_steps - 1):
            # Feed the previous prediction back as a one-step sequence,
            # carrying the (h, c) hidden state forward between steps
            pred, hidden = model(pred.unsqueeze(1), hidden)
            predictions.append(pred)
    return torch.cat(predictions, dim=0)  # shape (num_steps, output_size)
```
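The placeholders above (the `...` hyperparameters and the already-prepared `X_train`/`y_train` tensors) still need to be filled in. Below is a minimal sketch of how the pieces could fit together on a synthetic sine-wave series; the `make_windows` helper, the sine-wave data, and all hyperparameter values are illustrative assumptions rather than part of the original answer.

```python
import torch
import torch.nn as nn

# Hypothetical helper: turn a 1-D series into (window, next value) pairs
def make_windows(series, seq_len):
    xs, ys = [], []
    for i in range(len(series) - seq_len):
        xs.append(series[i:i + seq_len])
        ys.append(series[i + seq_len])
    x = torch.tensor(xs, dtype=torch.float32).unsqueeze(-1)  # (N, seq_len, 1)
    y = torch.tensor(ys, dtype=torch.float32).unsqueeze(-1)  # (N, 1)
    return x, y

# Illustrative data: a noiseless sine wave
series = torch.sin(torch.linspace(0, 20, 400)).tolist()
seq_len = 30
X_train, y_train = make_windows(series, seq_len)

# Assumed hyperparameters for a univariate series
model = LSTMModel(input_size=1, hidden_size=32, num_layers=1, output_size=1)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(200):
    optimizer.zero_grad()
    outputs, _ = model(X_train)
    loss = criterion(outputs, y_train)
    loss.backward()
    optimizer.step()

# Predict the next 50 points from the last observed window
last_window = X_train[-1:].clone()              # shape (1, seq_len, 1)
future = predict_future_data(model, last_window, num_steps=50)
print(future.shape)                             # torch.Size([50, 1])
```

Carrying the (h, c) state between one-step predictions avoids re-encoding the whole window at every step; an equally valid alternative is to re-feed a sliding window of past predictions with a fresh hidden state each time.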