A code example of simultaneous time-series forecasting on multi-dimensional data with a PyTorch LSTM model
In PyTorch, the LSTM (Long Short-Term Memory network) is a powerful and widely used tool for sequence data, and it is particularly well suited to time-series forecasting. Below is a simple example of multi-dimensional time-series forecasting with an LSTM:
```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Suppose we have an input dataset of shape (batch_size, sequence_length, input_dim)
input_data = torch.randn(64, 50, 10)  # batch size, number of time steps, number of features
target_data = torch.randn(64, 1)      # targets: the value to predict one step ahead

class LSTMModel(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_layers, output_dim):
        super(LSTMModel, self).__init__()
        self.hidden_dim = hidden_dim
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        # x has shape (batch_size, sequence_length, input_dim); the sequence length is x.shape[1]
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim, device=x.device)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim, device=x.device)
        out, _ = self.lstm(x, (h0, c0))
        out = self.fc(out[:, -1, :])  # use the hidden state of the last time step as the prediction
        return out

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LSTMModel(input_dim=10, hidden_dim=64, num_layers=2, output_dim=1)
model.to(device)

criterion = nn.MSELoss()  # mean squared error is the usual loss for regression
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

def train(model, dataloader, criterion, optimizer):
    model.train()
    total_loss = 0.0
    for inputs, targets in dataloader:
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    return total_loss / len(dataloader)

# Create the data loader
dataset = TensorDataset(input_data, target_data)
dataloader = DataLoader(dataset, batch_size=32, shuffle=False)

num_epochs = 10
for epoch in range(num_epochs):
    train_loss = train(model, dataloader, criterion, optimizer)
    print(f"Epoch {epoch + 1}/{num_epochs}, loss: {train_loss:.4f}")  # print training progress
```
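The random tensors above are only placeholders for real data. As one possible way to produce inputs of shape (num_windows, sequence_length, input_dim) from an actual multivariate series, a sliding-window helper might look like the sketch below; the array `series`, the `make_windows` helper, and the choice of forecasting feature 0 one step ahead are illustrative assumptions, not part of the original example.

```python
import numpy as np
import torch

def make_windows(series, seq_len):
    """Slice a (num_timesteps, num_features) array into overlapping windows.

    Returns inputs of shape (num_windows, seq_len, num_features) and targets of
    shape (num_windows, 1) holding feature 0 one step after each window (an assumed target).
    """
    xs, ys = [], []
    for start in range(len(series) - seq_len):
        xs.append(series[start:start + seq_len])
        ys.append(series[start + seq_len, 0:1])  # next-step value of feature 0
    return (torch.tensor(np.array(xs), dtype=torch.float32),
            torch.tensor(np.array(ys), dtype=torch.float32))

# Hypothetical raw series: 200 time steps, 10 features
series = np.random.randn(200, 10).astype(np.float32)
input_data, target_data = make_windows(series, seq_len=50)
print(input_data.shape, target_data.shape)  # torch.Size([150, 50, 10]) torch.Size([150, 1])
```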
This example shows how to build a basic LSTM model and the basic training workflow in PyTorch. Remember to adjust the input size, hidden size, number of layers, and other hyperparameters to match your actual data, and to add validation and test steps as needed; a sketch of such an evaluation step follows below.
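As a starting point for the validation or test step mentioned above, here is a minimal sketch of an evaluation loop. It reuses the `model`, `criterion`, and `device` defined earlier; the `evaluate` helper and the validation tensors are hypothetical stand-ins rather than part of the original example.

```python
def evaluate(model, dataloader, criterion):
    """Average loss over a held-out set, with gradients disabled."""
    model.eval()
    total_loss = 0.0
    with torch.no_grad():  # no gradients needed for evaluation
        for inputs, targets in dataloader:
            inputs, targets = inputs.to(device), targets.to(device)
            outputs = model(inputs)
            total_loss += criterion(outputs, targets).item()
    return total_loss / len(dataloader)

# Hypothetical validation set, shaped like the training data above
val_dataset = TensorDataset(torch.randn(16, 50, 10), torch.randn(16, 1))
val_dataloader = DataLoader(val_dataset, batch_size=32, shuffle=False)
print(f"Validation loss: {evaluate(model, val_dataloader, criterion):.4f}")

# Single-window inference: predict the next value for one new sequence of 50 steps
new_window = torch.randn(1, 50, 10).to(device)  # (batch=1, sequence_length, input_dim)
with torch.no_grad():
    prediction = model(new_window)
print(prediction.shape)  # torch.Size([1, 1])
```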