LSTM Time Series Prediction Model: Code Implementation
Date: 2025-01-01 09:26:40
### LSTM Time Series Prediction Model Implementation
An LSTM-based time series prediction model can be built with the PyTorch framework. The following sections show how to create one.
#### Data Preparation
The input data must be reshaped to match what an LSTM network expects. An LSTM typically takes input of shape `(num_samples, seq_len, num_features)`.
```python
import torch
from torch.utils.data import TensorDataset, DataLoader

def prepare_data(X, y, batch_size=4):
    # X must already have shape (num_samples, seq_len, input_size);
    # TensorDataset treats the first dimension as the sample axis,
    # so no extra batch dimension should be added here.
    X_tensor = torch.tensor(X).float()
    y_tensor = torch.tensor(y).float()
    dataset = TensorDataset(X_tensor, y_tensor)
    dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    return dataloader
```
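As a sketch of how a `(num_samples, seq_len, num_features)` array might be built from a raw series via a sliding window (the window length, feature count, and the choice of predicting the first feature at the next step are all illustrative assumptions, not part of the original article):

```python
import numpy as np

def make_windows(series, seq_len=10):
    # Slide a window of length seq_len over a (T, num_features) array;
    # each window's target is the first feature at the step that follows it.
    X, y = [], []
    for i in range(len(series) - seq_len):
        X.append(series[i:i + seq_len])
        y.append(series[i + seq_len, 0])
    return np.array(X), np.array(y)

# Example: 100 steps with 2 features yields 90 windows of shape (10, 2)
series = np.random.randn(100, 2).astype(np.float32)
X, y = make_windows(series)
```

The resulting `X` and `y` can be passed directly to `prepare_data` above.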
#### Defining the LSTM Model
Define an LSTM model that subclasses `nn.Module`, setting hyperparameters such as the hidden size and the number of layers.
```python
import torch.nn as nn

class LSTMModel(nn.Module):
    def __init__(self, input_dim=2, hidden_dim=50, num_layers=2, output_dim=1):
        super(LSTMModel, self).__init__()
        self.hidden_dim = hidden_dim
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        # Zero-initialize hidden and cell states: (num_layers, batch, hidden_dim)
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim).to(x.device)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim).to(x.device)
        out, _ = self.lstm(x, (h0, c0))
        # Use only the last time step's output for the prediction
        out = self.fc(out[:, -1, :])
        return out
```
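To see why the forward pass slices `out[:, -1, :]`, here is a minimal shape walk-through with a bare `nn.LSTM` and `nn.Linear` (the batch size of 4 and sequence length of 10 are arbitrary values chosen for illustration):

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=2, hidden_size=50, num_layers=2, batch_first=True)
fc = nn.Linear(50, 1)

x = torch.randn(4, 10, 2)    # (batch, seq_len, input_size)
out, (hn, cn) = lstm(x)      # out: (batch, seq_len, hidden_size)
pred = fc(out[:, -1, :])     # last time step only -> (batch, output_dim)
```

With `batch_first=True`, `out` holds the hidden state at every time step; taking index `-1` along the sequence axis keeps just the final step for the regression head.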
#### Training
Write a training loop that repeatedly updates the weights to minimize the loss.
```python
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = LSTMModel().to(device)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

# train_loader comes from the data-preparation step,
# e.g. train_loader = prepare_data(X_train, y_train)
epochs = 100
for epoch in range(epochs):
    model.train()
    running_loss = 0.0
    for inputs, labels in train_loader:
        # labels: (batch,) -> (batch, 1) to match the model output
        inputs, labels = inputs.to(device), labels.unsqueeze(-1).to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f'Epoch [{epoch+1}/{epochs}], Loss: {running_loss/len(train_loader):.4f}')
```
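After training, predictions are made with the network in eval mode and gradient tracking disabled. A self-contained sketch (the class is repeated here so the snippet runs on its own; in practice the trained `model` from the loop above would be reused, and the model here is untrained):

```python
import torch
import torch.nn as nn

class LSTMModel(nn.Module):  # same structure as defined above
    def __init__(self, input_dim=2, hidden_dim=50, num_layers=2, output_dim=1):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])

model = LSTMModel()
model.eval()                        # switch to inference mode
with torch.no_grad():               # no gradient bookkeeping during prediction
    window = torch.randn(1, 10, 2)  # one window: (batch=1, seq_len, input_dim)
    prediction = model(window)
```

`model.eval()` and `torch.no_grad()` together avoid unnecessary autograd overhead and ensure any mode-dependent layers behave correctly at inference time.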