PyTorch LSTM multivariate multi-step time-series forecasting
Date: 2023-09-29 11:05:55
A PyTorch LSTM can be used for multivariate multi-step time-series forecasting. Below is a simple example:
1. Import the required libraries
```python
import torch
import torch.nn as nn
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
```
2. Prepare the data
Suppose we have a dataset containing several time-series variables. We slide a window over the series, using the previous 10 days of data to predict the next 5, and normalize the data with MinMaxScaler.
```python
data = pd.read_csv('data.csv')
data = data.drop(columns=['date'])  # drop the date column
data = data.values                  # convert to a NumPy array
n_features = data.shape[1]          # number of features
n_train = 10                        # input window length
n_pred = 5                          # prediction horizon
scaler = MinMaxScaler()
data = scaler.fit_transform(data)

X_train, y_train = [], []
for i in range(n_train, len(data) - n_pred + 1):
    X_train.append(data[i - n_train:i])
    y_train.append(data[i:i + n_pred])
X_train, y_train = np.array(X_train), np.array(y_train)
```
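The sliding-window construction above is easy to sanity-check on a small synthetic array. The sketch below assumes a hypothetical series of 20 time steps with 3 features, so the expected window counts can be worked out by hand:

```python
import numpy as np

# Toy data: 20 time steps, 3 features, values 0..59 so windows are recognizable
data = np.arange(20 * 3, dtype=float).reshape(20, 3)
n_train, n_pred = 10, 5

X, y = [], []
for i in range(n_train, len(data) - n_pred + 1):
    X.append(data[i - n_train:i])   # input window: steps i-10 .. i-1
    y.append(data[i:i + n_pred])    # target window: steps i .. i+4
X, y = np.array(X), np.array(y)

print(X.shape)  # (6, 10, 3): 20 - 10 - 5 + 1 = 6 windows
print(y.shape)  # (6, 5, 3)
```

Each sample pairs 10 consecutive input steps with the 5 steps that immediately follow them, across all features.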
3. Define the model
We use a model with two LSTM layers of 128 hidden units each, followed by a fully connected layer that maps the last hidden state to the full multi-step forecast (all predicted steps and features at once).
```python
class LSTM(nn.Module):
    def __init__(self, n_features, n_hidden, n_layers, n_pred):
        super().__init__()
        self.n_features = n_features
        self.n_hidden = n_hidden
        self.n_layers = n_layers
        self.n_pred = n_pred
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=n_hidden,
                            num_layers=n_layers, batch_first=True)
        # Map the last hidden state to all predicted steps and features at once
        self.fc = nn.Linear(n_hidden, n_pred * n_features)

    def forward(self, x):
        h0 = torch.zeros(self.n_layers, x.size(0), self.n_hidden, device=x.device)
        c0 = torch.zeros(self.n_layers, x.size(0), self.n_hidden, device=x.device)
        out, _ = self.lstm(x, (h0, c0))
        out = self.fc(out[:, -1, :])  # use only the last time step's output
        return out.view(-1, self.n_pred, self.n_features)
```
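A quick forward pass confirms the output shape matches the targets built in step 2. This sketch restates the model with small hypothetical dimensions (3 features, 16 hidden units) and relies on nn.LSTM's default zero initial state rather than building h0/c0 explicitly:

```python
import torch
import torch.nn as nn

# Minimal restatement of the model above, for checking shapes in isolation
class LSTMForecaster(nn.Module):
    def __init__(self, n_features, n_hidden, n_layers, n_pred):
        super().__init__()
        self.n_pred, self.n_features = n_pred, n_features
        self.lstm = nn.LSTM(n_features, n_hidden, n_layers, batch_first=True)
        self.fc = nn.Linear(n_hidden, n_pred * n_features)

    def forward(self, x):
        out, _ = self.lstm(x)          # initial state defaults to zeros
        out = self.fc(out[:, -1, :])   # last time step only
        return out.view(-1, self.n_pred, self.n_features)

model = LSTMForecaster(n_features=3, n_hidden=16, n_layers=2, n_pred=5)
x = torch.randn(4, 10, 3)  # batch of 4 windows, 10 steps, 3 features
y = model(x)
print(y.shape)  # torch.Size([4, 5, 3]) — matches y_train's (samples, n_pred, n_features)
```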
4. Train the model
We use mean-squared error (MSE) as the loss function and the Adam optimizer, training on the whole training set as a single batch for 100 epochs, then save the weights to checkpoint.pth.
```python
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = LSTM(n_features, 128, 2, n_pred).to(device)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
inputs = torch.from_numpy(X_train).float().to(device)
targets = torch.from_numpy(y_train).float().to(device)
for epoch in range(100):
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, targets)
    loss.backward()
    optimizer.step()
    print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, 100, loss.item()))
torch.save(model.state_dict(), 'checkpoint.pth')
```
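The loop above takes one full-batch gradient step per epoch. For datasets too large for that, the usual pattern is mini-batching with DataLoader; a minimal sketch using random stand-in tensors with the shapes from step 2:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Stand-in tensors with the same shapes as X_train / y_train in step 2
X_train = torch.randn(6, 10, 3)
y_train = torch.randn(6, 5, 3)

loader = DataLoader(TensorDataset(X_train, y_train), batch_size=2, shuffle=True)
for xb, yb in loader:
    # Each mini-batch keeps the window structure: (batch, steps, features)
    print(xb.shape, yb.shape)  # torch.Size([2, 10, 3]) torch.Size([2, 5, 3])
    break
```

Inside the training loop, the optimizer step would then run once per mini-batch instead of once per epoch.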
5. Forecast future data
We load the trained model and use the most recent 10 days of data to predict the next 5.
```python
model = LSTM(n_features, 128, 2, n_pred).to(device)
model.load_state_dict(torch.load('checkpoint.pth', map_location=device))
model.eval()

# Use the most recent n_train steps as input; the model predicts all
# n_pred steps in a single forward pass
X_pred = data[-n_train:].reshape(1, n_train, n_features)
X_pred = torch.from_numpy(X_pred).float().to(device)
with torch.no_grad():
    y_pred = model(X_pred).cpu().numpy().reshape(n_pred, n_features)
y_pred = scaler.inverse_transform(y_pred)  # back to the original scale
print(y_pred)
```
This gives us the predicted values for the next 5 days.
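Because MinMaxScaler scales each feature (column) independently, inverse_transform expects predictions laid out the same way as the training data: one row per time step, one column per feature. A small round-trip check with made-up numbers:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Rows are time steps, columns are features, matching the layout of data.csv
raw = np.array([[10.0, 100.0],
                [20.0, 300.0],
                [30.0, 500.0]])
scaler = MinMaxScaler()
scaled = scaler.fit_transform(raw)
print(scaled[:, 0])  # first feature mapped to [0, 1]: [0.  0.5 1. ]

# inverse_transform undoes the per-column scaling exactly
restored = scaler.inverse_transform(scaled)
print(np.allclose(restored, raw))  # True
```

This is why the predictions above are reshaped to (n_pred, n_features) before calling inverse_transform.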