LSTM multivariate prediction: a PyTorch example
Below is an example of using PyTorch to implement multivariate prediction with an LSTM:
```python
import torch
import torch.nn as nn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
# Load the data (assumes the first column is a timestamp/index)
df = pd.read_csv('data.csv')
df = df.dropna()
data = df.values[:, 1:].astype('float32')

# Split into training and test sets first, then fit the scaler on the
# training portion only, so no test statistics leak into preprocessing
train_size = int(len(data) * 0.7)
scaler = MinMaxScaler(feature_range=(-1, 1))
train_data = scaler.fit_transform(data[:train_size, :])
test_data = scaler.transform(data[train_size:, :])
# Convert the data into (input sequence, next step) pairs
def create_sequences(data, seq_length):
    xs, ys = [], []
    for i in range(len(data) - seq_length):
        xs.append(data[i:i + seq_length, :])  # seq_length consecutive rows
        ys.append(data[i + seq_length, :])    # the row that follows them
    return np.array(xs), np.array(ys)

# Create the sequence data
seq_length = 10
train_X, train_y = create_sequences(train_data, seq_length)
test_X, test_y = create_sequences(test_data, seq_length)
# Convert to PyTorch tensors
train_X = torch.from_numpy(train_X).float()
train_y = torch.from_numpy(train_y).float()
test_X = torch.from_numpy(test_X).float()
test_y = torch.from_numpy(test_y).float()
# Define the LSTM model
class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, output_size):
        super(LSTM, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # Zero-initialize the hidden and cell states for each batch
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size)
        out, _ = self.lstm(x, (h0, c0))
        # Use only the last time step's output for the prediction
        return self.fc(out[:, -1, :])
# Hyperparameters
input_size = train_X.shape[2]
output_size = train_y.shape[1]
hidden_size = 128
num_layers = 2
learning_rate = 0.01
num_epochs = 1000

# Train the model
model = LSTM(input_size, hidden_size, num_layers, output_size)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

for epoch in range(num_epochs):
    optimizer.zero_grad()
    outputs = model(train_X)
    loss = criterion(outputs, train_y)
    loss.backward()
    optimizer.step()
    if (epoch + 1) % 100 == 0:
        print("Epoch [{}/{}], Loss: {:.4f}".format(epoch + 1, num_epochs, loss.item()))
# Evaluate the model on the test set
model.eval()
with torch.no_grad():
    test_outputs = model(test_X)
    test_loss = criterion(test_outputs, test_y)
    print('Test loss: {:.4f}'.format(test_loss.item()))

# Map predictions back to the original data range
test_outputs = scaler.inverse_transform(test_outputs.numpy())
test_y = scaler.inverse_transform(test_y.numpy())

# Plot predicted vs. actual values for the first variable
plt.plot(test_y[:, 0], label='Actual')
plt.plot(test_outputs[:, 0], label='Predicted')
plt.legend(loc='upper left')
plt.show()
```
This example loads a dataset, splits it into training and test sets, and scales both with a MinMaxScaler fitted on the training portion only. It then converts the data into input/target sequences, defines an LSTM model, trains it with MSE loss and the Adam optimizer, evaluates it on the test set, and plots the predictions against the actual values.
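The loop above trains on the entire training set as a single batch, which is fine for small datasets but memory-hungry for large ones. A minimal mini-batch variant using torch.utils.data.DataLoader is sketched below, assuming the tensors, model, criterion, and optimizer defined above; the batch size of 64 is an arbitrary choice:
```python
from torch.utils.data import TensorDataset, DataLoader

# Wrap the training tensors in a Dataset and iterate over shuffled mini-batches
train_loader = DataLoader(TensorDataset(train_X, train_y), batch_size=64, shuffle=True)

for epoch in range(num_epochs):
    for batch_X, batch_y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(batch_X), batch_y)
        loss.backward()
        optimizer.step()
```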
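Note also that the evaluation step only makes one-step-ahead predictions on held-out data. To forecast beyond the end of the series, a common approach is recursive forecasting: feed each prediction back in as the next input. A hedged sketch follows; it works here only because the model predicts all input variables (output_size equals the number of features), and the 5-step horizon is an arbitrary choice:
```python
# Recursive multi-step forecast: start from the last observed window
model.eval()
window = torch.from_numpy(test_data[-seq_length:, :]).float().unsqueeze(0)  # (1, seq_length, n_features)
forecast = []
with torch.no_grad():
    for _ in range(5):  # forecast horizon
        next_step = model(window)  # (1, n_features)
        forecast.append(next_step.squeeze(0).numpy())
        # Slide the window: drop the oldest row, append the new prediction
        window = torch.cat([window[:, 1:, :], next_step.unsqueeze(1)], dim=1)

# Map the forecast back to the original data range
forecast = scaler.inverse_transform(np.array(forecast))
```
Errors compound with each recursive step, so longer horizons are generally less reliable than one-step-ahead predictions.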