Provide a PyTorch multivariate, multi-step LSTM forecasting program that loads data from an Excel file, uses 6 input variables and 1 output variable (temperature), predicts the temperature for the next 12 time steps, and outputs the predictions directly.
Sure. Below is an example of a multivariate, multi-step LSTM forecasting program in PyTorch. It loads data from an Excel file, uses 6 input variables and 1 output variable (temperature), and predicts the temperature for the next 12 time steps. Make sure PyTorch, pandas, and scikit-learn are installed first.
```python
import torch
import torch.nn as nn
import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler
# Load data from the Excel file (7 columns assumed: temperature plus 6 inputs)
dataframe = pd.read_excel('data.xlsx')
dataset = dataframe.values.astype('float32')

# Scale every column to [0, 1]
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(dataset)

# Split into training and test sets
train_size = int(len(dataset) * 0.67)
train_data, test_data = dataset[:train_size, :], dataset[train_size:, :]
# Build supervised samples: each sample pairs a window of look_back past steps
# of the 6 input variables with the next look_forward temperature values.
# Assumption: column 0 is the target temperature, columns 1-6 are the inputs.
def create_dataset(dataset, look_back, look_forward):
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back - look_forward + 1):
        dataX.append(dataset[i:(i + look_back), 1:])
        dataY.append(dataset[i + look_back:i + look_back + look_forward, 0])
    return np.array(dataX), np.array(dataY)

look_back = 12
look_forward = 12
trainX, trainY = create_dataset(train_data, look_back, look_forward)
testX, testY = create_dataset(test_data, look_back, look_forward)
# Convert the arrays to PyTorch tensors
trainX = torch.from_numpy(trainX)
trainY = torch.from_numpy(trainY)
testX = torch.from_numpy(testX)
testY = torch.from_numpy(testY)
# Define the LSTM model; batch_first=True because the inputs are shaped
# (batch, sequence, features)
class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, num_layers):
        super(LSTM, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size)
        out, _ = self.lstm(x, (h0, c0))
        # Map the last time step's hidden state to all future values at once
        out = self.fc(out[:, -1, :])
        return out
# Initialize the model and optimizer
input_size = 6                # 6 input variables
hidden_size = 64
output_size = look_forward    # predict all 12 future steps in one pass
num_layers = 2
lstm = LSTM(input_size, hidden_size, output_size, num_layers)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(lstm.parameters(), lr=0.001)
# Train the model
num_epochs = 100
for epoch in range(num_epochs):
    outputs = lstm(trainX)
    loss = criterion(outputs, trainY)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if epoch % 10 == 0:
        print("Epoch: %d, Loss: %f" % (epoch, loss.item()))
# Evaluate the model
lstm.eval()
with torch.no_grad():
    train_predict = lstm(trainX).numpy()
    test_predict = lstm(testX).numpy()

# Undo the scaling for the temperature column only. The scaler was fitted on
# all 7 columns, so inverse_transform cannot be applied to the 12-step
# predictions directly; MinMaxScaler maps x -> x * scale_ + min_, so invert
# that mapping column-wise for the target column (index 0).
target_col = 0
train_predict = (train_predict - scaler.min_[target_col]) / scaler.scale_[target_col]
trainY_orig = (trainY.numpy() - scaler.min_[target_col]) / scaler.scale_[target_col]
test_predict = (test_predict - scaler.min_[target_col]) / scaler.scale_[target_col]
testY_orig = (testY.numpy() - scaler.min_[target_col]) / scaler.scale_[target_col]

# Print the predictions; each row holds 12 future temperature values
print("Training-set predictions:")
print(train_predict)
print("Test-set predictions:")
print(test_predict)
# The last row is the 12-step forecast from the most recent test window
print("Forecast for the next 12 time steps:")
print(test_predict[-1])
```
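As a quick sanity check, the windowing logic in `create_dataset` can be verified on synthetic data (the values are random; only the shapes matter here, and the column layout — target in column 0, inputs in columns 1-6 — is the same assumption as above):

```python
import numpy as np

# Same windowing scheme as in the program above: column 0 is the target
# temperature, the remaining 6 columns are the input variables.
def create_dataset(dataset, look_back, look_forward):
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back - look_forward + 1):
        dataX.append(dataset[i:(i + look_back), 1:])
        dataY.append(dataset[i + look_back:i + look_back + look_forward, 0])
    return np.array(dataX), np.array(dataY)

data = np.random.rand(100, 7).astype('float32')  # 100 rows, 7 columns
X, y = create_dataset(data, look_back=12, look_forward=12)
print(X.shape)  # (77, 12, 6): 77 samples, 12 past steps, 6 input variables
print(y.shape)  # (77, 12): 12 future temperature values per sample
```

With 100 rows and two windows of 12, exactly 100 - 12 - 12 + 1 = 77 samples can be formed.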
In this example, we first load the data from the Excel file and scale it to the range [0, 1] with MinMaxScaler. The dataset is then split into training and test sets, and create_dataset converts it into supervised samples: each sample pairs a window of 12 past steps of the 6 input variables with the next 12 temperature values. The LSTM model reads each window and maps the final hidden state through a linear layer with 12 outputs, so all 12 future steps are predicted in a single forward pass (direct multi-step forecasting). Training uses mean squared error (MSE) loss and the Adam optimizer. Finally, the trained model predicts on both sets, the temperature column is scaled back to its original range, and the predictions are printed.
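To obtain the actual forward-looking forecast, feed the most recent window of observations through the trained model. The sketch below shows only the mechanics with an untrained model and random data; in practice you would reuse the trained `lstm` and the fitted `scaler` from the program above, and take the last `look_back` rows of your real data as the window:

```python
import torch
import torch.nn as nn

# Same architecture as above (batch_first, 12-unit output head)
class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, num_layers):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        out, _ = self.lstm(x)  # zero initial states by default
        return self.fc(out[:, -1, :])

model = LSTM(input_size=6, hidden_size=64, output_size=12, num_layers=2)
model.eval()

# The last 12 observed steps of the 6 input variables, shape (1, 12, 6).
# Random here; in practice this is the scaled tail of the real series.
last_window = torch.rand(1, 12, 6)
with torch.no_grad():
    forecast = model(last_window).numpy().ravel()
print(forecast.shape)  # (12,): the next 12 temperature values (still scaled)
```

The forecast comes out in the scaled [0, 1] range, so the same column-wise inversion used in the main program must be applied before reporting temperatures.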