LSTM regression prediction code with PyTorch
Here is an example of LSTM regression prediction using PyTorch:
``` python
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt

# Build the data: one arch of a sine wave, shape (100, 1)
time_steps = np.linspace(0, np.pi, 100)
data = np.sin(time_steps)
data = data.reshape(-1, 1)

# Hyperparameters
input_size = 1
hidden_size = 32
output_size = 1
lr = 0.01
num_epochs = 200

# Define the LSTM model
class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(LSTM, self).__init__()
        self.hidden_size = hidden_size
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # x: (batch, seq_len, input_size)
        h0 = torch.zeros(1, x.size(0), self.hidden_size).to(x.device)  # initial hidden state
        c0 = torch.zeros(1, x.size(0), self.hidden_size).to(x.device)  # initial cell state
        out, _ = self.lstm(x, (h0, c0))   # out: (batch, seq_len, hidden_size)
        out = self.fc(out[:, -1, :])      # take the last time step, then the fully connected layer
        return out

# Instantiate the model, define the loss function and optimizer
model = LSTM(input_size, hidden_size, output_size)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=lr)

# One-step-ahead pairs: predict data[t+1] from data[t]
# inputs: (99, 1, 1) = (batch, seq_len, input_size), labels: (99, 1)
inputs = torch.from_numpy(data[:-1, :]).float().unsqueeze(1)
labels = torch.from_numpy(data[1:, :]).float()

# Train the model
for epoch in range(num_epochs):
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if (epoch + 1) % 20 == 0:
        print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, num_epochs, loss.item()))

# Predict
model.eval()
with torch.no_grad():
    preds = model(inputs).numpy()

# Plot: preds[i] is the prediction of data[i+1], so compare against data[1:]
plt.plot(time_steps[1:], data[1:], 'r', label='real')
plt.plot(time_steps[1:], preds, 'b', label='predicted')
plt.legend()
plt.show()
```
This example uses a single-layer LSTM with input dimension 1, hidden dimension 32, and output dimension 1 to perform one-step-ahead regression on a sine wave: each training sample takes the current value as input and the next value as the target. Training uses MSELoss as the loss function and Adam as the optimizer. Finally, the trained model is run on the data and its predictions are plotted against the true values for comparison.
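The example above only predicts one step ahead on values the model has already seen. To forecast several steps into the future, a common approach is to roll the model forward autoregressively, feeding each prediction back in as the next input. Below is a minimal sketch of that idea; it assumes the trained `model` and the `data` array from the example above, and the name `n_future` is just an illustrative choice.

``` python
# Minimal sketch of autoregressive multi-step forecasting.
# Assumes `model` and `data` from the example above; `n_future` is a hypothetical name.
import torch
import numpy as np

n_future = 20                      # how many future steps to roll out
last_value = data[-1, :]           # last observed value, shape (1,)

model.eval()
future_preds = []
current = torch.tensor(last_value, dtype=torch.float32).view(1, 1, 1)  # (batch, seq_len, input_size)
with torch.no_grad():
    for _ in range(n_future):
        next_val = model(current)            # (1, 1): prediction for the next step
        future_preds.append(next_val.item())
        current = next_val.view(1, 1, 1)     # feed the prediction back as the next input

future_preds = np.array(future_preds)
print(future_preds)
```

Because each step consumes the previous prediction, errors compound over the horizon; for longer forecasts it usually helps to feed the model a sliding window of several past values rather than a single point.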