Predicting a Univariate Time Series with an LSTM Model in PyTorch
Date: 2023-06-12 10:05:42
Below is example code that implements an LSTM model in PyTorch for predicting a univariate time series:
```python
import torch
import torch.nn as nn
import numpy as np

# Define the LSTM model
class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, output_size=1, num_layers=1):
        super().__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # x: (seq_len, batch, input_size)
        h0 = torch.zeros(self.num_layers, x.size(1), self.hidden_size, device=x.device)
        c0 = torch.zeros(self.num_layers, x.size(1), self.hidden_size, device=x.device)
        out, _ = self.lstm(x, (h0, c0))
        # Map the hidden state of the last time step to the prediction
        out = self.fc(out[-1, :, :])
        return out

# Set random seeds for reproducibility
torch.manual_seed(42)
np.random.seed(42)

# Select the device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Generate a sine wave and build sliding windows: each sample holds
# `window` consecutive values, and the target is the value that follows
series = np.sin(np.arange(1000) * (20 * np.pi / 1000))
window = 50
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

# Reshape to (seq_len, batch, input_size) and (batch, output_size)
X = torch.from_numpy(X).float().unsqueeze(-1).permute(1, 0, 2)
y = torch.from_numpy(y).float().unsqueeze(-1)

# Hold out the last 200 windows as test data
split = X.size(1) - 200
train_x, test_x = X[:, :split, :].to(device), X[:, split:, :].to(device)
train_y, test_y = y[:split].to(device), y[split:].to(device)

# Define the model
model = LSTM(1, 32).to(device)

# Define the loss function and optimizer
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# Train the model
num_epochs = 1000
for epoch in range(num_epochs):
    optimizer.zero_grad()
    outputs = model(train_x)
    loss = criterion(outputs, train_y)
    loss.backward()
    optimizer.step()
    if (epoch + 1) % 100 == 0:
        print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, num_epochs, loss.item()))

# Evaluate the model
model.eval()
with torch.no_grad():
    outputs = model(test_x)
    loss = criterion(outputs, test_y)
    print('Test Loss: {:.4f}'.format(loss.item()))

# Plot true vs. predicted values on the test segment
import matplotlib.pyplot as plt
plt.plot(test_y.cpu().numpy(), label='True')
plt.plot(outputs.cpu().numpy(), label='Predicted')
plt.legend()
plt.show()
```
In this example, we generate a sine-wave time series as sample data and train the LSTM to make one-step-ahead predictions: roughly the first 800 time steps serve as training data and the last 200 as test data. We train with the mean-squared-error loss and the Adam optimizer, and finally plot the true and predicted values for comparison.
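The script above only scores one-step-ahead predictions on held-out windows. To forecast many steps into the future, a common extension (not part of the original answer) is an autoregressive rollout: predict one step, append the prediction to the input window, and repeat. A minimal sketch, assuming a model with the same `(seq_len, batch, input_size)` interface as the `LSTM` class above; the `forecast` helper is hypothetical:

```python
import torch
import torch.nn as nn

class LSTM(nn.Module):
    """Same interface as the model in the main example: (seq_len, 1, 1) in, (1, 1) out."""
    def __init__(self, input_size=1, hidden_size=32, output_size=1, num_layers=1):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        out, _ = self.lstm(x)  # hidden state defaults to zeros
        return self.fc(out[-1, :, :])

def forecast(model, seed, steps):
    """Autoregressive rollout: repeatedly predict one step and feed it back.

    seed: tensor of shape (window, 1, 1) holding the last observed values.
    Returns a 1-D tensor of length `steps` with the predicted continuation.
    """
    model.eval()
    window = seed.clone()
    preds = []
    with torch.no_grad():
        for _ in range(steps):
            next_val = model(window)  # shape (1, 1)
            preds.append(next_val.item())
            # Slide the window: drop the oldest value, append the prediction
            window = torch.cat([window[1:], next_val.view(1, 1, 1)], dim=0)
    return torch.tensor(preds)

torch.manual_seed(0)
model = LSTM()  # untrained here, just to demonstrate the rollout mechanics
seed = torch.sin(torch.arange(50, dtype=torch.float32)).view(-1, 1, 1)
future = forecast(model, seed, steps=20)
print(future.shape)
```

Note that rollout errors compound: each prediction becomes an input, so small per-step errors grow with the horizon, which is why the main script evaluates on real held-out windows instead.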