PyTorch LSTM univariate multi-step forecasting: example code
Posted: 2023-05-21 19:01:45
Below is a simple example of univariate multi-step forecasting with a PyTorch LSTM:
```python
import torch
import torch.nn as nn
import numpy as np

# Define the LSTM model
class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(LSTM, self).__init__()
        self.hidden_size = hidden_size
        self.lstm = nn.LSTM(input_size, hidden_size)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, input):
        # input: (seq_len,) -> (seq_len, batch=1, input_size=1)
        lstm_out, _ = self.lstm(input.view(len(input), 1, -1))
        output = self.fc(lstm_out.view(len(input), -1))
        # Use the last time step's output: output_size future values at once
        return output[-1]

# Build the data: a sine wave, split into train and test
data = np.sin(np.arange(1000) * np.pi / 50).astype(np.float32)
train_data = data[:900]
test_data = data[900:]

# Hyperparameters
input_size = 1
hidden_size = 10
seq_len = 5       # length of the input window
output_size = 5   # number of future steps to predict
lr = 0.01
epochs = 100

# Initialize model, optimizer, and loss
model = LSTM(input_size, hidden_size, output_size)
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
criterion = nn.MSELoss()

for epoch in range(epochs):
    # Training
    model.train()
    train_loss = 0
    n_train = len(train_data) - seq_len - output_size + 1
    for i in range(n_train):
        input_seq = torch.FloatTensor(train_data[i:i+seq_len])
        # Target: the output_size steps that follow the input window
        target_seq = torch.FloatTensor(train_data[i+seq_len:i+seq_len+output_size])
        optimizer.zero_grad()
        output = model(input_seq)
        loss = criterion(output, target_seq)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()

    # Evaluation
    model.eval()
    test_loss = 0
    n_test = len(test_data) - seq_len - output_size + 1
    with torch.no_grad():
        for i in range(n_test):
            input_seq = torch.FloatTensor(test_data[i:i+seq_len])
            target_seq = torch.FloatTensor(test_data[i+seq_len:i+seq_len+output_size])
            output = model(input_seq)
            loss = criterion(output, target_seq)
            test_loss += loss.item()
    print(f'Epoch {epoch+1}, Train Loss: {train_loss/n_train:.6f}, Test Loss: {test_loss/n_test:.6f}')
```
The model uses a single-layer LSTM. The input is a window of 5 observations, and the output is the 5 values that immediately follow it, produced in one shot from the final hidden state (direct multi-step forecasting). During training, this window slides over the training series one step at a time; during testing, the trained model predicts the future values for each window of the held-out series in the same way.
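The sliding-window pairing used above can be checked in isolation. The sketch below is a minimal standalone illustration (`make_windows` is a hypothetical helper written for this note, not part of the PyTorch code): each input window of length `seq_len` is paired with the `horizon` values that follow it.

```python
import numpy as np

def make_windows(series, seq_len, horizon):
    """Build (input, target) pairs for direct multi-step forecasting:
    each input window of length seq_len is paired with the horizon
    values that immediately follow it."""
    inputs, targets = [], []
    for i in range(len(series) - seq_len - horizon + 1):
        inputs.append(series[i:i+seq_len])
        targets.append(series[i+seq_len:i+seq_len+horizon])
    return np.array(inputs), np.array(targets)

series = np.arange(10, dtype=np.float32)  # 0, 1, ..., 9
X, y = make_windows(series, seq_len=5, horizon=5)
print(X[0])  # [0. 1. 2. 3. 4.]
print(y[0])  # [5. 6. 7. 8. 9.]
```

Each row of `X` can then be wrapped in `torch.FloatTensor` and fed to the model, with the matching row of `y` as the training target.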