LSTM time series prediction with PyTorch: example code
Below is example code for LSTM time series prediction implemented with PyTorch.
First, import the required libraries:
```python
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
```
Next, define the LSTM model:
```python
class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, output_size=1, num_layers=2):
        super().__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        # batch_first=True expects input of shape (batch, seq_len, input_size)
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # Zero-initialized hidden and cell states
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size)
        out, _ = self.lstm(x, (h0, c0))
        # Predict from the output at the last time step
        out = self.fc(out[:, -1, :])
        return out
```
This model uses an LSTM layer followed by a fully connected layer: the input is a sequence and the output is a single scalar prediction.
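As a quick sanity check (not part of the original answer), the sketch below instantiates the model with small, assumed sizes and passes a random batch through it to confirm the output shape:

```python
# Minimal shape check: assumed sizes, random data
model = LSTM(input_size=1, hidden_size=2, output_size=1, num_layers=2)
dummy = torch.randn(4, 10, 1)   # (batch, seq_len, input_size)
print(model(dummy).shape)       # expected: torch.Size([4, 1])
```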
Next, define the training function:
```python
def train(model, dataloader, criterion, optimizer, num_epochs):
    model.train()
    for epoch in range(num_epochs):
        for i, (inputs, targets) in enumerate(dataloader):
            optimizer.zero_grad()
            outputs = model(inputs)
            loss = criterion(outputs, targets)
            loss.backward()
            optimizer.step()
            if (i + 1) % 10 == 0:
                print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                      .format(epoch + 1, num_epochs, i + 1, len(dataloader), loss.item()))
```
The training function takes the model, the data loader, the loss function, the optimizer, and the number of epochs. In each epoch we iterate over the data loader, zero the gradients, compute the model's output and the loss, backpropagate, and update the model parameters. Every 10 steps we print the current loss.
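For reference, here is a minimal sketch of calling `train` on synthetic data; the sizes below are illustrative assumptions, not values from the original answer:

```python
# Synthetic sanity-check run: 64 random sequences of length 10 with 1 feature each
X_demo = torch.randn(64, 10, 1)
y_demo = torch.randn(64, 1)
demo_loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(X_demo, y_demo), batch_size=8, shuffle=True)
demo_model = LSTM(input_size=1, hidden_size=2)
train(demo_model, demo_loader, nn.MSELoss(),
      torch.optim.Adam(demo_model.parameters(), lr=0.01), num_epochs=2)
```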
Finally, define the test function:
```python
def test(model, dataloader, criterion):
    model.eval()
    with torch.no_grad():
        for inputs, targets in dataloader:
            outputs = model(inputs)
            loss = criterion(outputs, targets)
            print('Test loss: {:.4f}'.format(loss.item()))
            # Convert tensors to NumPy arrays for plotting
            inputs = inputs.numpy()
            targets = targets.numpy()
            outputs = outputs.numpy()
            plt.plot(inputs[:, :, 0].flatten(), label='input')
            plt.plot(targets.flatten(), label='target')
            plt.plot(outputs.flatten(), label='output')
            plt.legend()
            plt.show()
```
In the test function we set the model to evaluation mode, disable gradient tracking, and iterate over the data loader to obtain inputs and targets. For each batch we compute the model's output and loss, then plot the inputs, targets, and outputs.
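If a single numeric score is preferred over per-batch plots, a simple sketch (not in the original answer) that accumulates the squared error over the whole loader and returns the RMSE could look like this:

```python
def evaluate_rmse(model, dataloader):
    # Accumulate squared error over the whole dataset and return the RMSE
    model.eval()
    total_sq_err, total_count = 0.0, 0
    with torch.no_grad():
        for inputs, targets in dataloader:
            outputs = model(inputs)
            total_sq_err += torch.sum((outputs - targets) ** 2).item()
            total_count += targets.numel()
    return (total_sq_err / total_count) ** 0.5
```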
Finally, we can use the following code to load the data, define the model, set up the loss function and optimizer, and start training:
```python
# Load data
data = np.loadtxt('data.csv', delimiter=',', dtype=np.float32)
X = data[:, :-1]
y = data[:, -1:]
X = np.reshape(X, (X.shape[0], X.shape[1], 1))
# Define hyperparameters
input_size = 1
hidden_size = 2
output_size = 1
num_layers = 1
batch_size = 4
num_epochs = 100
# Define model, loss function and optimizer
model = LSTM(input_size, hidden_size, output_size, num_layers)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
# Train the model
train_dataset = torch.utils.data.TensorDataset(torch.from_numpy(X), torch.from_numpy(y))
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
train(model, train_loader, criterion, optimizer, num_epochs)
# Test the model
test_dataset = torch.utils.data.TensorDataset(torch.from_numpy(X), torch.from_numpy(y))
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
test(model, test_loader, criterion)
```
This assumes the dataset already exists in a `data.csv` file containing the inputs and the target variable. First we load the data into NumPy arrays and reshape the inputs into a three-dimensional array of shape (samples, sequence length, features). Next we define a few hyperparameters: input size, hidden size, output size, number of layers, batch size, and number of training epochs. We then define the model, loss function, and optimizer, and start training with a data loader. Finally, we test the model on the same data.
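If you only have a raw one-dimensional series rather than a prepared `data.csv`, one common way to build `X` and `y` is a sliding window. The helper below is a sketch of that idea; the window length of 10 is an assumption, not something specified in the original answer:

```python
def make_windows(series, window=10):
    # Turn a 1-D array into (samples, window, 1) inputs and (samples, 1) targets,
    # where each target is the value immediately following its window
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    X = np.array(X, dtype=np.float32).reshape(-1, window, 1)
    y = np.array(y, dtype=np.float32).reshape(-1, 1)
    return X, y
```

The resulting arrays can be passed to `torch.utils.data.TensorDataset` exactly as in the code above.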