LSTM Implementation
Smart-Water-Analytics: implementation of a practical LSTM solution. The goal is to build a mathematical model for each water-body category (collectors, springs, rivers, lakes) that predicts, from (noisy) data, the amount of water in each individual water body over a set time interval.
Below is a complete code example of an LSTM implemented with PyTorch:
```python
import torch
import torch.nn as nn

# Define the LSTM model: an LSTM encoder followed by a linear layer that
# maps the hidden state of the last time step to a single output value.
class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, batch_first=True):
        super(LSTM, self).__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=batch_first)
        self.fc = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x: (batch, seq_len, input_size); out: (batch, seq_len, hidden_size)
        out, _ = self.lstm(x)
        # Use only the last time step for the prediction
        out = self.fc(out[:, -1, :])
        return out

# Training function: one pass over the training set
def train(model, train_loader, criterion, optimizer, device):
    model.train()
    train_loss = 0.0
    for inputs, targets in train_loader:
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()
    return train_loss / len(train_loader)

# Evaluation function: average loss on the test set, no gradient tracking
def test(model, test_loader, criterion, device):
    model.eval()
    test_loss = 0.0
    with torch.no_grad():
        for inputs, targets in test_loader:
            inputs, targets = inputs.to(device), targets.to(device)
            outputs = model(inputs)
            test_loss += criterion(outputs, targets).item()
    return test_loss / len(test_loader)

# Load the dataset (random tensors used as placeholders here)
train_data = torch.randn(1000, 10, 1)   # (samples, seq_len, features)
train_targets = torch.randn(1000, 1)
test_data = torch.randn(100, 10, 1)
test_targets = torch.randn(100, 1)

# Hyperparameters
input_size = 1
hidden_size = 10
num_layers = 1
batch_size = 50
epochs = 50
learning_rate = 0.01

# Device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Model, loss function and optimizer
model = LSTM(input_size, hidden_size, num_layers).to(device)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

# Build the data loaders
train_dataset = torch.utils.data.TensorDataset(train_data, train_targets)
test_dataset = torch.utils.data.TensorDataset(test_data, test_targets)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False)

# Training loop
for epoch in range(epochs):
    train_loss = train(model, train_loader, criterion, optimizer, device)
    test_loss = test(model, test_loader, criterion, device)
    print("Epoch [{}/{}], Train Loss: {:.4f}, Test Loss: {:.4f}".format(epoch + 1, epochs, train_loss, test_loss))
```
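After training, the model can produce a prediction for a single new input window. A minimal inference sketch, assuming the same (batch, seq_len, input_size) layout as the example above; `new_sequence` is a hypothetical placeholder for one real input window, not part of the original code:
```python
# Minimal inference sketch (assumes the 10-step, 1-feature windows used above)
model.eval()
with torch.no_grad():
    new_sequence = torch.randn(1, 10, 1).to(device)  # hypothetical placeholder window
    prediction = model(new_sequence)
    print("Predicted value:", prediction.item())
```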
In this complete code, we first define the LSTM model, then the training and test functions. In the training function, we move the inputs and targets to the device, zero the parameter gradients, feed the inputs through the model to get the outputs, compute the loss, and then backpropagate and update the parameters. In the test function, gradient computation is disabled and the model's loss is computed on the test data. In the main program, we first load the dataset, define the hyperparameters, device, model, loss function and optimizer, and then build the data loaders. Finally, in the training loop, we call the train and test functions to train and evaluate the model.
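The example above uses random tensors as placeholders. To connect it to the Smart-Water-Analytics goal of predicting the water amount in each water body from noisy measurements, the raw time series would typically be cut into sliding windows, with one model trained per water-body category. A minimal sketch under that assumption; the series `levels`, the helper `make_windows`, and the window length are hypothetical illustrations, not part of the original project:
```python
import numpy as np
import torch

def make_windows(levels, seq_len=10):
    """Turn a 1-D series of measurements into (window, next value) pairs."""
    xs, ys = [], []
    for i in range(len(levels) - seq_len):
        xs.append(levels[i:i + seq_len])
        ys.append(levels[i + seq_len])
    x = torch.tensor(np.array(xs), dtype=torch.float32).unsqueeze(-1)  # (N, seq_len, 1)
    y = torch.tensor(np.array(ys), dtype=torch.float32).unsqueeze(-1)  # (N, 1)
    return x, y

# Hypothetical noisy series for one water body
levels = np.sin(np.linspace(0, 20, 500)) + 0.1 * np.random.randn(500)
train_data, train_targets = make_windows(levels[:400])
test_data, test_targets = make_windows(levels[400:])
```
The resulting tensors can be dropped into the TensorDataset and DataLoader construction shown above without any other changes to the training code.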