A simple PyTorch LSTM example covering the basic concepts
Here is a simple PyTorch implementation of an LSTM that covers the basic concepts:
```python
import torch
import torch.nn as nn

# Define the LSTM model
class LSTMModel(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, output_size):
        super(LSTMModel, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        # batch_first=True means inputs are shaped (batch_size, seq_len, input_size)
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # Initialize the hidden and cell states, shape (num_layers, batch_size, hidden_size)
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device)
        # Forward pass through the LSTM; out has shape (batch_size, seq_len, hidden_size)
        out, _ = self.lstm(x, (h0, c0))
        # Use the hidden state of the last time step as the model output
        out = self.fc(out[:, -1, :])
        return out

# Define input, output, and hyperparameters
input_size = 10
hidden_size = 20
num_layers = 2
output_size = 1
batch_size = 16
seq_len = 5

# Randomly generate input data and targets
x = torch.randn(batch_size, seq_len, input_size)
y = torch.randn(batch_size, output_size)

# Instantiate the model and compute its output
model = LSTMModel(input_size, hidden_size, num_layers, output_size)
output = model(x)

# Compute the loss, backpropagate, and update the parameters
criterion = nn.MSELoss()
loss = criterion(output, y)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```
The code above implements a model with a 2-layer LSTM. The input has shape (batch_size, seq_len, input_size) and the output has shape (batch_size, output_size). The model's output is computed from the hidden state of the last time step, the loss is mean squared error, and an Adam optimizer performs the parameter update.
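The snippet above performs only a single optimization step. In practice you would wrap the forward pass, loss computation, and update in a loop over epochs (and usually over mini-batches). Below is a minimal sketch that reuses the `model`, `criterion`, `optimizer`, `x`, and `y` defined above; `num_epochs` is an illustrative value, not part of the original example:
```python
# Minimal training loop sketch, reusing model, criterion, optimizer, x, y
# from the example above. num_epochs is an assumed illustrative value.
num_epochs = 100
for epoch in range(num_epochs):
    model.train()                  # enable training mode
    output = model(x)              # forward pass
    loss = criterion(output, y)    # mean squared error loss
    optimizer.zero_grad()          # clear accumulated gradients
    loss.backward()                # backpropagate
    optimizer.step()               # update parameters
    if (epoch + 1) % 10 == 0:
        print(f"Epoch {epoch + 1}/{num_epochs}, loss: {loss.item():.4f}")
```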
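For inference on new sequences, switch the model to evaluation mode and disable gradient tracking. A short sketch under the same assumptions; `new_x` is random data used purely for illustration:
```python
# Inference sketch: evaluation mode and no gradient tracking.
model.eval()
with torch.no_grad():
    new_x = torch.randn(4, seq_len, input_size)  # 4 new sequences (illustrative)
    preds = model(new_x)
print(preds.shape)  # torch.Size([4, 1]), i.e. (batch_size, output_size)
```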