Write a Python program using PyTorch that classifies time series with an LSTM model
Time: 2023-09-09 15:13:31
Sure. Below is example code for a program that classifies time series with an LSTM model, written in Python with PyTorch:
```python
import torch
import torch.nn as nn

# Run on GPU if available, otherwise CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Define the LSTM model class
class LSTMModel(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, num_classes):
        super(LSTMModel, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # Initialize the hidden and cell states with zeros
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(x.device)
        # Forward pass; classify from the output of the last time step
        out, _ = self.lstm(x, (h0, c0))
        out = self.fc(out[:, -1, :])
        return out

# Hyperparameters
input_size = 1
hidden_size = 64
num_layers = 2
num_classes = 2
learning_rate = 0.001
num_epochs = 100

# Instantiate the LSTM model
model = LSTMModel(input_size, hidden_size, num_layers, num_classes).to(device)

# Loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

# Train the model. train_loader and test_loader are assumed to be DataLoaders
# yielding (batch, seq_len, input_size) tensors with integer class labels.
for epoch in range(num_epochs):
    model.train()
    for i, (inputs, labels) in enumerate(train_loader):
        inputs = inputs.to(device)
        labels = labels.to(device)
        # Forward pass
        outputs = model(inputs.float())
        loss = criterion(outputs, labels)
        # Backward pass and optimization
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # Report the loss every few epochs
    if (epoch + 1) % 5 == 0:
        print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, num_epochs, loss.item()))

# Evaluate the model
model.eval()
with torch.no_grad():
    correct = 0
    total = 0
    for inputs, labels in test_loader:
        inputs = inputs.to(device)
        labels = labels.to(device)
        outputs = model(inputs.float())
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
    print('Test Accuracy of the model: {} %'.format(100 * correct / total))
```
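The script above assumes `train_loader` and `test_loader` already exist. One minimal way to build them, sketched here with synthetic data (the trend-vs-noise dataset, the 80/20 split, and the batch size of 32 are all illustrative assumptions, not part of the original answer):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Synthetic binary-classification data: 200 univariate sequences of length 30.
# Class 0 sequences are pure noise; class 1 sequences carry an upward trend.
torch.manual_seed(0)
seq_len, input_size = 30, 1
noise = torch.randn(100, seq_len, input_size)
trend = torch.linspace(0, 3, seq_len).view(1, seq_len, 1) \
        + torch.randn(100, seq_len, input_size)
X = torch.cat([noise, trend])
y = torch.cat([torch.zeros(100, dtype=torch.long),
               torch.ones(100, dtype=torch.long)])

# Shuffle, then split 80/20 into train and test sets
perm = torch.randperm(len(X))
X, y = X[perm], y[perm]
train_ds = TensorDataset(X[:160], y[:160])
test_ds = TensorDataset(X[160:], y[160:])
train_loader = DataLoader(train_ds, batch_size=32, shuffle=True)
test_loader = DataLoader(test_ds, batch_size=32)
```

Each batch from these loaders has shape `(batch, seq_len, input_size)`, matching what `nn.LSTM(batch_first=True)` expects.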
This program uses PyTorch's LSTM module to train a classifier on time-series data. It covers defining the LSTM model class, setting hyperparameters, loading data, training, and testing. Training uses cross-entropy loss with the Adam optimizer; at test time, the script computes the model's prediction accuracy.
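Once trained, the model can classify a single new sequence. A small helper sketch (the `predict` function is an illustrative addition, not part of the original answer; it takes any model with the same forward signature):

```python
import torch

def predict(model, sequence, device):
    """Classify one univariate sequence of shape (seq_len, input_size).

    The LSTM expects a batch dimension, so the sequence is unsqueezed
    to (1, seq_len, input_size) before the forward pass.
    """
    model.eval()
    with torch.no_grad():
        x = sequence.unsqueeze(0).to(device).float()
        logits = model(x)
        return logits.argmax(dim=1).item()  # predicted class index
```

Calling `predict(model, seq, device)` on a `(30, 1)` tensor returns an integer class index (0 or 1 with the hyperparameters above).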