I have 1000 signal samples, 100 from each of ten classes. Each sample has 512 feature points, and the data is stored as a torch.tensor of shape (1000, 1, 512). I want to feed it into a deep NNLM network to train a classification model for these classes. Please implement this with PyTorch.
Posted: 2024-05-12 14:21:03
Below is a simple implementation covering data loading, model definition, training, and testing:
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
# Dataset class wrapping the signal tensor and its labels
class SignalDataset(Dataset):
    def __init__(self, signals, labels):
        self.signals = signals
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return self.signals[idx], self.labels[idx]

# A simple three-layer fully connected classifier
class NNLM(nn.Module):
    def __init__(self):
        super(NNLM, self).__init__()
        self.fc1 = nn.Linear(512, 256)
        self.fc2 = nn.Linear(256, 128)
        self.fc3 = nn.Linear(128, 10)

    def forward(self, x):
        x = x.view(x.size(0), -1)  # flatten (N, 1, 512) -> (N, 512)
        x = nn.functional.relu(self.fc1(x))
        x = nn.functional.relu(self.fc2(x))
        x = self.fc3(x)
        return x

# Load the data; labels must be a LongTensor of class indices (0-9)
signals = torch.load('signals.pt')   # shape (1000, 1, 512)
labels = torch.load('labels.pt')     # shape (1000,)
dataset = SignalDataset(signals, labels)
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)

# Model and optimizer
model = NNLM()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Training loop
model.train()
for epoch in range(10):
    running_loss = 0.0
    for inputs, targets in dataloader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = nn.functional.cross_entropy(outputs, targets)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    # average loss per batch
    print('[Epoch %d] loss: %.3f' % (epoch + 1, running_loss / len(dataloader)))

# Evaluation (here on the training data, for simplicity)
model.eval()
correct = 0
total = 0
with torch.no_grad():
    for inputs, targets in dataloader:
        outputs = model(inputs)
        _, predicted = torch.max(outputs, 1)
        total += targets.size(0)
        correct += (predicted == targets).sum().item()
print('Accuracy: %.2f %%' % (100 * correct / total))
```
Here, `SignalDataset` is a dataset class inheriting from `torch.utils.data.Dataset` that serves the signal data and the corresponding labels. `NNLM` is a simple three-layer fully connected network used for classification. The input of shape (N, 1, 512) is flattened to (N, 512) before the first linear layer. Training uses the cross-entropy loss and the Adam optimizer; testing uses accuracy as the evaluation metric.
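Note that the snippet above measures accuracy on the same DataLoader it trained on, which overstates real performance. A more reliable estimate comes from holding out a test split. Below is a minimal sketch using `torch.utils.data.random_split`; the random tensors are hypothetical stand-ins for the real `signals.pt`/`labels.pt` data:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader, random_split

# Synthetic stand-ins for the real data: 1000 signals of shape (1, 512),
# 100 per class across 10 classes (replace with your loaded tensors).
signals = torch.randn(1000, 1, 512)
labels = torch.arange(10).repeat_interleave(100)

dataset = TensorDataset(signals, labels)
# Hold out 20% of the samples for testing.
train_set, test_set = random_split(dataset, [800, 200])

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = DataLoader(test_set, batch_size=32, shuffle=False)

print(len(train_set), len(test_set))
```

Train on `train_loader` and run the accuracy loop on `test_loader`; since `random_split` shuffles before splitting, both subsets contain a mix of all ten classes.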