Using a PyTorch DataLoader with batch size 16 to build and train a fully connected neural network
Date: 2024-03-10 16:47:00
Below is example code that uses a DataLoader with a batch size of 16 to build a fully connected neural network and train it:
```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, BatchSampler

# Define the dataset: 14807 samples with 24 features each, and binary labels
tensor_data = torch.randn(14807, 24)
tensor_label = torch.randint(0, 2, (14807,))

# Define a BatchSampler that groups sample indices into batches of 16
batch_size = 16
batch_sampler = BatchSampler(range(len(tensor_data)), batch_size=batch_size, drop_last=False)

# Wrap data and labels into a dataset and build the DataLoader
dataset = TensorDataset(tensor_data, tensor_label)
dataloader = DataLoader(dataset, batch_sampler=batch_sampler)

# Define the fully connected neural network
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(24, 32)
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        x = self.fc1(x)
        x = nn.functional.relu(x)
        x = self.fc2(x)
        return x

net = Net()

# Define the loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

# Train the network
for epoch in range(10):
    running_loss = 0.0
    for i, data in enumerate(dataloader, 0):
        inputs, labels = data
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        # Print the average loss every 1000 mini-batches
        if i % 1000 == 999:
            print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 1000))
            running_loss = 0.0

print('Finished Training')
```
In the code above, a tensor of shape [14807, 24] serves as the data and a tensor of shape [14807] as the labels. `TensorDataset` wraps the data and labels into a single dataset object, and `BatchSampler` together with `DataLoader` splits the dataset into batches. A fully connected network, a loss function, and an optimizer are then defined, and the network is trained.
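Note that an explicit `BatchSampler` is not required just to get batches of 16: passing `batch_size=16` to `DataLoader` makes it construct an equivalent batch sampler internally, and `shuffle=True` additionally reshuffles the sample order each epoch. A minimal sketch of this simpler setup (tensors generated here are random placeholders with the same shapes as above):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Random placeholder tensors with the same shapes as in the answer above
tensor_data = torch.randn(14807, 24)
tensor_label = torch.randint(0, 2, (14807,))

dataset = TensorDataset(tensor_data, tensor_label)
# batch_size=16 lets DataLoader build its own batch sampler internally;
# shuffle=True reshuffles the sample order every epoch
dataloader = DataLoader(dataset, batch_size=16, shuffle=True, drop_last=False)

inputs, labels = next(iter(dataloader))
print(inputs.shape)   # torch.Size([16, 24])
print(labels.shape)   # torch.Size([16])
```

With `drop_last=False`, the final batch holds the remaining 14807 % 16 = 7 samples, so the loader yields 926 batches per epoch.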
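After training, a quick sanity check is to run the network on held-out data in evaluation mode and compute accuracy. A minimal sketch, assuming a model with the same 24-in / 2-out architecture as above and random placeholder validation tensors:

```python
import torch
import torch.nn as nn

# A hypothetical model with the same architecture as in the answer above
net = nn.Sequential(nn.Linear(24, 32), nn.ReLU(), nn.Linear(32, 2))

# Random placeholder validation data, purely for illustration
val_data = torch.randn(1000, 24)
val_label = torch.randint(0, 2, (1000,))

net.eval()                  # switch layers to evaluation behaviour
with torch.no_grad():       # disable gradient tracking during evaluation
    logits = net(val_data)
    preds = logits.argmax(dim=1)          # predicted class per sample
    accuracy = (preds == val_label).float().mean().item()
print(f"validation accuracy: {accuracy:.3f}")
```

On the random data here the accuracy hovers around 0.5; the point is the evaluation pattern (`eval()` plus `torch.no_grad()`), not the number.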