PyTorch code for CNN L1 regularization
Below is example code that adds L1 regularization to a CNN in PyTorch:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=5)
        self.conv2 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=5)
        self.fc1 = nn.Linear(32 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.max_pool2d(x, 2)
        x = F.relu(self.conv2(x))
        x = F.max_pool2d(x, 2)
        x = x.view(-1, 32 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

model = CNN()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# `trainloader` is assumed to be a DataLoader defined elsewhere.
for epoch in range(10):
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data
        # Build the L1 regularization term. Start from a float tensor
        # (torch.tensor(0) is integer-typed and cannot accumulate float norms),
        # and accumulate out-of-place so the term stays in the autograd graph.
        l1_regularization = torch.tensor(0.)
        for param in model.parameters():
            l1_regularization = l1_regularization + torch.norm(param, 1)
        l1_regularization = 0.001 * l1_regularization
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels) + l1_regularization
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        if i % 2000 == 1999:
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')
```
In the code above, `torch.norm(param, 1)` computes the L1 norm of each parameter tensor; the summed norms are scaled by a small coefficient and added to the loss as the L1 regularization term. Adjusting this coefficient controls the strength of the regularization.
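As a quick sanity check on that mechanism, the snippet below (a minimal sketch using a single `nn.Linear` layer rather than the CNN above) confirms that `torch.norm(p, 1)` is the same as summing the absolute values of the entries, which is the quantity the regularization term penalizes:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny layer standing in for any model; torch.norm(p, 1) over a tensor
# equals the sum of the absolute values of its elements.
layer = nn.Linear(4, 2)
l1 = sum(torch.norm(p, 1) for p in layer.parameters())
manual = sum(p.abs().sum() for p in layer.parameters())

print(torch.allclose(l1, manual))  # the two formulations agree
```

Because the norm is computed from parameters that require gradients, the resulting scalar participates in backpropagation, which is why adding it to the loss penalizes large weights during training.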