For a neural network whose first convolutional layer is nn.Conv2d(1, 10, kernel_size=5), write PyTorch code implementing the DeepFool algorithm on the MNIST dataset, together with test code that measures the network's accuracy on the adversarial examples generated by DeepFool. The code must run on the CPU.
The following is DeepFool attack code for the MNIST dataset, together with test code:
```python
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as datasets
import torchvision.transforms as transforms
import numpy as np
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = nn.functional.relu(nn.functional.max_pool2d(self.conv1(x), 2))
        x = nn.functional.relu(nn.functional.max_pool2d(self.conv2(x), 2))
        x = x.view(-1, 320)
        x = nn.functional.relu(self.fc1(x))
        x = self.fc2(x)
        return x
def deepfool(net, image, num_classes=10, overshoot=0.02, max_iter=50):
    """
    DeepFool: iteratively estimate the smallest perturbation that pushes the
    input across the nearest (linearized) decision boundary.
    `image` has shape [1, 28, 28]; the perturbed image is returned in the same shape.
    """
    image = image.unsqueeze(0).clone().detach()          # [1, 1, 28, 28]
    I = net(image).argmax().item()                       # label of the clean image
    r_tot = torch.zeros_like(image)                      # accumulated perturbation
    x = image.clone().detach().requires_grad_(True)
    for _ in range(max_iter):
        f_x = net(x)
        if f_x.argmax().item() != I:                     # label already flipped: stop
            break
        pert = np.inf
        w = None
        # gradient of the original class score w.r.t. the current input
        grad_I = torch.autograd.grad(f_x[0, I], x, retain_graph=True)[0]
        for i in range(num_classes):
            if i == I:
                continue
            # w_i = grad f_i - grad f_I,  f'_i = f_i - f_I  (linearized boundary)
            grad_i = torch.autograd.grad(f_x[0, i], x, retain_graph=True)[0]
            w_i = grad_i - grad_I
            f_i = (f_x[0, i] - f_x[0, I]).item()
            pert_i = abs(f_i) / (w_i.norm().item() + 1e-8)
            if pert_i < pert:                            # keep the closest boundary
                pert = pert_i
                w = w_i
        # minimal step (plus a small margin) towards the closest boundary
        r_tot = r_tot + (pert + 1e-4) * w / w.norm()
        # apply the accumulated perturbation with overshoot and clamp to the
        # valid range of the normalized MNIST pixels
        x = (image + (1 + overshoot) * r_tot).clamp(
            (0 - 0.1307) / 0.3081, (1 - 0.1307) / 0.3081)
        x = x.detach().requires_grad_(True)
    return x.detach().squeeze(0)
if __name__ == '__main__':
    # Load the MNIST test set
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])
    testset = datasets.MNIST(root='./data', train=False, download=True, transform=transform)
    testloader = torch.utils.data.DataLoader(testset, batch_size=1, shuffle=True)
    # Load the trained network on the CPU
    net = Net()
    net.load_state_dict(torch.load('mnist_cnn.pt', map_location=torch.device('cpu')))
    net.eval()
    # Attack the first num_samples test images
    num_samples = 100
    num_success = 0   # attack changed the prediction
    num_correct = 0   # adversarial example is still classified correctly
    for i, (image, label) in enumerate(testloader):
        if i == num_samples:
            break
        image = image.squeeze(0)          # [1, 28, 28]
        label = label.item()
        adv_image = deepfool(net, image)
        with torch.no_grad():
            I = net(image.unsqueeze(0)).argmax(dim=1).item()      # clean prediction
            J = net(adv_image.unsqueeze(0)).argmax(dim=1).item()  # adversarial prediction
        if I != J:
            num_success += 1
        if J == label:
            num_correct += 1
        print('Sample %d, true label: %d, clean prediction: %d, adversarial prediction: %d'
              % (i + 1, label, I, J))
    print('Attack success rate: %.2f%%' % (num_success / num_samples * 100))
    print('Accuracy on adversarial examples: %.2f%%' % (num_correct / num_samples * 100))
```
In the code above, we first load the MNIST test set and a trained network, then implement the DeepFool algorithm. For each test sample we generate an adversarial example with DeepFool and check whether the prediction changes. The script prints, for every sample, the true label, the prediction on the clean image, and the prediction on the adversarial image, followed by the overall attack success rate and the network's accuracy on the adversarial examples.
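Because DeepFool is designed to find a near-minimal perturbation, a useful sanity check is the L2 norm of the perturbation it returns. Below is a minimal sketch of such a check, assuming the `Net`, `deepfool`, and normalized `testloader` defined above are available in the same session, and that `mnist_cnn.pt` is the checkpoint used by the script:

```python
# Sanity check: the DeepFool perturbation for a single sample should be small.
# Assumes Net, deepfool and the normalized MNIST testloader defined above.
net = Net()
net.load_state_dict(torch.load('mnist_cnn.pt', map_location=torch.device('cpu')))
net.eval()

image, label = next(iter(testloader))       # image: [1, 1, 28, 28]
image = image.squeeze(0)                    # [1, 28, 28]
adv_image = deepfool(net, image)

perturbation = adv_image - image
print('True label: %d' % label.item())
print('L2 norm of perturbation: %.4f' % perturbation.norm().item())
print('Clean prediction: %d' % net(image.unsqueeze(0)).argmax(dim=1).item())
print('Adversarial prediction: %d' % net(adv_image.unsqueeze(0)).argmax(dim=1).item())
```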
Note that DeepFool computes a gradient for every candidate class in each iteration, so the attack can be slow, especially on a CPU. To speed up testing, you can reduce num_samples.
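The test script also assumes that a trained checkpoint `mnist_cnn.pt` already exists. If it does not, a minimal CPU training sketch along the lines below can produce one; the optimizer, learning rate, and epoch count here are illustrative choices, not part of the original answer:

```python
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.datasets as datasets
import torchvision.transforms as transforms

# Minimal training sketch (illustrative hyperparameters) to produce mnist_cnn.pt
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])
trainset = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)

net = Net()  # the same Net class defined above
optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

net.train()
for epoch in range(2):                      # a couple of epochs is enough for MNIST
    for images, labels in trainloader:
        optimizer.zero_grad()
        loss = criterion(net(images), labels)
        loss.backward()
        optimizer.step()
    print('Epoch %d finished, last batch loss: %.4f' % (epoch + 1, loss.item()))

torch.save(net.state_dict(), 'mnist_cnn.pt')
```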