Run a 50-iteration PGD attack to generate adversarial examples for the MNIST dataset and convert them to an .npy file
Here is Python code that uses FGSM and PGD attacks to generate adversarial examples for the MNIST dataset and save them as an .npy file:
```python
import tensorflow as tf
import numpy as np
from tensorflow.keras.datasets import mnist
# Load the MNIST dataset and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
x_train = x_train.reshape(-1, 28, 28, 1).astype('float32')
x_test = x_test.reshape(-1, 28, 28, 1).astype('float32')
# Define a simple CNN model
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# Train the model
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
# FGSM attack: a single step of size epsilon in the direction of the gradient sign
def fgsm_attack(image, epsilon, gradient):
    sign_gradient = tf.sign(gradient)
    perturbed_image = image + epsilon * sign_gradient
    perturbed_image = tf.clip_by_value(perturbed_image, 0, 1)
    return perturbed_image
# PGD attack: repeat small gradient-sign steps, projecting the result back into
# the epsilon-ball around the original image after every step
def pgd_attack(model, image, label, epsilon, alpha, iterations):
    image = tf.convert_to_tensor(image, dtype=tf.float32)
    adv_image = tf.identity(image)
    for _ in range(iterations):
        with tf.GradientTape() as tape:
            tape.watch(adv_image)
            prediction = model(adv_image)
            loss = tf.keras.losses.sparse_categorical_crossentropy(label, prediction)
        gradient = tape.gradient(loss, adv_image)
        adv_image = adv_image + alpha * tf.sign(gradient)
        # Project onto the L-infinity ball of radius epsilon, then into the valid pixel range
        adv_image = tf.clip_by_value(adv_image, image - epsilon, image + epsilon)
        adv_image = tf.clip_by_value(adv_image, 0, 1)
    return adv_image.numpy()
# Generate adversarial examples with 50 PGD iterations and save them as an .npy file
eps = 0.15
alpha = 0.01
iters = 50
adversarial_images = []
for i in range(x_test.shape[0]):
    img = np.expand_dims(x_test[i], axis=0)
    label = np.array([y_test[i]], dtype=np.int64)
    adv_img = pgd_attack(model, img, label, eps, alpha, iters)
    adversarial_images.append(adv_img)
adversarial_images = np.concatenate(adversarial_images, axis=0)
np.save('adversarial_mnist.npy', adversarial_images)
```
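Note that the generation loop above attacks one image at a time, which is slow for the full 10,000-image test set. Because `pgd_attack` only uses elementwise TensorFlow ops, it accepts whole batches as well; here is a minimal batched sketch (the batch size of 512 is an assumed value, not part of the original answer):
```python
# Sketch: run the same PGD attack in batches rather than per image.
# batch_size = 512 is an assumed value; tune it to your memory budget.
batch_size = 512
adv_batches = []
for start in range(0, x_test.shape[0], batch_size):
    xb = x_test[start:start + batch_size]
    yb = y_test[start:start + batch_size].astype(np.int64)
    adv_batches.append(pgd_attack(model, xb, yb, eps, alpha, iters))
adversarial_images = np.concatenate(adv_batches, axis=0)
```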
In the code above, we first load the MNIST dataset and scale the pixel values to the range 0 to 1. We then define a simple CNN model and train it on the training set.
Next, we define the FGSM and PGD attacks. FGSM adds a single perturbation in the direction of the sign of the model's gradient, while PGD is an iterative attack: each iteration takes a small gradient-sign step and then projects the result back into the epsilon-ball around the original image to produce the final adversarial example.
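For completeness, the `fgsm_attack` helper needs the gradient of the loss with respect to the input image, which the main script never computes on its own. A minimal sketch of running FGSM standalone on one test image (this usage is an illustration, not shown in the original answer):
```python
# Sketch: single-step FGSM on one test image, reusing the trained model.
img = tf.convert_to_tensor(x_test[:1], dtype=tf.float32)
label = np.array([y_test[0]], dtype=np.int64)
with tf.GradientTape() as tape:
    tape.watch(img)
    prediction = model(img)
    loss = tf.keras.losses.sparse_categorical_crossentropy(label, prediction)
gradient = tape.gradient(loss, img)
fgsm_img = fgsm_attack(img, eps, gradient)  # one epsilon-sized step
```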
Finally, we run the 50-iteration PGD attack on each MNIST test image and save the resulting adversarial examples to an .npy file.
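To check that the saved file is usable and that the attack actually degrades the model, you can reload the .npy file and compare clean versus adversarial accuracy; a sketch, assuming the script above has already run:
```python
# Sketch: reload the saved adversarial examples and measure the accuracy drop.
adv = np.load('adversarial_mnist.npy')  # expected shape: (10000, 28, 28, 1)
clean_acc = model.evaluate(x_test, y_test, verbose=0)[1]
adv_acc = model.evaluate(adv, y_test, verbose=0)[1]
print(f'clean accuracy: {clean_acc:.4f}, adversarial accuracy: {adv_acc:.4f}')
```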