DUP-Net: Denoiser and Upsampler Network for 3D Adversarial Point Clouds Defense
Hang Zhou, Kejiang Chen, Weiming Zhang∗, Han Fang, Wenbo Zhou, Nenghai Yu
University of Science and Technology of China
{zh2991, chenkj, fanghan, welbeckz}@mail.ustc.edu.cn, {zhangwm, ynh}@ustc.edu.cn
Abstract
Neural networks are vulnerable to adversarial examples, which poses a threat to their application in security-sensitive systems. We propose a Denoiser and UPsampler Network (DUP-Net) structure as a defense for 3D adversarial point cloud classification, whose two modules reconstruct surface smoothness by dropping or adding points.
In this paper, statistical outlier removal (SOR) and a data-
driven upsampling network are considered as denoiser and
upsampler respectively. Compared with baseline defenses,
DUP-Net has three advantages. First, with DUP-Net as a
defense, the target model is more robust to white-box ad-
versarial attacks. Second, the statistical outlier removal
provides added robustness since it is a non-differentiable
denoising operation. Third, the upsampler network can be
trained on a small dataset and defends well against adver-
sarial attacks generated from other point cloud datasets. We
conduct various experiments to validate that DUP-Net is
very effective as a defense in practice. Our best defense eliminates 83.8% of C&W and l2 loss based attacks (point shifting), 50.0% of C&W and Hausdorff distance loss based attacks (point adding), and 9.0% of saliency map based attacks (point dropping) under 200 dropped points on PointNet.
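The statistical outlier removal (SOR) denoiser mentioned above can be illustrated with a minimal NumPy sketch, assuming the common formulation: drop points whose mean distance to their k nearest neighbors exceeds the global mean of such distances plus α standard deviations. The parameters k and alpha below are illustrative defaults, not the paper's settings.

```python
import numpy as np

def statistical_outlier_removal(points, k=2, alpha=1.1):
    """Drop points whose mean k-NN distance exceeds the global
    mean plus alpha standard deviations of those distances.
    points: (N, 3) array; k, alpha are illustrative defaults."""
    # pairwise Euclidean distances, shape (N, N)
    diff = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)
    # mean distance to the k nearest neighbors (column 0 is self, distance 0)
    knn = np.sort(dists, axis=1)[:, 1:k + 1]
    d_mean = knn.mean(axis=1)
    # keep points below the statistical threshold
    thresh = d_mean.mean() + alpha * d_mean.std()
    return points[d_mean <= thresh]
```

For example, a tight cluster of points plus one distant point yields a threshold between the two groups, so only the distant point is removed.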
1. Introduction
Deep learning has shown superior performance on several categories of machine learning problems, especially classification tasks. Deep neural networks (DNNs)
learn models from large training data to efficiently classify
unseen samples with high accuracy. However, recent works
have demonstrated that DNNs are vulnerable to adversar-
ial examples, which are maliciously created by adding im-
perceptible perturbations to the original input by attackers.
Adversarially perturbed examples have been deployed to at-
tack image classification service [18], speech recognition
system [5] and autonomous driving system [34].
∗Corresponding author.
Heretofore, numerous algorithms have been proposed to generate adversarial examples for 2D images. When the model parameters are known, the attacks are called white-box attacks; they include methods based on calculating the gradient of the network, such as FGSM [9], IGSM [10] and JSMA [23], and methods based on solving optimization problems, such as L-BFGS [29], Deepfool [21] and the Carlini & Wagner (C&W) attack [3]. Attacks in the scenario where access to the model is not available are called black-box attacks.
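As an illustration of the gradient-based family of white-box attacks, FGSM takes a single step in the sign direction of the loss gradient with respect to the input. A minimal sketch, assuming the gradient has already been computed by the target network and eps is the perturbation budget (both illustrative here):

```python
import numpy as np

def fgsm(x, grad, eps=0.01):
    """Fast Gradient Sign Method: perturb the input one step in the
    sign direction of the loss gradient. grad is dL/dx from the
    target network; eps bounds the per-coordinate perturbation."""
    return x + eps * np.sign(grad)
```

Iterative variants such as IGSM apply this step repeatedly with a smaller eps, clipping back into the allowed perturbation ball after each step.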
Since the robustness of DNNs to adversarial examples is a critical property, defenses that aim to increase robustness against adversarial examples are urgently needed; they can be classified into three main categories: input transformations [7, 19, 20], adversarial training [29] and gradient masking [22, 41]. In addition to defense, detection of
adversarial examples before they are fed into the networks
is another approach to resist attacks, such as MagNet [20]
and steganalysis based detection [17].
The popularity of 3D sensors such as LiDAR and RGBD cameras has drawn much research attention to 3D vision. The increasing amount of accessible data makes data-driven deep learning methods practical in many areas, including autonomous driving [24, 43], robotics [12, 6] and graphics [35, 13, 31]. In particular, the point cloud is one of the
most natural data structures to represent 3D geometry. Since the difficult problem of the irregular data format was addressed by DeepSets [40], PointNet [4] and its variants [26, 32], point cloud data can be directly processed by DNNs and has become a promising data structure for
3D computer vision tasks. Hua et al. [11] propose a point-
wise convolution operator that can output features at each
point, which can offer competitive accuracy while being
easy to implement. Yang et al. [36] construct losses based on mesh shape and texture to generate adversarial examples; the optimized “adversarial meshes” are projected to 2D with a photorealistic renderer and are still able to mislead different DNNs. Xiang et al. [34] attack point clouds
built upon C&W loss and point cloud-specific perturbation
metric with high success rate. Zheng et al. [42] propose
a malicious point-dropping method to generate adversarial