Abstract

With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks (DNNs) have recently been found vulnerable to well-designed input samples called adversarial examples. Adversarial perturbations are imperceptible to humans but can easily fool DNNs in the testing/deployment stage. The vulnerability to adversarial examples has become one of the major risks for applying DNNs in safety-critical environments. Therefore, attacks and defenses on adversarial examples have drawn great attention. In this paper, we review recent findings on adversarial examples for DNNs, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under the taxonomy, applications for adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples. In addition, three major challenges in adversarial examples and the potential solutions are discussed.
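For context, a minimal sketch of one canonical generation method covered by surveys in this area, the fast gradient sign method (FGSM), is given below. The model, input batch, and epsilon value are hypothetical placeholders; this is an illustrative sketch, not the paper's own implementation.

# Minimal FGSM sketch: perturb an input in the direction of the loss
# gradient's sign to produce an adversarial example.
# Assumes a PyTorch classifier `model` and a correctly labeled batch
# (x, y) with pixel values in [0, 1]; all are hypothetical.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Generate adversarial examples with one signed-gradient step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each pixel in the direction that increases the loss,
    # then clip back to the valid input range [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

The perturbation is bounded by epsilon in the L-infinity norm, which keeps it visually imperceptible while often flipping the model's prediction.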

Keywords

Adversarial system, Computer science, Deep neural networks, Taxonomy, Vulnerability (computing), Deep learning, Artificial intelligence, Computer security

Publication Info

Year: 2019
Type: Article
Volume: 30
Issue: 9
Pages: 2805-2824
Citations: 1652
Access: Closed

Citation Metrics

OpenAlex: 1652
Influential: 62
CrossRef: 1186

Cite This

Xiaoyong Yuan, Pan He, Qile Zhu et al. (2019). Adversarial Examples: Attacks and Defenses for Deep Learning. IEEE Transactions on Neural Networks and Learning Systems, 30(9), 2805-2824. https://doi.org/10.1109/tnnls.2018.2886017

Identifiers

DOI: 10.1109/tnnls.2018.2886017
PMID: 30640631
arXiv: 1712.07107
