Abstract

Though CNNs have achieved state-of-the-art performance on various vision tasks, they are vulnerable to adversarial examples, which are crafted by adding human-imperceptible perturbations to clean images. However, most existing adversarial attacks achieve only relatively low success rates under the challenging black-box setting, where the attacker has no knowledge of the model structure or parameters. To this end, we propose to improve the transferability of adversarial examples by creating diverse input patterns. Instead of using only the original images to generate adversarial examples, our method applies random transformations to the input images at each iteration. Extensive experiments on ImageNet show that the proposed attack method generates adversarial examples that transfer much better to different networks than existing baselines. Evaluated against top defense solutions and official baselines from the NIPS 2017 adversarial competition, the enhanced attack reaches an average success rate of 73.0%, outperforming the top-ranked attack submission in the competition by a large margin of 6.6%. We hope that the proposed attack strategy can serve as a strong benchmark baseline for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in the future. Code is available at https://github.com/cihangxie/DI-2-FGSM.
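The core idea described above, computing each gradient step on a randomly transformed copy of the current adversarial image, can be sketched as follows. This is a minimal illustrative PyTorch implementation, not the authors' released code: the function names, the resize bounds, and the transformation probability are assumptions chosen for 299x299 inputs, and the transform shown (random resize plus random zero-padding) is one concrete instance of the "diverse input patterns" the abstract refers to.

import torch
import torch.nn.functional as F

def input_diversity(x, low=270, high=299, prob=0.5):
    # With probability `prob`, resize the batch to a random size in
    # [low, high) and zero-pad it back to high x high; otherwise
    # return the input unchanged. Sizes assume 299x299 inputs.
    if torch.rand(1).item() >= prob:
        return x
    rnd = int(torch.randint(low, high, (1,)))
    resized = F.interpolate(x, size=(rnd, rnd),
                            mode="bilinear", align_corners=False)
    pad = high - rnd
    left = int(torch.randint(0, pad + 1, (1,)))
    top = int(torch.randint(0, pad + 1, (1,)))
    return F.pad(resized, (left, pad - left, top, pad - top), value=0.0)

def di_fgsm(model, x, y, eps=16 / 255, steps=10):
    # Iterative FGSM where the gradient at each step is taken on a
    # randomly transformed copy of the current adversarial image.
    alpha = eps / steps
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(input_diversity(x_adv)), y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # stay in the L_inf ball
        x_adv = x_adv.clamp(0.0, 1.0)             # stay a valid image
    return x_adv

The key design choice is that the random transformation is applied only when computing the gradient, while the accumulated perturbation is kept on the untransformed image; this prevents the attack from overfitting to the source model's exact input geometry, which is what improves black-box transferability.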

Keywords

Adversarial examples, Computer science, Transferability, Robustness, Margin (machine learning), Benchmark, Baseline, Artificial intelligence, Machine learning

Publication Info

Year
2019
Type
article
Pages
2725-2734
Citations
1098
Access
Closed

Citation Metrics

1098 (OpenAlex)

Cite This

Cihang Xie, Zhishuai Zhang, Yuyin Zhou et al. (2019). Improving Transferability of Adversarial Examples With Input Diversity. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2725-2734. https://doi.org/10.1109/cvpr.2019.00284

Identifiers

DOI
10.1109/cvpr.2019.00284