Abstract
Although deep neural networks (DNNs) have achieved great success in many tasks, they can often be fooled by adversarial examples that are generated by adding small but purposeful distortions to natural examples. Previous studies to defend against adversarial examples mostly focused on refining the DNN models, but have either shown limited success or required expensive computation. We propose a new strategy, feature squeezing, that can be used to harden DNN models by detecting adversarial examples. Feature squeezing reduces the search space available to an adversary by coalescing samples that correspond to many different feature vectors in the original space into a single sample. By comparing a DNN model's prediction on the original input with that on squeezed inputs, feature squeezing detects adversarial examples with high accuracy and few false positives. This paper explores two feature squeezing methods: reducing the color bit depth of each pixel and spatial smoothing. These simple strategies are inexpensive and complementary to other defenses, and can be combined in a joint detection framework to achieve high detection rates against state-of-the-art attacks.
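A minimal sketch of the detection rule described in the abstract, assuming images as float arrays in [0, 1] with shape (height, width, channels) and a hypothetical `model_predict` callable that returns a softmax probability vector; the squeezer choices (4-bit color depth, 2x2 median filter) and the max-over-squeezers L1 comparison follow the paper's description, while the threshold value is dataset-dependent and left as a parameter:

```python
import numpy as np
from scipy.ndimage import median_filter

def reduce_bit_depth(x, bits):
    """Color-depth squeezer: quantize pixels in [0, 1] to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def median_smooth(x, size=2):
    """Spatial-smoothing squeezer: median filter over height and width only."""
    return median_filter(x, size=(size, size, 1))

def detect_adversarial(model_predict, x, threshold):
    """Flag x as adversarial if squeezing moves the prediction too far.

    Compares the model's softmax output on the original input against its
    output on each squeezed version; the detection score is the maximum
    L1 distance over all squeezers. `model_predict` is a stand-in for
    whatever classifier is being hardened.
    """
    p_orig = model_predict(x)
    squeezers = [lambda im: reduce_bit_depth(im, bits=4), median_smooth]
    score = max(np.abs(p_orig - model_predict(s(x))).sum() for s in squeezers)
    return score > threshold
```

Flagging an input when any single squeezer shifts the prediction by more than the threshold is equivalent to thresholding the maximum over squeezers, which is the joint detection framework the abstract refers to.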
Publication Info
- Year: 2018
- Type: preprint
- Citations: 1758
- Access: Closed
Identifiers
- DOI: 10.14722/ndss.2018.23198
- arXiv: 1704.01155