Abstract

Deep neural networks (DNNs) have recently been achieving state-of-the-art performance on a variety of pattern-recognition tasks, most notably visual classification problems. Given that DNNs are now able to classify objects in images with near-human-level performance, questions naturally arise as to what differences remain between computer and human vision. A recent study [30] revealed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g. mislabeling a lion a library). Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion). Specifically, we take convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and then find images with evolutionary algorithms or gradient ascent that DNNs label with high confidence as belonging to each dataset class. It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects, which we call “fooling images” (more generally, fooling examples). Our results shed light on interesting differences between human vision and current DNNs, and raise questions about the generality of DNN computer vision.
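
As a rough illustration of the gradient-ascent procedure mentioned above, the following sketch starts from white noise and maximizes a classifier's confidence for one target class. It is only a minimal approximation of the paper's setup: it assumes PyTorch and a pretrained torchvision AlexNet rather than the Caffe-trained ImageNet and MNIST networks used in the paper, it omits input normalization and the evolutionary-algorithm variant (direct and CPPN encodings), and the class index and hyperparameters are illustrative.

import torch
import torch.nn.functional as F
from torchvision import models

# Load a pretrained ImageNet classifier; torchvision's AlexNet stands in for
# the Caffe-trained networks used in the paper.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()
for p in model.parameters():
    p.requires_grad_(False)

target_class = 291  # assumed ImageNet index for "lion"; any class works
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # white-noise start

optimizer = torch.optim.SGD([image], lr=0.5)
for step in range(200):
    optimizer.zero_grad()
    logits = model(image)  # input normalization omitted for brevity
    # Maximize the target class's log-probability by minimizing its negative.
    loss = -F.log_softmax(logits, dim=1)[0, target_class]
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0, 1)  # keep pixel values in a displayable range

with torch.no_grad():
    confidence = F.softmax(model(image), dim=1)[0, target_class].item()
print(f"confidence assigned to class {target_class}: {confidence:.4f}")

Run long enough, the optimized image typically remains unrecognizable to a human while the reported confidence climbs toward 1, which is the phenomenon the abstract describes.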

Keywords

MNIST database, Deep neural networks, Generality, Computer science, Artificial intelligence, Convolutional neural network, Artificial neural network, Pattern recognition (psychology), Image (mathematics), Contextual image classification, Object (grammar), Deep learning, Machine learning

Publication Info

Year: 2015
Type: Article
Pages: 427–436
Citations: 3232
Access: Closed

Citation Metrics

3232 citations (OpenAlex)

Cite This

Anh Nguyen, Jason Yosinski, Jeff Clune (2015). Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 427–436. https://doi.org/10.1109/cvpr.2015.7298640

Identifiers

DOI: 10.1109/cvpr.2015.7298640