When deep learning mistakes a coffee maker for a cobra
Source: Ecole Polytechnique Federale de Lausanne


"Is this your sister?" That's the kind of question asked by image-recognition systems, which are becoming increasingly prevalent in our everyday devices. They may soon be used for tumor detection and genomics, too. These systems rely on what are known as "deep-learning" architectures – an exciting new development in artificial intelligence. But EPFL researchers have revealed just how sensitive these systems actually are: a tiny, universal perturbation applied across an image can throw off even the most sophisticated algorithms.

Deep-learning systems represent a major breakthrough in computer-based image recognition, yet they are surprisingly sensitive to minor changes in the data they analyze. Researchers at EPFL's Signal Processing Laboratory (LTS4), headed by Pascal Frossard, have shown that even the best deep-learning architectures can be fooled by introducing an almost invisible perturbation into digital images. Such a perturbation can cause a system to mistake a joystick for a Chihuahua, for example, or a coffee maker for a cobra, even though the human brain would have no trouble correctly identifying the objects. The researchers' findings – which should help scientists better understand, and therefore improve, deep-learning systems – will be presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017, a major international academic event. We spoke with Alhussein Fawzi and Seyed Moosavi, the two lead authors of the research.

What is deep learning and what is the problem with today's systems?

Fawzi: Deep learning, or artificial neural networks, is an exciting new development in artificial intelligence. All the major tech firms are banking on this technology to develop systems that can accurately recognize objects, faces, text and speech. Various forms of these algorithms can be found in Google's search engine and in Apple's Siri, for instance. Deep-learning systems can work exceptionally well; companies are considering using them to detect tumors from a CAT scan or to operate self-driving cars. The only problem is that they are often black boxes, and we don't always have a good grasp of how they function. Should we just trust them blindly?

Is it really surprising that we can fool these systems?

Moosavi: Researchers had already shown two years ago that artificial neural networks could easily be tricked by small perturbations designed specifically to confuse them on a given image. But we found that a single, universal perturbation could cause a network to fail on almost all images. And this perturbation is so tiny that it is almost invisible to the naked eye. That's alarming and shows that these systems are not as robust as one might think.
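To give a concrete feel for the idea, here is a minimal Python/PyTorch sketch, not the researchers' actual algorithm: it builds one small, fixed additive perturbation from a single image's gradient and then reuses that same perturbation on other images to see whether a pretrained classifier's labels change. The image file names, the epsilon value, and the single-gradient-step construction are illustrative assumptions; the published method constructs its universal perturbation by iterating an attack over a whole dataset.

import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained ImageNet classifier (ImageNet normalization omitted for brevity;
# the point is only to compare labels before and after perturbation).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])

def predict(x):
    with torch.no_grad():
        return model(x).argmax(dim=1).item()

# Build a crude perturbation from one image's gradient (a stand-in for the
# paper's iterative, dataset-wide construction of a universal perturbation).
img = preprocess(Image.open("example1.jpg")).unsqueeze(0)    # hypothetical file
img.requires_grad_(True)
loss = F.cross_entropy(model(img), torch.tensor([predict(img.detach())]))
loss.backward()
epsilon = 0.02                     # small enough to be nearly invisible
v = epsilon * img.grad.sign()      # one fixed additive perturbation

# Reuse the same perturbation v on other images and compare predictions.
for path in ["example1.jpg", "example2.jpg"]:                # hypothetical files
    x = preprocess(Image.open(path)).unsqueeze(0)
    print(path, "before:", predict(x), "after:", predict((x + v).clamp(0, 1)))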

