Deep neural network can match infrared facial images to those taken in natural light
Source: arXiv


Deep Perceptual Mapping (DPM): densely computed features from the visible domain are mapped through the learned DPM network to the corresponding thermal domain. Credit: arXiv:1507.02879 [cs.CV]

A pair of researchers affiliated with the Karlsruhe Institute of Technology and its Institute of Anthropomatics & Robotics has created a deep neural network application that can match faces recorded under infrared light with faces photographed under natural lighting. M. Saquib Sarfraz and Rainer Stiefelhagen describe their research and findings in a paper posted on the preprint server arXiv.

Infrared photography allows humans to see things in the dark that they could not otherwise see, and it has become an important tool for both police work and warfare. It has a serious limitation, however: people looking at an image of a person captured with an infrared camera typically cannot make out who that person is, because there is too little correspondence between the infrared image and what the person looks like in natural light. To make that jump, the researchers turned to a deep neural network.

Deep neural networks are software/hardware systems designed to learn from large datasets and then make predictions based on what they have learned, similar, of course, to the way the human brain works. Using such a system to correlate infrared images with their natural-light counterparts therefore requires a large dataset containing both types of images of the same people. The duo discovered that such a dataset existed as part of other research being done at the University of Notre Dame. After being given access to it, they "taught" their system to pick out natural-light images of people based on half of the infrared images in the dataset; the other half was used to test how well the system worked.
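As a rough illustration of the workflow described above, the sketch below trains a small fully connected network to map visible-domain face descriptors toward their thermal-domain counterparts, holding out half of the (here synthetic) image pairs for testing. The layer sizes, the regression loss, and the random data are illustrative assumptions, not the configuration Sarfraz and Stiefelhagen actually used.

# Minimal sketch of a visible-to-thermal feature mapping network,
# loosely following the Deep Perceptual Mapping idea described above.
# Feature dimensions, layer sizes and the synthetic data are illustrative
# assumptions, not the authors' actual configuration.
import torch
import torch.nn as nn

class PerceptualMapping(nn.Module):
    """Small fully connected network that maps visible-domain
    descriptors into the thermal domain."""
    def __init__(self, dim_in=128, dim_hidden=256, dim_out=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, dim_hidden), nn.Tanh(),
            nn.Linear(dim_hidden, dim_hidden), nn.Tanh(),
            nn.Linear(dim_hidden, dim_out),
        )

    def forward(self, x):
        return self.net(x)

# Synthetic stand-ins for paired visible/thermal descriptors of the same faces.
n_pairs = 1000
visible = torch.randn(n_pairs, 128)
thermal = torch.randn(n_pairs, 128)

# Train on one half of the pairs, keep the other half for testing,
# mirroring the split described in the article.
train_vis, test_vis = visible[:500], visible[500:]
train_thm, test_thm = thermal[:500], thermal[500:]

model = PerceptualMapping()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # simple regression loss onto the thermal features

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(train_vis), train_thm)
    loss.backward()
    optimizer.step()

In the paper's setting, the mapped features would then be compared against a gallery in the thermal domain; the toy loop here only shows the training split and the mapping step.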

The results were not perfect by any means: the system made correct matches 80 percent of the time (dropping to just 55 percent when it had only one photo to work with), but that still marks a dramatic improvement in the technology. Sarfraz and Stiefelhagen believe they could improve the accuracy considerably if they could get their hands on a much larger dataset.


More information: M. Saquib Sarfraz and Rainer Stiefelhagen, "Deep Perceptual Mapping for Thermal to Visible Face Recognition," arXiv:1507.02879 [cs.CV], arxiv.org/abs/1507.02879

Abstract
Cross modal face matching between the thermal and visible spectrum is a much desired capability for night-time surveillance and security applications. Due to a very large modality gap, thermal-to-visible face recognition is one of the most challenging face matching problems. In this paper, we present an approach to bridge this modality gap by a significant margin. Our approach captures the highly non-linear relationship between the two modalities by using a deep neural network. Our model attempts to learn a non-linear mapping from visible to thermal spectrum while preserving the identity information. We show substantive performance improvement on a difficult thermal-visible face dataset. The presented approach improves the state-of-the-art by more than 10% in terms of Rank-1 identification and bridges the drop in performance due to the modality gap by more than 40%.
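The abstract's Rank-1 identification figure counts a probe as correctly identified only when its single nearest gallery match belongs to the same person. The short sketch below shows that evaluation on toy data, assuming Euclidean distance between feature vectors; the gallery here stands in for visible images already mapped into the thermal domain.

# Hedged sketch of Rank-1 identification: each thermal probe is matched
# against the gallery of mapped visible features; a match counts as
# correct when the nearest gallery entry belongs to the same identity.
# The distance metric and the toy gallery construction are assumptions.
import numpy as np

def rank1_identification(probe_feats, gallery_feats, probe_ids, gallery_ids):
    """Fraction of probes whose nearest gallery feature shares their identity."""
    correct = 0
    for feat, pid in zip(probe_feats, probe_ids):
        dists = np.linalg.norm(gallery_feats - feat, axis=1)  # Euclidean distances
        if gallery_ids[np.argmin(dists)] == pid:
            correct += 1
    return correct / len(probe_ids)

# Toy example: 5 identities, gallery built from mapped visible images,
# probes drawn from thermal images of the same people.
rng = np.random.default_rng(0)
gallery_ids = np.arange(5)
gallery_feats = rng.normal(size=(5, 128))
probe_ids = np.repeat(gallery_ids, 3)                      # 3 probes per identity
probe_feats = gallery_feats[probe_ids] + 0.1 * rng.normal(size=(15, 128))

print(f"Rank-1 rate: {rank1_identification(probe_feats, gallery_feats, probe_ids, gallery_ids):.2f}")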

