‘Mind-reading’ artificial intelligence produces a description of what you’re thinking about
Source: Luke Dormehl




File photo: 3D rendering of a human brain on a technology background, representing artificial intelligence and cyberspace concepts (monsitj, iStock)

Think that Google’s search algorithms are good at reading your mind? That’s nothing compared to a new artificial intelligence research project coming out of Japan, which can analyze a person’s brain scans and provide a written description of what they have been looking at.

To generate its captions, the artificial intelligence is given an fMRI brain scan image, taken while a person is looking at a picture. It then generates a written description of what it thinks the person was viewing. An illustration of the level of detail it can offer: “A dog is sitting on the floor in front of an open door” or “a group of people standing on the beach.” Both of those turned out to be entirely accurate.

“We aim to understand how the brain represents information about the real world,” Ichiro Kobayashi, one of the researchers from Japan’s Ochanomizu University, told Digital Trends. “Toward such a goal, we demonstrated that our algorithm can model and read out perceptual contents in the form of sentences from human brain activity. To do this, we modified an existing network model that could generate sentences from images using a deep neural network, a model of the visual system, followed by an RNN (recurrent neural network), a model that can generate sentences. Specifically, using our dataset of movies and movie-evoked brain activity, we trained a new model that could infer the activation patterns of the DNN from brain activity.”
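In rough terms, the pipeline Kobayashi describes has two stages: a decoder that maps fMRI activity into the feature space of a visual deep neural network, and a recurrent sentence generator that turns those features into words. The following is a minimal sketch of that idea in PyTorch, with made-up dimensions, a toy vocabulary, and a single linear map standing in for the decoder; it is purely illustrative and not the researchers’ actual model or training procedure.

```python
import torch
import torch.nn as nn

VOXELS = 4096        # hypothetical number of fMRI voxels per scan
DNN_FEATURES = 512   # hypothetical size of the visual-model feature vector
VOCAB_SIZE = 1000    # hypothetical caption vocabulary
EMBED_DIM = 256
HIDDEN_DIM = 256
MAX_LEN = 12         # maximum caption length in tokens


class BrainToFeatures(nn.Module):
    """Maps fMRI voxel activity into the visual DNN's feature space
    (the quote describes inferring DNN activation patterns from brain
    activity; a single linear projection is used here for illustration)."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(VOXELS, DNN_FEATURES)

    def forward(self, voxels):
        return self.proj(voxels)


class CaptionRNN(nn.Module):
    """Generates a token sequence from a visual feature vector, standing in
    for the RNN sentence generator mentioned in the quote."""
    def __init__(self):
        super().__init__()
        self.init_h = nn.Linear(DNN_FEATURES, HIDDEN_DIM)
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.lstm = nn.LSTMCell(EMBED_DIM, HIDDEN_DIM)
        self.out = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)

    def forward(self, features, start_token=1):
        batch = features.size(0)
        h = torch.tanh(self.init_h(features))       # condition the RNN on the features
        c = torch.zeros_like(h)
        token = torch.full((batch,), start_token, dtype=torch.long)
        tokens = []
        for _ in range(MAX_LEN):
            x = self.embed(token)
            h, c = self.lstm(x, (h, c))
            token = self.out(h).argmax(dim=-1)       # greedy decoding for simplicity
            tokens.append(token)
        return torch.stack(tokens, dim=1)


# Toy end-to-end pass on random data: fMRI scan -> DNN features -> caption tokens.
brain_decoder = BrainToFeatures()
captioner = CaptionRNN()
fake_scan = torch.randn(1, VOXELS)
token_ids = captioner(brain_decoder(fake_scan))
print(token_ids.shape)  # (1, MAX_LEN) word indices, mapped to words via the vocabulary
```

In the real study both stages would be trained on paired data (movie frames, evoked brain activity, and captions), and the sentence generator would start from a pretrained image-captioning model rather than the untrained toy decoder above.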

Before you get worried about some dystopian future in which this technology is used as a supercharged lie detector, though, Kobayashi points out that it is still a long way from real-world deployment. “So far, there are not any real-world applications for this,” Kobayashi continued. “However, in the future, this technology might be a quantitative basis of a brain-machine interface.”

As a next step, Shinji Nishimoto, another researcher on the project, told Digital Trends that the team wants to use it to better understand how the brain processes information.

“We would like to understand how the brain works under naturalistic conditions,” Nishimoto said. “Toward such a goal, we are planning to investigate how various forms of information — vision, semantics, languages, impressions, etcetera — are encoded in the brain by modeling the relationship between our experiences and brain activity. We also aim to investigate how multimodal information is related to achieve semantic activities in the brain. In particular, we will work on generating descriptions about what a person thinks.”


