Microsoft's Artificial Intelligence Engine Can Tell a Beagle From Other Dog Breeds
Source: Adario Strange


Although many of us still rely on search engines like Google to provide us with answers to a wide range of questions, Microsoft Research has just delivered a peek at a future in which neural networks will deliver the kind of visual recognition answers we usually rely on humans to provide.

Harry Shum, Microsoft's executive vice president of technology and research, showed off some of that bleeding-edge computing power at the company's Faculty Summit on Monday via an artificial intelligence system that successfully recognized several dog breeds from just a photograph.

The system is called Project Adam, and it took over 18 months of work to create its neural network, which boasts more than two billion connections and attempts to mimic the hierarchical way in which the human brain processes and identifies visual information.
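
Microsoft hasn't published Project Adam's architecture, but the hierarchical idea it describes can be sketched in a few lines of Python: each layer of a deep network builds higher-level features out of the layer below it. Every size, weight and class count in this minimal sketch is invented purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def layer(x, w, b):
        # One fully connected layer with a ReLU nonlinearity.
        return np.maximum(0.0, x @ w + b)

    x = rng.standard_normal(256)                   # stand-in image features
    w1, b1 = 0.05 * rng.standard_normal((256, 128)), np.zeros(128)
    w2, b2 = 0.05 * rng.standard_normal((128, 64)), np.zeros(64)
    w3, b3 = 0.05 * rng.standard_normal((64, 10)), np.zeros(10)

    h1 = layer(x, w1, b1)      # low-level patterns (edges, textures)
    h2 = layer(h1, w2, b2)     # mid-level combinations (parts)
    logits = h2 @ w3 + b3      # class scores (e.g., ten dog breeds)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()       # softmax over the candidate classes
    print(probs.round(3))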

To demonstrate how Project Adam works, team member Johnson Apacible used the recently unveiled virtual assistant Cortana, integrated with Project Adam, to correctly identify the breeds of several dogs in real time (see video below), after the system briefly analyzed a photo of each dog.



Parsing more than 14 million images from ImageNet, an image research database put together by Stanford University, Princeton University and Stony Brook University, Project Adam uses its deep neural network (DNN) architecture to accurately identify the objects appearing in photos.

In another test, Project Adam was asked what breed of dog it was being shown when, in fact, a photo of a human had been presented for analysis. The system was able to tell the difference between a human and a dog, stating, through the Cortana interface, "I believe this is not a dog."
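
The article doesn't say how Project Adam produces that kind of refusal, but one plausible mechanism is to reject any input whose top class score falls below a confidence threshold. The breed list and cutoff in this sketch are invented for illustration.

    import numpy as np

    BREEDS = ["beagle", "bulldog", "poodle", "dalmatian"]  # illustrative classes
    THRESHOLD = 0.6                                        # assumed rejection cutoff

    def describe(probs):
        # Answer with a breed only if the top class clears the threshold;
        # otherwise reject the input, as Cortana did for the human photo.
        best = int(np.argmax(probs))
        if probs[best] < THRESHOLD:
            return "I believe this is not a dog."
        return "I believe this is a %s." % BREEDS[best]

    print(describe(np.array([0.30, 0.30, 0.25, 0.15])))  # -> not a dog
    print(describe(np.array([0.90, 0.05, 0.03, 0.02])))  # -> a beagle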

" Imagine if you could help blind people see by pointing a cellphone at a scene and having it describe the scene to them Imagine if you could help blind people see by pointing a cellphone at a scene and having it describe the scene to them," said team leader Trishul Chilimbi, describing how such a system could eventually be of use to end-users, in a statement. "We could do things like take a photograph of food we’re eating and have it provide us with nutritional information. We can use that to make smarter choices."

The team claims Project Adam is 50 times faster and twice as accurate as the image recognition system Google introduced a couple of years ago, a neural network powered by 16,000 computers.

Chilimbi believes the system is better than the competition because of the unique way in which it handles the data being processed across its servers.

"One of the innovations we came up with was saying that not only can we make it asynchronous, but we went whole hog and decided not to pretend it's synchronous in any way," said Chilimbi. (Asynchrony is the process that breaks up data in discrete chunks for individual processing rather than in the typically synchronous fashion in which some fundamental machine-learning training algorithms operate.) "The asynchrony also helps us escape from ruts where the task accuracy does not improve much ... Much like how humans learning a new task often find themselves plateauing after a period of rapid improvement."

The team hopes to one day use this approach to create "brain-scaled" neural networks to power the applications of the future.



As for this week's demonstration, image recognition is one thing, but some of those impressed by the feat might already be wondering: Is the team behind Project Adam also working on the same kind of neural network-powered recognition for things like text and music?

"Yes, certainly," Chilimbi told Mashable. "But more interestingly we are interested in pursuing multi-modal learning where we learn jointly across all these modalities. For example, if I've never seen the Statue of Liberty before but I’ve had someone describe it to me, then I can still recognize it the first time I see it from the textual description."

However, don't expect any of this to pop up in your next Windows Phone update; it's all still in the experimental phase. "We are exploring various opportunities," said Chilimbi, "but do not have a concrete timeline to share at this time."

