Artificial Intelligence Finally Entered Our Everyday World
Source: Cade Metz


Andrew Ng hands me a tiny device that wraps around my ear and connects to a smartphone via a small cable. It looks like a throwback―a smartphone earpiece without a Bluetooth connection. But it’s really a glimpse of the future. In a way, this tiny device allows the blind to see.

Ng is the chief scientist at Chinese tech giant Baidu, and this is one of the company’s latest prototypes. It’s called DuLight. The device contains a tiny camera that captures whatever is in front of you―a person’s face, a street sign, a package of food―and sends the images to an app on your smartphone. The app analyzes the images, determines what they depict, and generates an audio description that you hear through the earpiece. If you can’t see, you can at least get an idea of what’s in front of you.
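
In rough terms, the flow Ng describes, camera frame in, spoken description out, looks something like the Python sketch below. The function names and the sample output are hypothetical placeholders rather than Baidu's actual software; a real system would run a trained neural network and a speech engine where the stubs sit.

# A minimal sketch of a DuLight-style pipeline. Everything below is
# illustrative: the classifier and speech functions are stubs.

def classify_image(image_bytes: bytes) -> str:
    """Hypothetical stand-in for a deep-learning image classifier."""
    return "a street sign"

def speak(description: str) -> None:
    """Hypothetical stand-in for a text-to-speech engine."""
    print(f"[earpiece audio] {description}")

def on_camera_frame(image_bytes: bytes) -> None:
    # 1. The earpiece camera captures a frame and hands it to the phone app.
    # 2. The app works out what the frame depicts.
    label = classify_image(image_bytes)
    # 3. The app reads the result back through the earpiece.
    speak(f"In front of you: {label}")

on_camera_frame(b"fake image bytes")  # for illustration only

The hard part, of course, is the classify_image step, and that is where deep learning comes in.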

DuLight is still in the earliest stages of development. It doesn’t work as well as it one day will. But it points to a future where machines can perceive and even understand their surroundings as well as―or even better than―humans can. This kind of artificial intelligence is changing not only the way we use our computers and smartphones but the way we interact with the world.

Ng’s prototype relies on a technology called deep learning. Inside the vast computer data centers that underpin Baidu’s online services, the company runs massive neural networks―networks of hardware and software that approximate the web of neurons in the human brain. By analyzing enormous collections of digital images, these networks can learn to identify objects, written words, even human faces. Feed enough cat photos into a neural net, and it can learn to identify a cat. Feed it enough photos of a cloud, and it can learn to identify a cloud.
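
A toy example makes the idea concrete. The snippet below, written with the PyTorch library, trains a tiny network to sort images into two made-up labels, "cat" and "cloud"; the random tensors stand in for a real photo collection, and nothing here reflects Baidu's actual systems.

import torch
import torch.nn as nn

# A small neural network: layers of simple math that training gradually tunes.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32 * 3, 64),  # 32x32 color images in
    nn.ReLU(),
    nn.Linear(64, 2),            # two labels out: cat or cloud
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a large collection of labeled photos.
images = torch.randn(100, 3, 32, 32)
labels = torch.randint(0, 2, (100,))

# Show the network the photos over and over; each pass nudges it
# toward guessing the right label more often.
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # how wrong were the guesses?
    loss.backward()
    optimizer.step()

Swap in millions of real, labeled photos and a far larger network, and this loop is, in spirit, what runs inside those data centers.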

This same technology is already handling a wide range of other tasks at Baidu and at US tech giants like Google, Facebook, and Microsoft. At Google, neural nets let you instantly search for specific people, places, and things buried in your personal collection of photos, and they help recognize the commands you speak into your Android phone. At Facebook, deep learning helps identify faces in the photos you post in status updates. At Skype, which Microsoft owns, it drives a service that instantly translates conversations from one language to another.

A neural net, you see, can help with many different modes of perception. This technology is even beginning to understand the natural way humans speak. Witness a chatbot Google is building by feeding old movie dialogue into a neural net so the bot can learn to carry on a conversation.



All of this seems like science fiction. But 2015 is the year artificial intelligence technology took off in a big way in the real world. DuLight and the Google chatbot may be experiments, but Facebook’s face recognition, Microsoft’s Skype translation, and Google’s Android voice recognition are very real―and available to all. Google is also using this technology to drive its Internet search engine, the linchpin of its online empire. Twitter is using it to identify pornography, which gives people the opportunity to block it. Baidu uses it to target advertisements and identify malware.

And thanks to some big moves at the end of the year, advances in deep learning will only accelerate.

In early November, Google surprised the tech world by open sourcing the software engine that drives its deep learning services, sharing this all-important tech with everyone. Not that Google gave away all its technology―let alone the massive amounts of data that really feed deep learning systems to make them truly useful―but it did open source enough to help drive the evolution of deep learning outside the company.

Weeks later, Facebook open sourced designs for the custom-built hardware server that drives its deep learning work. A day after that, a group led by Tesla Motors founder Elon Musk and Y Combinator president Sam Altman unveiled a $1 billion nonprofit called OpenAI that vows to share all of its AI research and technology with the outside world.

OpenAI is a big deal because it will be overseen by Ilya Sutskever, formerly one of Google’s top AI researchers. Musk and Altman have also pitched the organization as a way of protecting the world from the dangers of AI. They worry that technology like deep learning could grow so powerful that it becomes a super-human intelligence beyond our control, and they believe the best way of avoiding that fate is to put AI in everyone’s hands.

“Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements,” says Altman, “we think it’s far more likely that many, many AIs will work to stop the occasional bad actors.”

It’s too early to tell whether this counterintuitive argument will hold up. Science is still a long way from super-human AI―if it arrives at all. The dangers are certainly worth considering. But society should embrace the good that AI can do. DuLight is a prime example. You’ll find similar work under way at Facebook. This fall, the company showed off technology for the visually impaired that will automatically analyze photos in their Facebook News Feeds and, via a text-to-speech engine, describe what’s in those photos. Its effect will be both immediate and enormous. More than 50,000 visually impaired people already use Facebook―even though they can’t see what’s in the photos. Now, a machine can act as their eyes. The impact of AI is now plain for all to see.

