What does the rapid advance of Artificial Intelligence mean?
Source: Erik Larson


Artificial Intelligence has always been hyped by its often charismatic enthusiasts; lately, it seems the hype may be coming true. Media pundits, technologists, and increasingly the broader public see the rise of artificial intelligence as inevitable. Companies that feed mass data into AI systems, like Google, Facebook, and Yahoo, make headlines with technological successes that were science fiction even a decade ago: we can talk to our phones, get recommendations personalized to our interests, and may soon ride around in cars driven by computers. The world has changed, and AI is a big part of why.

As one might expect, pundits and technologists talk about the "AI revolution" in the most glowing terms, equating advances in computer tech with advances in humanity: standards of living, access to knowledge, and a spate of emerging systems and applications ranging from improved hearing and vision aids for the impaired, to the cheaper manufacture of goods, to better recommendations from Amazon, Netflix, Pandora, and others. Artificial Intelligence is a measuring rod for progress, scientifically, technologically, and even socially.

But technological progress cuts both ways. Not surprisingly, the excitement about a coming artificial intelligence has inspired worried and cautionary commentary about the potential downside. This downside, like the upside, is expressed in stark and emotional terms. Nick Bostrom's 2014 bestseller, Superintelligence: Paths, Dangers, Strategies, warns that AI could (literally) spell the end of humanity. The former IBM researcher turned e-marketing CEO Louis Del Monte, in his book The Artificial Intelligence Revolution: Will Artificial Intelligence Serve Us or Replace Us?, agrees that AI is happening so fast that the changes could be cataclysmic. National Geographic filmmaker James Barrat, too, joined the fray in full apocalyptic mode with Our Final Invention: Artificial Intelligence and the End of the Human Era.

Image: Robot recharging batteries (Corbis/William Whitehurst)

Silicon Valley sells progress, so it's no wonder that the Valley has generally embraced the positive hype about artificial intelligence. Hopeful new start-ups bang the drum of AI, expecting to ride the wave of excitement into venture capital and future success. Yet an eclectic bunch of investors and iconoclasts in the Valley have also plunged headlong into worries that AI is coming too soon and changing human society too fast. Most of those concerns focus on the singularity, a supposedly soon-to-arrive crossover point in the affairs of man and machine, when machines overtake human intelligence and we cease to be the most interesting feature of the planet.

Elon Musk, the founder of Tesla and SpaceX, has openly speculated that humans could be reduced to "pets" by the coming superintelligent machines. Musk has donated $10 million to the Future of Life Institute, in a self-described bid to help stave off the development of "killer robots." At Berkeley, the Machine Intelligence Research Institute (MIRI) is dedicated to addressing what Bostrom and many others describe as an "existential threat" to humanity, one eclipsing previous (and ongoing) concerns about climate change, nuclear holocaust, and the other looming threats of modern life. Luminaries like Stephen Hawking and Bill Gates have also spoken publicly about the dangers of artificial intelligence.

The idea that AI represents a clear and present danger has an old pedigree. As far back as 2000, Bill Joy, co-founder and former chief scientist of the now-defunct Sun Microsystems, penned one of the most famous apocalyptic warnings about the threat AI poses to humanity, in his article "Why the Future Doesn't Need Us", published by (who else?) Wired and widely discussed as the new century began. Yet the message was drowned out by more palpable worries: the terrorist attacks of September 11, 2001. Today, more than a decade later, Joy's anxiety over killer robots, made possible by rapid advances in AI, is back. It now competes with encomiums to AI as the milestone of our future success.

Overly ebullient discussion of smart gadgets and AI has always adorned the glossy pages of magazines like Wired. Of late, however, the once seemingly academic and speculative subject has spread to major media, too. The New York Times worries about "Artificial Intelligence as a Threat," in a November 2014 article (curiously appearing in Fashion and Style). John Markoff, a technology writer for the Times, has written dozens of articles on the topic, with titles leaving little to the imagination: "The Rapid Advance of Artificial Intelligence," to pick one. Many other media outlets publish similar stories. AI, it seems, is coming ― and fast.

Media attention isn't limited to magazines and newspapers. Nonfiction books about the positive potential of AI have also exploded in recent years. Erik Brynjolfsson and Andrew McAfee, both of MIT's Center for Digital Business and the Sloan School of Management, argue in their 2014 book, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, that we're rapidly entering a new era altogether, as machines begin assuming roles that were once the sole purview of humans. From robotics in manufacturing to personalization on the web, AI is changing the landscape of the new economy, they argue. Mostly, the machine age is a benefit: boring or dangerous jobs are passed off to machines, and interesting work is helped along by intelligent computing assistants. Artificial Intelligence is upon us, say Brynjolfsson and McAfee, but it's basically wonderful news: for business, for our standard of living, and for the future of humanity.

Both the Pollyanna vision of Brynjolfsson and McAfee and the apocalyptic vision of Musk and the other doomsayers rest on a premise that remains contestable: that Artificial Intelligence is coming quickly, and that the evidence for truly intelligent computers is compelling. Is it?

Scientifically, the idea that the computing power driving modern "smart" technologies like Siri, Google Now, or even self-driving cars constitutes evidence of genuine, human-like intelligence is far more contestable than much of the current discussion admits. Major challenges remain, as scientists like Berkeley's machine learning maestro Michael Jordan pointed out recently in his introduction to the National Research Council's 2013 report on Big Data, "Frontiers in Massive Data Analysis." Jordan raises technical challenges, but other detractors are more philosophical. Humanists, call them.

New Humanists. Their voices are strong, sensible, and increasingly (perhaps ironically) popular.

Former Harvard Business Review editor Nicholas Carr wrote The Glass Cage, a cautionary tale about the creeping automation of daily life, last year. And technology expert Jaron Lanier's You Are Not a Gadget (2010), along with his follow-up, Who Owns the Future?, expounds themes profoundly skeptical of recent claims about a coming AI. The complaint centers on visions of our future that don't include us. It's a return of Humanism, in other words, in the eye of a storm about increasingly personable machines. Carr and Lanier are notoriously skeptical of technological progress generally, but neither is a Luddite (Lanier is one of the original pioneers of virtual reality software). What they hold in common is a firm belief that "artificial intelligence" is a misnomer (real intelligence comes from human minds) and a conviction that a fascination with computer intelligence tends to diminish, and even imperil, human intelligence.

Lanier, Carr, and a growing counter-cultural movement of writers and technologists, skeptical of what they see as a mythology about artificial intelligence akin to a new and false religion, point instead to the virtue of human intelligence and the importance of a human-centered view of our future.

Carr sounded an initial salvo in this counter-movement in a 2008 article in The Atlantic titled "Is Google Making Us Stupid?" The article triggered an uproar, but also a discussion about the role of technology in our lives, even as excitement about the web neared a fever pitch. Backlash from tech pundits was rapid; Carr's points, directed at a commonsense public, nonetheless seemed timely and poignant. Are we spending too much time online? Is our digital obsession with all things Internet distracting us from more serious and noble pursuits? (Like, say, picking up that old copy of Moby Dick and finally finishing it?) Implicit in Carr's discussion was the idea that computation is, after all, mere automation, and automating something, no matter how "smart" it seems, can't capture, and can't replace, our own human experiences. Carr began a discussion that has continued in recent years, paradoxically in the shadow of burgeoning talk about AI from both the Pollyannas and the Apocalyptos.

The success of Carr's Atlantic piece led straightaway to his bestselling book, The Shallows, which earned Carr a finalist spot for the Pulitzer Prize. Other humanist-minded authors began appearing around the same time. Seemingly disparate, they nonetheless shared a common idea: digital technology isn't becoming smart in the way people are. It's our smarts that ultimately count; our technological tools can't save us (nor, by "coming alive" against our wills, destroy us). Humanists like Carr disagree profoundly with any discussion of AI that glorifies tools at the expense of people. The smart-robots premise is titillating, yet in the real world these narratives can be harmful. (More on that in a minute.)

The ranks of the Humanists continue to grow. Political philosopher turned motorcycle mechanic turned bestselling author Matthew Crawford wrote Shop Class as Soulcraft, a book ostensibly about the value and virtue of working with one's hands, but more deeply about the perils of forgetting oneself in a digital maze of abstractions, typified by the smart-tech craze. Presupposed in Crawford's work is the same belief in the centrality of humanity, the focus on people rather than machines, that inspires Carr's and Lanier's work.

Image: A lone android with a human flesh-colored face amid a crowd of robots (Corbis/Mark Stevenson/Stocktrek Images)

Shortly after Carr's iconic diss of Google and all things Internet, Silicon Valley found its own humanist voice in former Valley entrepreneur Andrew Keen. Keen wrote what was, in effect, a scathing critique of the Web as the place where literary standards go to die. The shallow "Web 2.0" fad had quickly infected journalism to its core, argued Keen: blogs at first, and later Facebook and Twitter, all of them downplaying human literary ability, replacing it with anonymous scribbles and fragments from social media, analyzed for advertising value by Big Data and AI. The machines may be impressive, said Keen, but what about our own standards? Keen's The Cult of the Amateur challenged what Lanier has called the worldview of "Cybernetic Totalism" in the modern Web, which views humans as swarms of helper bees, dutifully and mindlessly working on behalf of "the hive", our modern digital network environments, where quality is supposed to emerge from scores of anonymous people feeding increasingly powerful machines.

Wikipedia, to Lanier, is the perfect hive app. He's written trenchantly that the individual intelligence and expertise of contributors to Wikipedia is made to subserve the goals of the communal project. What isn't disguised, what's never disguised, is the gee-whiz aspect of the technology frameworks that make it all possible. As Lanier points out, the focus on the tech, not the people, nicely props up the idea that machines are our future.

Humanists have a seemingly simple point to make, but combating advances in technology with appeals to human value is an old stratagem, and history hasn't treated it kindly. Yet the modern counter-cultural movement seems different, somehow. For one, the artificial intelligence folks have reached a kind of narrative point of no return with their idea of a singularity: the notion that smart machines are taking over is sexy and conspiratorial, but most people understand the difference between people (our minds, or souls) and the cold logic of the machines we build. The modern paradox remains: even as our technology represents a crowning achievement of human innovation, our narratives about the modern world increasingly make no room for us. Consciousness, as Lanier puts it provocatively, is attempting to will itself out of existence. But how can that succeed? And so to the paradox: how can we both brilliantly innovate and become unimportant, ultimately slinking away from the future, ceding it to the machines we've built?

All this represents a profound challenge, one that demands more than a technological answer.

When machine intelligence arrives (if it does), what will have happened? Computers with personalities would be akin to the discovery of alien life in other galaxies. Humanists argue instead that mindless automation will continue to grow more powerful and more pervasive, but that the world fundamentally remains ours to create. They thus refocus the discussion on the real consequences of unbridled automation, and on the diminishment of human excellence that results from overexcitement about machines.

If common sense remains valid and computers must ultimately lack real intelligence, then hype about smart robots can only do harm, imperiling our own standards and our own intelligence. Lanier suggests that when progress in artificial intelligence becomes our benchmark, we begin acting in subtle, compensatory ways that place our tools above ourselves. It's these subtle shifts away from our own natures, say the New Humanists, that lead us astray. It happens, perhaps, much like falling in love: first slowly, then all at once. The deafening silence of a world without human excellence at its center is a picture almost too chilling to entertain. If the New Humanists are right, though, we're already on our way. The lesson of AI is not that the light of mind and consciousness is beginning to shine in machines, but that our own lights are dimming at the dawn of a new era.

