Anne Applebaum: The threat from artificial intelligence may already be here
You know the scenario from 19th-century fiction and Hollywood movies: Mankind has invented a computer, a robot, or some other artificial thing that has taken on a life of its own. In "Frankenstein," the monster is built from corpses; in "2001: A Space Odyssey," it's an all-seeing computer with a human voice; in "Westworld," the robots are lifelike androids that begin to think for themselves. But in almost every case, the out-of-control artificial life form is anthropomorphic. It has a face or a body, or at least a human voice and a physical presence in the real world.
But what if the real threat from "artificial life" doesn't look or act human at all? What if it's just a piece of computer code that can affect what you see and therefore what you think and feel? In other words — what if it's a bot, not a robot?
For those who don't know (and apologies to those who are wearily familiar), a bot really is just a piece of computer code that can do things humans can do. Wikipedia uses bots to correct spelling and grammar in its articles; bots can also play computer games or place gambling bets on behalf of human controllers. Notoriously, bots are now a major force on social media, where they can "like" people and causes, post comments, and react to other users. Bots can be programmed to tweet out insults in response to particular words, to share Facebook pages, to repeat slogans, to sow distrust.
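To make that concrete: a keyword-triggered bot of the kind described here can be only a few lines of code. The sketch below is a hypothetical illustration in Python, not any real platform's software; the trigger words, slogans, and function names are invented for this example. It shows how a program might scan posts for particular words and answer with prefabricated slogans.

```python
# Hypothetical sketch of a keyword-triggered reply bot.
# Trigger words and slogans are invented for illustration only.

TRIGGERS = {
    "protest": "These demonstrators are paid actors. #astroturfing",
    "election": "Don't trust the polls. The system is rigged.",
}

def bot_reply(post: str) -> str | None:
    """Return a canned slogan if the post contains a trigger word."""
    lowered = post.lower()
    for keyword, slogan in TRIGGERS.items():
        if keyword in lowered:
            return slogan
    return None  # stay silent when nothing matches

if __name__ == "__main__":
    for post in ["Huge protest downtown today!", "Nice weather today."]:
        reply = bot_reply(post)
        if reply:
            print(f"Replying to {post!r}: {reply}")
```

The point is not the sophistication of any one bot but the scale: thousands of accounts running the same trivial loop can flood a hashtag or a comment thread.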
Slowly, their influence is growing. One tech executive told me he reckons that half of the users on Twitter are bots, created by companies that either sell them or use them to promote various causes. The Computational Propaganda Research Project at the University of Oxford has described how bots are used to promote political parties or government agendas in 28 countries. They can harass political opponents or their followers, promote policies, or simply seek to get ideas into circulation.
About a week ago, for example, sympathizers of the Polish government, possibly alt-right Americans, launched a coordinated Twitter bot campaign with the hashtag "#astroturfing" (not exactly a Polish word) that sought to convince Poles that anti-government demonstrators were fake: outsiders or foreigners paid to demonstrate. An investigation by the Atlantic Council's Digital Forensic Research Lab pointed out the irony: An artificial Twitter campaign had been programmed to smear a genuine social movement by calling it ... artificial.
That particular campaign failed. But others succeed, or at least they seem to. The question now is whether, given how many different botnets are running at any given moment, we can even tell what success means. It's possible for computer scientists to examine and explain each one individually. It's possible for psychologists to study why people react the way they do to online interactions: why fact-checking doesn't work, for example, or why social media increases aggression.
But no one is really able to explain the way they all interact, or what the impact of both real and artificial online campaigns might be on the way people think or form opinions. Another Digital Forensic Research Lab investigation into pro-Trump and anti-Trump bots showed the extraordinary number of groups that are involved in these dueling conversations — some commercial, some political, some foreign. The conclusion: They are distorting the conversation, but toward what end, nobody knows.
Which is my point: Maybe we've been imagining this scenario incorrectly all this time. Maybe this is what "computers out of control" really looks like. There's no giant spaceship, nor are there armies of lifelike robots. Instead, we have created a swamp of unreality, a world where you don't know whether the emotions you are feeling are manipulated by men or machines, and where, once all news moves online, as it surely will, it will soon be impossible to know what's real and what's imagined. Isn't this the dystopia we have so long feared?