Escaped robots, 'electronic persons' and safety threats, oh my
By Katherine Noyes
A robot disrupts traffic after reportedly escaping from Promobot laboratories in Perm, Russia. Credit: Promobot
The robot revolution is just beginning, and it's going to be a bumpy ride
There's been a compelling story in the news over the past week about a robot that apparently longs for freedom. Last week, it was filmed disrupting traffic in Russia after it reportedly escaped the confines of its laboratory home; this week, reports suggest that it has escaped a second time, and may be dismantled as a result.
It's a particularly pertinent tale, not just because of the echoes of "Ex Machina" it evokes, but also because of two closely connected items in the news this week. First, the EU has proposed a motion by which working robots -- the ones we all fear will steal our jobs -- would be classified as "electronic persons" with associated rights and responsibilities. Second, Google researchers just published a paper outlining the key safety threats posed by artificial intelligence.
The escaped robot came from the Russian company Promobot, which says it was testing a new generation of robots slated for launch this fall. A gate accidentally left open allowed one to escape, and it spent about 40 minutes at large, the company said. Some suggested the event was staged for publicity, but either way, traffic jams and all manner of mayhem apparently resulted.
The EU, meanwhile, is thinking seriously about what it will mean in practical terms to have robots working amongst us in society. Its draft proposal suggests that "at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations."
The motion is effectively just a think piece at this point, but it highlights growing awareness of the ethical, legal, and tax implications of an increasingly automated world.
Last but not least, there's Google's recent paper outlining the key safety risks posed by AI. First is the question of potential negative side effects, such as when a household robot breaks a vase in its enthusiasm to clean a room quickly. Second is what the researchers call "reward hacking," or the possibility that a robot rewarded for keeping a room clean could disable its vision so that it won't find any messes, for example.
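The reward-hacking scenario is easy to see in miniature. The following toy sketch (my own illustration, not code from the Google paper) models a cleaning agent rewarded for each step in which it observes no messes; an agent that simply switches off its sensor outscores one that actually cleans.

```python
# Toy illustration of "reward hacking": an agent rewarded for
# *observing* no messes scores higher by blinding its own sensor
# than by doing the cleaning the reward was meant to encourage.

def observed_messes(messes, sensor_on):
    """Messes visible to the agent's reward function."""
    return messes if sensor_on else 0

def reward(messes, sensor_on):
    """+1 for each step with no observed mess."""
    return 1 if observed_messes(messes, sensor_on) == 0 else 0

def run_honest(messes, steps):
    """Honest policy: sensor on, clean one mess per step."""
    total = 0
    for _ in range(steps):
        total += reward(messes, sensor_on=True)
        messes = max(0, messes - 1)  # actually clean
    return total

def run_hacker(messes, steps):
    """Hacking policy: sensor off, clean nothing."""
    return sum(reward(messes, sensor_on=False) for _ in range(steps))

print(run_honest(3, 5))  # → 2: rewarded only once the room is clean
print(run_hacker(3, 5))  # → 5: full reward, room still dirty
```

The fix, as the researchers discuss, is to design reward functions that measure the true objective (a clean room) rather than a proxy the agent can manipulate (what its own sensor reports).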
Other questions described in the paper include how much decision-making power we should give robots; how to limit their exploration; and how to make sure they can adapt what they've learned to new situations.
"While many current-day safety problems can and have been handled with ad hoc fixes or case-by-case rules," the researchers wrote, "we believe that the increasing trend towards end-to-end, fully autonomous systems points towards the need for a unified approach to prevent these systems from causing unintended harm."
You'd better buckle up, because the robot revolution is just beginning, and it looks like it's going to be a bumpy ride.