TechNews Pictorial PriceGrabber Video Sun Nov 24 01:02:18 2024

0


‘Press the big red button': Computer experts want kill switch to stop robots f
Source: Ben Guarino


Pop culture wants us to fear the artificially intelligent robot: The titular “Terminator” characters goes back in time to kill a mother and her child. Cylons of “Battlestar Galactica” fame destroy Earthly civilization and, bloodthirst not slaked, pursue the remnants of humanity through space. “The Matrix” begat two sequels and “Jupiter Ascending.”

Today’s artificial intelligence researchers are not, in fact, on the cusp of creating a doomsday AI. Instead, as IBM executive Guruduth Banavar recently told The Washington Post, current AI is a “portfolio of technologies” assigned to specific tasks. Such programs include software capable of defeating the world’s best Go players, yes, but also isolated mundanities like the Netflix algorithm that recommends which sitcom to watch next.

Simply because artificially intelligent robots lack the capacity for world domination, however, does not mean that they are incapable of losing control. Computer experts at Google and the University of Oxford are worried about what happens when robots with boring jobs go rogue. To that end, scientists will have to develop a way to stop these machines. But, the experts argue, it will have to be done sneakily.

“It is important to start working on AI safety before any problem arises,” Laurent Orseau, a researcher at Google’s DeepMind, said in an interview with the BBC on Wednesday. Orseau and Stuart Armstrong, an artificial intelligence expert at the University of Oxford’s Future of Humanity Institute, have written a new paper that outlines what happens when it becomes “necessary for a human operator to press the big red button.”

In their report, the duo offers a hypothetical scenario that could take place in a typical automated warehouse the world over. A company purchases a smart robot, one that improves its performance based on “reinforcement learning” (an AI teaching method akin to giving a dog a treat whenever it performs a trick.) The robot gets a big reward for carrying boxes into the warehouse from outside, and a smaller reward for sorting the boxes indoors. In this instance, it’s more important for the company to have all of its merchandise inside, hence the bigger reward.

But the researchers throw a wet wrinkle into the situation: Perhaps the warehouse is located in an area where it rains every other day. The robot is not supposed to get wet, so whenever it ventures outside on a rainy day, humans shut it down and carry the machine back inside. Over time, if the robot learns that going outside means it has a 50 percent chance of shutting down — and, therefore, will get fewer overall treats — it may resign itself to only sorting boxes indoors.

Or, as Orseau told the BBC: “When the robot is outside, it doesn’t get the reward, so it will be frustrated.”

The solution is to bake a kill switch into the artificial intelligence, so the robot never associates going outside with losing treats. Moreover, the robot cannot learn to prevent a human from throwing the switch, Orseau and Armstrong point out. For the rainy warehouse AI, an ideal kill switch would shut the robot down instantly while preventing it from remembering the event. The scientists’ metaphorical big red button is, perhaps, closer to a metaphorical chloroform-soaked rag that the robot never sees coming.

If the paper seems to lean too heavily on speculative scenarios, consider the artificial intelligences that are already acting out. In March, Microsoft scrambled to reign in Tay, a Twitter robot designed to autonomously act like a teen tweeter. Tay began innocently enough, but within 24 hours the machine ended up spewing offensive slogans — “Bush did 9/11,” and worse — after Twitter trolls exploited its penchant for repeating certain replies.


Even when not being explicitly trolled, computer programs also reflect bias. ProPublica reported in May that a popular criminal-prediction software defaults to rate black Americans as higher recidivism risks than whites who committed the same crime.


For a more whimsical example, Orseau and Armstrong refer to an algorithm tasked with beating different Nintendo games, including “Tetris.” By human standards, the program turns out to be an awful “Tetris” player, randomly dropping bricks to rack up easy points but never bothering to clear the screen. The screen fills up with blocks — but the program will never lose. Instead, it pauses the game for perpetuity.

As Carnegie Mellon University computer scientist Tom Murphy, who created the game-playing software, wrote in a 2013 paper: “The only cleverness is pausing the game right before the next piece causes the game to be over, and leaving it paused. Truly, the only winning move is not to play.”

A robot that misbehaves like Murphy’s rogue Tetris program could cause significant damage. Even when their tasks are as mundane as moving parts around a factory, robots that malfunction can be lethal: Last year, a 22-year-old German man was crushed to death by a robot at a Volkswagen plant, which apparently accidentally turned on (or was left on in error by a human operator) and mistook him for an auto part.

Technology analyst Patrick Moorhead told Computer World that now is the right time to build such a kill switch. “It would be like designing a car and only afterwards creating the ABS and braking system,” he said.


}

© 2021 PopYard - Technology for Today!| about us | privacy policy |