Robots: Lifesavers or Terminators?
Source: Clara Lindh


Machines are rapidly learning to think on their own, but will the robot revolution lead to a modern utopia -- or an apocalypse?

Government officials say autonomous vehicles will make transportation safer, more accessible, more efficient and cleaner. Last week, the Department of Transportation released guidelines for the testing and deployment of automated vehicles, which detail how the vehicles should perform and include a model for state policies.

Self-driving vehicles are just the tip of the autonomous revolution.

Already in 2016, autonomous robots perform surgery; algorithms invest your money; robocops patrol shopping malls; and if you end up in the hospital, a computer system can determine how quickly you get treated.

Many decisions made by autonomous machines have moral implications -- yet little has been settled about what ethics machines should follow, or who decides what those ethical assumptions should be.

Machine ethical dilemmas

In Florida in May, Joshua Brown died when an autopilot system did not recognize a tractor-trailer turning in front of his Tesla Model S and his car plowed into it -- the first fatality involving an autonomous vehicle.

With an estimated 10 million self-driving cars set to roam North American streets by 2020, how autonomous cars make decisions in life-and-death situations is becoming an important question.

In short, how should the vehicle decide which lives to sacrifice?

Imagine the following scenario. You're in a self-driving car on autopilot. If the car turns right, it kills a young child. If it turns left, it kills several men. If it does nothing, your own life is sacrificed. Would you want the car to make the judgment for you?
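To make the dilemma concrete, here is a minimal sketch of the kind of outcome-weighing logic such a car would need. The `Outcome` structure, the weights and the `choose_action` function are hypothetical illustrations, not any manufacturer's actual system.

```python
# Hypothetical sketch: an autonomous vehicle ranking unavoidable-harm outcomes.
# The weights themselves encode an ethical judgment -- exactly what's at issue.
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str             # e.g. "turn_right", "turn_left", "do_nothing"
    passenger_deaths: int   # expected deaths inside the vehicle
    pedestrian_deaths: int  # expected deaths outside the vehicle

def harm(o: Outcome, passenger_weight: float = 1.0) -> float:
    """Total weighted harm. Setting passenger_weight != 1.0 is itself a
    moral choice: it values the owner's life differently from others'."""
    return passenger_weight * o.passenger_deaths + o.pedestrian_deaths

def choose_action(outcomes: list[Outcome]) -> str:
    """Pick the action that minimizes weighted harm."""
    return min(outcomes, key=harm).action

scenario = [
    Outcome("turn_right", passenger_deaths=0, pedestrian_deaths=1),  # the child
    Outcome("turn_left", passenger_deaths=0, pedestrian_deaths=3),   # the men
    Outcome("do_nothing", passenger_deaths=1, pedestrian_deaths=0),  # the owner
]
# -> 'turn_right' here: the child and the owner tie at harm 1.0, and the
# tie is broken by list order -- itself an unstated moral choice.
print(choose_action(scenario))
```

Whatever weights are chosen, they hard-code an answer to the question above before the car ever leaves the factory.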

Chris Urmson, head of Google's self-driving car project, pointed out that even humans don't deliberately apply ethical theories in critical situations. "In real time, humans don't do that," he said.

Out of control 'demons'?

While most experts agree machine ethics need more oversight, they're split over who should be in charge.

Wendell Wallach and Colin Allen, authors of "Moral Machines: Teaching Robots Right from Wrong," think implementing Artificial Moral Agents (AMAs) requires a dialogue between different professionals.

They say philosophers, robotic and software engineers, legal theorists, and developmental psychologists must work together.

"I would like to see a review board for robotic applications," Wallach tells CNN. He is working on a project pressing for a governance coordinating committee that would oversee development.

SpaceX and Tesla Motors CEO Elon Musk is among others calling for regulatory oversight of AI, calling it "an existential threat" and a "demon" humans might not be able to control.

Human agency at risk

While some experts find the lack of governance alarming, others fear autonomous machines will eventually violate human agency -- that machines will take away humans' freedom to make their own moral decisions.

"Many issues should be ruled upon by congress and state legislatures and courts -- such as speed levels, when to yield and response to a fire track," Amitai Etzioni, former White House adviser and academic, tells CNN.

Etzioni's main point, however, is that the remaining ethical issues should be controlled by the owner of the machine. But how do we create robots that follow the moral directions of their owners? According to Etzioni, the answer lies in "ethics bots."

"An ethics bot looks at your behavior, divines your preferences and then imposes them on your machines," Etzioni says.

He has created a "very simple" concept ethics bot together with his son Oren, CEO of the Allen Institute for Artificial Intelligence, a research institute funded by Microsoft co-founder Paul Allen.

The device determines the moral preferences of a person by analyzing his or her behavior and then uses those findings to guide the person's artificial intelligence units.
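Etzioni doesn't publish the bot's internals, so the following is only a toy sketch of that description -- tally an owner's observed choices, infer a dominant preference, then apply it to new decisions. The class and method names (`EthicsBot`, `observe`, `guide`) and the investing example are invented for illustration.

```python
# Toy sketch of an "ethics bot": infer an owner's moral preferences from
# observed behavior, then impose them on the owner's AI agents.
from collections import Counter
from typing import Optional

class EthicsBot:
    def __init__(self):
        self.observations = Counter()

    def observe(self, context: str, choice: str) -> None:
        """Record a choice the owner made in some context."""
        self.observations[(context, choice)] += 1

    def preference(self, context: str) -> Optional[str]:
        """Divine the owner's dominant preference in a context."""
        choices = {c: n for (ctx, c), n in self.observations.items() if ctx == context}
        return max(choices, key=choices.get) if choices else None

    def guide(self, context: str, options: list[str]) -> str:
        """Steer a machine's decision toward the inferred preference,
        falling back to the first option when nothing is known."""
        pref = self.preference(context)
        return pref if pref in options else options[0]

bot = EthicsBot()
bot.observe("investing", "avoid_fossil_fuels")
bot.observe("investing", "avoid_fossil_fuels")
bot.observe("investing", "maximize_return")
print(bot.guide("investing", ["maximize_return", "avoid_fossil_fuels"]))
# -> 'avoid_fossil_fuels': the bot imposes the owner's majority behavior
```

A real system would need far richer behavioral signals, but the division of labor is the point: the owner supplies the ethics, the bot merely transfers them to the machine.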

Experts such as Musk, physicist Stephen Hawking, and Microsoft co-founder Bill Gates have warned that AI could be more dangerous than nuclear weapons.

In July this year, Musk tweeted a link to the "Skynet" Wikipedia page -- the all-knowing computer network from the robot dystopia "Terminator" -- suggesting AI might bring a robot apocalypse. The tweet was a response to a $2 million Defense Advanced Research Projects Agency (DARPA) challenge encouraging hackers to build an autonomous hacking system for use in warfare.

"Lethal autonomous weapons systems can locate, select, and attack human targets without human intervention," Stuart Russel, who serves on the Scientific Advisory Board for the Future of Life Institute together with the likes of Hawking and Musk, tells CNN.

He thinks AI weapons will be more effective and cheaper to use than biological, chemical and nuclear weapons, making lethal weapons easily accessible to the "wrong people."

He says AI might develop into a "new class of scalable weapons of mass destruction, that small groups could use to attack large populations."

Robots: Heroes or terminators?

Whether autonomous machines' impact will be positive or negative depends both on who regulates the technology and what it's used for.

Some experts say robots might become more ethical than humans.

In the 1990s, Americans such as Bruce McLaren and the husband-and-wife ethicist team Susan and Michael Anderson developed ethical reasoning programs able to "morally outperform" the average person.

"After all, the bar isn't very high. Most human beings are hardly ideal models for how to behave ethically," Anderson, who is working with her husband to incorporate ethical reasoning systems in autonomous machines, tells CNN.

McLaren, on the other hand, is worried about weaponized drones being given autonomy, and about machines replacing (rather than advising) human decision makers on ethical questions.

"Autonomous cars will lower fatalities while lethal autonomous weapons will lower barriers to warfare and might unintentionally start World War III," Wallach says.

Allen says that, as with all technology, "one can expect positives and negatives."

"The chief worry is that the people will adjust their behavior to machines, rather than the other way around."

