Terminator conundrum: US ponders robots that could kill on their own
By Matthew Rosenberg and John Markoff


Jacob Regenstein, an engineer, holds a fake rifle while testing an airborne autonomous drone. Photo: Hilary Swift

The small drone, with its six whirring rotors, swept past the replica of a Middle Eastern village and closed in on a mosque-like structure, its camera scanning for targets.

No humans were remotely piloting the drone, which was nothing more than a machine that could be bought on Amazon. But armed with advanced artificial intelligence software, it had been transformed into a robot that could find and identify the half-dozen men carrying replicas of AK-47s around the village and pretending to be insurgents.

As the drone descended slightly, a purple rectangle flickered on a video feed that was being relayed to engineers monitoring the test. The drone had locked onto a man obscured in the shadows, a display of hunting prowess that offered an eerie preview of how the Pentagon plans to transform warfare.

Almost unnoticed outside defense circles, the Pentagon has put artificial intelligence at the centre of its strategy to maintain the United States' position as the world's dominant military power. It is spending billions of dollars to develop what it calls autonomous and semiautonomous weapons and to build an arsenal stocked with the kind of weaponry that until now has existed only in Hollywood movies and science fiction, raising alarm among scientists and activists concerned by the implications of a robot arms race.


The Defense Department is designing robotic fighter jets that would fly into combat alongside manned aircraft. It has tested missiles that can decide what to attack, and it has built ships that can hunt for enemy submarines, stalking those it finds over thousands of miles, without any help from humans.

"If Stanley Kubrick directed Dr. Strangelove again, it would be about the issue of autonomous weapons," said Michael Schrage, a research fellow at the Massachusetts Institute of Technology Sloan School of Management.

Defense officials say the weapons are needed for the United States to maintain its military edge over China, Russia and other rivals, who are also pouring money into similar research (as are allies, such as Britain and Israel). The Pentagon's latest budget outlined $US18 billion to be spent over three years on technologies that included those needed for autonomous weapons.

"China and Russia are developing battle networks that are as good as our own. They can see as far as ours can see; they can throw guided munitions as far as we can," said Robert O. Work, the deputy defense secretary, who has been a driving force for the development of autonomous weapons. "What we want to do is just make sure that we would be able to win as quickly as we have been able to do in the past."

Just as the Industrial Revolution spurred the creation of powerful and destructive machines like airplanes and tanks that diminished the role of individual soldiers, artificial intelligence technology is enabling the Pentagon to reorder the places of man and machine on the battlefield, much as it is transforming ordinary life with computers that can see, hear and speak, and with cars that can drive themselves.

The new weapons would offer speed and precision unmatched by any human while reducing the number — and cost — of soldiers and pilots exposed to potential death and dismemberment in battle. The challenge for the Pentagon is to ensure that the weapons are reliable partners for humans and not potential threats to them.
Captain Mike Malandra, centre, the ground branch head for the science and technology division at the Marine Corps Warfighting Laboratory. Photo: Hilary Swift

At the core of the strategic shift envisioned by the Pentagon is a concept that officials call centaur warfighting. Named for the half-man and half-horse in Greek mythology, the strategy emphasises human control and autonomous weapons as ways to augment and magnify the creativity and problem-solving skills of soldiers, pilots and sailors, not replace them.

The weapons, in the Pentagon's vision, would be less like the Terminator and more like the comic-book superhero Iron Man, Work said in an interview.

"There's so much fear out there about killer robots and Skynet," the murderous artificial intelligence network of the Terminator movies, Work said. "That's not the way we envision it at all."

When it comes to decisions over life and death, "there will always be a man in the loop," he said.
Joe Kehoe, with an engineering team, draws on a satellite photo while giving instructions before an autonomous drone tracking test. Photo: Hilary Swift

Beyond the Pentagon, though, there is deep skepticism that such limits will remain in place once the technologies to create thinking weapons are perfected. Hundreds of scientists and experts warned in an open letter last year that developing even the dumbest of intelligent weapons risked setting off a global arms race. The result, the letter warned, would be fully independent robots that can kill, as cheap and readily available to rogue states and violent extremists as to great powers.

"Autonomous weapons will become the Kalashnikovs of tomorrow," the letter said.

The debate within the military is no longer about whether to build autonomous weapons but how much independence to give them. General Paul J. Selva of the Air Force, the vice chairman of the Joint Chiefs of Staff, said recently that the United States was about a decade away from having the technology to build a fully independent robot that could decide on its own whom and when to kill, though it had no intention of building one.

Other countries were not far behind, and it was very likely that someone would eventually try to unleash "something like a Terminator," Selva said, invoking what seems to be a common reference in any discussion on autonomous weapons.

Yet US officials are only just beginning to contend with the implications of weapons that could someday operate independently, beyond the control of their developers. Inside the Pentagon, the quandary is known as the Terminator conundrum, and there is no consensus about whether the United States should seek international treaties to try to ban the creation of those weapons, or build its own to match those its enemies might create.

For now, though, the state of the art is decidedly less frightening. Exhibit A: the small, unarmed drone.

It could not turn itself on and just fly off. It had to be told by humans where to go and what to look for. But once aloft, it decided on its own how to execute its orders.

The project is run by the Defense Advanced Research Projects Agency, known as DARPA, which is developing the software needed for machines that could work with small units of soldiers or Marines as scouts or in other roles.

Unlike the drones currently used by the military, all of which require someone at a remote control, "this one doesn't," said Major Christopher Orlowski of the Army, a program manager at DARPA. "It works with you. It's like having another head in the fight."

It could also easily be armed. The tricky part is developing machines whose behaviour is predictable enough that they can be safely deployed, yet flexible enough that they can handle fluid situations. Once that is mastered, telling a machine whom or what to shoot is easy; weapons programmed to hit only certain kinds of targets already exist.

Yet the behavioral technology, if successfully developed, is unlikely to remain solely in American hands. Technologies developed at DARPA do not typically remain secret, and many are now ubiquitous, powering everything from self-driving cars to the internet.

Experts outside the Pentagon are far less convinced that the United States will be able to maintain its dominance by using artificial intelligence. The defense industry no longer drives research the way it did during the Cold War, and the Pentagon does not have a monopoly on the cutting-edge machine-learning technologies coming from startups in Silicon Valley, and in Europe and Asia.

Unlike the technologies and material needed for nuclear weapons or guided missiles, artificial intelligence as powerful as what the Pentagon seeks to harness is already deeply woven into everyday life. Military technology is often years behind what can be picked up at department stores.

"Let's be honest, American defense contractors can be really cutting edge on some things and really behind the curve on others," said Major Brian Healy, 38, an F-35 pilot. The F-35, America's newest and most technologically advanced fighter jet, is equipped with a voice command system that is good for changing channels on the radio, and not much else.

"It would be great to get Apple or Google on board with some of the software development," he added.

Beyond the practical concerns, the pairing of increasingly capable automation with weapons has prompted an intensifying debate among legal scholars and ethicists. The questions are numerous, and the answers contentious: Can a machine be trusted with lethal force? Who is at fault if a robot attacks a hospital or a school? Is being killed by a machine a greater violation of human dignity than if the fatal blow is delivered by a human?

A Pentagon directive says that autonomous weapons must employ "appropriate levels of human judgment." Scientists and human rights experts say the standard is far too broad and have urged that such weapons be subject to "meaningful human control."

But would any standard hold up if the United States was faced with an adversary of near or equal might that was using fully autonomous weapons? Peter Singer, a specialist on the future of war at New America, a think tank in Washington, suggested there was an instructive parallel in the history of submarine warfare.

Like autonomous weapons, submarines jumped from the pages of science fiction to reality. During World War I, Germany's use of submarines to sink civilian ships without first ensuring the safety of the crew and passengers was seen as barbaric. The practice quickly became known as unrestricted submarine warfare, and it helped draw the United States into the war.

After the war, the United States helped negotiate an international treaty that sought to ban unrestricted submarine warfare.

Then came the Japanese attack on Pearl Harbor on December 7, 1941. That day, it took just six hours for the US military to disregard decades of legal and ethical norms and order unrestricted submarine warfare against Japan. American submarines went on to devastate Japan's civilian merchant fleet during World War II, in a campaign that was later acknowledged to be tantamount to a war crime.

"The point is, what happens once submarines are no longer a new technology, and we're losing?" Singer said. He added: "Think about robots, things we say we wouldn't do now, in a different kind of war."

The New York Times


