White House worries about bad A.I. coding
Source: Patrick Thibodeau
The White House is doing a lot more thinking about the arrival of automated decision-making -- super-intelligent or otherwise.   
No one in government is yet screaming "Skynet," but in two actions this week the administration sketched out its concerns about our artificial intelligence future.
The big risks of A.I. are well known (a robot takeover), but the more immediate worries are the subtle, or not-so-subtle, decisions made by badly coded and designed algorithms.
President Barack Obama's administration released a report this week that examines the problems associated with poorly designed systems that are increasingly being used in automated decision-making.
Algorithmic systems can affect employment, education, access to credit -- anything that relies on computer-assisted decisions.
Indeed, the government argues, these algorithms may hold so much power over our lives that it may be necessary to develop ethical frameworks for designing them. They may also need to be transparent enough to allow testing and auditing.
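The report doesn't spell out what such testing and auditing would look like in practice. As a purely illustrative sketch, one common check is the "four-fifths" disparate-impact ratio used in U.S. employment law; the Python below is hypothetical and assumes an audit log pairing each applicant's group with the system's yes-or-no decision:

```python
# Hypothetical disparate-impact audit -- illustrative only, not from the report.
# Assumes an audit log of (group, approved) pairs for each applicant.
from collections import defaultdict

def selection_rates(decisions):
    """Approval rate per group, from a list of (group, approved) pairs."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

def disparate_impact_ratio(decisions):
    """Lowest group approval rate divided by the highest. Under the
    'four-fifths rule,' a ratio below 0.8 flags possible adverse impact."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
print(disparate_impact_ratio(log))  # 0.5 -- below 0.8, so worth a closer look
```

A check like this only works if the system's decisions are observable in the first place, which is precisely the report's point about transparency.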
A second effort looks at our algorithmic future through a series of four workshops to be held across the U.S. to examine A.I.'s impact on society.
"A.I. systems can also behave in surprising ways," Ed Felten, the chief technologist at the Federal Trade Commission, said in a White House post. "And we're increasingly relying on A.I. to advise decisions and operate physical and virtual machinery -- adding to the challenge of predicting and controlling how complex technologies will behave."
The U.S. will produce an A.I. report after it holds workshops beginning May 24 in Seattle. That will be followed by meetings in Washington, Pittsburgh and New York City in July.
A nearer-term concern is algorithmic systems that inadvertently discriminate because of bad design.
"Hard-to-detect flaws could proliferate," warned the White House in a report this week on algorithmic systems released by U.S. CTO Megan Smith and other officials.
The point of the report isn't to offer remedies, Smith said in a blog post, "but rather to identify these issues and prompt conversation, research -- and action -- among technologists, academics, policy makers, and citizens alike."
These algorithmic systems can go wrong in a number of ways: they can select data poorly, rely on incomplete or incorrect data, or bake in a selection bias.
A system may also rely on a poorly designed matching process or inadvertently restrict the flow of information. And programmers might reason that if two factors frequently occur together -- say, income level and a particular ethnicity -- "there is necessarily a causal relationship between the two," setting the stage for discrimination, according to the report.
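To make that correlation-versus-causation trap concrete, here is an invented, minimal example (nothing like it appears in the report): a decision rule that never sees the protected attribute still skews its outcomes, because a proxy variable -- here a ZIP code -- correlates with group membership.

```python
# Invented data illustrating proxy discrimination: the decision rule never
# sees 'group', but ZIP code correlates with it, so outcomes skew anyway.
people = [
    # (group, zip_code, creditworthy)
    ("A", "10001", True), ("A", "10001", True), ("A", "10001", False),
    ("B", "20002", True), ("B", "20002", True), ("B", "20002", False),
]

# A naive rule learned from historical data: approve anyone from ZIP codes
# with historically high incomes. Income correlates with group in this data,
# but the rule treats the correlation as if it were causal.
HIGH_INCOME_ZIPS = {"10001"}

def approve(zip_code):
    return zip_code in HIGH_INCOME_ZIPS

for group in ("A", "B"):
    members = [p for p in people if p[0] == group]
    rate = sum(approve(p[1]) for p in members) / len(members)
    print(group, rate)  # A: 1.0, B: 0.0 -- despite identical creditworthiness
```

The rule is facially neutral, but the historical correlation it encodes does the discriminating.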