Turing Test Alternative Proposed By Georgia Tech's Mark Riedl
Source: Chuck Bednar


Image Caption: Georgia Tech professor Mark Riedl has created the Lovelace 2.0 Test of Artificial Creativity and Intelligence. Credit: Thinkstock.com


For decades, the Turing Test has been the standard method used to measure whether or not a machine or computer program exhibits human-level intelligence, but now a Georgia Institute of Technology researcher has devised an alternative that relies not on a machine's ability to converse, but on its ability to create a convincing story, poem or painting.

The new test is known as Lovelace 2.0 and it was developed by Mark Riedl, an associate professor in the Georgia Tech School of Interactive Computing. In his work, Riedl sought to improve upon the original Lovelace Test, which was first proposed back in 2001, by creating clear and measurable parameters by which to judge the artistic work created by the artificial intelligence (AI) system.

“For the test, the artificial agent passes if it develops a creative artifact from a subset of artistic genres deemed to require human-level intelligence and the artifact meets certain creative constraints given by a human evaluator,” he explained to BBC News technology reporter Jane Wakefield. “Creativity is not unique to human intelligence, but it is one of the hallmarks of human intelligence.”

The original Lovelace Test required that the AI agent develop a creative work in such a way that the person or team who designed it could not explain how the work was produced; in other words, the creation had to be deemed valuable, novel and surprising. In Riedl's updated version of the test, however, the evaluator is asked to work within defined constraints, without making value judgments about the artistic object.
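The pass condition Riedl describes can be summarized schematically: the agent is given a genre and a set of evaluator-chosen constraints, and it passes if the artifact it produces satisfies all of them in the evaluator's judgment. The Python sketch below is only an illustration of that logic under stated assumptions; the toy agent and evaluator are hypothetical stand-ins, not part of Riedl's formal specification.

```python
# Schematic illustration of the Lovelace 2.0 pass condition described above.
# The "agent" and "evaluator" here are trivial stand-ins so the pass/fail
# logic is concrete; they are not part of the actual test.

def lovelace_2_trial(create_artifact, evaluator_satisfied, genre, constraints):
    """One trial: the agent must produce an artifact in the given genre that
    the human evaluator judges to satisfy every stated constraint."""
    artifact = create_artifact(genre, constraints)
    return all(evaluator_satisfied(artifact, c) for c in constraints)

def fake_agent(genre, constraints):
    # Stand-in "creative" agent: simply restates the constraints as text.
    return f"A {genre} about " + ", ".join(constraints)

def fake_evaluator(artifact, constraint):
    # Stand-in human judgment: does the constraint appear in the artifact?
    return constraint in artifact

if __name__ == "__main__":
    print(lovelace_2_trial(fake_agent, fake_evaluator, "short story",
                           ["a cat", "a boat", "a storm"]))  # True
```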

The Georgia Tech researcher proposes Lovelace 2.0 as an alternative to the Turing Test, which was originally proposed by computing pioneer Alan Turing back in 1950. Originally known as the Imitation Game, the Turing Test has long been used to gauge the intelligence of computational systems, even though it often relies on deception and Turing himself never envisioned it being used as a diagnostic tool.

“It’s important to note that Turing never meant for his test to be the official benchmark as to whether a machine or computer program can actually think like a human,” Riedl said in a recent statement. “And yet it has, and it has proven to be a weak measure because it relies on deception. This proposal suggests that a better measure would be a test that asks an artificial agent to create an artifact requiring a wide range of human-level intelligent capabilities.”

According to Wakefield, a computer is considered to have passed the Turing Test if it is mistaken for a human more than 30 percent of the time during a five-minute series of keyboard conversations. In June, a program designed to simulate a 13-year-old Ukrainian boy allegedly passed the test, though some experts dispute those claims. By contrast, Riedl told the BBC that "no existing story generation system can pass the Lovelace 2.0 test."
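The 30-percent criterion Wakefield cites reduces to a simple tally over judges' verdicts. The snippet below is a hypothetical illustration of that threshold, not an official scoring procedure.

```python
# Hypothetical tally of the commonly cited 30-percent criterion: a program
# "passes" if judges mistake it for a human in more than 30 percent of
# five-minute keyboard conversations.

def passes_turing_threshold(judge_verdicts, threshold=0.30):
    """judge_verdicts: booleans, True where a judge believed the program
    was human after a five-minute keyboard conversation."""
    if not judge_verdicts:
        return False
    fooled_fraction = sum(judge_verdicts) / len(judge_verdicts)
    return fooled_fraction > threshold

# Example: fooling 10 of 30 judges (about 33 percent) clears the bar.
print(passes_turing_threshold([True] * 10 + [False] * 20))  # True
```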

“I think this new test shows that we all now recognize that humans are more than just very advanced machines, and that creativity is one of those features that separates us from computers, for now,” added Professor Alan Woodward, a computer expert from the University of Surrey, who believes the new test could help make that key distinction.

