The average high schooler is still slightly smarter than AI
Source: Lulu Chang
Despite all the dire warnings that accompany discussions of artificial intelligence and its potential for destroying humanity, we really don’t have anything to worry about … yet. Unless, of course, the mental capacity of an average high schooler frightens you. In the latest collaboration between the Allen Institute for Artificial Intelligence (AI2) and the University of Washington, an AI system scored a 500 out of 800 on the math section of the SAT, slightly below the average of 513 achieved by high school seniors. So while it’s impressive that AI can make it through the test at all, when it comes to outsmarting us, it’s probably not quite there yet.
A score of 500, which corresponds to around 49 percent accuracy, may not sound like much, especially considering computers are supposed to be wildly good at subjects like math: after all, it’s just computation, right? Wrong. What puts the AI system (named GeoS) head and shoulders above your standard computer is its ability to read the questions straight off the page, taking the test the same way a human would. Whereas most computers are fed information in their own language, GeoS had to not only parse English but also interpret the diagrams, charts, and other figures one would find on the math section of the SAT. That 500 isn’t looking so shabby anymore, is it?
“Our biggest challenge was converting the question to a computer-understandable language,” Ali Farhadi, an assistant professor of computer science and engineering at the University of Washington and research manager at AI2, said in a statement. “One needs to go beyond standard pattern-matching approaches for problems like solving geometry questions that require in-depth understanding of text, diagram and reasoning.” But thanks to their hard work, the GeoS team has successfully created “the first automated system to solve unaltered SAT geometry questions by combining text understanding and diagram interpretation.”
In the paper detailing these results, the researchers explain, “Our method consists of two steps: interpreting a geometry question by deriving a logical expression that represents the meaning of the text and the diagram, and solving the geometry question by checking the satisfiability of the derived logical expression.” While humans can complete these tasks (with varying levels of success) naturally, computers must be taught how to, well, think like a human.
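To make that two-step pipeline concrete, here is a minimal sketch in Python. It is not the GeoS code: the toy question, the variable names, and the answer choices are all hypothetical, and the hand-written constraint list stands in for the logical expression the system actually derives from the text and diagram. The real system searches for a satisfying assignment; in this toy question every angle but one is pinned down, so checking each answer choice suffices.

```python
# Toy question: "In triangle ABC, angle A is 40 degrees and angle B is
# 60 degrees. What is the measure of angle C?"
#
# Step 1 (stand-in for GeoS's text/diagram interpretation): the question
# is reduced to a logical expression, encoded here as a list of
# constraints over an assignment of the three angle variables.
constraints = [
    lambda v: v["A"] + v["B"] + v["C"] == 180,  # angles of a triangle sum to 180
    lambda v: v["A"] == 40,                     # stated in the question text
    lambda v: v["B"] == 60,                     # stated in the question text
]

choices = {"(A)": 60, "(B)": 70, "(C)": 80, "(D)": 100}

def satisfiable(c_value):
    """Step 2: is the derived expression satisfiable with angle C = c_value?"""
    assignment = {"A": 40, "B": 60, "C": c_value}
    return all(holds(assignment) for holds in constraints)

for label, value in choices.items():
    if satisfiable(value):
        print(f"Answer {label}: angle C = {value} degrees")  # selects (C), 80
```

The point of the sketch is the division of labor the paper describes: once the messy natural-language and diagram input has been translated into formal constraints, answering the question reduces to a mechanical satisfiability check.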
“Much of what we understand from text and graphics is not explicitly stated, and requires far more knowledge than we appreciate,” AI2 CEO Oren Etzioni said in a press release. “Creating a system to be able to successfully take these tests is challenging, and we are proud to achieve these unprecedented results.” So sure, AI isn’t “intelligent” by standard definitions ― not quite yet. But boy, is it getting close.