What people can learn from algorithms — and algorithms can learn from people

By Kevin Hartnett
Most of the time, it probably doesn’t feel like your brain can compete with a computer. You forget where you put your keys, have a hard time making big life decisions, and routinely fall into any number of cognitive traps that behavioral psychologists have identified over the last two decades. Yet according to an emerging way of thinking about the brain, it might be precisely these limitations that make the computer a useful metaphor for human cognition, and that could also help computer scientists design machines able to arrive at good answers even when the information they have to work with is incomplete.
This is the spirit in which a new book arrives, “Algorithms to Live By,” by Tom Griffiths, director of the Computational Cognitive Science Lab at the University of California, Berkeley, and Brian Christian, a science journalist. The authors explain that when you think about the brain as a computer that engages in algorithmic thinking, some decisions become easier and it becomes clearer why hard cases trip us up.
“In the popular press there’s a narrative about human decision-making, which is that humans are buggy and irrational, error prone, full of janky hardware. We think the story is more subtle and interesting,” says Christian.
In one sense, “Algorithms to Live By” is a self-help guide, in the style of books that explain how to apply lessons from fields like neuroscience or economics to your everyday life. An algorithm is simply a procedure a computer uses to solve a certain kind of problem, like putting a list of numbers in order or finding the shortest route between two points. When you think about them the right way, these kinds of tasks resemble the decisions people have to make all the time.
“It’s about recognizing that problems we have to solve have an underlying computational structure and there might be solutions to them,” says Griffiths.
One example is what’s known as an “optimal stopping problem.” In real life, this describes situations where you’re trying to decide how long to conduct a search before committing to a choice — be it an apartment, a spouse, even a place to eat. On the one hand you want to be thorough, but on the other, you don’t want to let a good thing slip by or search forever.
Computers work through these kinds of problems all the time and there’s a provably optimal approach: spend the first 37 percent of the resources you’ve allocated to searching without making a commitment. Then, jump on the first thing you find that’s better than the best thing you’ve seen already. In situations where you can go back and snag options you’ve already passed on, spend a little more than 37 percent of your time browsing; in situations where there’s a chance your “offer” will be rejected, be more conservative and spend about 25 percent of your time browsing.
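To make the recipe concrete, here is a minimal Python sketch of the basic look-then-leap rule described above (the 37 percent version, with no going back and no rejections). The candidate values and trial count are made up for illustration:

```python
import random

def look_then_leap(candidates, look_fraction=0.37):
    """Observe the first `look_fraction` of candidates without committing,
    then take the first one that beats the best seen during that phase."""
    n = len(candidates)
    look = int(n * look_fraction)
    best_seen = max(candidates[:look]) if look > 0 else float("-inf")
    for value in candidates[look:]:
        if value > best_seen:
            return value        # leap: commit to the first improvement
    return candidates[-1]       # no improvement appeared; take the last option

# Estimate how often the rule lands on the single best candidate.
trials, n = 100_000, 100
hits = sum(
    look_then_leap(random.sample(range(n), n)) == n - 1
    for _ in range(trials)
)
print(f"Picked the best option in {hits / trials:.1%} of trials")  # roughly 37%
```

Run over many random orderings, the rule finds the single best option about 37 percent of the time, which is the best any strategy can do when earlier choices can’t be revisited.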
A key aspect of this algorithmic approach is that it doesn’t guarantee you won’t pass on an apartment you should have taken or marry someone you should have passed on. What it does do is give you the best odds of making the right decision.
“We have this [view] that rational decision-making is exhaustive, consider all your options, but when computers are facing hard problems, we see all those things go out the window. You don’t consider all your options, you use chance as an element of finding [a] solution,” says Griffiths.
The idea that chance has a role to play in computing runs against conventional wisdom, but it is increasingly important to research in computer science.
“We think of computers and algorithms as absolute and deterministic things that go chug chug chug and get a single output,” says Noah Goodman, who studies computation and cognition at Stanford University. “There’s been a paradigm shift from thinking about computation as something with no guessing, just step by step computing, to [an understanding] that if you have incomplete knowledge and uncertainty, all you can do is compute a best guess.”
Uncertainty factors into human decision-making all the time, and it is increasingly clear that it plays a big part in many of the cutting-edge tasks we’d like computers to perform. In many situations, we want computers to chew on noisy data and make a best guess. A simple example is speech-to-text transcription. When we hear other people talk, we’re continually estimating what we think we’ve heard. Imagine you’re in a noisy room. Did the person next to you just say “I put the cat on my head,” or did she say “I put the hat on my head”? The actual data might be unclear, but context helps us proceed with confidence.
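That kind of best guess can be written down with Bayes’ rule. Here is a minimal sketch with made-up numbers for the cat-versus-hat example above: the audio itself is ambiguous, but context makes one reading far more plausible, so the combined estimate strongly favors “hat.”

```python
# Hypothetical numbers for the "cat" vs. "hat" example; both tables are illustrative.
likelihood = {"cat": 0.5, "hat": 0.5}   # how well each word fits the noisy audio
prior = {"cat": 0.02, "hat": 0.98}      # how plausible each word is from context

# Bayes' rule: P(word | audio) is proportional to P(audio | word) * P(word).
unnormalized = {w: likelihood[w] * prior[w] for w in prior}
total = sum(unnormalized.values())
posterior = {w: round(p / total, 3) for w, p in unnormalized.items()}

print(posterior)  # {'cat': 0.02, 'hat': 0.98} -- the best guess is "hat"
```

The same audio in a different context, say a conversation about pets, would tip the estimate the other way.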
“That’s a key idea behind modern artificial intelligence,” says Goodman. “We’re going to assume the world is noisy, learn what we can from noisy uncertain data, and then we’re going to use that to make the best prediction or decision we can. We’ve been slowly learning how to make computers do that.”
Over the centuries, Goodman says, people have analogized the human mind to the leading technology of their eras: pneumatic statues for Descartes, sophisticated clockworks in the 19th century, computers more recently. Yet if there’s a difference this time, it’s that now the metaphor runs both ways. Sure, algorithmic thinking might guide us to a better way to organize our closets (in computer science terms, it’s an example of a caching problem). But in a deeper way, the science of human cognition is teaching computers how to proceed even when they don’t know all the answers. After all, that’s something people have been doing for a very long time.
Kevin Hartnett is a writer in South Carolina. He can be reached at kshartnett18@gmail.com.