We don't understand AI because we don't understand intelligence
Source: Jessica Conditt
[Image: Baby on the floor looking at a toy robot]
Artificial intelligence prophets including Elon Musk, Stephen Hawking and Raymond Kurzweil predict that by the year 2030 machines will develop consciousness through the application of human intelligence. This will lead to a variety of benign, neutral and terrifying outcomes. For example, Musk, Hawking and dozens of other researchers signed an open letter in January 2015 that claimed AI-driven machines could lead to "the eradication of disease and poverty" in the near future. This is, clearly, a benign outcome.
And then there's the neutral result: Kurzweil, who popularized the idea of the technological singularity, believes that by the 2030s people will be able to upload their minds, melding man with machine. On the terrifying side of things, Musk envisions a future where humans will essentially be house cats to our software-based overlords, while Kurzweil takes it a step further, suggesting that humans will be eradicated in favor of intelligent machines.
These claims are not ludicrous on their own. We've seen rapid advancements in technology over the past decades; we know computers are growing more powerful and more accessible by the month. Already in 2011, IBM's Watson supercomputer won a game of Jeopardy! against two former champions, using a mixture of AI and all-important natural-language processing. The future is here, and it may soon outstrip us.
Kurzweil's timeline of the technological singularity is based on the law of accelerating returns, wherein the more powerful computers become, the faster they advance. It's a timeline of extreme exponential growth, and right now we're smacking into the steep curve that leads to conscious machines and a world where robots are the dominant creatures on earth.
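To see why that curve feels so abrupt, consider a back-of-the-envelope sketch. The numbers below are illustrative assumptions of mine, not Kurzweil's actual model: they simply suppose that capability doubles on a fixed two-year schedule, a rough stand-in for the Moore's law cadence.

```python
# A toy illustration of compounding growth, not Kurzweil's own model:
# if capability doubles every fixed period, most of the total growth
# arrives in the last few doublings -- the "steep curve" of this essay.

def capability(years: float, doubling_period_years: float = 2.0) -> float:
    """Relative capability after `years`, starting from 1.0."""
    return 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    for year in (0, 10, 20, 30):
        print(f"year {year:>2}: {capability(year):>12,.0f}x")
    # year  0:            1x
    # year 10:           32x
    # year 20:        1,024x
    # year 30:       32,768x
```

Under those assumptions, the first decade looks almost flat and the last decade dwarfs everything that came before it, which is exactly the shape the singularity argument leans on.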
That's what Kurzweil believes. That's what Musk, Hawking and many other AI scientists believe. And isn't that a human thing, to believe in something? However, by 2045, belief will also be a machine thing, according to these researchers. We just need to create the most advanced AI possible, and then bam -- conscious machines.
This is where they lose me.
I agree that technology will continue to advance in unprecedented, accelerated ways; we're seeing this happen right now, and there's no reason to believe we are anywhere near a computational plateau. However, it is a huge leap from advanced technology to the artificial creation of consciousness. Essentially, the most extreme promises of AI are based on a flawed premise: that we understand human intelligence and consciousness.
AI experts are working with a specific definition of intelligence: the ability to learn, recognize patterns, display emotional behaviors and solve analytical problems. But this is just one definition of intelligence in a sea of contested, vaguely formed ideas about the nature of cognition. Neuroscience and neuropsychology don't provide a single definition of human intelligence -- rather, they offer many. Different fields, even different researchers within those fields, define intelligence in disparate terms.
Broadly, scientists regard intelligence as the ability to adapt to an environment while realizing personal goals, or even as the ability to select the best response to a particular setting. However, this is based largely on the biological understanding of intelligence, as it relates to evolution and natural selection. In practice, neuroscientists and psychologists offer competing ideas of human intelligence within and outside of their respective fields.
Consider the following overview from psychologists Michael C. Ramsay and Cecil R. Reynolds:
"Theorists have proposed, and researchers have reported, that intelligence is a set of relatively stable abilities, which change only slowly over time. Although intelligence can be seen as a potential, it does not appear to be an inherent fixed or unalterable characteristic. ... Contemporary psychologists and other scientists hold that intelligence results from a complex interaction of environmental and genetic influences. Despite more than one hundred years of research, this interaction remains poorly understood and detailed. Finally, intelligence is neither purely biological nor purely social in its origins. Some authors have suggested that intelligence is whatever intelligence tests measure."
[Image: Robot sitting on a stack of books]
This does not describe a field flush with consensus. And psychology is just one of a dozen disciplines concerned with the human brain, mind and intelligence.
Our understanding of technology may be advancing at an ever-accelerating rate, but our knowledge of these more vague concepts -- intelligence, consciousness, what the human mind even is -- remains in a ridiculously infantile stage. Technology may be poised to usher in an era of computer-based humanity, but neuroscience, psychology and philosophy are not. They're universes away from even landing on technology's planet, and these gaps in knowledge will surely drag down the projected AI timeline.
Most experts who study the brain and mind generally agree on at least two things: We do not know, concretely and unanimously, what intelligence is. And we do not know what consciousness is.
"To achieve the singularity, it isn't enough to just run today's software faster," Microsoft co-founder Paul Allen wrote in 2011. "We would also need to build smarter and more capable software programs. Creating this kind of advanced software requires a prior scientific understanding of the foundations of human cognition, and we are just scraping the surface of this."
Defining human intelligence and consciousness is still more philosophy than neuroscience. So let's get philosophical.
Conscious creativity
Musk, Kurzweil and other proponents of the technological singularity suggest, over and over again, that ever-increasing computational power will automatically lead to human-level intelligence and machine consciousness. They imply that the more rapidly technology advances, the more rapidly other scientific fields will advance as well.
"It is not my position that just having powerful enough computers, powerful enough hardware, will give us human-level intelligence," Kurzweil said in 2006. "We need to understand the principles of operation of the human intelligence, how the human brain performs these functions. What is the software, what is the algorithms, what is the content? And for that we look to another grand project, which I label reverse-engineering the human brain, understanding its methods. And we see the same exponential progress we see in other fields, like biology."
[Image: The two halves of a medical model of a human brain]
Kurzweil recognizes the need to understand human intelligence before accurately rebuilding it in a machine, but his solution, reverse-engineering a brain, leaps across the fields of neuroscience, psychology and philosophy. It assumes too much -- mainly that building a brain is the same thing as building a mind.
These two terms, "brain" and "mind," are not interchangeable. It's feasible that we can re-create the brain; it's an immensely complex structure, but it's still a physical thing that can, eventually, be fully mapped, dissected and re-formed. Just this month, IBM announced it had created a working artificial neuron capable of reliably recognizing patterns in a noisy data landscape while behaving unpredictably -- exactly what a natural neuron should do. Creating a neuron is light-years away from rebuilding an entire human brain, but it's a piece of the puzzle.
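IBM's device is phase-change hardware, and its details are beyond the scope of an article like this one, but the underlying principle can be sketched in a few lines of code. What follows is a toy model built on my own assumptions -- a leaky integrate-and-fire neuron with injected noise, not IBM's design: every individual spike is unpredictable, yet the overall firing rate reliably separates a repeating input pattern from background noise.

```python
import random

# Toy leaky integrate-and-fire neuron with injected noise.
# An illustrative sketch of the general principle, not IBM's
# phase-change hardware: single spikes are unpredictable, but the
# spike *rate* reliably distinguishes a real signal from background.

LEAK = 0.9        # fraction of membrane potential kept each step
THRESHOLD = 1.0   # potential at which the neuron fires and resets
NOISE = 0.3       # scale of the random fluctuation added each step

def spike_count(inputs, steps=1000, seed=None):
    """Run the neuron for `steps` time steps and count its spikes."""
    rng = random.Random(seed)
    potential, spikes = 0.0, 0
    for t in range(steps):
        potential = LEAK * potential + inputs(t) + rng.gauss(0.0, NOISE)
        if potential >= THRESHOLD:
            spikes += 1
            potential = 0.0  # reset after firing
    return spikes

def background(t):
    return 0.02  # weak, constant background drive

def pattern(t):
    # same background plus a burst during 3 of every 10 steps
    return 0.02 + (0.25 if t % 10 < 3 else 0.0)

print("background only:", spike_count(background, seed=1))
print("with pattern:   ", spike_count(pattern, seed=1))
# Individual runs vary with the seed, but the patterned input
# consistently produces many more spikes than background alone.
```

The point of the sketch is the pairing the IBM announcement highlighted: randomness at the level of single events, reliability at the level of statistics.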
However, it's still not a mind. Even if scientists develop the technology to create an artificial brain, there is no evidence that this process will automatically generate a mind. There's no guarantee that this machine will suddenly be conscious. How could there be, when we don't understand the nature of consciousness?
Consider just one aspect of the mind, consciousness and intelligence: creativity. On its own, creativity is a varied and murky thing for each individual. For one person, the creative process involves spending weeks isolated in a remote cabin; for another, it takes three glasses of whiskey; for still another, creativity manifests in unpredictable flashes of inspiration that last minutes or months at a time. Creativity means intense focus for some and long bouts of procrastination for others.
So tell me: Will AI machines procrastinate?
Perhaps not. The singularity suggests that, eventually, AI will be billions of times more powerful than human intelligence. This means AI will divest itself of messy things like procrastination, mild alcoholism and introversion in order to complete tasks similar to those accomplished by their human counterparts. There's little doubt that software will one day be able to output beautiful, creative things with minimal (or zero) human input. Beautiful things, but not necessarily better. Creative, but not necessarily conscious.
Singularities
Kurzweil, Musk and others aren't predicting the existence of Tay the Twitter bot; they're telling the world that we will, within the next 20 years, copy the human brain, trap it inside an artificial casing and therefore re-create the human mind. No, we'll create something even better: a mind -- whatever that is -- that doesn't need to procrastinate in order to be massively creative. A mind that may or may not be conscious -- whatever that means.
The technological singularity may be approaching, but our understanding of psychology, neuroscience and philosophy is far more nebulous, and all of these fields must work in harmony in order for the singularity's promises to be fulfilled. Scientists have made vast advances in technological fields in recent decades, and computers are growing stronger by the year, but a more powerful computer does not equate to a breakthrough in philosophical understanding. More accurately mapping the brain does not mean we understand the mind.
The technological singularity has a longer tail than the law of accelerating returns suggests. Nothing on earth operates in a vacuum, and before we can create AI machines capable of supporting human-level intelligence, we need to understand what we're attempting to imitate. Not ethically or morally, but technically. Before we can even think of re-creating the human brain, we need to unlock the secrets of the human mind.