The Quest for the Brain Chip
Source: Bragi Lovetrue
Alan Turing insisted that human brains and modern computers share the computational model that bears his name, whereas John von Neumann believed that brains are fundamentally different from the architecture that bears his. What if they can't both be right?
The topic of brain chips has seen a surge of interest around the world lately. A variety of scientific and commercial efforts seek ways to literally model the brain in silicon, in hopes of enabling unprecedented human-like capabilities in decidedly non-human devices like drones, robots and driverless cars. Large-scale collaborations that bring together neuroscientists and computer scientists (such as Europe's Human Brain Project, the DARPA-funded SyNAPSE program and the American BRAIN Initiative) capture headlines and imaginations.
Having recently attended two relevant conferences ― the Brain Forum and the CapoCaccia Neuromorphic Engineering Workshop ― I have to wonder, though, if we are setting off in the right direction in our pursuit of perhaps the loftiest of all scientific aspirations.
Let’s get our bearings first. A computer is a machine that represents and processes information. Ever since the advent of modern computers, the goal of brain science has been to understand how the brain works as a computer, and the goal of artificial intelligence has been to build brain-like computers. The founding fathers of modern computing, however, differed in their views of whether brains essentially are modern computers: Alan Turing insisted that brains and modern computers share the same computational model that bears his name, whereas John von Neumann believed that brains are fundamentally different from the architecture of modern computers that also bears his name.
Deep learning and neuromorphic engineering are prime examples of the cross-fertilization between the goals of brain science and artificial intelligence. There is a wide consensus among deep learning and neuromorphic engineering research groups that both Turing and von Neumann are correct: understanding how the brain works and building brain-like computers require changing the von Neumann architecture while preserving the Turing computational model.
However, Turing and von Neumann can’t both be correct, for one simple reason: a computer architecture is merely a physical implementation scheme of a computational model, and a computational model is a mathematical construction. The von Neumann architecture therefore can’t be fundamentally changed without a fundamental change to the underlying Turing computational model, and vice versa. How soon should we expect either fundamental change, theoretical or engineering, to happen? Which should we expect to come first?
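To make that distinction concrete, here is a minimal sketch of a Turing machine as a piece of mathematics written in code: a handful of states and a transition table acting on a tape. The states and rules below are made up purely for illustration; the point is that nothing in the model commits to any particular hardware.

    # A computational model is a mathematical object: a set of states plus
    # transition rules. This toy Turing machine flips every bit on its tape
    # and then halts. (Illustrative only; the states and rules are made up.)
    from collections import defaultdict

    # Transition function: (state, read symbol) -> (next state, write symbol, head move)
    DELTA = {
        ("scan", "0"): ("scan", "1", +1),
        ("scan", "1"): ("scan", "0", +1),
        ("scan", "_"): ("halt", "_", 0),   # blank cell: stop
    }

    def run(tape_str, state="scan", max_steps=10_000):
        tape = defaultdict(lambda: "_", enumerate(tape_str))  # unbounded tape
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            next_state, write, move = DELTA[(state, tape[head])]
            tape[head] = write
            state, head = next_state, head + move
        return "".join(tape[i] for i in range(min(tape), max(tape) + 1)).strip("_")

    print(run("10110"))  # prints "01001"

Any physical machine that executes these rules, whether a von Neumann processor or a neuromorphic chip, realizes the same mathematical model; changing the hardware alone does not change the model.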
It is natural to think that a fundamentally new theory of computation should come before a fundamentally new computer architecture, since the history of technology is a never-ending demonstration of theoretical models heralding physical systems; after all, Turing's theory came almost a decade before von Neumann's architecture for modern computers. On the other hand, the history of science offers countless examples of how blind we are at uncovering the theoretical models of even simple physical systems through reverse engineering.
Given that the brain is arguably the most complex physical system in the universe, we would be at our most blind when reverse-engineering the brain. Such blindness lies behind the long list of broken promises in the short history of artificial intelligence. Ignoring that blindness is precipitating artificial intelligence and brain science into another crisis of putting the cart before the horse. Brain projects like the Human Brain Project and the BRAIN Initiative promote an open collaboration to collect enormous volumes of comprehensive brain data, but there is no remotely comparable commitment to an open collaboration on proposing comprehensive brain theories. Neuromorphic projects like IBM TrueNorth and Qualcomm Zeroth claim to have developed brain-inspired architectures fundamentally different from von Neumann's, but none of them has a clue about the supposedly novel computational model that would have to differ just as fundamentally from Turing's.
Why are so many bright minds and resourceful organizations expecting a revolution but gearing towards an evolution? My observations at the aforementioned events suggest that it may be a problem of how we benchmark progress in artificial intelligence and brain science. Evolution needs benchmarks for improvements that give us better results, yet revolution requires benchmarks for breakthroughs that bring us closer to our goals. The engineering culture of getting something done and making things work better has produced very effective benchmarks like ImageNet in computer vision, yet it further tethers the community to its ignorance of how blind we are to the flaws of reverse engineering. It is simply impossible to accelerate fast enough to stay in the correct lane if we keep gearing towards evolution on the highway of revolution. Although it is much harder to come up with a breakthrough benchmark, we can nonetheless find quite a few principled clues by comparing biological brains with modern computers.
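To see what an improvement benchmark measures, consider a minimal sketch of an ImageNet-style score: a single number, top-1 accuracy, that lets models be ranked and nudged upward year after year. The classifier interface and the labeled examples here are hypothetical placeholders, not part of any real benchmark's API.

    # Minimal sketch of an ImageNet-style "improvement benchmark": one score
    # (top-1 accuracy) that can be optimized incrementally, whether or not the
    # model is any closer to how a brain works. The predict() interface and
    # the labeled data are hypothetical.
    from typing import Callable, Iterable, Tuple

    def top1_accuracy(predict: Callable[[object], str],
                      labeled: Iterable[Tuple[object, str]]) -> float:
        """Fraction of examples whose predicted label matches the ground truth."""
        examples = list(labeled)
        correct = sum(1 for x, y in examples if predict(x) == y)
        return correct / len(examples) if examples else 0.0

    # Hypothetical usage: score = top1_accuracy(my_model.predict, validation_set)
    # A higher score registers as "progress" by this benchmark's lights.

A benchmark for breakthroughs, by contrast, would have to measure distance to the goal itself, something no single accuracy number captures.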