The Difficulty of Artificial Intelligence
Source: Vince Wilson
Almost every year, a new movie or book comes out portraying a sentient machine. These works of fiction often feature a robot (or android, if you prefer) with very human-like qualities. From Isaac Asimov’s robo-psychological concepts built around the future invention of the positronic brain (I, Robot and Bicentennial Man, for example) to James Cameron’s menacing, apocalyptic Terminator (with a new re-imagining coming out in 2015), these stories proliferate in popular culture.
Although hinted at in fiction past and upcoming, and believed by many to be a certain and definitive destiny of scientific achievement, is it really conceivable that a man-made machine could have bestowed upon it, whether metaphysical or metaphorical, a soul?
In 1965, R&D director Gordon E. Moore proposed in Electronics Magazine that the number of components on an integrated circuit, and with it effective processing speed (the number of instructions per second a computational device can execute), would double every year or even faster: “The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.”
This conceptualization has become known as Moore’s Law. Indeed, since then, processing power and electronic storage capacity (how much data a device can hold and maintain) have increased exponentially.
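To get a feel for that compounding, here is a minimal Python sketch. The 1965 starting count is an illustrative assumption, not Moore's data, and the two-year alternative reflects the revision Moore himself made in 1975:

```python
# A minimal sketch of the compounding behind Moore's observation.
# The ~64-component starting figure is an illustrative assumption.

def projected_count(start_count, start_year, target_year, doubling_years):
    """Project a component count forward assuming a fixed doubling period."""
    doublings = (target_year - start_year) / doubling_years
    return start_count * 2 ** doublings

# Moore's original one-year doubling vs. his 1975 two-year revision.
for period in (1.0, 2.0):
    count = projected_count(64, 1965, 2015, doubling_years=period)
    print(f"Doubling every {period:g} yr: ~{count:.3g} components by 2015")
```

The two-year figure lands in the billions, which is in the right neighborhood for real chips fifty years on; the point is that any fixed doubling period produces explosive growth.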
To put this in perspective, Apollo 11’s guidance computer had approximately 64 KB of memory and ran at 0.043 MHz. The guidance program is so small that it fits in a single PDF file (http://authors.library.caltech.edu/5456/1/hrst.mit.edu/hrs/apollo/public/archive/1701.pdf)! There are kitchen appliances in your house right now with more processing power than NASA had available until the late 1970s and early 1980s.
So where are our Bicentennial Men and Johnny Fives? Why are there no android assistants serving us breakfast in the morning and shopping for us?
On my Android smartphone I have a voice-activated digital assistant I named Kim. Kim gives me a daily briefing of the news, weather, SMS messages and calendar updates every morning at 9:00. I can talk to Kim and she will respond. However, compared to a real person, her conversational abilities are extremely limited.
She is limited by the fact she has no imagination, no creativity and no sense of self, not to mention limited memory. Kim is not a very good AI in the sense of what we expect from fictional accounts of futuristic life-like machines.
Below, I have compiled a list of the Top Five Limits Scientists Need to Overcome for AI. The definition of AI in this case is a machine that is indistinguishable from a human being. In other words, if you were to call the AI on the phone and converse with it, there is no test you could perform that would definitively tell you it was an AI unless you asked “it” point blank if it were man-made, i.e., artificial. This would include the famous (well, to some) Turing test:
        The "standard interpretation" of the Turing Test, in which player C, the interrogator, is tasked with trying to determine which player �C A or B �C, is a computer and which is human. The interrogator is limited to using the responses to written questions to make the determination.
        - http://crl.ucsd.edu/~saygin/papers/MMTT.pdf
I believe it is worth mentioning that this list can also be considered a testament to how amazing the human mind is. In fact, in every example on this list I will be referencing the difficulties of duplicating what is arguably nature’s greatest invention. The fact that our brains do not cook in our heads every time we think is a prime example.
5. Heat
The human brain has evolved to generate as little heat as possible despite the energy expended to process thoughts and ideas, as well as to run unconscious processes such as keeping the heart beating and the lungs breathing.
When your laptop heats up your lap, it is shedding heat from the Central Processing Unit (CPU). As the CPU becomes more taxed with processing, more current flows through its internal wiring and transistors. Not all of that energy goes into computation, however; much of it is released as heat.
Cooling an artificial brain effectively at such small scales is a major obstacle in the race to true AI. Simply cooling the artificial brain with liquid nitrogen or helium is not cost-effective or practical, and is perhaps even dangerous. Advances in room-temperature superconducting will need to be made first. Superconductive materials align electrons for far less resistance, but they usually require cryogenic temperatures.
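For a back-of-the-envelope feel for why chips run hot, dynamic switching power in CMOS circuits is commonly approximated as P = C · V² · f. The numbers below are illustrative assumptions, not measurements of any real processor:

```python
# Back-of-the-envelope CPU heat estimate. All figures are hypothetical.

def dynamic_power_watts(capacitance_farads, voltage_volts, frequency_hz):
    """Classic CMOS dynamic-power approximation: P = C * V^2 * f."""
    return capacitance_farads * voltage_volts ** 2 * frequency_hz

# Hypothetical chip: 1 nF effective switched capacitance, 1.2 V, 3 GHz.
power = dynamic_power_watts(1e-9, 1.2, 3e9)
print(f"Approximate switching power: {power:.1f} W")  # ~4.3 W
```

Nearly all of that power ends up as heat in a space of a few square centimeters, which is why cooling dominates chip design, and why a brain-scale machine would face the problem a thousandfold.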
4. Memory
Not memory capacity, which is covered below in number three. What I mean here is the brain's memory resolution, in particular the subconscious ability to forget, although capacity is an issue too.
Memory is unreliable. It is corruptible and no one truly has a good memory when emotion is involved. Ask any law enforcement officer who has interviewed three witnesses to the same crime. They will all have different things to say.
Modern computers are a poor analog for human brains. We do not store info in bits and bytes. We do not compartmentalize every piece of data we take in from our five senses. We forget.
Our brains contain about 90 billion neurons. Each neuron can make about 1,000 connections, represented by synapses. Basic multiplication shows we are capable of roughly 90 trillion data points of lifetime storage, a figure often rounded up to 100 trillion.
This estimate is flawed, however. Even this amazing amount of memory would fill up during childhood if every bit of data coming in from our five senses were saved as raw memory.
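The arithmetic, using the rough figures quoted above:

```python
# The multiplication behind the synapse estimate.
neurons = 90e9            # ~90 billion neurons
connections_each = 1_000  # ~1,000 synapses per neuron
synapses = neurons * connections_each
print(f"~{synapses:.1e} connections")  # 9.0e+13, i.e. about 90 trillion
```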
Paul Reber, Professor of Psychology at Northwestern University, addressed memory in Scientific American:
        …neurons combine so that each one helps with many memories at a time, exponentially increasing the brain’s memory storage capacity to something closer to around 2.5 petabytes (or a million gigabytes). For comparison, if your brain worked like a digital video recorder in a television, 2.5 petabytes would be enough to hold three million hours of TV shows. You would have to leave the TV running continuously for more than 300 years to use up all that storage.
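Dr. Reber's comparison checks out under reasonable assumptions. The 2.5 PB figure is his; the video bitrate below is an assumption chosen to represent ordinary standard-definition recording:

```python
# Sanity-checking the "three million hours of TV" claim.
PB = 1024 ** 5                 # bytes in a petabyte
GB = 1024 ** 3                 # bytes in a gigabyte
capacity_bytes = 2.5 * PB
gb_per_hour = 0.85             # assumed SD-quality recording rate
hours = capacity_bytes / (gb_per_hour * GB)
years = hours / (24 * 365)
print(f"~{hours / 1e6:.1f} million hours, ~{years:.0f} years of continuous TV")
```

That works out to roughly 3.1 million hours, or about 350 years of round-the-clock viewing, right in line with his figures.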
Let’s put this into perspective for those who do not know the difference between bits, bytes and so on:
Data Measurement Size
        Bit - Single Binary Digit (1 or 0)
        Byte - 8 bits
        Kilobyte (KB) - 1,024 Bytes
        Megabyte (MB) - 1,024 Kilobytes
        Gigabyte (GB) - 1,024 Megabytes
        Terabyte (TB) - 1,024 Gigabytes
        Petabyte (PB) - 1,024 Terabytes
As of this writing, it is not uncommon to have a 1 TB hard drive in your computer. If Moore’s Law holds, we should have 1 PB hard drives in around 10 years! But that still does not solve the problem, because we do not remember everything, and we shouldn’t.
As stated earlier, if we remembered every single aspect of our second-to-second sensory input, our brains would be full before we reached adulthood.
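The 1 TB to 1 PB projection is easy to make explicit: growing 1,024-fold takes ten doublings, so the timeline depends entirely on the doubling period you assume.

```python
# How long from 1 TB to 1 PB under different doubling periods?
import math

doublings = math.log2(1024)   # 10 doublings from 1 TB to 1 PB
for years_per_doubling in (1.0, 1.5, 2.0):
    years = doublings * years_per_doubling
    print(f"Doubling every {years_per_doubling:g} yr: ~{years:.0f} years to 1 PB")
```

The 10-year figure in the text assumes annual doubling; at the two-year pace drives have historically tracked more closely, it would take around 20 years.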
In his quote, Dr. Reber observed that it would take more than 300 years to view all the video that would fill a 2.5 PB hard drive. But how “high-definition” are our eyes? Higher than the best HD TV in the world, I am sure.
Resolution is measured in width and height. Here are some popular formats:
        720×480 - D-VHS, DVD, miniDV, Digital8, Digital Betacam (NTSC)
        720×480 - Widescreen DVD (anamorphic) (NTSC)
        720×576 - D-VHS, DVD, miniDV, Digital8, Digital Betacam (PAL/SECAM)
        720×576 - Widescreen DVD (anamorphic) (PAL/SECAM)
        1280×720 - D-VHS, HD DVD, Blu-ray, HDV (miniDV)
        1440×1080 - HDV (miniDV)
        1920×1080 - HDV (miniDV), AVCHD, HD DVD, Blu-ray, HDCAM SR
        1998×1080 - 2K Flat (1.85:1)
        3840×2160 - 4K UHDTV
        4096×2160 - 4K Digital Cinema
        7680×4320 - 8K UHDTV
        15360×8640 - 16K Digital Cinema
Now, the human eye is much more complicated than resolution alone. To find out how, watch the video from VSauce.
We can guesstimate that the resolution of the human eye is about 576 megapixels (www.clarkvision.com/articles/eye-resolution.html), but even this is inaccurate!
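Even taking that 576-megapixel guesstimate at face value, a very rough upper bound on the raw data rate shows why storing vision like video is hopeless. The frame rate and bytes per pixel below are illustrative assumptions; the eye is not a camera sensor:

```python
# A crude upper bound on raw "eye video" bandwidth.
pixels = 576e6            # ~576 megapixels (guesstimate above)
bytes_per_pixel = 3       # 24-bit color
frames_per_second = 30    # assumed effective refresh
bytes_per_second = pixels * bytes_per_pixel * frames_per_second
print(f"~{bytes_per_second / 1024**3:.0f} GB of raw input per second")  # ~48 GB/s
```

At ~48 GB per second, 2.5 PB of storage would fill in well under a day, which is exactly why the brain cannot, and does not, keep raw sensory input.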
And as far as input, don’t forget hearing, touch, taste and smell; how much memory do they take up?
We have short-term and long-term memory. Although there is a lot we still need to learn about memory, many believe there are three main stages in which we store memories.
It starts with perception, when we initially take in the sensory input that will become a memory. The memory is then held in short-term memory, which can store about seven distinct items on average, and only for approximately 20-30 seconds. Your subconscious, based on what is important to you (several factors come into play here, including personal health and well-being, beliefs, and lifestyle), helps decide what is stored in permanent memory. You can also consciously store memories by willing it. Anything your personality deems unworthy of permanent storage is thrown out and forgotten.
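As a playful illustration, here is a toy Python model of that short-term store: about seven items, each expiring after roughly 25 seconds unless deliberately promoted. It is purely illustrative; real memory is nothing like a fixed-size buffer.

```python
# A toy model of short-term memory as a small, leaky buffer.
import time
from collections import OrderedDict

class ShortTermMemory:
    def __init__(self, capacity=7, ttl_seconds=25):
        self.capacity = capacity
        self.ttl = ttl_seconds
        self.items = OrderedDict()          # item -> time perceived

    def perceive(self, item):
        """Take in new input, evicting the oldest item if the store is full."""
        self._expire()
        if len(self.items) >= self.capacity:
            self.items.popitem(last=False)  # the oldest item is forgotten
        self.items[item] = time.time()

    def promote(self, item, long_term_store):
        """Consciously move an item into 'permanent' storage."""
        if item in self.items:
            del self.items[item]
            long_term_store.add(item)

    def _expire(self):
        """Subconsciously forget anything older than the time limit."""
        now = time.time()
        self.items = OrderedDict(
            (k, t) for k, t in self.items.items() if now - t < self.ttl
        )

stm = ShortTermMemory()
stm.perceive("smell of coffee")
stm.promote("smell of coffee", long_term_store=set())
```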
Additionally, memories are not like on and off switches or carefully compartmentalized data. They are not particles of information filed away to be referenced later. They are more like colored sands poured into a jar and shaken: mingling and interacting, evolving and changing. Emotions and trauma cloud your memories. Although you might think a traumatic event has burned a memory into your mind, it actually corrupts it with emotion and heightens certain aspects for dramatic effect. Although this sounds like a terrible way to run a brain, it helps us in the long term by reducing clutter.
An AI would need to be able to forget, remember, and learn accordingly, based on what is important to its mission statement and design parameters.
3. Storage Capacity
Although this was mostly covered in Number 4, it deserves a mention on its own.
With ever-increasing storage capabilities available, this limit should be overcome in the very near future, and machine capacity will more than likely surpass that of human beings.
2. Processing Speed
This aspect becomes very important if you want an actual android or humanoid servant: something that can play ball, serve drinks, run errands and even be a caregiver for the elderly, the infirm or, yes, even children.
As an example, just playing baseball is an amazing achievement in evolution. When a ball is thrown to the batter from the pitcher’s mound, a startling amount of information is processed in a fraction of a second. The batter must calculate when the pitcher releases the ball, what kind of pitch is coming (curve ball, slider, etc.), when and how to swing, and at what angle, height, and speed, all in the blink of an eye.
Over 300 muscles are also engaged on a subconscious level to hit that ball; the batter's brain tells every muscle involved how and when to move, in perfect sequence.
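The time budget involved is easy to rough out. The pitch speed and mound-to-plate distance below are standard baseball figures; the swing duration is an illustrative assumption:

```python
# The batter's time budget, in round numbers.
distance_m = 18.4          # pitcher's mound to home plate (~60.5 ft)
pitch_speed_ms = 42.5      # ~95 mph fastball, in meters per second
flight_time = distance_m / pitch_speed_ms
swing_time = 0.15          # assumed duration of the swing itself
decision_window = flight_time - swing_time
print(f"Ball in flight: ~{flight_time * 1000:.0f} ms")                    # ~433 ms
print(f"Window to read the pitch and decide: ~{decision_window * 1000:.0f} ms")  # ~283 ms
```

All of that perception, prediction, and muscle sequencing has to happen inside roughly a quarter of a second.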
Recent research has even hinted that a degree of precognition may be in play! The brain appears to register touch faster than the signal could possibly travel through the nervous system. Your nerves do not transfer information at the speed of light; there is too much resistance. Nevertheless, your brain picks up touch and other sensations faster than it seemingly should. Will we be able to duplicate this phenomenon in a machine somehow?
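To see how far from light speed nerves really are: conduction velocity in fast myelinated fibers tops out around 100 m/s. The limb distance below is an illustrative assumption:

```python
# Nerve conduction latency vs. light-speed latency.
nerve_speed = 100.0        # meters per second, fast myelinated fibers
distance_m = 1.5           # roughly foot to brain
print(f"Signal travel time: ~{distance_m / nerve_speed * 1000:.0f} ms")  # ~15 ms
print(f"At light speed: ~{distance_m / 3e8 * 1e9:.0f} ns")               # ~5 ns
```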
1. Programming/Instinct
This is simple enough and elaborates on aspects mentioned earlier. AI will need to evolve. A lot of that evolution can happen within the confines of a computer simulation, but eventually it will have to be let loose into the real world to see if it still works. The programming will have to be able to learn and adapt. It will have to be able to make sure a humanoid android walks upright and not like an ape, or backwards on its hands and feet like it’s possessed by demons just because, for the machine, it may be more efficient. It must understand that it would, in fact, be creepy.
Many readers who are science fiction fans have heard of Isaac Asimov’s Three Laws of Robotics. They are:
        1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
        2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
        3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The laws form a neat logical hierarchy, and they seem to make perfect sense. Is the Skynet Terminator apocalypse avoided? Probably not. When AI becomes sophisticated enough, the machine will have a psychology of its own. This was only partially explored in Asimov’s vision with Robo-Psychology.
The question is: how do you, for lack of a better term, hard-wire in these rules? What motivation would the machine have to obey them? Under the right circumstances, a human can be compelled to kill, self-mutilate and cannibalize. Is it possible for a future robot to go insane? What if, in the madness of the machine, it perceives people as chickens rather than humans, or an abusive human master as a monster?
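For what it's worth, here is one toy way rule precedence like Asimov's Laws might be encoded: when every available action violates something, pick the action whose worst violation is the lowest-priority law. The action flags are hypothetical stand-ins; grounding a concept like "harm" in real perception is precisely the hard problem described above.

```python
# A toy precedence scheme for Asimov-style laws. Entirely illustrative.

LAWS = [  # index 0 = highest priority
    ("First Law",  lambda a: a.get("harms_human", False)),
    ("Second Law", lambda a: a.get("disobeys_order", False)),
    ("Third Law",  lambda a: a.get("self_destructive", False)),
]

def worst_violation(action):
    """Priority index of the most serious law broken (len(LAWS) = none)."""
    for rank, (_, violates) in enumerate(LAWS):
        if violates(action):
            return rank
    return len(LAWS)

def choose(actions):
    """Pick the action whose worst violation is least serious."""
    return max(actions, key=worst_violation)

options = [
    {"name": "obey",   "harms_human": True},     # the order would hurt someone
    {"name": "refuse", "disobeys_order": True},  # breaks only the Second Law
]
print(choose(options)["name"])  # "refuse": the Second Law yields to the First
```

Notice the sleight of hand: the hard part is not the precedence logic, which fits in twenty lines, but the predicates themselves. Nothing in such a scheme stops a deranged perception system from deciding its master is a monster.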
In Conclusion…
There is a new movie coming from the director of District 9 called CHAPPiE. CHAPPiE is a robot with a semi-humanoid appearance, created to think and feel on its own. Will CHAPPiE inspire much discussion of AI when it comes out? I suppose that depends on the film's success at the worldwide box office.
I do not know when we will have AI. I do know it will not be any time very soon, but perhaps we may see it within our lifetime if memory, storage, and processing technology continue to advance as rapidly as science has predicted.