A Critique of Mind Augmenting Technologies
"The Six Major Flaws in the Goal of Strong AI"
A Supplemental Paper
Attempts at enhancing human capability through artificial intelligence have divided AI into Weak and Strong. Weak AI essentially means the system does a good job of imitating humans. Strong AI means it is intelligent in important ways, one of which is being more intelligent than humans. Intelligent machines are all based on the principles of Strong AI. However, there are six major flaws in the goal of Strong AI:
1. Strong AI wants only Intelligence
2. Strong AI ignores Emotion
3. Strong AI ignores different kinds of Intelligence
4. Strong AI ignores Creativity
5. Strong AI ignores Common Sense
6. Strong AI ignores Wisdom
1. Strong AI wants only Intelligence.
The first obvious flaw is that Strong AI attempts to implement only one of the mind's tools. For one example, it ignores automatic (or unconscious) processing, the "zone" of concert pianists and racecar drivers. For another, it ignores intuition. And then there are the other five flaws: emotion, kinds of intelligence, creativity, common sense, and wisdom.
Clearly, implementing only intelligence is not a full implementation of mind. While not as restrictive as a chess-playing computer, a computer that exhibits only intelligence, no matter how powerful, is nothing more than a problem solver. But problem solvers know nothing about problems other than how to (try to) solve them. What they are not is problem selectors; that is, they cannot choose which problem is important to solve, nor set priorities for which problems to solve, and in what order.
Of course, a really intelligent problem-solving machine could be a useful adjunct for human problem solving. However, the idea of turning over decisions to this kind of super-intelligent computer is, well, just dumb. A computer too limited to see the forest for the trees, or vice versa, needs to be controlled by humans, however less "intelligent" the humans may be. Of course, someone needs to make sure those humans possess qualities like common sense, creativity, and wisdom.
2. Strong AI ignores Emotion.
Concentrating on intelligence ignores the value of emotion for decision making. In On Intelligence, Jeff Hawkins dismisses any need for emotion in intelligence. However, he says intelligence has a need for meaning. But how can you have meaning without involvement, without caring about the meaning? Here are further references about intelligence's need for emotion from three respected investigators of the mind:
Looking for Spinoza, Antonio Damasio.
The Feeling of What Happens, Antonio Damasio.
The Muse in the Machine, David Gelernter.
The Emotion Machine, Marvin Minsky.
In a 2003 match he hoped would regain the upper hand over the world's best chess-playing computer, human chess champion Garry Kasparov accepted a draw when most observers thought his superior position pointed to a win. "I had one item on my agenda today, not to lose," he said. "I decided it would be wiser to stop playing." Avoiding a loss to the computer was more important, more meaningful, to him than risking the win.
In other words, his calculated win, while likely, was not guaranteed. The draw was. His choice to draw was a decision extending far beyond the calculations of this one game. You may disagree with his decision, but that's why decisions are guided by emotions and calculations are not. Calculations are not decisions.
The computer was only playing one match. The results of previous matches were of no import to it. Clearly, they were to Kasparov. The computer was in fact not smart to offer a draw in this game. It was unable (that is, it was outside the realm of its programming) to see that the draw, despite Kasparov's superior position, did not make sense. Had the computer been a really smart chess player, it would have forced Kasparov to sweat it out, thus creating the possibility of a mistake and a win for the computer!
Why was the computer not programmed to consider this larger picture? I submit it would have been too difficult, this being only one of many possibilities in the human chess player's universe. Such is the difficulty of programming for human psychology, emotions and all.
3. Strong AI ignores different kinds of Intelligence.
Strong AI ignores the many kinds of intelligence. Howard Gardner has more than a fistful. Robert J. Sternberg has three, but a different three from Gardner's set. Heck, I have three of my own, also different. (While I did not originally think of these as types of intelligence, I think they can offer something useful if viewed as such.) Mine are scientist, engineer, and artist. And like all these lists, these types of intelligence appear in various combinations in different individuals.
Two interesting side notes. 1. One person who appears to have all three of my kinds of intelligence in abundance is Leonardo da Vinci. 2. The three Grandmasters of Science Fiction are Asimov (scientist), Heinlein (engineer), and Bradbury (artist).
Of course, these lists of kinds of intelligence only refer to those distinguishable by various tests. Who's to say there aren't many other kinds of intelligence hidden in the unconscious mind? And with so many kinds of intelligence, the idea of a general intelligence becomes harder to defend. Yet the standard IQ tests persist.
As do the common conceptions of genius, smart, average, not too bright, and retarded. These are very small pigeonholes within which to stuff individuals of greatly varying abilities. For example, biographies abound with stories of geniuses with very low emotional IQs. Yet, regardless of how many kinds of intelligence we can measure, measurement is not what counts. One's abilities are best measured by results, not potential. Perhaps, maximizing one's potential is another, more valuable, kind of intelligence!
4. Strong AI ignores Creativity.
Creativity is many things, but it's often characterized as thinking outside the box. Programming computers is all about keeping them in the box. Despite decades of vast effort, natural language recognition by computers is still a substandard skill. Yet that skill does not even attempt to understand humor, much less the apparent nonsense of a question like "Why is a raven like a writing-desk?"
Getting computers to recognize a creative leap is one thing. Getting them to make those leaps is quite another. Yet the need for creativity is implicit in most solutions; that is, any human solutions of importance. In fact, most human problems are not solved with a single "best" solution. Covering all the possible outcomes requires many contingent solutions.
But knowing all possible outcomes goes far beyond simple extrapolation of existing data. In the vast majority of situations, we can't know all the possible outcomes. One reason is that we can never be sure our data (about everything that is related and everything that might happen) is complete. Yet we humans deal with this all the time. How? We imagine possible outcomes.
So let's simplify the question of how to make a computer creative and simply ask how we can supply it with an imagination. Sounds silly, doesn't it? A computer with an imagination. But ask yourself: who wants a super-intelligent machine supplying unimaginative answers? Unfortunately, answers aren't the biggest problem.
Without the creativity of imagination, the ability to define the right problem, to ask the right questions, is practically non-existent. Without creative imagination, the humans in charge of any future super-intelligent computers will be unable to give the computer the right problem to solve. Unfortunately, this is quite likely to be the case.
Those in power, especially those with power in governments, usually reach their positions precisely because they lack creative imagination. Putting these people in charge of future super-intelligent computers is asking for trouble. Or should I say, super-trouble?
5. Strong AI ignores Common Sense.
Common sense is apparently so common, so taken for granted, that it's of little or no interest to any scientific discipline. You won't find it under the label of "common sense." Yet, we see titles like Judgment under Uncertainty: Heuristics and Biases (Kahneman, Slovic, & Tversky, 1982). And it appears in the literature as cognitive illusions, risk evaluation, estimation, and prediction (many by two of the previous authors, Kahneman & Tversky).
Clearly, these are either aspects of, or related to, common sense. Yet, common sense as a single entity is rarely defined, and less often tested for, in psychology. This despite the popularly accepted attitude that we'd know it if we saw it. But without definitions and tests, how can we relate it to Artificial Intelligence? Because, if there's one thing seriously lacking in the goals, and products, of Strong AI, it's common sense.
It has been said often that the most intelligent computer can't do the simple tasks of an ordinary three-year-old child. To which I would add: it probably can't even describe how to do them so that a three-year-old could follow the instructions. Yet, without the ability to perform such minimal, and common, tasks, how smart is the smartest computer?
No less a leading light than Marvin Minsky says AI's biggest deficiency is "[T]he lack of people with an interest in commonsense reasoning for computers." He also probably knows the reason: creating common sense computers as opposed to creating super-intelligent computers is simply not glamorous. It's not the type of project to attract the lucrative grants. And these days almost all scientific research is skewed to attract the big money (and the appointments, prizes, tenure, and such).
The goal of Douglas Lenat's never-ending Cyc project is common sense knowledge. Since when is reasoning reducible to knowledge? Perhaps common sense is merely making the best sense possible out of incomplete knowledge.
So what is common sense, especially as distinguished from intelligence?
"Software has no common sense." Lauren Ruth Weiner.
"Common sense is not common; it is only less rare than genius." Lee Frank.
"Common sense is not so common." Voltaire.
"Common sense ain't common." Will Rogers.
Psychology has endless tests for intelligence; perhaps it has something for common sense? No way. You do the Google and tell me. All psychology can tell us is that for a common sense test to be meaningful, it has to be separate from knowledge and intelligence. Fine. And wisdom?
6. Strong AI ignores Wisdom.
"Perspective is worth 50 points of IQ." Alan Kay.
"Wisdom is knowledge applied in the broadest possible perspective." Lee Frank.
"Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?" T.S. Eliot.
What I didn't say in the last section, because I hoped it was implied, was that common sense was being ignored simply because it was too hard to implement. Well, if common sense is hard for a computer, take a look at wisdom.
In the first section, I suggested that the super-intelligent computer, Strong AI's goal, would be nothing more than a problem solver. And that when it comes to solving problems, selecting the right problem to solve is far more important. That would be the first step towards wisdom. And quite likely an impossible step for any computer.
To illustrate: as an informed human being, how could you choose between worldwide problems such as AIDS or starvation? Or perhaps controlling population might take precedence, if it greatly helps reduce the other two problems. And are you also preparing for the next wayward comet?
Of course, no one would be foolish enough to pick only one of these problems as being the important one to solve. An intelligent approach would be to figure out the allocation of resources to all these problems (and many others). But how much effort does one devote to juggling resources and also searching for newly emerging significant problems? And how do you design a computer to search out new problems?
Finally, a true implementation of wisdom would be a system that could apply all the missing traits of Strong AI: emotion, all the various kinds of intelligence, creativity, and common sense. All that plus the multifaceted aspects of wisdom, including: the broadest possible perspective, the need for problem finding, when to abandon a problem, and the value of learning from failure.
Imagine a chart showing the relationship of common sense to intelligence, and of both to wisdom. The X-axis represents a continuum from Simple to Complex; the Y-axis a continuum from General to Specialized. If intelligence is specialized and more complex, then common sense is general and (relatively) simple. And wisdom is both general and complex.
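The quadrant layout just described can be sketched in a few lines of code. This is only an illustrative mapping of the three faculties onto the two axes above; the coordinate values are assumptions chosen for illustration, not measurements of any kind.

```python
# Map each faculty onto the two axes described above.
# complexity: 0 = Simple, 1 = Complex; specialization: 0 = General, 1 = Specialized.
# These coordinates are illustrative assumptions, not measured quantities.
faculties = {
    "common sense": {"complexity": 0, "specialization": 0},  # general, simple
    "intelligence": {"complexity": 1, "specialization": 1},  # specialized, complex
    "wisdom":       {"complexity": 1, "specialization": 0},  # general, complex
}

def quadrant(name):
    """Return a human-readable quadrant label for a faculty."""
    c = "complex" if faculties[name]["complexity"] else "simple"
    s = "specialized" if faculties[name]["specialization"] else "general"
    return f"{s}, {c}"

for name in faculties:
    print(f"{name}: {quadrant(name)}")
```

Running this simply restates the chart in words: common sense lands in the general/simple quadrant, intelligence in the specialized/complex quadrant, and wisdom in the general/complex quadrant.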
· Wisdom is the art of problem finding.
· Wisdom knows that a merely adequate solution to the right problem is better than the best solution to the wrong problem.
· Wisdom knows when to abandon solving a problem and to seek a new problem.
· Wisdom knows that we can learn more from failing to solve one difficult problem than we can from solving a hundred easy problems.
· Wisdom knows that wisdom in some areas is no guarantee of wisdom in all areas.
· "We are all ignorant; only on different subjects." Will Rogers.