There is a fundamental question in the field of Artificial Intelligence: what is Artificial Intelligence? What constitutes intelligence? When does a program deserve to be called intelligent? At the moment the field is not unified. We have hundreds of scientists pursuing different areas of AI: parallelism, evolutionary computing, image recognition, voice recognition and synthesis. Yet none (with the possible exception of long-term robotics projects like Cog) attempt to model intelligence at the level of human capacity. In order to deal with the question of artificial intelligence, we are going to assume that we are attempting to create a program of exactly that magnitude. We will ignore computational limitations, since those will disappear in time.
One famous skeptic of AI, Hubert Dreyfus, says that a computer will never be intelligent unless it can display a good command of commonsense. Dreyfus then follows up by saying that computers will never be able to fully grasp commonsense, since much of our commonsense rests on 'know-how'. For example, the notion that one solid cannot easily penetrate another is commonsense, yet the knowledge required to ride a bicycle is not something you can gain from a book, or from someone telling you; you can only learn it through experience. Dreyfus doesn't stop there: he suggests that all knowledge is acquired in this way. When learning about an apple, for example, we learn how to use it, how to eat it, where to get it, and many other things not attainable without direct experience.
Now, current computers can only really 'represent' things (in fact, that is all they do), so how to take a skill, an emotion, or something else equally abstract and change it into a series of 0s and 1s is, according to Dreyfus, close to impossible. This presents quite a large problem for the field of artificial intelligence as a whole, since it is very hard to contradict Dreyfus once you accept both of his premises: that general intelligence requires commonsense, and that commonsense requires know-how.
Most defenders of general intelligence AI (GIAI) agree with the first premise but disagree with the second, stating that encoding commonsense is not impossible, merely very difficult. A good example of this is Doug Lenat's CYC Project (see the Does the Top-Down Approach or the Bottom-Up Approach Best Model the Human Brain? essay for more details). Such an approach, though, essentially attempts to take that knowledge and convert it into computational form through manual human encoding, which is exactly the move Dreyfus doubts can ever capture know-how. On this point I have to strongly agree with much of what Dreyfus says.
Dreyfus' arguments against artificial intelligence specifically target GOFAI (Good Old-Fashioned AI) approaches. With the advent of parallelism and bottom-up approaches to AI problems, Dreyfus' arguments may no longer apply. With Cog, for example, the entire aim of the project was to build the robot from the bottom up, allowing it to learn things, with nothing hard-coded into the robot itself. In this way, some incredibly interesting behaviour has arisen from the robot; this kind of emergent behaviour is something that can only be described as artificial intelligence. I have to give the Cog team credit for a paragraph that I feel sums up the problem with modern GOFAI attempts:
"Three conceptual errors commonly made by classical AI researchers are presuming the presence of monolithic internal models, monolithic control, and the existence of general purpose processing. These and other errors primarily derive from naive models based on subjective observation and introspection, and biases from common computational metaphors (mathematical logic, Von Neumann architectures, etc.). A modern understanding of cognitive science and neuroscience refutes these assumptions.A deep explaination into all of these is out of the scope of this essay (see Robotics Essays for more information), but basically what this is saying is that often people have tried to model the brain on a computer by modelling the brain on the computer! That is, they use the computer as the analogy when trying to figure out what how the brain works - therefore, our ideas of the brain are often distorted, oversimplified, or merely too computationally based.
Let us now assume that the bottom-up approach yields a robot (or computer program) that exhibits intelligent behaviour convincingly enough to pass the Turing Test. Is it intelligent? John Searle would argue that it isn't; it is merely mimicking intelligent behaviour. He uses an interesting analogy called "The Chinese Room." Imagine, he says, that he is in a room with two windows, labelled I and O (input and output, I assume). He gets handed a piece of paper with a complicated series of strokes on it. These strokes happen to be a question written in Chinese. In the room he has a huge book of all possible questions in Chinese, along with very detailed instructions for looking up the question and its answer given the strokes. When he finds the answer he writes it on another piece of paper and hands it out the O-window. He argues that he could take any Chinese question, output the correct answer, and still not understand Chinese. Critics of this argument reply that the computer corresponds to the room as a whole, not just to Searle, and that therefore the system would understand Chinese. Searle fired back by saying he could memorize the entire book, do the whole job in his head, and STILL not understand Chinese; the arguments go on and on.

I have my own reservations about this analogy. I agree that if a program were a huge hash table of possible answers indexed by question, and answering were a simple matter of retrieval, then the program wouldn't be intelligent. Yet most modern programs do not act as retrieval systems. Admittedly, they perform set, pre-defined computations upon the data structures in memory, but that is very different from a hash table!
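To make that retrieval-versus-computation distinction concrete, here is a minimal Python sketch. The question/answer table, the example questions, and the function names are all my own invention (nothing here comes from Searle): a pure lookup responder in the spirit of the room, next to a trivial program that actually computes its answer from the structure of the input.

    # A caricature of Searle's room: answers come from pure retrieval.
    # The question/answer table is an invented toy example.
    CHINESE_ROOM_BOOK = {
        "What colour is the sky?": "Blue.",
        "What is two plus two?": "Four.",
    }

    def room_reply(question: str) -> str:
        """Look the question up; nothing about its meaning is ever analysed."""
        return CHINESE_ROOM_BOOK.get(question, "I do not understand.")

    # Contrast: a (still trivial) program that computes its answer from the
    # structure of the input instead of retrieving a stored string.
    def computed_reply(question: str) -> str:
        """Parse questions of the form 'What is X plus Y?' and do the sum."""
        words = question.rstrip("?").lower().split()
        if words[:2] == ["what", "is"] and "plus" in words:
            i = words.index("plus")
            try:
                return str(int(words[i - 1]) + int(words[i + 1]))
            except ValueError:
                pass
        return "I do not understand."

    print(room_reply("What is two plus two?"))    # "Four." (retrieved whole)
    print(computed_reply("What is 17 plus 25?"))  # "42"    (computed)

The point of the contrast is only that the second function produces its answer by transforming the input rather than fetching it whole; whether that difference ever amounts to understanding is exactly what Searle and his critics dispute.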
I must admit that over the past couple of months I have become seriously disillusioned with GOFAI and natural language processing. I believe that early researchers were incredibly optimistic about what they could achieve, and the positive momentum they generated only showed signs of slowing a few years ago. Despite this, I still think that Searle is going about the entire problem the wrong way: dismissing AI as mere mimicry is a dangerous assumption in and of itself.
A wonderful topic to throw about is that of mimicking intelligence. Can you mimic intelligence? Deep Blue beat Kasparov at the game that has so often been taken to signify man's intelligence. Did Deep Blue exhibit intelligence? It played (and won) a game that requires a significant amount of 'thought' and 'planning.' Deep Blue analyzes the board through immense computational power, so what does and does not constitute intelligence? If a human made a list of all plausible moves given the board position, then removed options using a set of rules until he arrived at what he thought was the best move, would that be classified as intelligence? Definitely: this type of approach is taken in many fields (granted, not usually chess) and the question of whether intelligence is involved is never raised. Now take a computer, have it do the same thing, and ask yourself the same question. Why do people find it so hard to see the same thing?
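As a sketch of that generate-then-prune procedure in Python (the game, the moves, and the rules below are all invented for illustration; Deep Blue's actual search was alpha-beta over millions of positions, but the shape of the computation is the same):

    from typing import Callable, List

    Move = str
    Rule = Callable[[Move], bool]   # True means "strike this move out"

    def choose_move(candidates: List[Move], rules: List[Rule]) -> Move:
        """Apply each pruning rule in turn and return the best survivor."""
        survivors = list(candidates)
        for rule in rules:
            remaining = [m for m in survivors if not rule(m)]
            if remaining:            # never prune the list down to nothing
                survivors = remaining
        return survivors[0]          # candidates are kept in preference order

    moves = ["advance pawn", "sacrifice queen", "castle", "retreat knight"]
    rules = [
        lambda m: "sacrifice" in m,  # rule 1: avoid losing material
        lambda m: "retreat" in m,    # rule 2: avoid passive moves
    ]
    print(choose_move(moves, rules))  # -> "advance pawn"

Whether a human or a machine executes this loop, the steps are identical; that is the whole force of the question above.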
Some people will say that such a machine is merely mimicking intelligence. What is the formal definition of mimicking intelligence? If forced into answering this question, my second reply would be "the ability to display all traits of intelligence"; my first reply would be that no such thing exists. To me, 'mimicking intelligence' is an oxymoron. If all traits of intelligence are exhibited (for the moment, let us assume the traits of intelligence reduce to the ability to have a meaningful conversation with a human), then intelligence exists! I cannot see how the human race can lay down how intelligence should be defined when we do not know how our own brains work.
The human race is easily threatened, because we have been on top for so long that we have never had to deal with something potentially superior to ourselves. Indeed, Kasparov (and many others) saw the match between himself and Deep Blue as a chance for the human race to "help defend our dignity." The inherent narcissistic tendencies of the human race have been deflated step by step: first by Copernicus, who told us we weren't the centre of the universe; then by Darwin, who said we'd evolved from protozoans; and now, perhaps, by IBM's Deep Blue team telling us it's not just us who can play chess!
Artificial Intelligence is fraught with philosophical questions, since so much about the brain and its workings remains unanswered. In this essay I merely moved from topic to topic as I wrote; these topics are by NO means the only ones raised by Artificial Intelligence. As AI advances (and I believe it will) toward completely humanoid robots, many more philosophical, moral, ethical and indeed even theological questions will arise. If computers' apparent lack of intelligence isn't being battered, their apparent lack of a consciousness (or indeed, a soul) is. Now there's some food for thought...
"...You can't do without philosophy, since everything has its hidden meaning which we must know..."