In 1950, Alan Turing published a paper called "Computing Machinery and Intelligence". In it, he proposed a method for testing whether a computer possesses intelligence by evaluating its ability to carry out human-like conversation.
The Turing Test is carried out by engaging a human judge in conversation with two entities: a human and a machine. Turing argued that if the judge could not tell the machine from the human, it would be reasonable to say that the machine possessed intelligence.
The Turing Test was inspired by a game called the “Imitation Game”, in which a person tries to differentiate between a man and a woman (placed in two separate rooms) by asking them a series of questions and reading their typewritten answers.
Not everyone agrees with Turing, though. Two famous arguments against the Turing Test as a valid measure of intelligence are the Chinese Room argument and the Blockhead argument.
The Chinese Room argument, put forward by John Searle, goes like this:
Assume we have a computer that behaves as if it understands the Chinese language. What the computer actually does, however, is process the Chinese symbols it receives as input from a human being and match them against a pre-existing lookup table from which it generates a reply. If the lookup table were large enough, the replies could be convincing enough to make the human think that the computer understands and speaks Chinese. In reality, however, the computer does not comprehend what the person is saying; it is merely programmed to match inputs with a pre-existing list of outputs. So although the human may be convinced that they are communicating with another Chinese speaker, they will be mistaken.
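The mechanism Searle describes can be sketched as a trivial table lookup: the program matches the incoming symbols against stored entries and emits the canned reply, with no understanding anywhere in the process. The table entries below are invented placeholders, not a real conversational corpus.

```python
# Toy sketch of the Chinese Room mechanism: replies come from a fixed
# lookup table, so the program manipulates symbols without understanding.
# The entries here are illustrative placeholders only.
LOOKUP_TABLE = {
    "你好": "你好！",            # a greeting mapped to a greeting
    "你会说中文吗？": "会。",     # "Do you speak Chinese?" -> "Yes."
}

def room_reply(symbols: str) -> str:
    # Pure symbol matching: return the stored output,
    # or a canned fallback ("Please say that again.").
    return LOOKUP_TABLE.get(symbols, "请再说一遍。")

print(room_reply("你好"))
```

However fluent the output looks, the program's entire "competence" is the `get` call: the point of the argument is that no amount of table entries adds comprehension.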
Searle's argument has its own critics, though. Ray Kurzweil, for instance, believes that with enough development, computers will be able to manipulate logic, or in effect, read “Chinese”. Once a computer is taught basic logic and how to teach itself more advanced concepts, it will be able to “think” things through, essentially becoming conscious.
The Blockhead argument, put forward by philosopher Ned Block in a paper entitled "Psychologism and Behaviorism", claims that a non-intelligent system can be made to pass the Turing Test.
Block argues that there is only a finite set of grammatically and syntactically correct responses to any input from a human judge. Although the number of such responses is huge, it is still theoretically possible to program a computer with every one of them.
Such a machine, Block argues, could converse with a human on any topic, because it already has every possible reply pre-programmed. Hence, the machine would pass the Turing Test despite possessing no actual intelligence.
As of 2006, no machine has passed the Turing Test. While some chatbots, such as ELIZA and ALICE, have received accolades from the AI community, they are still a long way from conversing well enough to trick human beings into thinking they are talking to another human.