Since the advent of computers, the ultimate goal of many humans has been to create a computer that thinks like a person: one that exhibits Artificial Intelligence.
But what, exactly, is Artificial Intelligence? When computers were first introduced, many people thought that an intelligent computer was one that could solve mathematical problems as quickly as a person could. But computers can perform thousands of calculations in a fraction of a second; very few mathematical problems can be solved faster by a human than by a computer. So a new proposition arose: perhaps an intelligent computer is one that can beat a person at chess. IBM's Deep Blue proved in 1997 that a computer could master chess better than the world's best human player, and the countless chess programs in use today reassert that fact daily. However, that is all these programs can do: they simply treat the game as a complicated math problem.
If neither of these demonstrates intelligence, then what does?
Intelligence of any sort is the ability to learn and remember. A computer can remember whatever its programmer tells it to remember, but a truly intelligent machine needs the ability to learn: to recognize when it makes a mistake and correct it. It also needs to adapt to changes in its environment; that is, it must notice when things it has learned no longer apply and correct those errors as well. To do all of this, it needs to know its own abilities and limitations so it can learn accordingly.
For a machine such as a robot to be intelligent, it needs to have goals and desires. It can remember what it has done wrong and work to fix its mistakes, but until it knows what it wants or needs to do, it will not be very useful.
It would also need to know a bit about the world around it and be able to make predictions. These could come from common knowledge (for instance, if it is dark outside in the middle of the day, it may need to protect itself from rain) or from observation: if its owner always asks for the same thing at 9:23 AM, the robot should notice the pattern and have the item ready beforehand.
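The 9:23 AM example above amounts to counting repeated observations and only trusting a pattern once it has recurred often enough. A minimal sketch of that idea, in Python (the class and method names here are illustrative, not from any real robotics library):

```python
from collections import Counter

class RoutineLearner:
    """Learns recurring (time, request) patterns from repeated observations."""

    def __init__(self, threshold=3):
        self.counts = Counter()      # (time, request) -> times observed
        self.threshold = threshold   # repeats required before trusting a pattern

    def observe(self, time, request):
        self.counts[(time, request)] += 1

    def predict(self, time):
        # Return the most frequent request at this time, if seen often enough.
        candidates = {req: n for (t, req), n in self.counts.items() if t == time}
        if not candidates:
            return None
        best = max(candidates, key=candidates.get)
        return best if candidates[best] >= self.threshold else None

learner = RoutineLearner()
for _ in range(5):
    learner.observe("9:23", "coffee")
learner.observe("9:23", "newspaper")

print(learner.predict("9:23"))   # coffee: seen 5 times, above threshold
print(learner.predict("14:00"))  # None: no pattern learned for this time
```

The threshold is the simplest possible guard against coincidence: one stray request does not become a habit in the robot's memory.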
Of course, in order to be intelligent, it must do all of these things well. If a serial killer calls the house every day for a week while the intelligent computer is still learning, and the computer knows that serial killers are not good people, it might easily conclude that the telephone is evil. A working knowledge of household appliances would prevent this particular mistake, but when an unexpected situation arises, the machine needs to be able to judge whether its observations are correct and worth remembering.
The standard litmus test for machine intelligence is known as the Turing Test. Alan Turing first proposed it in 1950, and it states that a computer is intelligent if a person conversing with it cannot tell the difference between it and an actual person.
No machine has yet met this standard, though many chatbots have tried.