Introduction and background
Anyone who has worked with computers eventually realizes that there are
problems the human mind can solve easily, but that are virtually
impossible for even the most powerful supercomputer. Artificial intelligence was originally developed during WWII as a code-breaking technique: code-breakers would create a large system of rules to be followed by an expert system, simulating intelligence in code-breaking. In the past few years computer scientists, electrical engineers, and many others have realized that the same structures that make the human brain so versatile can also be applied to solving problems in computing. In fact, a
great many artificial intelligence techniques derive their names from
biological origins. Genetic algorithms, for example, are based on the
biological theory of evolution. Neural networks came about from studying how
neurons in the brain interact to process data and reach decisions.
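As a minimal illustration of that biological inspiration, a single artificial neuron can be sketched in a few lines. The weights and threshold below are arbitrary values invented for the example, not taken from any particular network:

```python
def neuron(inputs, weights, threshold):
    """A simple artificial neuron: fire (1) if the weighted sum of
    inputs reaches the threshold, otherwise stay quiet (0)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With these arbitrary weights the neuron behaves like a logical AND:
print(neuron([1, 1], [0.6, 0.6], 1.0))  # both inputs active -> 1
print(neuron([1, 0], [0.6, 0.6], 1.0))  # one input active -> 0
```

Real networks chain many such units together and adjust the weights automatically, but the core decision rule is this simple.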
The human ability to deal with "fuzzy" data has also become a major topic of research,
with vast implications for the computing field: for example, a human can
understand how much "a few" is, rather than requiring a crisp number such as three.
One of the first people to develop the theory of intelligent computers was
the mathematician Alan Turing. Turing was born on June 23, 1912 in
London. As a boy he enrolled at the Sherborne School in Dorset, where he
showed phenomenal ability in mathematics. Around this time
Turing became an atheist and began to wonder how the brain could work if there was
no soul behind it. He came to believe that a machine could function in the
same manner as a human brain and produce intelligent results. The machine
Turing conceived was capable of reading a tape of infinite length. As
the machine read the tape from left to right it would execute the commands on
the tape, much as computers today read and execute binary code.
Turing also proposed that by altering the tape as part of its output, the
machine could "learn" from what it had done. From this Turing developed
his famous "imitation game." This test was first published in the journal
Mind in 1950. Turing's test places a person in
contact with both a computer and a human. If the person cannot determine which is
the computer and which is the human by asking a series of questions, then the
computer is thinking in the same manner as a human.
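The tape machine described above can be sketched as a short simulation. The transition table here is an invented example (a machine that flips every bit on its tape) rather than one of Turing's own, but it shows the key ideas: reading the tape symbol by symbol, executing the rule for each one, and rewriting the tape as part of the output:

```python
def run_turing_machine(tape, rules, state="start"):
    """Run a tiny Turing machine: look up the rule for the current
    (state, symbol) pair, write a new symbol, move the head, and
    repeat until the machine halts or runs off the tape."""
    pos = 0
    while state != "halt" and 0 <= pos < len(tape):
        symbol = tape[pos]
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol  # alter the tape as part of the output
        pos += 1 if move == "R" else -1
    return tape

# Invented example machine: flip every bit, moving right to the end.
rules = {
    ("start", 0): (1, "R", "start"),
    ("start", 1): (0, "R", "start"),
}
print(run_turing_machine([1, 0, 1, 1], rules))  # -> [0, 1, 0, 0]
```

Because the rules can rewrite the tape, the machine's earlier output becomes later input, which is the sense in which Turing suggested such a machine could "learn" from what it had done.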
Artificial intelligence has not yet caught up with the sentient programs
envisioned by science fiction writers and filmmakers. However, artificial
intelligence is used in today's world for everything from medical applications
to finance to computer games to speech recognition. Below is a list of applications
that are in use or under development.
- OCR, or optical character recognition, is one of the most extensively used
applications of neural network technology. It allows scanned
documents to be converted from bitmaps into text files.
- Neural networks have also been used in conjunction with fuzzy logic to
enable handwriting recognition.
- Extensive work has been done using a variety of AI techniques to provide
accurate financial forecasting. So far, however, these techniques have
proven only marginally effective.
- Fuzzy logic has been used to control mechanical devices, such as air
conditioners, more precisely. Fuzzy logic is effective because it allows the
device to cool the air "a lot," "a little," or "not at all,"
rather than making a merely binary choice between "cool" and "off."
- Many of the speech recognition products on today's market use some form
of AI to transcribe spoken words into written text.
- Natural language processing, which allows a user to ask the computer
questions in ordinary English, uses some form of AI to process the request.
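The air-conditioner item above can be sketched as a tiny fuzzy controller. The membership function and temperature ranges here are invented for illustration; a real controller would blend several such functions:

```python
def mu_warm(temp):
    """Degree of membership in the fuzzy set 'warm': 0 below 20 degrees,
    1 above 30 degrees, and a linear ramp in between."""
    return min(1.0, max(0.0, (temp - 20) / 10))

def cooling_power(temp):
    """Map the fuzzy degree of 'warm' to a cooling level, so the unit
    can cool 'a little' or 'a lot' instead of just 'cool' or 'off'."""
    degree = mu_warm(temp)
    if degree == 0:
        return "none"
    return "a little" if degree < 0.5 else "a lot"

print(cooling_power(18))  # -> none
print(cooling_power(23))  # -> a little
print(cooling_power(29))  # -> a lot
```

The key difference from a binary thermostat is the ramp in `mu_warm`: the controller's response scales smoothly with temperature instead of snapping between two states.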