About Neural Networks
"Man is not a machine... Although man most
certainly processes information, he does not necessarily process it
in the way computers do. Computers and men are not species of the
same genus... However much intelligence computers may attain, now
or in the future, theirs must always be an intelligence alien to genuine
human problems and concerns."
- Joseph Weizenbaum
What are they?
Think about the computer you are sitting at. It's fast,
it's clever... it will solve difficult equations in the blink of an eye.
But there are some problems it cannot solve: equations that would take
years to compute, and problems that cannot easily be described in terms
of a step-by-step procedure. Such problems include understanding speech, interpreting
images such as handwriting, and weather forecasting. And while computers
cannot interpret handwriting or speech, the human brain can do it with
ease. How does the human brain work? Well, it uses neurons -- chemical-based
switches, which turn on and off about 1,000 times a second. Why should
these be better than computer switches, which turn on and off about a
billion times a second? The answer is that while a conventional computer
funnels everything through a single processor, the brain has billions of
neurons, all processing information at the same time.
Artificial Neural Networks (ANNs) were designed to overcome these
limitations of traditional computers, offering a practical way to tackle
previously unsolvable problems.
How do they do this?
ANNs mimic the actions of animal brains. Instead of processing
information sequentially, like a computer, they process information in
parallel. So instead of analysing an image pixel by pixel, an ANN
would analyse each pixel at the same time, resulting in a sharp
decrease in the length of time it takes to analyse something like a
handwriting sample. Basically, the network is constructed like a very
small-scale brain, with tens or hundreds of neurons where a brain has
billions. Instead of having physical neurons acting as switches, a
Neural Network consists of a series of interconnected nodes in the
software which act as neurons. Depending on differences in the nodes
and the connections between them, the network processes information in
different ways. That said, this is a rather simplistic
description of how they work: if you feel up to the calculus, we suggest
you try the Neural
Networks FAQ or this slightly more technical summary.
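The idea of software nodes acting as neurons can be sketched without any calculus. The function below is a hypothetical, minimal illustration (all names, weights and the simple threshold rule are ours, not from any particular library): each node sums its weighted inputs and "switches on" when the total crosses a threshold.

```python
# A minimal sketch of an artificial neuron: it sums its weighted
# inputs and fires when the total crosses a threshold. The names,
# weights and threshold here are illustrative only.

def neuron(inputs, weights, threshold):
    """Return 1 (fire) if the weighted sum of inputs exceeds the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# A tiny "network": two input nodes feeding one output node.
# With these weights the output node behaves like a logical AND gate.
print(neuron([1, 1], [0.6, 0.6], 1.0))  # both inputs on -> 1
print(neuron([1, 0], [0.6, 0.6], 1.0))  # one input on   -> 0
```

A real network connects many such nodes in layers, and "learning" means adjusting the weights, but the basic unit is no more complicated than this.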
Types of ANN
There is a proliferation of different types of ANN, each
with its own particular architecture designed to solve a certain
problem. Some of the more common types are:
Networks at your Fingertips is an excellent resource with examples
of many of the different types of networks, while the Neural
Networks FAQ does a good job of listing as many types as is humanly
possible. You can also try out the Dynamic
Associative Neural Memory Simulator.
- Backpropagation Networks, or BPN networks: currently the most widely
used type of network, and the most general-purpose. They comprise
one input layer, one or more middle layers, and one output layer.
- Adaptive Resonance Theory, or ART networks: developed to overcome
certain limitations in BPN networks: they are able to store new
information when presented with unforeseen patterns. The group consists
of ART1 and ART2 networks.
- Kohonen networks: a blanket term for a group of three related
types of network, invented by Teuvo Kohonen: the Self-Organising Map
(SOM), Vector Quantization (VQ) and Learning Vector Quantization (LVQ).
- Hopfield networks: a rather unique type of network, which is
implemented as a single layer, with each node giving feedback to each of
the other nodes.
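To make the last entry concrete, here is a rough sketch of a Hopfield network: a single layer in which every node feeds back to every other node. The weight-setting rule (a simple Hebbian rule storing one pattern) and all names are our own illustrative choices, not a definitive implementation.

```python
# A rough sketch of a Hopfield network: one layer of nodes, each
# connected to every other node. Weights are set with a simple
# Hebbian rule from a single stored +1/-1 pattern; this is an
# illustration, not a production implementation.

def train(pattern):
    """Build a symmetric weight matrix that stores one +1/-1 pattern."""
    n = len(pattern)
    return [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
            for i in range(n)]

def recall(weights, state, steps=5):
    """Repeatedly update each node from the feedback of all the others."""
    for _ in range(steps):
        for i in range(len(state)):
            total = sum(weights[i][j] * state[j] for j in range(len(state)))
            state[i] = 1 if total >= 0 else -1
    return state

stored = [1, -1, 1, -1]
w = train(stored)
noisy = [1, -1, 1, 1]      # last element flipped
print(recall(w, noisy))    # settles back to [1, -1, 1, -1]
```

Fed a corrupted version of the stored pattern, the feedback between nodes pulls the network back to the original, which is why Hopfield networks are often described as associative memories.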
Who uses ANNs?
Neural Networks are interesting for quite a lot of very different people:
- Computer scientists want to find out about the properties of non-symbolic
information processing with neural nets, and about learning systems in
general.
- Statisticians use neural nets as flexible, nonlinear regression and
classification models.
- Engineers of many kinds exploit the capabilities of neural networks in many
areas, such as signal processing and automatic control.
- Cognitive scientists view neural networks as a possible apparatus to
describe models of thinking and consciousness (high-level brain function).
- Neuro-physiologists use neural networks to describe and explore medium-level
brain function (e.g. memory, the sensory system, motor control).
- Physicists use neural networks to model phenomena in statistical mechanics
and for many other tasks.
- Biologists use Neural Networks to interpret nucleotide sequences.
- Philosophers and some other people may also be interested in Neural Networks
for various reasons, such as the ethical and moral questions ANNs raise.
Artificial Neural Networks first entered the limelight in 1943, a
decade before the widespread advent of computers, when Warren McCulloch
and Walter Pitts published a paper on how neurons might work, and created
a simple neural network with an electrical circuit. With the development
of computers in the 1950s, the field received an increasing amount of
attention until the late 1960s. Interest then dwindled rapidly, for several
reasons: not only had the potential of neural networks been overhyped,
but limitations were being discovered in certain existing types of network.
People were also beginning to weigh the potential impact of thinking machines
on society, and it was not all positive.
This slump lasted until the early 1980s, when interest was renewed by
several well-publicised papers on Neural Networks, since which there
has been considerable growth in the field.
Will Neural Networks ever be able to rival the human brain? It seems likely.
But, like all breakthroughs, this one presents an interesting moral question,
much like those posed by the genomics world: should we be allowed to
play God? Only time will resolve that debate, but whether you
agree with it or not, technology marches on.