Development of the Computer
The complexity of computer integrated circuits doubles every year. Retailers
and buyers often find out, to their horror, that even as their new
top-of-the-line PCs are being delivered to their doorstep, research
labs are already churning out newer, faster, more powerful chips that
will make their new computers obsolete. It is impossible to chase
the technology so closely that you are always ahead in terms of
your hardware, unless you are the very person researching that
technology.
Gordon E. Moore, Chairman of
Intel Corporation until he retired in 1989 (Intel has been one of the
biggest microprocessor companies since the 1960s), was
one of the first men to witness the genesis of an invention that
would profoundly change human civilization. He was the supervisor
of the first project ever to build a "computer on a chip":
a microprocessor.
From as early as 1965, Moore noticed that microchips were doubling
in circuit density (and thus in their potential computational power)
every year or so. In fact, the development of the microchip has proved
so predictable and so important that it has been canonized as a
law--"Moore's Law."
Moore first mentioned this now-famous observation almost in passing,
as a comment during a media interview. Once it was hugely publicised,
the "little comment" became known as "Moore's Law." However, as year
after year went by with the law still proving itself true, many
realised that "Moore's Law" was anything but a joke. Microprocessor
circuit densities were doubling every year, and the speeds of new
processors were climbing just as steadily.
Even after 25 years, Moore's Law still holds. The truth has finally
set in: computer technology is improving at blinding speed.
The next question that pops up is: when is it ever going to stop?
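To make the arithmetic behind "doubling every year or so" concrete, here is a small sketch. The 2,300-transistor figure for the 4004 and the alternative doubling periods are commonly cited numbers assumed for illustration, not taken from this article.

    # Rough worked example of what exponential doubling implies.
    # Assumed figure: the Intel 4004 of 1971 had about 2,300 transistors.

    BASE_YEAR = 1971
    BASE_TRANSISTORS = 2_300

    def projected_count(year, doubling_period_years):
        """Transistor count if density doubles every `doubling_period_years`."""
        return BASE_TRANSISTORS * 2 ** ((year - BASE_YEAR) / doubling_period_years)

    # Yearly doubling vs. the often-quoted 18- and 24-month periods.
    for period in (1.0, 1.5, 2.0):
        print(f"doubling every {period} yr -> "
              f"{projected_count(1996, period):,.0f} transistors by 1996")

Even small changes in the assumed doubling period produce wildly different projections, which is one reason the exact period of "Moore's Law" has been debated ever since.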
Pushing the microprocessors
The strategy used to increase
the speed and performance of processor chips is actually to pack
more circuits into smaller, more compact dimensions. That may
surprise some of you. (A common misconception is that the bigger
a computer is, the faster it should be able to work.)
It only makes sense that smaller, shorter circuits in a
microchip would mean better performance. Since electricity needs
time to pass through circuits, shorter, smaller circuits in a processor
mean shorter relay times between its transistors and other
components, hence improving performance. The first processor chip
introduced by Intel, the 108 kHz 4004, used technology that
packed circuits of 10 micron size onto the chip. Today, Intel's new
generation of chips, the Pentium III processors, probably use 0.13 micron
technology and reach speeds of up to 1000 MHz (that's 1 billion cycles
per second!), a far cry from the first 4004 chip.
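The relay-time argument can be made concrete with a back-of-the-envelope sketch. The assumption that signals propagate at about half the speed of light is a rough illustrative figure, not a number from the article.

    # How far can a signal travel during one clock tick? At high clock
    # rates the distance budget shrinks drastically, which is why circuits
    # must keep getting smaller.

    SPEED_OF_LIGHT_M_PER_S = 3.0e8
    SIGNAL_FRACTION = 0.5  # assumed: signals move at ~half light speed

    def distance_per_cycle_mm(clock_hz):
        """Distance in mm a signal can cover in one clock period."""
        return SPEED_OF_LIGHT_M_PER_S * SIGNAL_FRACTION / clock_hz * 1_000

    print(f"4004 at 108 kHz:      {distance_per_cycle_mm(108e3):,.0f} mm per cycle")
    print(f"Pentium III at 1 GHz: {distance_per_cycle_mm(1e9):,.0f} mm per cycle")

At 1 GHz the whole budget for a signal's trips through gates and wires is on the order of 150 mm per cycle, so every micron of circuit length matters.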
The past and today
When the microprocessor
race first started, experts in the microprocessor industry thought
that the strategy of shrinking circuits could not continue for long.
They expected that circuits could go down to 1 micron in size before
they couldn't be shrunk anymore. The technology at the time simply
didn't let them even imagine the creation of such compact circuitry.
Today, the research labs at Intel are working with circuit sizes
of less than a quarter of a micron (the 0.13 micron previously mentioned),
doing what was once thought impossible. Each leap in the processing
power of these chips requires new technology that shrinks the size
of circuit lines so that ever more devices can be packed onto
a sliver of silicon.
In the 25 years since the first true microprocessor, Intel and
its competitors have been able to pull off the stream of technological
breakthroughs needed to sustain the computer revolution. But how
far can semiconductor technology go? As microcircuit transistors
shrink from microscopic to nanoscopic dimensions, is Moore's Law
about to run out of steam?
In a 1997 interview with Scientific
American, Moore himself said that the strategy of squeezing
more into less may not hold forever; sooner or later, chipmakers will
hit a wall. Already, today's chips pack tens of millions of circuits
into circuit lines little more than a tenth of a micron across (about
130 nanometers). It is not known how much more performance such a
strategy can squeeze out of newer chips.
Future of the microprocessor
When the advance
of microprocessor technology finally hits that wall and circuits
cannot be shrunk any further, more computers are expected to use
dual processing, triple processing or even more. This means that
instead of using a single chip to perform operations, the computer
shares the job between two or more processors.
Already, supercomputers at organisations like Intel, NASA and IBM use
fleets of processors and are hence able to process jobs at amazing
speeds impossible for single-processor computers. Workstations,
CAD and animation computers, and other video-editing machines use
dual-processor technology.
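As a minimal sketch of this job-sharing idea, the snippet below splits one computation across several worker processes using Python's standard multiprocessing module; the workload and the worker count are made up for illustration.

    # Split a job between several processors instead of running it on one.

    from multiprocessing import Pool

    def partial_sum(bounds):
        """Sum the integers in [start, stop) -- a stand-in for real work."""
        start, stop = bounds
        return sum(range(start, stop))

    if __name__ == "__main__":
        n = 10_000_000
        workers = 4                            # dual, triple... or more
        step = n // workers
        chunks = [(i * step, (i + 1) * step) for i in range(workers)]

        with Pool(processes=workers) as pool:  # one process per chunk
            total = sum(pool.map(partial_sum, chunks))

        print(total == sum(range(n)))          # same answer, work shared

The operating system is free to schedule each worker on a different processor, which is exactly the division of labour described above.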
The only problem here lies with the operating system.
As the number of processors increases, the operating system, which
takes care of all the tasks inside a computer, has to become more
complex to support them. Furthermore, the task of splitting
operations between processors is complicated, and it becomes a bigger
problem as more and more processors are incorporated.
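The diminishing returns of splitting work this way are usually quantified with Amdahl's law, which the article does not name but which captures the problem: if only part of a job can be split, the unsplittable remainder eventually dominates.

    # Amdahl's law: speedup on n processors when a fraction p of the job
    # is parallelisable and the rest must run serially.

    def amdahl_speedup(parallel_fraction, processors):
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / processors)

    # Even with 90% of the work splittable, extra processors help less
    # and less: the serial 10% becomes the bottleneck.
    for n in (2, 4, 8, 64):
        print(n, round(amdahl_speedup(0.9, n), 2))

This is one reason the operating system and its scheduler, not just the chips themselves, limit how far multiprocessing can be pushed.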
To let the processors run at top speed, memory areas like the RAM
(random access memory) and cache, and also the bus (the connection
that links the components together), will have to increase in speed
and size as well.
However, it will still be some time
before the current technology hits a wall and we have to resort
to such tactics, so end-users and buyers like us have nothing
to worry about for the time being. No matter what, computers will
only get better and faster, even if Moore's Law no longer holds.