Workings of a Computer
Many people think that computers are too complicated for them to
understand. If they took a closer look, they would realise that all
digital computers work on the same basic concept, and that this basic
concept is actually very simple.
Computers are basically machines built to store and manipulate
information. Computers always consist of many components, either
linked to a main system board or built into the system board itself.
In these components, information is stored, manipulated, and put
through the required procedures to produce the required results.
The question here is: how is the information stored and manipulated?
You see, in digital computers, data is represented using the binary
system. Data is stored in main memory using electrical pulses: a
strong electrical pulse through a wire/circuit means a "1" state,
and a weaker or no electrical pulse means a "0" state. On disk drives
and magnetic tape, data is stored on metal-coated surfaces, with a
magnetised spot meaning a "1" state and a non-magnetised spot meaning
a "0" state.
Just having 2 states of representation cannot store much information.
Hence, many of these 2-state representations, which we call "bits",
are put together to create more combinations.
If 2 bits were put together, we would have the following 4 possible
states:
1st state: 0 0
2nd state: 0 1
3rd state: 1 0
4th state: 1 1
If there were instead 3 bits put together, then we would have the
following 8 states:
1st state: 0 0 0
2nd state: 0 0 1
3rd state: 0 1 0
4th state: 0 1 1
5th state: 1 0 0
6th state: 1 0 1
7th state: 1 1 0
8th state: 1 1 1
Judging from this pattern, the number of states increases as more
bits are put together. In fact, the formula for the number of states
is 2 to the power of the number of bits put together. Having 8
wires/magnetisable spots together will therefore give us 2^8 = 256
states, plenty to let each state represent a letter of the alphabet,
a punctuation mark or a special character. Such a group of 8 bits
is called a byte.
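If you like, you can see this doubling for yourself. Here is a small
sketch in C (my own illustration, not part of the machinery described
above) which works out 2 to the power of the number of bits, and shows
how one byte can stand for a letter:

#include <stdio.h>

int main(void)
{
    /* Each extra bit doubles the number of possible states: 2^n. */
    for (int bits = 1; bits <= 8; bits++)
        printf("%d bit(s) -> %4d states\n", bits, 1 << bits);

    /* One byte (8 bits) gives 256 patterns -- enough for every letter,
       digit and punctuation mark.  The letter 'A', for example, is the
       pattern 01000001, which is the number 65. */
    unsigned char letter = 'A';
    printf("'%c' is stored as %d\n", letter, letter);
    return 0;
}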
Representing symbols and numbers
is good, but to be really useful, a machine needs to be able to
manipulate these things. It turns out that is pretty simple too.
All the complex things a computer can compute are simply combinations
of two basic operations. The first is called NOT. NOT takes one
state as input and outputs the opposite state. Thus 1 becomes 0,
and 0 becomes 1; true becomes false, and false becomes true. Simple!
The next is a little more complex: it is called AND, and it requires
two bits of input and gives a single output. Given a 0 and a 0,
the answer is (as you might guess) 0. Given 1 and 0, the answer
is still 0. The other combination 0 and 1 yields 0 again. But 1
and 1 give 1. We can summarize this in a neat notation known as
a Truth Table:
NOT             AND
 1 | 0          0 0 | 0
 0 | 1          1 0 | 0
                0 1 | 0
                1 1 | 1
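If you want to try these two operations out, here is a little C sketch
(my own, purely illustrative) that builds NOT and AND out of ordinary
integer 0s and 1s and prints the same truth tables:

#include <stdio.h>

/* NOT: one bit in, the opposite bit out. */
int not_gate(int a) { return a ? 0 : 1; }

/* AND: two bits in, 1 out only when both inputs are 1. */
int and_gate(int a, int b) { return (a && b) ? 1 : 0; }

int main(void)
{
    printf("NOT\n");
    for (int a = 0; a <= 1; a++)
        printf(" %d | %d\n", a, not_gate(a));

    printf("AND\n");
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf(" %d %d | %d\n", a, b, and_gate(a, b));
    return 0;
}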
What happens if
you hook a NOT circuit's input to the output of an AND circuit? Why,
you get a NAND circuit, and here's its truth table:
NAND  0 0 | 1
      1 0 | 1
      0 1 | 1
      1 1 | 0
This is just the opposite of the AND function. There
are more operations than this, such as OR, NOR and XOR, but you
can easily make those out of the basic AND and NOT circuits as we
did with NAND. Electrical engineers call these circuits "gates,"
hinting at their electronic-decision-making purpose.
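To give a rough feel for how the other gates fall out of AND and NOT,
here is a sketch in C (illustrative only -- real gates are circuits,
not function calls, and the function names are mine):

int not_gate(int a)         { return a ? 0 : 1; }
int and_gate(int a, int b)  { return (a && b) ? 1 : 0; }

/* NAND: an AND followed by a NOT, as described above. */
int nand_gate(int a, int b) { return not_gate(and_gate(a, b)); }

/* OR: NOT both inputs, AND them, then NOT the result --
   "not (both off)" is the same as "at least one on". */
int or_gate(int a, int b)
{
    return not_gate(and_gate(not_gate(a), not_gate(b)));
}

/* XOR: on when exactly one input is on (OR, but not both). */
int xor_gate(int a, int b)
{
    return and_gate(or_gate(a, b), nand_gate(a, b));
}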
If you cross-connect the inputs and outputs of a pair of NAND circuits,
you get a new circuit which "remembers" what its inputs were set
to. This is called a "flip-flop" and it is a one bit memory. Cascade
these flip-flops side by side and you get a memory which can store
larger numbers (or a single symbol from a larger set of symbols).
Such a circuit is called a "register." Put these gates together
in a slightly different combination, and you get a circuit which
adds binary numbers, a single bit at a time; this is called (surprise)
an "adder." Cascade multiple adders side by side as we did with
flip-flops, and you get adders which can add bigger numbers. Well,
subtraction is just a special kind of adding, and multiplication
and division are simply repeated addition and subtraction respectively.
By now, I think you get an appreciation of the possibilities.
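Here, as a sketch of the adder idea (my own illustration, with C's ^,
& and | operators standing in for the XOR, AND and OR gates), is a
one-bit adder cascaded eight times to add whole bytes:

/* A one-bit "full adder": two bits plus a carry-in go in, a sum bit
   and a carry-out come out. */
void full_adder(int a, int b, int carry_in, int *sum, int *carry_out)
{
    *sum = a ^ b ^ carry_in;
    *carry_out = (a & b) | ((a ^ b) & carry_in);
}

/* Cascade eight of them, least significant bit first, and you can add
   two whole bytes -- just as cascading flip-flops gave us a register. */
unsigned char add_bytes(unsigned char x, unsigned char y)
{
    unsigned char result = 0;
    int carry = 0;
    for (int i = 0; i < 8; i++) {
        int sum;
        full_adder((x >> i) & 1, (y >> i) & 1, carry, &sum, &carry);
        result |= (unsigned char)(sum << i);
    }
    return result;
}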
More complex computers are made up of lots of gates, adders, registers
and wires to connect them together in different ways. Were it not
for one special thing I've not yet mentioned, combinations of these
parts often "flail" around, switching "aimlessly," almost always
getting stuck in some inconvenient configuration (a.k.a. "state").
(By the way this is what computer people mean when they say a computer
has "gone crazy and hung itself up.") Taming these gates can be
very tricky, and while some simple computers are built this way,
beyond a certain point, it's too hard to build a reliable system
this way. (This style of digital circuitry is called Asynchronous
or Combinatorial Logic, and has its uses). Better for our purposes,
is the idea of getting all the different parts to march together
in step like a battalion of soldiers. That is, when the drumbeat
sounds, the different components determine their new outputs from
the inputs present at the beat -- they change state in sync with
each other. (This is called Synchronous Logic.) The drumbeat in
a computer is called its "clock." OK, let's do something with this:
hook an adder and register together end to end (hooking the outputs
of the adder to the inputs of the register and vice-versa), then
feed a clock signal into it. This gives you a "counter," which is
a circuit that counts in binary: 0, 1, 10, 11 and so on, up to the
largest number the register/adder combination can hold.
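As a toy version of that counter (a sketch of the idea, not real
hardware), here it is in C: the variable plays the register, the
addition plays the adder, and each trip through the loop plays one
tick of the clock:

#include <stdio.h>

int main(void)
{
    /* A 4-bit register: it can only hold the 16 states 0 through 15. */
    unsigned int register_value = 0;

    /* Each pass through the loop is one tick of the clock.  On every
       tick the adder adds 1 to the register's output and the result
       is stored back into the register -- the feedback loop above. */
    for (int tick = 0; tick < 20; tick++) {
        printf("tick %2d: register = %2u\n", tick, register_value);
        register_value = (register_value + 1) & 0xF;  /* wraps after 15 */
    }
    return 0;
}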
Let me digress a little. This talk about clocks is where the "megahertz"
stuff you've undoubtedly heard about comes in. A 90 MHz computer
has a clock which "ticks" ninety million times a second. A computer's
clock synchronizes all the major activities of the computer. As
a general rule, making the clock go faster or slower speeds up
or slows down the computer. (Of course, this is not without its
limits - going faster means more power consumption, heat, and electronic
noise, not to mention the fact that a given electronic component,
manufactured to given tolerances, can only switch so fast. Finally,
there is the speed of light, beyond which electricity cannot travel
-- one foot per nanosecond, as Commodore Grace Hopper, inventor of
COBOL, liked to say.)
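A quick back-of-the-envelope calculation (my own numbers, using the
90 MHz figure above) shows why that last limit matters:

#include <stdio.h>

int main(void)
{
    double clock_hz    = 90e6;            /* a 90 MHz clock            */
    double cycle_ns    = 1e9 / clock_hz;  /* roughly 11.1 ns per tick  */
    double feet_per_ns = 1.0;             /* "one foot per nanosecond" */

    printf("One tick lasts about %.1f nanoseconds,\n", cycle_ns);
    printf("so a signal can only travel about %.0f feet per tick.\n",
           cycle_ns * feet_per_ns);
    return 0;
}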
So now our pile of electronics can make logical decisions (gates/NOT/AND),
do math (adders/counters), remember (flip-flops/registers), and
do combinations of these things in sequence (clock). If we connect
certain outputs back into certain inputs, once again adding "feedback"
to our system, then we will see cyclical patterns emerge in our
circuit: we will have built a computer! (In fact, the counter we
"built" in the paragraph above, is probably the simplest digital
computer.) Such a computer would be called "hard-wired" because
the patterns it would follow (and the manipulations that result)
would be determined by the parts and the way they were wired together.
Enter John Von Neumann, the famous mathematician. His great contribution
to computers was the idea of letting values stored in memory (groups
of registers) determine how other values (registers/memory) in the
computer would be manipulated. That is, he created the "stored-program"
computer. With this great idea came the typical computer cycle of
fetching a control value (instruction) from a memory location determined
by the value of a certain register (program counter), carrying-out
the manipulation specified by the value (executing the instruction),
and finally adjusting (incrementing) the program counter to point
to the next instruction in memory. This classic fetch-execute-increment
cycle is the basis of most digital computers -- sometimes known
as "Von Neumann Machines." Since we can manipulate the values of
registers, why not the program counter? When this happens, we say
that the computer "branches" to the location in memory (that is,
our program) indicated by the new value of this register. Now we
are not restricted to sequential sets of operations -- we can take
alternate sequences, and even repeat sequences (or not, depending
on the instructions we execute.)
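Here is a deliberately tiny sketch of that fetch-execute-increment
cycle in C. The four-instruction machine (LOAD, ADD, JUMP, HALT) and
the little program in its memory are invented for illustration; they
are not any real machine's instruction set:

#include <stdio.h>

/* A made-up machine whose memory holds both the program and its data. */
enum { HALT = 0, LOAD = 1, ADD = 2, JUMP = 3 };

int main(void)
{
    int memory[16] = {
        LOAD, 0,     /* location 0: put 0 in the accumulator           */
        ADD,  1,     /* location 2: add 1 to it                        */
        JUMP, 2,     /* location 4: branch back to location 2 (a loop) */
        HALT         /* location 6: never reached in this little run   */
    };
    int program_counter = 0;  /* which memory location to fetch next */
    int accumulator     = 0;  /* a register to do arithmetic in      */

    for (int tick = 0; tick < 10; tick++) {
        int instruction = memory[program_counter];       /* fetch     */
        int operand     = memory[program_counter + 1];
        program_counter += 2;                            /* increment */

        switch (instruction) {                           /* execute   */
        case LOAD: accumulator = operand;       break;
        case ADD:  accumulator += operand;      break;
        case JUMP: program_counter = operand;   break;   /* a branch! */
        case HALT: return 0;
        }
        printf("tick %d: pc=%d acc=%d\n", tick, program_counter, accumulator);
    }
    return 0;
}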
I've left out an important point: How do you get information into and
out of a computer? Well in the simplest case, an on-off switch connected
to the input of a computer's gate can serve as a one-bit input device.
Likewise, a simple light bulb hanging on the output of a gate could
serve as a single bit output device. To get more data in and out,
you simply gang more switches and lights side by side much as we
did in the case of flip-flops to make registers. For example, it
is real simple to make a set of switches or lights electrically
appear to the computer as a register or memory location -- we've
already seen that it's easy to move bits into and out of registers
and memory. (These are called "I/O ports" and "memory-mapped I/O"
respectively. There are trade-offs associated with each style.)
Do you remember those movies featuring computers chock-full of lights
and switches? Well, these actually existed. Even today you can find
microcomputer training kits which feature lights and switches like these.
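As a sketch of the memory-mapped style (the addresses and the wiring
are invented for illustration), a program could treat one particular
memory location as the row of switches and another as the row of lights:

#include <stdio.h>

/* A pretend machine with 256 memory locations.  By a convention made up
   for this example, location 254 is wired to a row of 8 switches and
   location 255 to a row of 8 lights -- that is memory-mapped I/O. */
unsigned char memory[256];

void show_lights(void)
{
    for (int bit = 7; bit >= 0; bit--)
        putchar((memory[255] >> bit) & 1 ? '*' : '.');
    putchar('\n');
}

int main(void)
{
    memory[254] = 0x41;         /* pretend the switches are set to 01000001 */
    memory[255] = memory[254];  /* copy the switch pattern to the lights    */
    show_lights();              /* prints .*.....*                          */
    return 0;
}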
So how do you get from lights and switches to things like keyboards,
printers, and displays? Remember we cascaded inputs and outputs
in the previous paragraph. These more advanced I/O devices usually
"talk" to the computer in 8 bit chunks (bytes, remember?). In each
of these, a certain pattern of bits is wired to represent a certain
action, or vice-versa. For example, when you press the 'A' key on
a keyboard, it typically sends the computer the value 65 (or 01000001
in binary). Likewise, most printers will print a capital 'A' in
response to the reception of a value of 65 at their input. Similarly,
a display would display a capital 'A'. Early I/O devices made this
association more-or-less directly (circuits dedicated to each operation).
Nowadays, most I/O devices make these associations with the assistance
of a small built-in computer! In fact, inputting and outputting
the alphabet is pretty basic for I/O devices, which typically go
beyond this (to reading and drawing pictures, listening or making
sounds, storing/retrieving bytes on a magnetic tape, and on and on).
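You can check that association for yourself; this short C sketch prints
the character 'A', its value of 65, and its bit pattern (65 is the
standard ASCII code for 'A'):

#include <stdio.h>

int main(void)
{
    char key = 'A';

    printf("character: %c\n", key);
    printf("value:     %d\n", key);
    printf("binary:    ");
    for (int bit = 7; bit >= 0; bit--)         /* most significant bit first */
        putchar((key >> bit) & 1 ? '1' : '0');
    putchar('\n');                             /* prints 01000001 */
    return 0;
}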
So how does this all relate to "computer languages"? People don't
program a computer by flipping switches and watching lights, do they?
Well, they used to; it's even done rarely today. But, no one really
wants to program a computer that way because it's so difficult.
So, we use the computer's ability to manipulate data to translate
strings of letters, words, and numbers into the 1's and 0's that
a computer can act directly upon. Specifically we translate abstract
operations and information representations into the very specific
form needed by the computer itself. Such translators are called
"compilers" and "interpreters," and they are very common and useful
programs. In fact, they represent a huge step in transforming a
computer from a high-tech door-stop to a useful tool. The language
of 1's and 0's which a given computer acts upon is called its "machine-language."
The more human oriented languages are known as "high-level" or "problem-oriented"
languages. Some of these languages include "BASIC," "FORTRAN," "COBOL,"
and more recently "Pascal," "Smalltalk," and 'C' (among many others!).
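To give a flavour of what such a translator does, here is a deliberately
tiny interpreter in C. The three-word command language it understands is
invented for illustration; real compilers and interpreters do the same
kind of job on a much grander scale:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* A tiny "program" written in a made-up, human-readable language. */
    const char *program[] = { "ADD 2 3", "MUL 4 5", "ADD 10 32" };

    for (int i = 0; i < 3; i++) {
        char op[4];
        int a, b;

        /* Translate the text of each command into numbers and an action. */
        if (sscanf(program[i], "%3s %d %d", op, &a, &b) != 3)
            continue;
        if (strcmp(op, "ADD") == 0)
            printf("%s = %d\n", program[i], a + b);
        else if (strcmp(op, "MUL") == 0)
            printf("%s = %d\n", program[i], a * b);
    }
    return 0;
}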
All of this leads to a very important concept: that data and programs
are interchangeable. (For example, a compiler considers your COBOL
program to be data, while the same computer can actually execute
the compiler's output -- which makes the output a program!). As
computers get faster, acquire more memory, and better I/O devices,
we can get them to do more sophisticated manipulations (which usually,
but not always, make for easier-to-use computers!).
I hope this gives you a sense for how computers work. What at first
seems hopelessly complicated is really just a series of basic ideas
and components that act together in complex ways. There is more
to it than this, but just understanding these few concepts takes
much of the mystery out of computers, making them plausible, and
putting a working understanding within your grasp.