A bit is the most basic unit of memory in a computer. The term is a
contraction of the words "binary digit"; John Tukey first used the word
bit in 1946. A bit represents a single binary digit, so it can only hold
the value "1" or "0". Bits are grouped into sets of eight, which make up
a byte. Because the computer works with the true/false values of binary,
a bit is physically stored as the state of a transistor or the charge in
a capacitor in memory, as a magnetized spot on a disk or tape, or as a
high or low voltage traveling through a circuit.
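As a small illustration (a sketch added here, not part of the original
page), the following Python snippet shows how eight individual bits
combine into a single byte value between 0 and 255:

    # Eight bits, most significant first, combine into one byte (0-255).
    bits = [1, 0, 1, 1, 0, 0, 1, 0]

    value = 0
    for b in bits:
        value = (value << 1) | b   # shift left and append the next bit

    print(value)       # 178
    print(bin(value))  # 0b10110010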
The speed of a computer is partly determined by the number of bits it can
process at one time and by the number of bits it takes to represent an
address in memory. The width of an address is measured in bits, which is
where classifications such as 16-bit and 32-bit come from. In general, a
computer is faster the longer the addresses it uses. This is partly because
it can process more data at one time, and partly because it can address
more memory and therefore run larger programs.
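As a rough worked example (an addition to the original text), each extra
address bit doubles the amount of memory that can be reached, so the jump
from 16-bit to 32-bit addressing is dramatic:

    # Each address bit doubles the number of addressable locations: 2**n.
    for width in (16, 32):
        locations = 2 ** width
        print(f"{width}-bit addresses can reach {locations:,} memory locations")

    # 16-bit addresses can reach 65,536 memory locations
    # 32-bit addresses can reach 4,294,967,296 memory locations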
Graphics use "bits" as the qualification for the number bits
it takes to represent one point. A 1-bit image is monochrome. An 8-bit
image supports 256 colors or grayscales. Finally, a 24- or 32-bit graphic
supports "true color". In conclusion, the more bits you have the better
your graphics look.
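The relationship is simply that n bits per pixel allow 2 to the power n
distinct colors. A minimal sketch (added here for illustration, not from
the original page):

    # n bits per pixel allow 2**n distinct colors.
    for depth in (1, 8, 24):
        print(f"{depth}-bit color: {2 ** depth:,} possible values per pixel")

    # 1-bit color: 2 possible values per pixel
    # 8-bit color: 256 possible values per pixel
    # 24-bit color: 16,777,216 possible values per pixel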
Over the years, bit storage has drastically improved, as illustrated in
the picture below. In 1955, bits could be packed at a density of about
2,000 per square inch; by 1995, that density had reached roughly
250,000,000 per square inch. Transmission speeds improved over the same
period as well: in 1955 data moved at 75 bits per second, while in 1995
it moved at 5 billion bits per second.