The earliest known device for recording computations was the abacus, which dates back to ancient times and is credited to the Chinese. Beads strung on wires attached to a frame were moved to perform addition and subtraction, and answers were read from the final positions of the beads. It is considered the first manual tool for calculating answers to problems, one that both provided information and, in a primitive way, stored the results.
During the Middle Ages, the mechanical clock became the first closed system for calculating information. The clock's parts computed the time of day and displayed it through the position of two hands on its face. The inventor pre-programmed the clock's instructions: the pull of the weights, the swing of the pendulum, and the movement of the gears together established the position of the hands on the clock face.
John Napier (a Scotsman, early 1600s) discovered logarithms. He devised a system in which the logarithms were inscribed on a set of ivory rods called "Napier's Bones." By sliding the rods up and down he created a very primitive slide rule. Robert Bissaker improved on the system by placing the numbers on sliding pieces of wood rather than ivory.
Blaise Pascal
(1642) Pascal developed the first real calculator. Addition and subtraction were carried out by a series of very light rotating wheels. His system is still used today in car odometers, which track a car's mileage.
Gottfried von Leibniz
(German mathematician) In 1690 Leibniz developed a machine that could add, subtract, multiply, divide, and calculate square roots. The instructions were programmed into the machine through the arrangement of its gears. The drawback of this machine was that the instructions could not be changed without rebuilding the whole machine.
(early 1800s) Jacquard developed a loom controlled by punched cards. The cards were made of cardboard and were punched with pattern instructions; the machine read them as they passed over a series of rods, each card controlling one row of the design. The loom was the early ancestor of the IBM punched card.
(1812) Babbage was a genius who saw few of his inventions actually built. He designed and built a model of what was called the difference engine, a machine intended to perform calculations without human intervention; its ultimate goal was to calculate logarithm tables and print the results. Babbage was so far ahead of his time that the technology to manufacture the parts for his machine did not exist, so he was only able to build a small model. In 1833, Babbage designed the analytical engine. This machine had many of the same parts that can be found in modern computers: an arithmetic unit that performed calculations, and a "store" that held intermediate and final results along with the instructions for each stage of calculation. It was to receive its instructions from punched cards and work through mechanical means, and it would have been able to perform any calculation. Babbage died before the machine could be built; his son constructed a small model of the work that still exists today. Babbage became known as the father of the modern computer.
Dr. Herman Hollerith
(late 1800s, statistician) Hollerith used the punched-card method to process data gathered in the census. The previous census had taken seven years to complete because of the large amount of data collected that needed to be processed. By developing the Hollerith code and a series of machines that could store census data on cards, he completed the accounting of the census in two and a half years, even with an additional two million pieces of data added. His code allowed the data to be sorted according to the needs of the United States Government. He was known for developing the first computer card and for accomplishing the largest data-processing endeavor undertaken up to that time. Hollerith set up the Tabulating Machine Company, which manufactured and marketed punched cards and equipment to the railroads, who used the equipment to tabulate freight schedules. In 1911, the Tabulating Machine Company merged with other companies to form the Computing-Tabulating-Recording Company, which was renamed International Business Machines (IBM) in 1924.
(late 1890s) William S. Burroughs designed the mechanical adding machine, which was key-driven and operated by way of a crank. His machine could record, calculate, and summarize. The Burroughs Adding Machine Company was to become one of the giants of the computer industry; Burroughs later merged with Sperry to form UNISYS, which builds computers today.
The Years from 1900-1940
During the next forty years, more adding, calculating, and tabulating machines were developed. Eventually the machines evolved to the point where they could multiply, interpret alphabetic data, and perform recordkeeping and other accounting functions. They were called accounting machines.
(1944) The Mark I, developed through a collaboration among Harvard University, IBM, and the U.S. War Department, was built to handle a large amount of number crunching. The complex equation solving needed for military logistics was the driving force behind the project. (The United States was at war with Germany.) The Mark I was the first automatic calculator. It was not electronic, but used electromagnetic relays with mechanical counters; it was said that the clicking sound when it ran was unbearable. Paper tape with holes punched in it provided the instruction sets, and the output was returned through holes punched in cards.
J. Presper Eckert and John W. Mauchly
(ENIAC, 1946, University of Pennsylvania) The ENIAC (Electronic Numerical Integrator and Calculator) was an electronic computer sponsored by the War Department and classified because of its wartime purpose. The ENIAC was enormous, occupying about 1,800 square feet of floor space with panels roughly ten feet high. It could perform a multiplication in about three thousandths of a second. There were 18,000 vacuum tubes in the machine, and because it had no internal memory, instructions had to be fed in by setting switches.
John von Neumann
(late 1940s) von Neumann devised a way to encode instructions and data in the same language. This paved the way for computer instructions to be stored in the computer itself. He was the force behind the development of the first stored-program computer.
A Race Between the EDVAC and the EDSAC
Two groups were working at the same time to develop the first stored-program computer. In the United States, the EDVAC (Electronic Discrete Variable Automatic Computer) was under development at the University of Pennsylvania; in England, the EDSAC (Electronic Delay Storage Automatic Computer) was being built at Cambridge. The EDSAC won the race as the first stored-program computer, beating the United States' EDVAC by two months. The EDSAC performed computations in the three-millisecond range and carried out arithmetic and logical operations without human intervention. The key to its success was that it relied solely on stored instructions for its operation. This machine marked the beginning of the computer age.
First Generation (1951-1958)
John W. Mauchly and J. Presper Eckert
(1951) The first generation of computers started with the UNIVAC I (Universal Automatic Computer), built by Mauchly and Eckert and sold to the U.S. Census Bureau. This machine was dedicated to business data processing rather than military or scientific purposes.
Characteristics of First Generation Computers
Use of vacuum tubes in electronic circuits: These tubes controlled internal operations and were huge; as a consequence, the machines were large.
Magnetic drum as primary internal-storage medium: Data was represented as magnetized spots on a rotating drum.
Limited main-storage capacity
Slow input/output, punched-card-oriented: Operators performed input and output operations through the use of punched cards.
Low-level symbolic-language programming: The computer used machine language, which was cumbersome: instructions were written as long strings of zeros and ones. In 1952, Dr. Grace Hopper (then at Remington Rand) developed symbolic-language programming using mnemonics (instructions written with symbolic codes). Rather than writing instructions as zeros and ones, programmers wrote mnemonics, which were translated into binary code. Dr. Hopper developed the first set of programs, or translators, to tell computers how to translate the mnemonics.
Heat and maintenance problems: The tubes gave off tremendous amounts of heat, so the machines required special air conditioning and maintenance.
Applications: payroll processing and record keeping, though machines were still oriented more toward scientific applications than business data processing.
Examples: IBM 650, UNIVAC I
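The translation step described under low-level symbolic-language programming can be sketched in a few lines of modern code. This is a toy illustration only: the mnemonics and opcodes below are invented, not those of any real first-generation machine.

```python
# Toy illustration of translating mnemonics into binary machine code.
# The instruction set here is invented for demonstration purposes.

OPCODES = {
    "LOAD": "0001",
    "ADD": "0010",
    "STORE": "0011",
}

def assemble(program):
    """Translate a list of (mnemonic, operand) pairs into binary strings."""
    machine_code = []
    for mnemonic, operand in program:
        # Each instruction: a 4-bit opcode followed by a 4-bit operand address.
        machine_code.append(OPCODES[mnemonic] + format(operand, "04b"))
    return machine_code

print(assemble([("LOAD", 5), ("ADD", 6), ("STORE", 7)]))
# ['00010101', '00100110', '00110111']
```

The programmer writes the readable mnemonics; the translator, like Dr. Hopper's early software, does the tedious conversion to zeros and ones.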
Second Generation Computers (1959-1964)
Characteristics of Second Generation Computers
Use of transistors for internal operations: Tiny solid-state transistors replaced the vacuum tubes. The heat problem was thereby minimized, and computers could be made smaller and faster.
Magnetic core as primary internal-storage medium: Electric currents pass through wires, magnetizing the cores to represent on and off states. Data in the cores can be found and retrieved for processing in a few millionths of a second.
Increased main-storage capacity: Internal or main storage was supplemented by magnetic tapes for external storage, which substituted for punched cards or paper tape. Magnetic disks were also developed that stored information on circular tracks resembling phonograph records. Disks provided direct, or random, access to records in a file.
Faster input/output; tape orientation: Devices could be connected directly to the computer and were considered "on-line." This allowed for faster printing and for detection and correction of errors.
High-level programming languages (COBOL, FORTRAN): These languages resembled English. FORTRAN (FORmula TRANslator) was the first widely accepted high-level language, used mostly for scientific applications. COBOL (COmmon Business-Oriented Language) was developed in 1960 for business data processing. Its main features included file-processing, editing, and input/output capabilities.
Increased speed and reliability: Modular hardware was developed through the design of electronic circuits. Complete modules, called "breadboards," could be replaced when malfunctions occurred or the machine "crashed." This decreased lost time, and new modules could be added for extra features such as file processing, editing, and input/output.
Batch-oriented applications: billing, payroll processing, and updating inventory files. Batch processing allowed data to be collected over a period of time and then processed in one computer run, with the results stored on magnetic tape.
Examples: IBM 1401 (the most popular business-oriented computer), Honeywell 200, CDC 1604
Third Generation Computers (1965-1970)
Characteristics of Third Generation Computers:
Use of integrated circuits: Integrated circuits (ICs) replaced the transistors of the second-generation machines. The circuits were etched and printed, and hundreds of electronic components could be put on silicon chips less than one-eighth of an inch square.
Magnetic core and solid-state main storage: Greater storage capacity was developed.
More flexibility with input/output; disk-oriented
Smaller size and better performance and reliability: Advances in solid-state technology allowed for the design and building of smaller and faster computers. Breadboards could easily be replaced on the fly.
Extensive use of high-level programming languages: The software industry evolved during this time. Many users found it more cost-effective to buy pre-programmed packages than to write the programs themselves. Programs from the second generation had to be rewritten, however, since many were based on second-generation architecture.
Emergence of minicomputers: Minicomputers offered many of the same features as mainframe computers, only on a smaller scale, and filled the needs of the small business owner.
Remote processing and time-sharing through communication: Computers became able to perform several operations at the same time. Remote terminals were developed that could communicate with a central computer from distant locations, and time-sharing environments were established.
Availability of operating systems (software) to control I/O and do tasks handled by human operators: Software was developed to take care of the routine tasks required of the computer, freeing up the human operator.
Applications such as airline reservation systems, market forecasting, and credit-card billing: Applications also included inventory control and the scheduling of labor and materials. Multitasking was also accomplished; both scientific and business applications could be run on the same machine.
Examples: IBM System/360, NCR 395, Burroughs B6500
Fourth Generation (1970-)
Characteristics of Fourth Generation Computers:
Use of large scale integrated circuits
Increased storage capacity and speed
Modular design and compatibility between equipment
Special application programs
Versatility of input/ output devices
Increased use of minicomputers
Introduction of microprocessors and microcomputers
Applications: mathematical modeling and simulation, electronic funds transfer, computer-aided instruction and home computers. Internet Explosion.
1994 to the present. The world is changing rapidly, and so is the explosion of information. The computer is an ever-changing and evolving beast. Currently, in 1998, computers run at speeds of 400 MHz, with hard drives averaging 6.4 gigabytes of storage. Components are becoming smaller and computers faster. Multimedia and web-based publishing are the current trends. There is a rush to incorporate networks and Internet access into the schools, and with the development of Internet II, virtual reality seems to be coming closer to reality. Where will the future lead us next?
History of the Internet
The ARPANET evolved from a series of research experiments begun in the late 1960s. (The premise for this research was a fear that a thermonuclear strike might knock out the military's ability to communicate with its troops.) The Department of Defense funded research on computer networking in an effort to improve military communications. The agency behind the project was the Advanced Research Projects Agency (ARPA), and the wide-area network that resulted was called ARPANET.
Networks at the time were very fragile: a single computer being down could bring the whole network down. To provide a better defense, the computers were kept decentralized, so that disabling no one main computer could stop the network. To make this happen, a protocol called the Transmission Control Protocol/Internet Protocol (TCP/IP) was created. Under this protocol, if information could not reach its destination through one route, it would automatically be rerouted through another.
How does this work? Communication takes place only between the computer sending the information and the computer receiving it, and the computers themselves need very little information about the network. The network does not manage the communication; it only provides the pipeline for the information. The sending computer puts the information into a packet, which is enclosed in an Internet Protocol (IP) packet carrying the address of the receiving computer. All computers on the network are equal, no matter what the platform.
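The rerouting idea described above can be sketched with a toy program: each packet carries its destination address, and the network delivers it over whatever working path exists. This is an illustration of the concept only, not real TCP/IP; the node names and network layout are invented.

```python
# Toy sketch of decentralized rerouting: if one route fails,
# the packet automatically travels by another.
from collections import deque

# A tiny network: each node lists its directly connected neighbors.
links = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def find_route(src, dst, down=()):
    """Breadth-first search for any working path from src to dst,
    skipping nodes that are down -- mimicking automatic rerouting."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in links[node]:
            if nxt not in seen and nxt not in down:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # destination unreachable

# A "packet" is just data plus the receiving computer's address.
packet = {"to": "D", "data": "hello"}

print(find_route("A", packet["to"]))              # ['A', 'B', 'D']
print(find_route("A", packet["to"], down={"B"}))  # rerouted: ['A', 'C', 'D']
```

No central computer decides the route; any node can be knocked out and the packet still arrives as long as some path survives, which is exactly the defense goal the ARPANET designers had in mind.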
By 1980, ARPANET had become the prototype of the Internet, with 200 computers on the net. The Computer Science Network (CSNET), funded by the National Science Foundation (NSF), was then added to ARPANET. By 1983, the Defense Department was using this combined network as its primary communications network, and the number of connected computers had risen to 562. In 1984, a total of 1,024 computers were connected.
In 1985, a new network, NSFNET, was created by the National Science Foundation to link five supercomputer centers across the country. The ARPANET was then hooked up to it, using the same protocols. The original capacity lasted only until 1986, because the network was not large enough to hold both groups. The ARPANET was later shut down very quietly, and no one even noticed, since the Internet had become a network of networks.
In 1987, Merit Network Inc. was given a contract to manage and upgrade the network. With funding help from the National Science Foundation, the original 56,000-bits-per-second (56 Kbps) telephone lines were upgraded to a higher-speed backbone in 1988. In 1988, 28,174 computers were on the Internet; in 1989 there were 80,000; and in 1990, 290,000 computers were using the Internet.
In 1992 a new network was built to expand the Internet; it forms the main trunk of what is the Internet today. Currently, Internet II is under construction.
Other history links:
Digital Encyclopedia of History