A
computer is a general-purpose device that can be
programmed
to carry out a set of arithmetic or logical operations. Since a
sequence of operations can be readily changed, the computer can solve
more than one kind of problem.
Conventionally, a computer consists of at least one processing element, typically a
central processing unit (CPU), and some form of
memory.
The processing element carries out arithmetic and logic operations, and
a sequencing and control unit can change the order of operations
based on stored information. Peripheral devices allow information to be
retrieved from an external source, and the results of operations to be saved
and retrieved.
In
World War II,
mechanical analog computers were used for specialized military applications. During this time the first electronic
digital computers were developed. Originally they were the size of a large room, consuming as much power as several hundred modern
personal computers (PCs).
[1]
Modern computers based on
integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space.
[2] Simple computers are small enough to fit into
mobile devices, and
mobile computers can be powered by small
batteries. Personal computers in their various forms are
icons of the
Information Age and are what most people think of as “computers.” However, the
embedded computers found in many devices from
MP3 players to
fighter aircraft and from toys to
industrial robots are the most numerous.
Etymology
The first use of the word “computer” was recorded in 1613 in a book
called “The yong mans gleanings” by English writer Richard Braithwait:
“I haue read the truest computer of Times, and the best Arithmetician that
euer breathed, and he reduceth thy dayes into a short number.” It
referred to a person who carried out calculations, or computations, and
the word continued with the same meaning until the middle of the 20th
century. From the end of the 19th century the word began to take on its
more familiar meaning, a machine that carries out computations.
[3]
History
Although rudimentary calculating devices first appeared in antiquity
and mechanical calculating aids were invented in the 17th century, the
first 'computers' were conceived of in the 19th century, and only
emerged in their modern form in the 1940s.
First general-purpose computing device
Charles Babbage, an English mechanical engineer and
polymath, originated the concept of a programmable computer. Considered the "
father of the computer",
[4] he conceptualized and invented the first
mechanical computer in the early 19th century. After working on his revolutionary
difference engine, designed to aid in navigational calculations, in 1833 he realized that a much more general design, an
Analytical Engine, was possible. The input of programs and data was to be provided to the machine via
punched cards, a method being used at the time to direct mechanical
looms such as the
Jacquard loom.
For output, the machine would have a printer, a curve plotter and a
bell. The machine would also be able to punch numbers onto cards to be
read in later. The Engine incorporated an
arithmetic logic unit,
control flow in the form of
conditional branching and
loops, and integrated
memory, making it the first design for a general-purpose computer that could be described in modern terms as
Turing-complete.
[5][6]
The machine was about a century ahead of its time. All the parts for
his machine had to be made by hand, a major problem for a
device with thousands of parts. Eventually, the project was dissolved
with the decision of the
British Government
to cease funding. Babbage's failure to complete the analytical engine
can be chiefly attributed to difficulties not only of politics and
financing, but also to his desire to develop an increasingly
sophisticated computer and to move ahead faster than anyone else could
follow. Nevertheless, his son, Henry Babbage, completed a simplified
version of the analytical engine's computing unit (the
mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906.
Analog computers
During the first half of the 20th century, many scientific
computing needs were met by increasingly sophisticated
analog computers, which used a direct mechanical or electrical model of the problem as a basis for
computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.
[7]
The first modern analog computer was a
tide-predicting machine, invented by
Sir William Thomson in 1872. The
differential analyser,
a mechanical analog computer designed to solve differential equations
by integration using wheel-and-disc mechanisms, was conceptualized in
1876 by
James Thomson, the brother of the more famous Lord Kelvin.
[8]
The art of mechanical analog computing reached its zenith with the
differential analyzer, built by H. L. Hazen and
Vannevar Bush at
MIT starting in 1927. This built on the mechanical integrators of
James Thomson
and the torque amplifiers invented by H. W. Nieman. A dozen of these
devices were built before their obsolescence became obvious.
The modern computer
The principle of the modern computer was first described by
computer scientist Alan Turing, who set out the idea in his seminal 1936 paper,
[9] On Computable Numbers. Turing reformulated
Kurt Gödel's
1931 results on the limits of proof and computation, replacing Gödel's
universal arithmetic-based formal language with the formal and simple
hypothetical devices that became known as
Turing machines.
He proved that some such machine would be capable of performing any
conceivable mathematical computation if it were representable as an
algorithm. He went on to prove that there was no solution to the
Entscheidungsproblem by first showing that the
halting problem for Turing machines is
undecidable: in general, it is not possible to decide algorithmically whether a given Turing machine will ever halt.
He also introduced the notion of a 'Universal Machine' (now known as a
Universal Turing machine),
with the idea that such a machine could perform the tasks of any other
machine, or in other words, it is provably capable of computing anything
that is computable by executing a program stored on tape, allowing the
machine to be programmable.
Von Neumann acknowledged that the central concept of the modern computer was due to this paper.
[10] Turing machines are to this day a central object of study in
theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be
Turing-complete, which is to say, they have
algorithm execution capability equivalent to a
universal Turing machine.
Electromechanical computers
Replica of
Zuse's
Z3, the first fully automatic, digital (electromechanical) computer.
Early digital computers were electromechanical: electric switches
drove mechanical relays to perform the calculation. These devices had a
low operating speed and were eventually superseded by much faster
all-electric computers, originally using
vacuum tubes. The
Z2, created by German engineer
Konrad Zuse in 1939, was one of the earliest examples of an electromechanical relay computer.
[11]
In 1941, Zuse followed his earlier machine up with the
Z3, the world's first working
electromechanical, programmable, fully automatic digital computer.
[12][13] The Z3 was built with 2,000
relays, implementing a 22-bit word length, and operated at a
clock frequency of about 5–10 Hz.
[14] Program code and data were stored on punched
film. It was quite similar to modern machines in some respects, pioneering numerous advances such as
floating point numbers. Replacement of the hard-to-implement decimal system (used in
Charles Babbage's earlier design) by the simpler
binary
system meant that Zuse's machines were easier to build and potentially
more reliable, given the technologies available at that time.
[15] The Z3 was probably a complete
Turing machine.
Electronic programmable computer
Purely
electronic circuit
elements soon replaced their mechanical and electromechanical
equivalents, at the same time that digital calculation replaced analog.
The engineer
Tommy Flowers, working at the
Post Office Research Station in
Dollis Hill in the 1930s, began to explore the possible use of electronics for the
telephone exchange. Experimental equipment that he built in 1934 went into operation 5 years later, converting a portion of the
telephone exchange network into an electronic data processing system, using thousands of
vacuum tubes.
[7] In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the
Atanasoff–Berry Computer (ABC) in 1942,
[16] the first electronic digital calculating device.
[17]
This design was also all-electronic and used about 300 vacuum tubes,
with capacitors fixed in a mechanically rotating drum for memory.
[18]
During World War II, the British at
Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine,
Enigma, was first attacked with the help of the electro-mechanical
bombes. To crack the more sophisticated German
Lorenz SZ 40/42 machine, used for high-level Army communications,
Max Newman and his colleagues commissioned Flowers to build the Colossus.
[18] He spent eleven months from early February 1943 designing and building the first Colossus.
[19] After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944
[20] and attacked its first message on 5 February.
[18]
Colossus was the world's first
electronic digital programmable computer.
[7]
It used a large number of valves (vacuum tubes). It had paper-tape
input and was capable of being configured to perform a variety of
boolean logical operations on its data, but it was not
Turing-complete.
Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making
ten machines in total). Colossus Mark I contained 1,500 thermionic
valves (tubes), but Mark II, with 2,400 valves, was both five times faster
and simpler to operate than Mark I, greatly speeding the decoding
process.
[21][22]
ENIAC was the first Turing-complete device, and performed ballistics trajectory calculations for the
United States Army.
The US-built
ENIAC
(Electronic Numerical Integrator and Computer) was the first electronic
programmable computer built in the US. Although the ENIAC was similar
to the Colossus it was much faster and more flexible. It was
unambiguously a Turing-complete device and could compute any problem
that would fit into its memory. Like the Colossus, a "program" on the
ENIAC was defined by the states of its patch cables and switches, a far
cry from the
stored program
electronic machines that came later. Once a program was written, it had
to be mechanically set into the machine with manual resetting of plugs
and switches.
It combined the high speed of electronics with the ability to be
programmed for many complex problems. It could add or subtract 5000
times a second, a thousand times faster than any other machine. It also
had modules to multiply, divide, and square root. High speed memory was
limited to 20 words (about 80 bytes). Built under the direction of
John Mauchly and
J. Presper Eckert
at the University of Pennsylvania, ENIAC's development and construction
lasted from 1943 to full operation at the end of 1945. The machine was
huge, weighing 30 tons, using 200 kilowatts of electric power and
contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of
thousands of resistors, capacitors, and inductors.
[23]
Stored program computer
Early computing machines had fixed programs. Changing a machine's function required re-wiring and re-structuring it.
[18] With the proposal of the stored-program computer this changed. A stored-program computer includes by design an
instruction set and can store in memory a set of instructions (a
program) that details the
computation. The theoretical basis for the stored-program computer was laid by
Alan Turing in his 1936 paper. In 1945 Turing joined the
National Physical Laboratory
and began work on developing an electronic stored-program digital
computer. His 1945 report ‘Proposed Electronic Calculator’ was the first
specification for such a device.
John von Neumann at the
University of Pennsylvania also circulated his
First Draft of a Report on the EDVAC in 1945.
[7]
The Manchester Small-Scale Experimental Machine, nicknamed
Baby, was the world's first
stored-program computer. It was built at the
Victoria University of Manchester by
Frederic C. Williams,
Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
[24] It was designed as a
testbed for the
Williams tube, the first
random-access digital storage device.
[25]
Although the computer was considered "small and primitive" by the
standards of its time, it was the first working machine to contain all
of the elements essential to a modern electronic computer.
[26]
As soon as the SSEM had demonstrated the feasibility of its design, a
project was initiated at the university to develop it into a more usable
computer, the
Manchester Mark 1.
The Mark 1 in turn quickly became the prototype for the
Ferranti Mark 1, the world's first commercially available general-purpose computer.
[27] Built by
Ferranti, it was delivered to the
University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to
Shell labs in
Amsterdam.
[28] In October 1947, the directors of British catering company
J. Lyons & Company decided to take an active role in promoting the commercial development of computers. The
LEO I computer became operational in April 1951
[29] and ran the world's first regular routine office computer
job.
Transistor computers
The bipolar
transistor was invented in 1947. From 1955 onwards transistors replaced
vacuum tubes
in computer designs, giving rise to the "second generation" of
computers. Compared to vacuum tubes, transistors have many advantages:
they are smaller and require less power than vacuum tubes, so they give off
less heat. Silicon junction transistors were much more reliable than
vacuum tubes and had a longer, effectively indefinite, service life. Transistorized
computers could contain tens of thousands of binary logic circuits in a
relatively compact space.
At the
University of Manchester, a team under the leadership of
Tom Kilburn designed and built a machine using the newly developed
transistors instead of valves.
[30] Their first
transistorised computer, and the first in the world, was
operational by 1953,
and a second version was completed there in April 1955. However, the
machine did make use of valves to generate its 125 kHz clock waveforms
and in the circuitry to read and write on its magnetic
drum memory, so it was not the first completely transistorized computer. That distinction goes to the
Harwell CADET of 1955,
[31] built by the electronics division of the
Atomic Energy Research Establishment at
Harwell.
[32][33]
The integrated circuit
The next great advance in computing power came with the advent of the
integrated circuit. The idea of the integrated circuit was first conceived by a radar scientist working for the
Royal Radar Establishment of the
Ministry of Defence,
Geoffrey W.A. Dummer.
Dummer presented the first public description of an integrated circuit
at the Symposium on Progress in Quality Electronic Components in
Washington, D.C. on 7 May 1952.
[34]
The first practical ICs were invented by
Jack Kilby at
Texas Instruments and
Robert Noyce at
Fairchild Semiconductor.
[35]
Kilby recorded his initial ideas concerning the integrated circuit in
July 1958, successfully demonstrating the first working integrated
example on 12 September 1958.
[36]
In his patent application of 6 February 1959, Kilby described his new
device as “a body of semiconductor material ... wherein all the
components of the electronic circuit are completely integrated.”
[37] Noyce also came up with his own idea of an integrated circuit, half a year after Kilby.
[38] His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of
silicon, whereas Kilby's chip was made of
germanium.
This new development heralded an explosion in the commercial and personal use of computers and led to the invention of the
microprocessor.
While the subject of exactly which device was the first microprocessor
is contentious, partly due to lack of agreement on the exact definition
of the term "microprocessor", it is largely undisputed that the first
single-chip microprocessor was the Intel 4004,
[39] designed and realized by
Ted Hoff,
Federico Faggin, and Stanley Mazor at
Intel.
[40]
Programs
The defining feature of modern computers that distinguishes them from all other machines is that they can be
programmed. That is to say that some type of
instructions (the
program) can be given to the computer, and it will process them. Modern computers based on the
von Neumann architecture often have machine code in the form of an
imperative programming language.
In practical terms, a computer program may be just a few instructions
or extend to many millions of instructions, as do the programs for
word processors and
web browsers, for example. A typical modern computer can execute billions of instructions per second
and rarely makes a mistake over many years of operation. Large computer
programs consisting of several million instructions may take teams of
programmers years to write, and due to the complexity of the task almost certainly contain errors.
Stored program architecture
This section applies to most common
RAM machine-based computers.
In most cases, computer instructions are simple: add one number to
another, move some data from one location to another, send a message to
some external device, etc. These instructions are read from the
computer's
memory and are generally carried out (
executed)
in the order they were given. However, there are usually specialized
instructions to tell the computer to jump ahead or backwards to some
other place in the program and to carry on executing from there. These
are called “jump” instructions (or
branches). Furthermore, jump instructions may be made to happen
conditionally
so that different sequences of instructions may be used depending on
the result of some previous calculation or some external event. Many
computers directly support
subroutines
by providing a type of jump that “remembers” the location it jumped
from and another instruction to return to the instruction following that
jump instruction.
Program execution might be likened to reading a book. While a person
will normally read each word and line in sequence, they may at times
jump back to an earlier place in the text or skip sections that are not
of interest. Similarly, a computer may sometimes go back and repeat the
instructions in some section of the program over and over again until
some internal condition is met. This is called the
flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.
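The loop-and-branch behaviour described above can be sketched in a high-level language. In this illustrative Python fragment (the names and values are invented for the example), the `while` condition plays the role of a conditional jump back to the start of the loop:

```python
# Repeat a block of instructions until an internal condition is met:
# keep doubling a value until it exceeds a threshold.
value = 1
steps = 0
while value <= 1000:   # conditional "jump" back to the top of the loop
    value *= 2         # the repeated instruction
    steps += 1
print(steps, value)    # the loop body ran 10 times; value is 1024
```

Each pass through the loop re-tests the condition, just as a conditional branch instruction would.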
Comparatively, a person using a pocket
calculator
can perform a basic arithmetic operation such as adding two numbers
with just a few button presses. But to add together all of the numbers
from 1 to 1,000 would take thousands of button presses and a lot of
time, with a near certainty of making a mistake. On the other hand, a
computer may be programmed to do this with just a few simple
instructions. For example:
      mov #0, sum     ; set sum to 0
      mov #1, num     ; set num to 1
loop: add num, sum    ; add num to sum
      add #1, num     ; add 1 to num
      cmp num, #1000  ; compare num to 1000
      ble loop        ; if num <= 1000, go back to 'loop'
      halt            ; end of program. stop running
Once told to run this program, the computer will perform the
repetitive addition task without further human intervention. It will
almost never make a mistake and a modern PC can complete the task in
about a millionth of a second.
[41]
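The same computation can be written in a high-level language. This Python sketch mirrors the structure of the assembly-style program above, with the compare-and-branch pair folded into the loop condition:

```python
# Sum the integers from 1 to 1,000.
total = 0            # mov #0, sum
num = 1              # mov #1, num
while num <= 1000:   # cmp num, #1000 / ble loop
    total += num     # add num, sum
    num += 1         # add #1, num
print(total)         # prints 500500
```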
Bugs
Main article:
Software bug
The actual first computer bug, a moth found trapped on a relay of the Harvard Mark II computer
Errors in computer programs are called “
bugs.”
They may be benign and not affect the usefulness of the program, or
have only subtle effects. But in some cases, they may cause the program
or the entire system to “
hang,” becoming unresponsive to input such as
mouse clicks or keystrokes, to completely fail, or to
crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an
exploit,
code designed to take advantage of a bug and disrupt a computer's
proper execution. Bugs are usually not the fault of the computer. Since
computers merely execute the instructions they are given, bugs are
nearly always the result of programmer error or an oversight made in the
program's design.
[42]
Admiral
Grace Hopper, an American computer scientist and developer of the first
compiler, is credited for having first used the term “bugs” in computing after a dead moth was found shorting a relay in the
Harvard Mark II computer in September 1947.
[43]
Machine code
In most computers, individual instructions are stored as
machine code with each instruction being given a unique number (its operation code or
opcode
for short). The command to add two numbers together would have one
opcode; the command to multiply them would have a different opcode, and
so on. The simplest computers are able to perform any of a handful of
different instructions; the more complex computers have several hundred
to choose from, each with a unique numerical code. Since the computer's
memory is able to store numbers, it can also store the instruction
codes. This leads to the important fact that entire programs (which are
just lists of these instructions) can be represented as lists of numbers
and can themselves be manipulated inside the computer in the same way
as numeric data. The fundamental concept of storing programs in the
computer's memory alongside the data they operate on is the crux of the
von Neumann, or stored program, architecture. In some cases, a computer
might store some or all of its program in memory that is kept separate
from the data it operates on. This is called the
Harvard architecture after the
Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in
CPU caches.
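The fact that instructions are just numbers living in the same memory as data can be made concrete with a toy machine. The opcodes and memory layout below are invented purely for illustration, not taken from any real instruction set:

```python
# Toy von Neumann-style machine: program and data share one memory,
# and every instruction is just a pair of numbers.
# Invented opcodes: 1 = load addr into accumulator,
# 2 = add addr to accumulator, 3 = store accumulator to addr, 0 = halt.
memory = [
    1, 8,    # load  memory[8]
    2, 9,    # add   memory[9]
    3, 10,   # store into memory[10]
    0, 0,    # halt
    5, 7, 0, # data: 5, 7, and a slot for the result
]
pc, acc = 0, 0            # program counter and accumulator
while memory[pc] != 0:    # run until the halt opcode
    op, addr = memory[pc], memory[pc + 1]
    if op == 1:
        acc = memory[addr]
    elif op == 2:
        acc += memory[addr]
    elif op == 3:
        memory[addr] = acc
    pc += 2               # each instruction occupies two memory cells
print(memory[10])         # prints 12 (5 + 7)
```

Because the program is itself a list of numbers in `memory`, it could be read, copied, or modified by another program in exactly the way the text describes.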
While it is possible to write computer programs as long lists of numbers (
machine language) and while this technique was used with many early computers,
[44]
it is extremely tedious and potentially error-prone to do so in
practice, especially for complicated programs. Instead, each basic
instruction can be given a short name that is indicative of its function
and easy to remember – a
mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's
assembly language.
Converting programs written in assembly language into something the
computer can actually understand (machine language) is usually done by a
computer program called an assembler.
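What an assembler does can be sketched in a few lines. The two-column instruction format and the opcode numbers here are hypothetical, chosen only to show the mnemonic-to-number translation:

```python
# Translate mnemonics of a hypothetical machine into numeric opcodes.
OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3, "HALT": 0}

def assemble(source):
    """Turn lines like 'ADD 9' into flat [opcode, operand] number pairs."""
    machine_code = []
    for line in source.strip().splitlines():
        parts = line.split()
        mnemonic = parts[0]
        operand = int(parts[1]) if len(parts) > 1 else 0
        machine_code += [OPCODES[mnemonic], operand]
    return machine_code

program = """
LOAD 8
ADD 9
STORE 10
HALT
"""
print(assemble(program))  # prints [1, 8, 2, 9, 3, 10, 0, 0]
```

A real assembler also resolves symbolic labels and checks operand forms, but the core job is this table lookup from names to numbers.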
A 1970s
punched card containing one line from a
FORTRAN program. The card reads: “Z(1) = Y + W(1)” and is labeled “PROJ039” for identification purposes.
Programming language
Programming languages provide various ways of specifying programs for computers to run. Unlike
natural languages,
programming languages are designed to permit no ambiguity and to be
concise. They are purely written languages and are often difficult to
read aloud. They are generally either translated into
machine code by a
compiler or an
assembler before being run, or translated directly at run time by an
interpreter. Sometimes programs are executed by a hybrid method of the two techniques.
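Python itself uses such a hybrid: source text is first compiled into bytecode, which the interpreter then executes. A minimal sketch using the built-in `compile` and `exec` functions:

```python
# Compile source text into a code object (bytecode)...
source = "result = 2 + 3"
code_object = compile(source, "<example>", "exec")

# ...then let the interpreter execute that bytecode.
namespace = {}
exec(code_object, namespace)
print(namespace["result"])  # prints 5
```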
Low-level languages
Machine languages and the assembly languages that represent them (collectively termed
low-level programming languages) tend to be unique to a particular type of computer. For instance, an
ARM architecture computer (such as may be found in a
PDA or a
hand-held videogame) cannot understand the machine language of an
Intel Pentium or the
AMD Athlon 64 computer that might be in a
PC.
[45]
Higher-level languages
Though considerably easier than in machine language, writing long
programs in assembly language is often difficult and is also error
prone. Therefore, most practical programs are written in more abstract
high-level programming languages that are able to express the needs of the
programmer
more conveniently (and thereby help reduce programmer error). High
level languages are usually “compiled” into machine language (or
sometimes into assembly language and then into machine language) using
another computer program called a
compiler.
[46]
High level languages are less related to the workings of the target
computer than assembly language, and more related to the language and
structure of the problem(s) to be solved by the final program. It is
therefore often possible to use different compilers to translate the
same high level language program into the machine language of many
different types of computer. This is part of the means by which software
like video games may be made available for different computer
architectures such as personal computers and various
video game consoles.