Historical Interlude: The Birth of the Computer Part 2, The Creation of the Electronic Digital Computer

In the mid-nineteenth century, Charles Babbage attempted to create a program-controlled universal calculating machine, but failed for lack of funding and the difficulty of creating the required mechanical components.  This failure spelled the end of digital computer research for several decades.  By the early twentieth century, however, fashioning small mechanical components no longer presented the same challenge, while the spread of electricity generating technologies provided a far more practical power source than the steam engines of Babbage’s day.  These advances culminated in just over a decade of sustained innovation between 1937 and 1949 out of which the electronic digital computer was born.  While both individual computer components and the manner in which the user interacts with the machine have continued to evolve, the desktops, laptops, tablets, smartphones, and video game consoles of today still function according to the same basic principles as the Manchester Mark 1, EDSAC, and EDVAC computers that first operated in 1949.  This blog post will chart the path to these three computers.

Note: This is the second of four “historical interlude” posts that will summarize the evolution of computer technology between 1830 and 1960.  The information in this post is largely drawn from Computer: A History of the Information Machine by Martin Campbell-Kelly and William Aspray, The Maverick and His Machine: Thomas Watson, Sr. and the Making of IBM by Kevin Maney, Reckoners: The Prehistory of the Digital Computer, From Relays to the Stored Program Concept, 1935-1945 by Paul Ceruzzi, The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution by Walter Isaacson, Forbes Greatest Technology Stories: Inspiring Tales of Entrepreneurs and Inventors Who Revolutionized Modern Business by Jeffrey Young, and the articles “Alan Turing: Father of the Modern Computer” by B. Jack Copeland and Diane Proudfoot, “Colossus: The First Large Scale Electronic Computer” by Jack Copeland, and “A Brief History of Computing,” also by Copeland.

Analog Computing

Vannevar Bush with his differential analyzer, an analog computer

While a digital computer after the example of Babbage would not appear until the early 1940s, specialized computing devices that modeled specific systems mechanically continued to be developed in the late nineteenth and early twentieth centuries.  These machines were labelled analog computers, a term derived from the word “analogy” because each machine relied on a physical model of the phenomenon being studied to perform calculations, unlike a digital computer, which relied purely on numbers.  The key component of these machines was the wheel-and-disc integrator, first described by James Thomson, which allowed integral calculus to be performed mechanically.  Perhaps the most important analog computer of the nineteenth century was completed by James’s brother William, better known to history as Lord Kelvin, in 1876.  Called the tide predictor, Kelvin’s device relied on a series of mechanical parts such as pulleys and gears to simulate the gravitational forces that produce the tides and predicted the water depth of a harbor at any given time of day, printing the results on a roll of paper.  Before Lord Kelvin’s machine, creating tide tables was so time-consuming that only the most important ports were ever charted.  After Kelvin’s device entered general use, it was finally possible to complete tables for thousands of ports around the world.  Improved versions of Kelvin’s computer continued to be used until the 1950s.

In the United States, interest in analog computing began to take off in the 1920s as General Electric and Westinghouse raced to build regional electric power networks by supplying alternating-current generators to power plants.  At the time, the mathematical equations required to construct the power grids were both poorly understood and difficult to solve by hand, causing electrical engineers to turn to analog computing as a solution.  Using resistors, capacitors, and inductors, these computers could simulate how the network would behave in the real world.  One of the most elaborate of these computers, the AC Network Analyzer, was built at MIT in 1930 and took up an entire room.  With one of the finest electrical engineering schools in the country, MIT quickly became a center for analog computer research, which soon moved from highly specific models like the tide predictor and power grid machines to devices capable of solving a wider array of mathematical problems through the work of MIT professor Vannevar Bush.

One of the most important American scientists of the mid-twentieth century, Bush possessed a brilliant mind coupled with a folksy demeanor and strong administrative skills.  These traits served him well in co-founding the American Appliance Company in 1922 — which later changed its name to Raytheon and became one of the largest defense contractors in the world — and led to his appointment in 1941 to head the new Office of Scientific Research and Development, which oversaw and coordinated all wartime scientific research by the United States government during World War II and was instrumental to the Allied victory.

Bush built his first analog computer in 1912 while a doctoral student at Tufts College.  Called the “profile tracer,” it consisted of a box hung between two bicycle wheels and would trace the contours of the ground as it was rolled.  Moving on to MIT in 1919, Bush worked on problems involving electric power transmission and in 1924 developed a device with one of his students called the “product integraph” to simplify the solving and plotting of the first-order differential equations required for that work.  Another student, Harold Hazen, suggested this machine be extended to solve second-order differential equations as well, which would make the device useful for solving a wide array of physics problems.  Bush immediately recognized the potential of this machine and worked with Hazen to build it between 1928 and 1931.  Bush called the resulting machine the “differential analyzer.”

The differential analyzer improved the operation of Thomson’s wheel-and-disc integrator through a device called a torque amplifier, allowing it to mechanically model, solve, and plot a wider array of differential equations than any analog computer that came before, but it still fell short of the Babbage ideal of a general-purpose digital device.  Nevertheless, the machine was installed at several universities, corporations, and government laboratories and demonstrated the value of using a computing device to perform advanced scientific calculations.  It was therefore an important stepping stone on the path to the digital computer.

Electro-Mechanical Digital Computers

The Automatic Sequence Controlled Calculator (ASCC), also known as the Harvard Mark I, the first proposed electro-mechanical digital computer, though not the first completed

With problems like power network construction requiring ever more complex equations and the looming threat of World War II requiring world governments to compile large numbers of ballistics tables and engage in complex code-breaking operations, the demand for computing skyrocketed in the late 1930s and early 1940s.  This led to a massive expansion of human computing and the establishment of the first for-profit calculating companies, beginning with L.J. Comrie’s Scientific Computing Services Limited in 1937.  Even as computing services were expanding, however, the armies of human computers required for wartime tasks were woefully inadequate for completing necessary computations in a timely manner, while even more advanced analog computers like the differential analyzer were still too limited to carry out many important tasks.  It was in this environment that researchers in the United States, Great Britain, and Germany began attempting to address this computing shortfall by designing digital calculating machines that worked similarly to Babbage’s Analytical Engine but made use of more advanced components not available to the British mathematician.

The earliest digital calculating machines were based on electromechanical relay technology.  First developed in the mid-nineteenth century for use in the electric telegraph, a relay consists in its simplest form of a coil of wire, an armature, and a set of contacts.  When a current is passed through the coil, a magnetic field is generated that attracts the armature and therefore draws the contacts together, completing a circuit.  When the current is removed, a spring causes the armature to return to the open position.  Electromechanical relays played a crucial role in the telephone network in the United States, routing calls between different parts of the network.  Therefore, Bell Labs, the research arm of the telephone monopoly AT&T, served as a major hub for relay research and was one of the first places where the potential of relays and similar switching units for computer construction was contemplated.

The concept of the binary digital circuit, which continues to power computers to this day, was independently articulated and applied by several scientists and mathematicians in the late 1930s.  Perhaps the most influential of these thinkers — due to his work being published and widely disseminated — was Claude Shannon.  A graduate of the University of Michigan with degrees in electrical engineering and math, Shannon matriculated at MIT, where he secured a job helping Bush run his differential analyzer.  In 1937, Shannon took a summer job at Bell Labs, where he gained hands-on experience with the relays used in the phone network and connected their function with another interest of his — the symbolic logic system created by mathematician George Boole in the 1840s.

Basically, Boole had discovered a way to represent formal logical statements mathematically by giving a true proposition a value of 1 and a false proposition a value of 0 and then constructing mathematical equations that could represent the basic logical operations such as “and,” “or” and “not.”  Shannon realized that since a relay either existed in an “on” or an “off” state, a series of relays could be used to construct logic gates that emulated Boolean logic and therefore carry out complex instructions, which in their most basic form are a series of “yes” or “no,” “on” or “off,” “1” or “0” propositions.  When Shannon returned to MIT that fall, Bush urged him to include these findings in his master’s thesis, which was published later that year under the name “A Symbolic Analysis of Relay and Switching Circuits.”  In November 1937, a Bell Labs researcher named George Stibitz, who was aware of Shannon’s theories, applied the concept of binary circuits to a calculating device for the first time when he constructed a small relay calculator he dubbed the K-Model because he built it at his kitchen table.  Based on this prototype, Stibitz received permission to build a full-sized model at Bell Labs, which was named the Complex Number Calculator and completed in 1940.  While not a full-fledged programmable computer, Stibitz’s machine was the first to use relays to perform basic mathematical operations and demonstrated the potential of relays and binary circuits for computing devices.
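
To make the link between Boole’s logic and Shannon’s switching circuits concrete, the short sketch below (a modern illustration in Python, not anything Shannon or Stibitz actually built) treats each relay as a value that is either on (1) or off (0), defines the basic Boolean operations in terms of those values, and then chains them into a half adder, the elementary circuit for adding two binary digits.

    # A relay is either energized (1) or released (0); Boolean logic follows directly.
    def AND(a, b):    # circuit closes only if both contacts are closed
        return a & b

    def OR(a, b):     # circuit closes if either contact is closed
        return a | b

    def NOT(a):       # a normally-closed contact inverts its input
        return 1 - a

    def XOR(a, b):    # one or the other, but not both
        return OR(AND(a, NOT(b)), AND(NOT(a), b))

    # A half adder built purely from these gates adds two binary digits.
    def half_adder(a, b):
        return XOR(a, b), AND(a, b)   # (sum bit, carry bit)

    print(half_adder(1, 1))           # -> (0, 1), i.e. 1 + 1 = binary 10

Wiring enough of these gates together in hardware, whether with relays, vacuum tubes, or transistors, is all that is needed to build the arithmetic circuits described throughout the rest of this post.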

One of the earliest digital computers to use electromechanical relays was proposed by Howard Aiken in 1936.  A doctoral candidate in mathematics at Harvard University, Aiken needed to solve a series of non-linear differential equations as part of his dissertation, a task beyond the capabilities of Bush’s differential analyzer at neighboring MIT.  Unenthused by the prospect of solving these equations by hand, Aiken, who was already a skilled electrical engineer, proposed that MIT build a large-scale digital calculator to do the work.  The university turned him down, so Aiken approached the Monroe Calculating Machine Company, which also failed to see any value in the project.  Monroe’s chief engineer felt the idea had merit, however, and urged Aiken to approach IBM.

When last we left IBM in 1928, the company was growing and profitable, but lagged behind several other companies in overall size and importance.  That all changed with the onset of the Great Depression.  Like nearly every other business in the country, IBM was devastated by the market crash of 1929, but Tom Watson decided to boldly soldier on without laying off workers or cutting production, keeping his faith that the economy could not continue in a tailspin for long.  He also increased the company’s emphasis on R&D, building one of the world’s first corporate research laboratories to house all his engineers in Endicott, New York in 1932-33 at a cost of $1 million.  As the Depression dragged on, machines began piling up in the factories and IBM’s growth flattened, threatening the solvency of the company.  Watson’s gambles increasingly appeared to be a mistake, but then President Franklin Roosevelt began enacting his New Deal legislation.

In 1935, the United States Congress passed the Social Security Act.  Overnight, every company in the country was required to keep detailed payroll records, while the Social Security Administration had to keep a file on every worker in the nation.  The data processing burden of the act was enormous, and IBM, with its large stock of tabulating machines and fully operational factories, was the only company able to begin filling the demand immediately.  Between 1935 and 1937, IBM’s revenues rose from $19 million to $31 million and then continued to grow for the next 45 years.  The company was never seriously challenged in tabulating equipment again.

Traditionally, data processing revolved around counting tangible objects, but by the time Aiken approached IBM Watson had begun to realize that scientific computing was a natural extension of his company’s business activities.  The man who turned Watson on to this fact was Ben Wood, a Columbia professor who pioneered standardized testing and was looking to automate the scoring of his tests using tabulating equipment.  In 1928, Wood wrote ten companies to win support for his ideas, but only Watson responded, agreeing to grant him an hour to make his pitch.  The meeting began poorly as the nervous Wood failed to hold Watson’s interest with talk of test scoring, so the professor expanded his presentation to describe how nearly anything could be represented mathematically and therefore quantified by IBM’s machines.  One hour soon stretched to over five as Watson grilled Wood and came to see the value of creating machines for the scientific community.  Watson agreed to give Wood all the equipment he needed, dropped in frequently to monitor Wood’s progress, and made the professor an IBM consultant.  As a result of this meeting, IBM began supplying equipment to scientific labs around the world.

Howard Aiken, designer of the Automatic Sequence Controlled Calculator

In 1937, Watson began courting Harvard, hoping to create the same kind of relationship he had long enjoyed with Columbia.  He dispatched an executive named John Phillips to meet with deans and faculty, and Aiken used the opportunity to introduce IBM to his calculating device.  He also wrote a letter to James Bryce, IBM’s chief engineer, who sold Watson on the concept.  Bryce assigned Clair Lake to oversee the project, which would be funded and built by IBM in Endicott according to Aiken’s design and then installed at Harvard.

Aiken’s initial concept basically stitched together a card reader, a multiplying punch, and a printer, removing human intervention in the process by connecting the components through electrical wiring and incorporating relays as switching units to control the passage of information through the parts of the machine.  Aiken drew inspiration from Babbage’s Analytical Engine, which he first learned about soon after proposing his device when a technician informed him that the university actually owned a fragment of one of Babbage’s calculating machines that had been donated by the inventor’s son in 1886. Unlike Babbage, however, Aiken did not employ separate memory and computing elements, as all calculations were performed across a series of 72 accumulators that both stored and modified the data transmitted to them by the relays.  Without something akin to a CPU, the machine was actually less advanced than the Analytical Engine in that it did not support conditional branching — the ability to modify a program on the fly to incorporate the results of previous calculations — and therefore required all calculations to be done in a set sequence while requiring complex programs to use large instruction sets and long lengths of paper tape.

Work began on the Automatic Sequence Controlled Calculator (ASCC) in 1939, but the onset of World War II resulted in the project being placed on the back burner as IBM shifted its focus to more important war work and Aiken entered the Navy.  It was finally completed in January 1943 at a cost of $500,000 and subsequently installed at Harvard in early 1944 after undergoing a year of testing in Endicott.  Measuring 8 feet tall and 51 feet long, the machine was housed in a gleaming metal case designed by Norman Bel Geddes, the industrial designer known for his art deco work, including stage designs for the Metropolitan Opera in New York.  By the time of its completion, the ASCC already lagged behind several other machines technologically and therefore did not play a significant role in the further evolution of the computer.  It is notable, however, both as the earliest proposed digital computer to actually be built and as IBM’s introduction to the world of computing.

Konrad Zuse, designer of the Z1, the first completed digital computer

While Howard Aiken was still securing support for his digital computer, a German named Konrad Zuse was busy completing one of his own.  Born in Berlin, Zuse spent most of his childhood in Braunsberg, East Prussia (modern Braniewo, Poland).  Deciding on a career as an engineer, he enrolled at the Technical College of Berlin-Charlottenburg in 1927.  While not particularly interested in mathematics, Zuse did have to work with complex equations to calculate the load-bearing capability of structures, and like Aiken across the Atlantic he was not enthused at having to perform these calculations by hand.  Therefore, in 1935 Zuse began designing a universal automatic calculator consisting of a computing element, a storage unit, and a punched tape reader, independently arriving at the same basic design that Babbage had developed a century before.

While Zuse’s basic concept did not stray far from Babbage, however, he did incorporate one crucial improvement in his design that neither Babbage nor Aiken had considered, storing the numbers in memory according to a binary rather than a decimal system.  Zuse’s reason for doing so was practical — as an accomplished mechanical engineer he preferred keeping his components as simple as possible to make the computer easier to design and build — but the implications of this decision went far beyond streamlined memory construction.  Like Shannon, Zuse realized that by recognizing data in only two states, on and off, a computing device could represent not just numbers, but also instructions.  As a result, Zuse was able to use the same basic building blocks for both his memory and computing elements, simplifying the design further.

By 1938, Zuse had completed his first computer, a mechanical binary digital machine called the Z1. (Note: Originally, Zuse called this computer the V1 and continued to use the “V” designation on his subsequent computers.  After World War II, he began referring to these machines using the “Z” designation instead to avoid confusion with Germany’s V1 and V2 rockets.)  This first prototype was fairly basic, but it proved two things for Zuse: that he could create a working automatic calculating device and that the computing element could not be mechanical, as the components were just too unreliable.  The solution to this problem came from college friend Helmut Schreyer, an electrical engineer who convinced Zuse that the electrical relays used in telephone networks would provide superior performance.  Schreyer also worked as a film projectionist and convinced Zuse to switch from paper tape to punched film stock for program control.  These improvements were incorporated into the Z2 computer, completed in 1939, which never worked reliably, but was essential for securing funding for Zuse’s next endeavor.

A reconstruction of Konrad Zuse’s Z3, the world’s first programmable fully automatic digital computer

In 1941, Konrad Zuse completed the Z3 for the German government, the first fully operational digital computer in the world.  The computer consisted of two cabinets containing roughly 2,600 relays — 1,800 for memory, 600 for computing, and 200 for the tape reader — and a small display/keyboard unit for inputting programs.  With a memory of only 64 characters, the computer was too limited to carry out useful work, but it served as an important proof of concept and illustrated the potential of a programmable binary computer.

Unfortunately for Zuse, the German government proved uninterested in further research.  Busy fighting a war it was convinced would be over in just a year or two, the Third Reich limited its research activities to projects that could directly impact the war effort in the short term and ignored the potential of computing entirely.  While Zuse continued to work on the next evolution of his computer design, the Z4, between 1942 and 1945, he did so on his own without the support of the Reich, which also turned down a computer project by his friend Schreyer that would have replaced relays with electronics.  Isolated from the rest of the developed world by the war, Zuse’s theories would have little impact on subsequent developments in computing, while the Z3 itself was destroyed in an Allied bombing raid on Berlin in 1943 before it could be studied by other engineers.  That same year, Great Britain’s more enthusiastic support of computer research resulted in the next major breakthrough in computing technology.

The Birth of the Electronic Computer

Colossus, the world’s first programmable electronic computer

Despite the best efforts of Aiken and Zuse, relays were never going to play a large role in computing, as they were both unreliable and slow due to a reliance on moving parts.  In order for complex calculations to be completed quickly, computers would need to transition from electro-mechanical components to electronic ones, which function instead by manipulating a beam of electrons.

The development of the first electronic components grew naturally out of Thomas Edison’s work with the incandescent light bulb.  In 1880, Edison was conducting experiments to determine why the filament in his new incandescent lamps would sometimes break and noticed that an electric current would flow across the vacuum from the filament to a metal plate inside the bulb when the plate was positively charged, but not when it was negatively charged.  Although this effect had been observed by other scientists as early as 1873, Edison was the first to patent a voltage-regulating device based on this principle in 1883, which resulted in the phenomenon being named the “Edison effect.”

Edison, who did not have a solid grasp of the underlying science, did not follow up on his discovery.  In 1904, however, John Fleming, a consultant with the Marconi Company engaged in research relating to wireless telegraphy, realized that the Edison effect could be harnessed to create a device that would only allow the flow of electric current in one direction and thus serve as a rectifier that converted a weak alternating current into a direct current that could be detected.  This would in turn allow a receiver to be more sensitive to radio waves, thus making reliable trans-Atlantic wireless communication possible.  Based on his research, Fleming created the first diode, the Fleming Valve, in which an electric current was passed in one direction from a negatively-charged cathode to a positively-charged anode through a vacuum-sealed glass container.  The vacuum tube concept invented by Fleming remained the primary building block of electronic devices for the next fifty years.

In 1906, an American electrical engineer named Lee DeForest working independently of Fleming began creating his own series of electron tubes, which he called Audions.  DeForest’s major breakthrough was the development of the triode, which used a third electrode called a grid that could control the voltage of the current in the tube and therefore serve as an amplifier to boost the power of a signal.  DeForest’s tube contained gas at low pressure, which inhibited reliable operation, but by 1913 the first vacuum tube triodes had been developed.  In 1918, British physicists William Eccles and F.W. Jordan used two triodes to create the Eccles-Jordan circuit, which could flip between two states like an electrical relay and therefore serve as a switching device.

Even after the invention of the Eccles-Jordan circuit, few computer pioneers considered using vacuum tubes in their devices.  Conventional wisdom held they were unsuited for large-scale projects because a triode contains a filament that generates a great deal of heat and is prone to burnout.  Consequently, the failure rate would be unacceptable in a device requiring thousands of tubes.  One of the first people to challenge this view was a British electrical engineer named Thomas Flowers.

Tommy Flowers, the designer of Colossus

Born in London’s East End, Flowers, the son of a bricklayer, took an apprenticeship in mechanical engineering at the Royal Arsenal, Woolwich, while attending evening classes at the University of London.  After graduating with a degree in electrical engineering, Flowers took a job with the telecommunications branch of the General Post Office (GPO) in 1926.  In 1930, he was posted to the GPO Research Branch at Dollis Hill, where he established a reputation as a brilliant engineer and achieved rapid promotion.

In the early 1930s, Flowers began conducting research into the use of electronics to replace relays in telephone switchboards.  Counter to conventional wisdom, Flowers realized that vacuum tube burnout usually occurred when a device was switched on and off frequently.  In a switchboard or computer, the vacuum tubes could remain in continuous operation for extended periods once switched on, thus greatly increasing their longevity.  Before long, Flowers began experimenting with equipment containing as many as 3,000 vacuum tubes.  Flowers would make the move from switchboards to computing devices with the onset of World War II.

With the threat of Nazi Germany rising in the late 1930s, the United Kingdom began devoting more resources to cracking German military codes.  Previously, this work had been carried out in London at His Majesty’s Government Code and Cypher School, which was staffed with literary scholars rather than cryptographic experts.  In 1938, however, MI6, the British Intelligence Service, purchased a country manor called Bletchley Park, near the intersection of the rail lines connecting Oxford and Cambridge and London and Birmingham, to serve as a cryptographic and code-breaking facility.  The next year, the government began hiring mathematicians to seriously engage in code-breaking activities.  The work conducted at the manor has been credited with shortening the war in Europe and saving countless lives. It also resulted in the development of the first electronic computer.

Today, the Enigma Code, broken by a team led by Alan Turing, is the most celebrated of the German ciphers decrypted at Bletchley, but this was actually just one of several systems used by the Reich and was not even the most complicated.  In mid-1942, Germany initiated general use of the Lorenz Cipher, which was reserved for messages between the German High Command and high-level army commands, as the encryption machine — which the British code-named “Tunny” — was not easily portable like the Enigma Machine.  In 1942, Bletchley established a section dedicated to breaking the cipher, and by November a system called the “statistical method” had been developed by William Tutte to crack the code, which built on earlier work by Turing.  When Tutte presented his method, mathematician Max Newman decided to establish a new section — soon labelled the Newmanry — to apply the statistical method with electronic machines.  Newman’s first electronic codebreaking machine, the Heath Robinson, was both slow and unreliable, but it worked well enough to prove that Newman was on the right track.

Meanwhile, Flowers joined the code-breaking effort in 1941 when Alan Turing enlisted Dollis Hill to create some equipment for use in conjunction with the Bombe, his Enigma-cracking machine.  Turing was greatly impressed by Flowers, so when Dollis Hill encountered difficulty crafting a combining unit for the Heath Robinson, Turing suggested that Flowers be called in to help.  Flowers, however, doubted that the Heath Robinson would ever work properly, so in February 1943 he proposed the construction of an electronic computer to do the work instead.  Bletchley Park rejected the proposal based on existing prejudices over the unreliability of tubes, so Flowers began building the machine himself at Dollis Hill.  Once the computer was operational, Bletchley saw the value in it and accepted the machine.

Installed at Bletchley Park in January 1944, Flowers’s computer, dubbed Colossus, contained 1,600 vacuum tubes and processed 5,000 characters per second, a limit imposed not by the speed of the computer itself, but rather by the speed at which the reader could safely operate without risk of destroying the paper tape.  In June 1944, Flowers completed the first Colossus II computer, which contained 2,400 tubes and used an early form of shift register to perform five simultaneous operations and therefore operated at a speed of 25,000 characters per second.  The Colossi were not general purpose computers, as they were dedicated solely to a single code-breaking operation, but they were program-controlled. Unlike electro-mechanical computers, however, electronic computers process information too quickly to accept instructions from punched cards or paper tape, so the Colossus actually had to be rewired using plugs and switches to run a different program, a time-consuming process.

As the first programmable electronic computer, Colossus was an incredibly significant advance, but it ultimately exerted virtually no influence on future computer design.  By the end of the war, Bletchley Park was operating nine Colossus II computers alongside the original Colossus to break Tunny codes, but after Germany surrendered, Prime Minister Winston Churchill ordered the majority of the machines dismantled and kept the entire project classified.  It was not until the 1970s that most people knew that Colossus had even existed, and the full function of the machine remained unknown until 1996.  Therefore, instead of Flowers being recognized as the inventor of the electronic computer, that distinction was held for decades by a group of Americans working at the Moore School of the University of Pennsylvania.

ENIAC

The Electronic Numerical Integrator and Computer (ENIAC), the first widely known electronic computer

In 1935, the United States Army established a new Ballistic Research Laboratory (BRL) at the Aberdeen Proving Grounds in Maryland dedicated to calculating ballistics tables for artillery.  With modern guns capable of lofting projectiles at targets many miles away, properly aiming them required the application of complex differential equations, so the BRL assembled a staff of thirty to create trajectory tables for various ranges, which would be compiled into books for artillery officers.  Aberdeen soon installed one of Bush’s differential analyzers to help compute the tables, but the onset of World War II overwhelmed the lab’s capabilities.  Therefore, it began contracting out some of its table-making work to the Moore School, the closest institution with its own differential analyzer.

The Moore School of Electrical Engineering of the University of Pennsylvania enjoyed a fine reputation, but it carried nowhere near the prestige of MIT and therefore did not receive the same level of funding support from the War Department for military projects.  It did, however, place itself on a war footing by accelerating degree programs through the elimination of vacations and instituting a series of war-related training and research programs.  One of these was the Engineering, Science, and Management War Training (ESMWT) program, an intensive ten-week course designed to familiarize physicists and mathematicians with electronics to address a manpower shortfall in technical fields.  One of the graduates of this course was a physics instructor at a nearby college named John Mauchly.

Born in Cincinnati, Ohio, John William Mauchly grew up in Chevy Chase, Maryland, after his physicist father became the research chief for the Department of Terrestrial Magnetism of the Carnegie Institution, a foundation established in Washington, D.C. to support scientific research around the country.  Sebastian Mauchly specialized in recording atmospheric electrical conditions to further weather research, so John became particularly interested in meteorology.  After completing a Ph.D. at Johns Hopkins University in 1932, Mauchly took a position at Ursinus College, a small Philadelphia-area institution, where he studied the effects of solar flares and sunspots on long-range weather patterns.  Like Aiken and Zuse before him, Mauchly grew tired of solving the complex equations required for his research and began to dream of building a machine to automate this process.  After viewing an IBM electric calculating machine and a vacuum tube encryption machine at the 1939 World’s Fair, Mauchly felt electronics would provide the solution, so he began taking a night course in electronics and crafting his own experimental circuits and components.  In December 1940, Mauchly gave a lecture articulating his hopes of building a weather prediction computer to the American Association for the Advancement of Science.  After the lecture, he met an Iowa State College professor named John Atanasoff, who would play an important role in opening Mauchly to the potential of electronics by inviting him out to Iowa State to study a computer project he had been working on for several years.

The Atanasoff-Berry Computer (ABC), the first electronic computer project, which was never completed

A graduate of Iowa State College who earned a Ph.D. in theoretical physics from the University of Wisconsin-Madison in 1930, John Atanasoff, like Howard Aiken, was drawn to computing due to the frustration of solving equations for his dissertation.  In the early 1930s, Atanasoff experimented with tabulating machines and analog computing to make solving complex equations easier, culminating in a decision in December 1937 to create a fully automatic electronic digital computer.  Like Shannon and Zuse, Atanasoff independently arrived at binary digital circuits as the most efficient way to do calculations, remembering childhood lessons by his mother, a former school teacher, on calculating in base 2.  While he planned to use vacuum tubes for his calculating circuits, however, he rejected them for storage due to cost.  Instead, he developed a system in which paper capacitors would be attached to a drum that could be rotated by a bicycle chain.  By keeping the drums rotating so that the capacitors would sweep past electrically charged brushes once per second, Atanasoff believed he would be able to keep the capacitors charged and therefore create a low-cost form of electronic storage.  Input and output would be accomplished through punch cards or paper tape.  Unlike most of the other computer pioneers profiled so far, Atanasoff was only interested in solving a specific set of equations and therefore hardwired the instructions into the machine, meaning it would not be programmable.

By May 1939, Atanasoff was ready to put his ideas into practice, but he lacked electrical engineering skills himself and therefore needed an assistant to actually build his computer.  After securing a $650 grant from the Iowa State College Research Council, Atanasoff hired Clifford Berry, a graduate student recommended by one of his colleagues.  A genius who graduated high school at sixteen, Berry had been an avid ham radio operator in his youth and worked his way through college at Iowa State as a technician for a local company called Gulliver Electric.  He graduated in 1939 at the top of his engineering school class.  The duo completed a small-scale prototype of Atanasoff’s concept in late 1939 and then secured $5,330 from a private foundation to begin construction of what they named the Atanasoff-Berry Computer (ABC), the first electronic computer to employ separate memory and computing elements and a binary system for processing instructions and storing data, predating Colossus by just a few years.  By 1942, the ABC was nearly complete, but it remained unreliable and was ultimately abandoned when Atanasoff left Iowa State for a wartime posting with the Naval Ordnance Laboratory.  With no other champion at the university, the ABC was cannibalized for parts for more important wartime projects, after which the remains were placed in a boiler room and forgotten.  Until a patent lawsuit brought renewed attention to the computer in the 1960s, few were aware the ABC had ever existed, but in June 1941 Mauchly visited Atanasoff and spent five days learning everything he could about the machine.  While there is still some dispute regarding how influential the ABC was on Mauchly’s own work, there is little doubt that at the very least the computer helped guide his own thoughts on the potential of electronics for computing.

Upon completing the ESMWT at the Moore School, Mauchly was offered a position on the school’s faculty, where he soon teamed with a young graduate student he met during the course to realize his computer ambitions.  John Presper Eckert was the only son of a wealthy real estate developer from Philadelphia and an electrical engineering genius who won a city-wide science fair at twelve years old by building a guidance system for model boats and made money in high school by building and selling radios, amplifiers, and sound systems.  Like Tommy Flowers in England, Eckert was a firm believer in the use of vacuum tubes in computing projects and worked with Mauchly to upgrade the differential analyzer by using electronic amplifiers to replace some of its components.  Meanwhile, Mauchly’s wife was running a training program for human computers, whom the university was employing to work on ballistics tables for the BRL.  Even with the differential analyzer working non-stop and over two hundred human computers doing calculations by hand, a complete table of roughly 3,000 trajectories took the BRL thirty days to complete.  Mauchly was uniquely positioned in the organization to understand both the demands being placed on the Moore School’s computers and the technology that could greatly increase the efficiency of their work.  He therefore drafted a memorandum in August 1942 entitled “The Use of High Speed Vacuum Tube Devices for Calculating” in an attempt to interest the BRL in greatly speeding up artillery table creation through use of an electronic computer.

Mauchly submitted his memorandum to both the Moore School and the Army Ordnance Department and was ignored by both, most likely due to the continued skepticism over the use of vacuum tubes in large-scale computing projects.  The paper did catch the attention of one important person, however, Lieutenant Herman Goldstine, a mathematics professor from the University of Chicago then serving as the liaison between the BRL and the Moore School human computer training program.  While not one of the initial recipients of the memo, Goldstine became friendly with Mauchly in late 1942 and learned of the professor’s ideas.  Aware of the acute manpower crisis faced by the BRL for creating its ballistic tables, Goldstine urged Mauchly to resubmit his memo and promised he would use all his influence to aid its acceptance.  Therefore, in April 1943, Mauchly submitted a formal proposal for an electronic calculating machine that was quickly approved and given the codename “Project PX.”

John Mauchly (right) and J. Presper Eckert, the men behind ENIAC

Eckert and Mauchly began building the Electronic Numerical Integrator and Computer (ENIAC) in autumn 1943 with a team of roughly a dozen engineers.  Mauchly remained the visionary of the project and was largely responsible for defining its capabilities, while the brilliant engineer Eckert turned that vision into reality.  ENIAC was a unique construction that had more in common with tabulating machines than later electronic computers, as the team decided to store numbers in decimal rather than binary and stored and modified numbers in twenty accumulators, therefore failing to separate the memory and computing elements.  The machine was programmable, though like Colossus this could only be accomplished through rewiring, as the delay of waiting for instructions to be read from a tape reader was unacceptable in a machine operating at electronic speed.  The computer was powerful for its time, driven by 18,000 vacuum tubes, 70,000 resistors, 10,000 capacitors, 6,000 switches, and 1,500 relays, and could output a complete artillery table in just fifteen minutes.  The entire computer took up 1,800 square feet of floor space, consumed 150 kilowatts of power, and generated an enormous amount of heat.  Costing roughly $500,000, ENIAC was completed in November 1945 and successfully ran its first program the following month.

Unlike the previously discussed Z3, Colossus, and ABC computers, the ENIAC was announced to the general public with much fanfare in February 1946, was examined by many other scientists and engineers, and became the subject of a series of lectures held at the Moore School over eight weeks in the summer of 1946 in which other aspiring computer engineers could learn about the machine in detail.  While it was completed too late to have much impact on the war effort and exerted virtually no influence on future computers from a design perspective, the ENIAC stands as the most important of the early computers because it proved to the world at large that vacuum tube electronic computers were possible and served as the impetus for later computer projects.  Indeed, even before the ENIAC had been completed, Eckert and Mauchly were moving on to their next computer concept, which would finally introduce the last important piece of the computer puzzle: the stored program.

The First Stored Program Computers

The Manchester Small-Scale Experimental Machine (SSEM), the first stored-program computer to successfully run a program

As previously discussed, electronic computers like the Colossus and ENIAC were limited in their general utility because they could only be configured to run a different program by actually rewiring the machine, as there were no input devices capable of running at electronic speeds.  This bottleneck could be eliminated, however, if the programs themselves were also stored in memory alongside the numbers they were manipulating.  In theory, the binary numeral system made this feasible since the instructions could be represented through symbolic logic as a series of “yes or no,” “on or off,” “1 or 0” propositions, but in reality the amount of storage needed would overwhelm the technology of the day.  The mighty ENIAC with its 18,000 vacuum tubes could only store 200 characters in memory.  This was fine if all you needed to store were a few five- or ten-digit numbers at a time, but instruction sets would require thousands of characters.  By the end of World War II, the early computer pioneers of both Great Britain and the United States had begun tackling this problem independently.

The brilliant British mathematician Alan Turing, who has already been mentioned several times in this blog for both his code breaking and early chess programming feats, first articulated the stored program concept.  In April 1936, Turing completed a paper entitled “On Computable Numbers, with an Application to the Entscheidungsproblem” as a response to a lecture by Max Newman he attended at Cambridge in 1935.  In a time when the central computing paradigm revolved around analog computers tailored to specific problems, Turing envisioned a device called the Universal Turing Machine consisting of a scanner reading an endless roll of paper tape. The tape would be divided into individual squares that could either be blank or contain a symbol.  By reading these symbols based on a simple set of hardwired instructions and following any coded instructions conveyed by the symbols themselves, the machine would be able to carry out any calculation possible by a human computer, output the results, and even incorporate those results into a new set of calculations.  This concept of a machine reacting to data in memory that could consist of both instructions and numbers to be manipulated encapsulates the basic operation of a stored program computer.
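
As a rough modern illustration of the idea (the rule table below is invented for this example and is far simpler than anything in Turing’s paper), such a machine amounts to nothing more than a tape of symbols, a read/write head, and a table of rules mapping the current state and symbol to a new symbol, a head movement, and a new state; this particular machine simply flips every binary digit on its tape and halts at the first blank square.

    # A minimal Turing-style machine in Python: (state, symbol) -> (new symbol, move, new state).
    rules = {
        ('scan', '0'): ('1', +1, 'scan'),
        ('scan', '1'): ('0', +1, 'scan'),
        ('scan', ' '): (' ',  0, 'halt'),   # blank square: stop
    }

    def run(tape, state='scan', head=0):
        tape = list(tape) + [' ']           # pad the tape with a blank square
        while state != 'halt':
            symbol, move, state = rules[(state, tape[head])]
            tape[head] = symbol             # write the new symbol
            head += move                    # move the head along the tape
        return ''.join(tape).strip()

    print(run('10110'))                     # -> '01001'

Turing’s deeper insight was that one special machine, given a description of any such rule table on its own tape, could imitate the machine that table describes, which is precisely what a stored-program computer does when it reads its instructions out of memory.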

Turing was unable to act on his theoretical machine with the technology available to him at the time, but when he first saw the Colossus computer in operation at Bletchley Park, he realized that electronics would make such a device possible.  In 1945, Turing moved from Bletchley Park to the National Physical Laboratory (NPL), where late in the year he outlined the first relatively complete design for a stored-program computer.  Called the Automatic Computing Engine (ACE), the computer defined by Turing was ambitious for its time, leading others at the NPL to fear it could not actually be built.  The organization therefore commissioned a smaller test model instead called the Pilot ACE.  Ultimately, Turing left the NPL in frustration over the slow progress of building the Pilot ACE, which was not completed until 1950 and was therefore preceded by several other stored-program computers.  As a result, Turing, despite being the first to articulate the stored program concept, exerted little influence over how it was ultimately implemented.

One of the first people to whom Turing gave a copy of his landmark 1936 paper was its principal inspiration, Max Newman.  Upon reading it, Newman became interested in building a Universal Turing Machine himself.  Indeed, he actually tried to interest Tommy Flowers in the paper while he was building his Colossi for the Newmanry at Bletchley Park, but Flowers was an engineer, not a mathematician or logician, and by his own admission did not really understand Turing’s theories.  As early as 1944, however, Newman himself was expressing his enthusiasm about taking what had been learned about electronics during the war and establishing a project to build a Universal Turing Machine at the war’s conclusion.

In September 1945, Newman took the Fielden Chair of Mathematics at Manchester University and soon after applied for a grant from the Royal Society to establish the Computing Machine Laboratory at the university.  After the grant was approved in May 1946, Newman had portions of the dismantled Colossi shipped to Manchester for reference and began assembling a team to tackle a stored-program computer project.  Perhaps the most important members of the team were electrical engineers Freddie Williams and Tom Kilburn.  While working on radar during the war, the duo developed a storage method in which a cathode ray tube could “remember” a piece of information by virtue of firing an electron “dot” onto the surface of the tube, thus creating a persistent charge well.  By placing a metal plate against the surface of the tube, the data could be “read” in the form of a voltage pulse transferred to the plate whenever a charge well was created or eliminated by drawing or erasing a dot.  Originally developed to eliminate stationary background objects from a radar display, a Williams tube could also serve as computer memory and store 1,024 characters.  As any particular dot on the tube could be read at any given time, the Williams tube was an early form of random access memory (RAM).

In June 1948, Williams and Kilburn completed the Manchester Small Scale Experimental Machine (SSEM), which was specifically built to test the viability of the Williams Tube as a computer memory device.  While this computer contained only 550 tubes and was therefore not practical for actual computing projects, the SSEM was the first device in the world with all the characteristics of a stored program computer and proved the viability of Williams Tube memory.  Building on this work, the team completed the Manchester Mark 1 computer in October 1949, which contained 4,050 tubes and used more reliable custom-built CRTs from industrial conglomerate the General Electric Company (GEC) to increase the reliability of the memory.

John von Neumann stands next to the IAS Machine, which he developed based on his consulting work on the Electronic Discrete Variable Automatic Computer (EDVAC), the first stored-program computer in the United States

Meanwhile, at the Moore School Eckert and Mauchly were already beginning to ponder building a computer superior to the ENIAC by the middle of 1944.  The duo felt the most serious limitation of the computer was its paltry storage, and like Newman in England, they turned to radar technology for a solution.  Before joining the ENIAC project, Eckert had devised the mercury delay line, the first practical method of eliminating stationary objects from a radar display.  Basically, rather than displaying the result of a single pulse on the screen, the radar would compare two pulses, one of which was delayed by passing it through a column of mercury, allowing both pulses to arrive at the same time, with the radar screen displaying only those objects that were in different locations between the two pulses.  Eckert realized that using additional electronic components to keep the delayed pulse trapped in the mercury would allow it to function as a form of computer memory.

The effort to create a better computer received a boost when Herman Goldstine had a chance encounter with physicist John von Neumann at the Aberdeen railroad station.  A brilliant Hungarian emigre teaching at Princeton, von Neumann was consulting on several government war programs, including the Manhattan Project, but had not been aware of the ENIAC.  When Goldstine started discussing the computer on the station platform, von Neumann took an immediate interest and asked for access to the project.  Impressed by what he saw, von Neumann not only used his influence to help gain the BRL’s approval for Project PY to create the improved machine, he also held several meetings with Eckert and Mauchly in which he helped define the basic design of the computer.

The extent of von Neumann’s contribution to the Electronic Discrete Variable Automatic Computer (EDVAC) remains controversial.  Because the eminent scientist penned the first published general overview of the computer in May 1945, entitled “First Draft of a Report on the EDVAC,” the stored program concept articulated therein came to be called the “von Neumann architecture.”  In truth, the realization that the increased memory provided by mercury delay lines would allow both instructions and numbers to be stored in memory occurred during meetings between Eckert, Mauchly, and von Neumann, and his contributions were probably not definitive.  Von Neumann did, however, play a critical role in defining the five basic elements of the computer — the input, the output, the control unit, the arithmetic unit, and the memory — which remain the basic building blocks of the modern computer.  It is also through von Neumann, who was keenly interested in the human brain, that the term “memory” entered common use in a computing context.  Previously, everyone from Babbage forward had used the term “storage” instead.
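
The toy sketch below (its instruction set, program, and numbers are invented purely for illustration and bear no relation to EDVAC’s actual order code) shows how those five elements fit together in a stored-program design: a single memory holds instructions and data side by side, a control unit fetches and decodes each instruction in turn, and an arithmetic register carries out the work.

    # A toy stored-program machine: one memory array holds instructions and data alike.
    memory = [
        ('LOAD',  5),   # 0: acc <- memory[5]
        ('ADD',   6),   # 1: acc <- acc + memory[6]
        ('STORE', 7),   # 2: memory[7] <- acc
        ('PRINT', 7),   # 3: output memory[7]
        ('HALT',  0),   # 4: stop
        2, 3, 0,        # 5-7: data stored right alongside the program
    ]

    acc, pc = 0, 0                    # arithmetic register and program counter (the control unit)
    while True:
        op, addr = memory[pc]         # fetch and decode the next instruction
        pc += 1
        if op == 'LOAD':    acc = memory[addr]
        elif op == 'ADD':   acc += memory[addr]
        elif op == 'STORE': memory[addr] = acc
        elif op == 'PRINT': print(memory[addr])   # the output element; prints 5
        elif op == 'HALT':  break

Because the program lives in the same memory as the numbers it manipulates, changing what the machine does is a matter of loading different values into memory rather than rewiring anything, which is exactly the advance that separated EDVAC and its successors from ENIAC and Colossus.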

The EDVAC project commenced in April 1946, but the departure of Eckert and Mauchly with most of their senior engineers soon after disrupted the project, so the computer was not completed until August 1949 and only became fully operational in 1951 after several problems with the initial design were solved.  It contained 6,000 vacuum tubes, 12,000 diodes, and two sets of 64 mercury delay lines capable of storing eight characters per line, for a total storage capacity of 1,024 characters.  Like the ENIAC, EDVAC cost roughly $500,000 to build.

The Electronic Delay Storage Automatic Calculator (EDSAC)

Because of the disruptions caused by Eckert and Mauchly’s departures, the EDVAC was not actually the first completed stored program computer conforming to von Neumann’s report.  In May 1946, computing entrepreneur L.J. Comrie visited the Moore School to view the ENIAC and came away with a copy of the von Neumann EDVAC report.  Upon his return to England, he brought the report to physicist Maurice Wilkes, who had established a computing laboratory at Cambridge in 1937, but had made little progress in computing before World War II.  Wilkes devoured the report in an evening and then paid his own way to the United States so he could attend the Moore School lectures.  Although he arrived late and only managed to attend the final two weeks of the course, Wilkes was inspired to initiate his own stored-program computer project at Cambridge, the Electronic Delay Storage Automatic Calculator (EDSAC).  Unlike the competing computer projects at the NPL and Manchester University, Wilkes felt that completing a computer was more important than advancing computer technology and therefore decided to create a machine of only modest capability and to use delay line memory rather than the newer Williams tubes developed at Manchester.  While this resulted in a less powerful computer than some of its contemporaries, it did allow the EDSAC to become the first practical stored-program computer when it was completed in May 1949.

Meanwhile, after concluding his consulting work at the Moore School, John von Neumann established his own stored-program computer project in late 1945 at the Institute for Advanced Study (IAS) in Princeton, New Jersey.  Primarily designed by Julian Bigelow, the IAS Machine employed 3,000 vacuum tubes and could hold 4,096 40-bit words in its Williams Tube memory.  Although not completed until June 1952, the functional plan of the computer was published in the late 1940s and widely disseminated.  As a result, the IAS Machine became the template for many of the scientific computers built in the 1950s, including the MANIAC, JOHNNIAC, MIDAC, and MIDSAC machines that hosted some of the earliest computer games.

With the Moore lectures about the ENIAC and the publication of the IAS specifications helping to spread interest in electronic computers across the developed world and the EDSAC computer demonstrating that crafting a reliable stored program computer was possible, the stage was now set for the computer to spread beyond a few research laboratories at prestigious universities and become a viable commercial product.

Historical Interlude: The Birth of the Computer Part 1, the Mechanical Age

Before continuing the history of video gaming with the activities of the Tech Model Railroad Club and the creation of the first truly landmark computer game, Spacewar!, it is time to pause and present the first of what I referred to in my introductory post as “historical interludes.”  In order to understand why the video game finally began to spread in the 1960s, it is important to understand the evolution of computer technology and the spread of computing resources.  As we shall see, the giant mainframes of the 1940s and 1950s were neither particularly interactive nor particularly accessible outside of a small elite, which generally prevented the creation of programs that provided feedback quickly and seamlessly enough to create an engaging play experience while also generally discouraging projects not intended to aid serious research or corporate data processing.  By the time work on Spacewar! began in 1961, however, it was possible to occasionally divert computers away from more scholarly pursuits and design a program interesting enough to hold the attention of players for hours at a time.  The next four posts will describe how computing technology reached that point.

Note: Unlike my regular posts, historical interlude posts will focus more on summarizing events and less on critiquing sources or stating exactly where every last fact came from.  They are meant to provide context for developments in video game history, and the information within them will usually be drawn from a small number of secondary sources and not be researched as thoroughly as the video game history posts.  Much of the material in this post is drawn from Computer: A History of the Information Machine by Martin Campbell-Kelly and William Aspray, The Maverick and His Machine: Thomas Watson, Sr. and the Making of IBM by Kevin Maney, and The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution by Walter Isaacson.

Defining the Computer


Human computers working at the NACA High Speed Flight Station in 1949

Before electronics, before calculating machines, even before the Industrial Revolution there were computers, but the term did not mean the same thing it does today.  Before World War II and the emergence of the first electronic digital computers, a computer was a person who performed calculations, generally for a specialized purpose.  As we shall see, most of the early computing machines were created specifically to perform calculations, so as they grew to function with less need for human intervention, they naturally came to be called “computers” themselves after the profession they quickly replaced.

The computer profession originated after the development of the first mathematical tables in the 16th and 17th centuries, such as logarithmic tables, which reduced complex multiplication and division to simple addition and subtraction, and trigonometric tables, which simplified the calculation of angles for fields like surveying and astronomy.  Computers were the people who performed the calculations necessary to produce these tables.  The first permanent table-making project was established in 1766 by Nevil Maskelyne to produce navigational tables that were updated and published annually in the Nautical Almanac, which is still issued today.

Maskelyne relied on freelance computers to perform his calculations, but with the dawning of the Industrial Revolution a more organized approach emerged.  In 1791, a French mathematician named Gaspard de Prony established what was essentially a computing factory, modeled on the division-of-labor principles espoused by Adam Smith in the Wealth of Nations, to compile accurate logarithmic and trigonometric tables for a new survey of the entirety of France undertaken as part of a project to reform the property tax system.  De Prony relied on a small number of skilled mathematicians to define the mathematical formulas and a group of middle managers to organize the tables, so his computers needed only a knowledge of basic addition and subtraction to do their work, reducing the computer to an unskilled laborer.  As the Industrial Revolution progressed, unskilled workers in most fields moved from simple tools to mechanical factory machinery, so it comes as no surprise that one enterprising individual would attempt to bring a mechanical tool to computing as well.

Charles Babbage and the Analytical Engine


Charles Babbage, creator of the first computer design

Charles Babbage was born in 1791 in London.  The son of a banker, Babbage was a generally indifferent student who bounced between several academies and private tutors, but did gain a love of mathematics at an early age and attained sufficient marks to enter Trinity College, Cambridge, in 1810.  While Cambridge was the leading mathematics institution in England, the country as a whole had fallen behind the Continent in sophistication, and Babbage soon came to realize he knew more about math than his instructors.  In an attempt to rectify this situation, Babbage and a group of friends established the Analytical Society to reform the study of mathematics at the university.

After leaving Cambridge in 1814 with a degree in mathematics from Peterhouse, Babbage settled in London, where he quickly gained a reputation as an eminent mathematical philosopher but had difficulty finding steady employment.  He also made several trips to France beginning in 1819, which is where he learned of De Prony’s computer factory.  In 1820, he joined with John Herschel to establish the Astronomical Society and took work supervising the creation of star tables.  Frustrated by the tedious nature of fact-checking the calculations of the computers and preparing the tables for printing, Babbage decided to create a machine that would automate the task.

The Difference Engine would consist of columns of wheels and gears, each of which represented a single decimal place.  Once the initial values were set for each column (the first column holding the starting value of a polynomial and the remaining columns its successive finite differences), the machine would use the method of finite differences (hence its name) to complete the tables automatically through repeated addition and then send them to a printing device.  Babbage presented his proposed machine to the Royal Society in 1822 and won government funding the next year by arguing that a maritime industrial nation required the most accurate navigational tables possible and that the Difference Engine would be both cheaper to operate and more accurate than an army of human computers.
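
To make the method of finite differences concrete, here is a minimal modern sketch of the calculation the engine mechanized (the polynomial and starting values are illustrative, not any actual engine setting): once the starting value and its constant differences are loaded into the columns, every further table entry follows from repeated addition alone.

```python
def difference_table(initial_values, steps):
    """initial_values[0] is p(0); initial_values[1:] are the successive
    finite differences of the polynomial at 0.  Each step adds every
    column's neighbor into it, much as the engine's wheel columns would,
    producing the next table entry without any multiplication."""
    cols = list(initial_values)
    table = [cols[0]]
    for _ in range(steps):
        for i in range(len(cols) - 1):
            cols[i] += cols[i + 1]   # repeated addition is the only operation
        table.append(cols[0])
    return table

# Illustrative example: p(x) = x**2 + x + 41, so p(0) = 41, the first
# difference at 0 is 2, and the second difference is a constant 2.
print(difference_table([41, 2, 2], 5))   # [41, 43, 47, 53, 61, 71]
```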

The initial grant of £1,500 quickly proved insufficient for the task of creating the machine, which was at the very cutting edge of machine-tool technology and therefore extremely difficult to fashion components for.  The government nevertheless continued to fund the project for over a decade, ultimately providing £17,000.  By 1833, Babbage was able to construct a miniature version of the Difference Engine that lacked sufficient capacity to actually create tables but did prove the feasibility of the project.  The next year, however, he unwittingly sabotaged himself by proposing an even grander device to the government, the Analytical Engine, undermining the government’s faith in his ability to complete the original project and causing it to withdraw funding and support.  A fully working Difference Engine to Babbage’s specification would not be built until the late 1980s, by which time it was a historical curiosity rather than a useful machine.  In the meantime, Babbage turned his attention to the Analytical Engine, the first theorized device with the capabilities of a modern computer.


A portion of Charles Babbage’s Analytical Engine, which remained unfinished at his death

The Difference Engine was merely a calculating machine that performed addition and subtraction, but the proposed Analytical Engine was a different beast.  Equipped with an arithmetical unit called the “mill” that exhibited many of the features of a modern central processing unit (CPU), the machine would be capable of performing all four basic arithmetic operations.  It would also possess a memory, the “store,” able to hold 1,000 numbers of up to 40 digits each.  Most importantly, it would be program controlled, able to perform a wide variety of tasks based on instructions fed into the machine.  These programs would be entered using punched cards, a recording medium first developed in the 1720s by Basile Bouchon and Jean-Baptiste Falcon to automate textile looms and greatly improved and popularized by Joseph Marie Jacquard in 1801 for the loom that bears his name.  Results could be output to a printer or a curve plotter.  By employing separate memory and computing elements and establishing a method of program control, Babbage outlined the first machine to include all the basic hallmarks of the modern computer.
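
The division of labor Babbage envisioned maps cleanly onto a toy sketch; the instruction names and store layout below are invented for illustration, not a reconstruction of Babbage’s notation.  Values sit in a store, each program “card” names an operation for the mill along with the store columns to read and write, and the result lands back in the store.

```python
STORE = [0] * 20                      # the "store": numbered columns holding values

def mill(op, a, b):
    # the "mill": carries out the four basic arithmetic operations
    if op == "ADD": return a + b
    if op == "SUB": return a - b
    if op == "MUL": return a * b
    if op == "DIV": return a // b
    raise ValueError(f"unknown operation card: {op}")

def run(program):
    """Each 'card' names an operation, two source columns, and a destination column."""
    for op, src1, src2, dest in program:
        STORE[dest] = mill(op, STORE[src1], STORE[src2])

# e.g. compute (5 + 3) * 2 entirely under card control
STORE[0], STORE[1], STORE[2] = 5, 3, 2
run([("ADD", 0, 1, 3),    # store[3] = store[0] + store[1]
     ("MUL", 3, 2, 4)])   # store[4] = store[3] * store[2]
print(STORE[4])           # prints 16
```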

Babbage sketched out the design of his Analytical Engine between 1834 and 1846.  He then halted work on the project for a decade before returning to the concept in 1856 and continuing to tinker with it right up until his death in 1871.  Unlike with the Difference Engine, however, he was never successful in securing funding from a British Government that remained unconvinced of the device’s utility (and unimpressed by Babbage’s inability to complete the first project it had commissioned from him), and thus he failed to build a complete working unit.  His project did attract attention in certain circles, however.  Luigi Menabrea, a personal friend and mathematician who later became Prime Minister of Italy, invited Babbage to present his Analytical Engine to a gathering of scientists in Turin in 1840 and subsequently published an account of the lectures in French in 1842.  This account was translated into English over a nine-month period in 1842-43 by another friend of Babbage, Ada Lovelace, the daughter of the celebrated poet Lord Byron.

Ada Lovelace has been a controversial figure in computer history circles.  Born in 1815, she never knew her celebrated father, whom her mother fled shortly after Ada’s birth.  She possessed what appears to have been a decent mathematical mind, but suffered from mental instability and delusions of grandeur that caused her to perceive greater abilities than she actually possessed.  She became a friend and student of the noted mathematician Mary Somerville, who was also a friend of Babbage.  It was through this connection that she began attending Babbage’s regular Saturday evening salons in 1834 and came to know the man.  She tried unsuccessfully to convince him to tutor her, but they remained friends and he was happy to show off his machines to her.  Lovelace became a fervent champion of the Analytical Engine and attempted to convince Babbage to make her his partner and publicist for the machine.  It was in this context that she not only took on the translation of the Turin lecture in 1842, but at Babbage’s suggestion also decided to append her own description of how the Analytical Engine differed from the earlier Difference Engine alongside some sample calculations using the machine.

In a section entitled “Notes by the Translator,” which ended up being longer than the translation itself, Lovelace articulated several important general principles of computing, including the recognition that a computer could be programmed and reprogrammed to take on a variety of different tasks and that it could be set to tasks beyond basic math through the use of symbolic logic.  She also outlined a basic structure for programming on the Analytical Engine, becoming the first person to articulate common program elements such as recursive loops and subroutines.  Finally, she included a sample program to calculate a set of Bernoulli numbers using the Analytical Engine.  This last feat has led some people to label Lovelace the first computer programmer, though in truth it appears Babbage created most of this program himself.  Conversely, some people dismiss her contributions entirely, arguing that she was being fed all of her ideas directly by Babbage and had little personal understanding of how his machine worked.  The truth is probably somewhere in the middle.  While calling her the first programmer is probably too much of a stretch, as Babbage had already devised several potential programs himself by that point and contributed significantly to Lovelace’s as well, she still deserves recognition for being the first person to articulate several important elements of computer program structure.  Sadly, she had no chance to make any further mark on computer history, succumbing to uterine cancer in 1852 at the age of thirty-six.
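
Lovelace’s Bernoulli-number example translates naturally into modern terms.  The sketch below is not her Note G program, which was organized around the engine’s variable columns and operation cards, but a minimal Python illustration of the same calculation built on the standard recurrence for the Bernoulli numbers, using the kind of loop structure she described.

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return B_0 .. B_n as exact fractions using the standard recurrence
    sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1, with B_0 = 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        acc = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(Fraction(-1, m + 1) * acc)
    return B

print([str(b) for b in bernoulli(8)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```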

Towards the Modern Office


An office in 1907 equipped with a Burroughs adding machine, showcasing some of the mechanical equipment revolutionizing clerical work in the period.

Ultimately, the Analytical Engine proved too ambitious, and the ideas articulated by Babbage would have to wait for the dawn of the electronics era to become practical.  In the meantime, however, the Industrial Revolution resulted in great advances in office automation that would birth some of the most important companies of the early computer age.  Unlike the human computer industry and the innovative ideas of Babbage, however, the majority of these advances came not from Europe, but from the United States.

Several explanations have been advanced to explain why the US became the leader in office automation.  Certainly, the country industrialized later than the European powers, meaning businessmen were not burdened with outmoded theories and traditions that hindered innovations in the Old World.  Furthermore, the country had a long history of interest in manufacturing efficiency, dating back as far as Eli Whitney and his concept of using interchangeable parts in firearms in 1801 (Whitney’s role in the creation of interchangeable parts is usually exaggerated, as he was not the first person to propose the method and was never actually able to implement it himself, but he was responsible for introducing the concept to the US Congress and therefore still deserves some credit for its subsequent adoption in the United States).  By the 1880s, this fascination with efficiency had evolved into the “scientific management” principles of Frederick Taylor that aimed to identify best practices through rational, empirical study and employ standardization and training to eliminate waste and inefficiency on the production line.  Before long, these ideals had penetrated the domain of the white-collar worker through the concept of “office rationalization,” in which managers introduced new technologies and systems to maximize productivity in that setting as well.

The first major advance in the drive for office automation was the invention of a practical typewriter.  While several inventors created typing machines in the early nineteenth century, none of these designs gained any traction in the marketplace because using them was slower than writing out a document by hand.  In 1867, however, a retired newspaper editor named Christopher Latham Sholes was inspired by an article in Scientific American describing a mechanical typing device to create one of his own.  By the next year Sholes, with the help of amateur mechanic Carlos Glidden and printer Samuel Soule, had created a prototype for a typing machine using a keyboard and type-basket design that finally allowed typing at a decent speed.  After Soule left the project, Sholes sent typewritten notes to several financiers in an attempt to raise capital to refine the device and prepare for mass production.  A Pennsylvania businessman named James Densmore answered the call and provided the funding necessary to make important improvements such as replacing a frame to hold the paper with a rotating drum and changing the layout of the keyboard to the familiar QWERTY orientation — still used on computer keyboards to this day — to cut down on jamming by spacing out commonly used letters in the typing basket.

After several failed attempts to mass produce the typewriter through smaller companies in the early 1870s, Densmore was able to attract the interest of Philo Remington of the small-arms manufacturer E. Remington & Sons, which had been branching out into other fields such as sewing machines and fire engines in the aftermath of the U.S. Civil War.  First introduced by Remington in 1874, the typewriter sold slowly at first, but as office rationalization took hold in the 1880s, businesses started flocking to the machine.  By 1890 Remington had a virtual monopoly on the new industry and was producing 20,000 machines a year.  In addition to establishing the typewriter in the office, Remington also pioneered the idea of providing after-market service for office products, opening branch offices in major cities where people could not only buy typewriters, but also bring them in for repairs.

With typed loose-leaf pages replacing the traditional “letter book” for office correspondence, companies soon found it necessary to adopt new methods for storing and retrieving documents.  This led to the development of vertical filing using hanging folders stored in upright cabinets, which was first publicly demonstrated by Melville Dewey at the Chicago World’s Fair in 1893.  While vertical filing was superior to the boxes and drawers previously employed in the workplace, it proved woefully inefficient once companies evolved from tracking hundreds of records to tens of thousands.  This time the solution came from James Rand, Sr., a clerk from Tonawanda, New York, who patented a visible index system in which colored signal strips and tabs allowed specific file folders to be found quickly and easily.  Based on this invention, Rand established the Rand Ledger Company in 1898.  His son, James Rand, Jr., joined the business in 1908 and then split off from his father in 1915 after a dispute over advertising spending to market his own record retrieval system based around index cards called the Kardex System.  As the elder Rand neared retirement a decade later, his wife orchestrated a reconciliation between him and his son, and their companies merged to form the Rand Kardex Company in 1925.  Two years later, Rand Kardex merged with the Remington Typewriter Company to form Remington Rand, which became the largest business machine company in the world.


A Burroughs “adder-lister,” one of the first commercially successful mechanical calculators

A second important invention of the late nineteenth century was the first practical calculator.  Mechanical adding machines had existed as far back as the 17th century, when Blaise Pascal completed his Pascaline in 1645 and Gottfried Leibniz invented the Stepped Reckoner, the first calculator capable of performing all four basic functions, in 1692, but the underlying technology remained fragile and unreliable and therefore unsuited to regular use despite continued refinements over the next century.  In 1820, the calculator was commercialized for the first time by Thomas de Colmar, but production of his Arithmometer lasted only until 1822.  After making several changes, Thomas began offering his machine to the public again in 1851, but while the Arithmometer gained a reputation for both sturdiness and accuracy, production never exceeded a few dozen a year over the next three decades, as the calculator remained too slow and impractical for use in a business setting.

The main speed bottleneck of the early adding machines was that they all required the setting of dials and levers to use, making them far more cumbersome for bookkeepers than just doing the sums by hand.  The man who first solved this problem was Dorr Felt, a Chicago machinist who replaced the dials with keys similar to those found on a typewriter.  Felt’s Comptometer, completed in 1885, arranged keys labelled 0 to 9 across ten columns that each corresponded to a single digit of a number, allowing figures to be entered rapidly with just one hand.  In 1887, Felt formed the Felt & Tarrant Manufacturing Company with a local manufacturer named Robert Tarrant to mass produce the Comptometer, and by 1900 they were selling over a thousand a year.

While Felt remained important in the calculator business throughout the early twentieth century, he was ultimately eclipsed by another inventor.  William S. Burroughs, the son of a St. Louis mechanic, was employed as a clerk at a bank but suffered from health problems brought on by spending hours hunched over columns adding figures.  Like Felt, he decided to create a mechanical adding machine using keys to improve this process, but he also added another key advance to his “adder-lister,” the ability to print the numbers as they were entered so there would be a permanent record of every financial transaction.  In 1886, Burroughs established the American Arithmometer Company to market his adding machine, which was specifically targeted at banks and clearing houses and was selling at a rate of several hundred a year by 1895.  Burroughs died in 1898, but the company lived on and relocated to Detroit in 1904 after it outgrew its premises in St. Louis, changing its name to the Burroughs Adding Machine Company in honor of its founder.  At the time of the move, Burroughs was selling 4,500 machines a year.  Just four years later, that number had risen to 13,000.


John H. Patterson, founder of the National Cash Register Company (NCR)

The adding machine was one of two important money management devices invented in this period, with the other being the mechanical cash register.  This device was invented in 1879 by James Ritty, a Dayton saloon owner who feared his staff was stealing from him, and constructed by his brother, John.  Inspired by a tool that counted the revolutions of the propeller on a steamship, “Ritty’s Incorruptible Cashier” required the operator to enter each transaction using a keypad, displayed each total entered for all to see, and printed the results on a roll of paper, allowing the owner to compare the cash taken in to the recorded amounts.  Ritty attempted to interest other business owners in his machine, but proved unsuccessful and ultimately sold the business to Jacob Eckert of Cincinnati in 1881.  Eckert added a cash drawer to the machine and established the National Manufacturing Company, but he was barely more successful than the Rittys.  Therefore, in 1884 he sold out to John Patterson, who established the National Cash Register Company (NCR).

John Henry Patterson was born on a farm outside Dayton, Ohio, and entered the coal trade after graduating from Dartmouth College.  While serving as the general manager of the Southern Coal and Iron Company, Patterson was tasked with running the company store and became one of Ritty’s earliest cash register customers.  After being outmaneuvered in the coal trade, Patterson sold his business interests and used the proceeds to buy NCR.  A natural salesman, Patterson created and/or popularized nearly every important modern sales practice while running NCR.  He established sales territories and quotas for his salesmen, paid them a generous commission, and rewarded those who met their quotas with an annual sales convention.  He also instituted formal sales training and produced sales literature that included sample scripts, creating the first known canned sales pitch.  Like Remington, he established a network of dealerships that provided after market services to build customer loyalty, but he also advertised through direct mailings, another unusual practice.  Understanding that NCR could only stay on top of the business by continuing to innovate, Patterson also established an “innovations department” in 1888, one of the earliest permanent corporate research & development organizations in the world.  In an era when factory work was mostly still done in crowded “sweatshops,” Patterson constructed a glass-walled factory that let in ample light set amid beautifully landscaped grounds.

While Patterson seemed to genuinely care for the welfare of his workers, he also had a strong desire to control every aspect of their lives.  He manipulated subordinates constantly, hired and fired individuals for unfathomable reasons, instituted a strict physical fitness regimen that all employees were expected to follow, and established rules of conduct for everything from tipping waiters to buying neckties.  For all his faults, however, his innovative sales techniques created a juggernaut.  By 1900, the company was selling 25,000 cash registers a year, and by 1910 annual sales had risen to 100,000.  By 1928, six years after Patterson’s death, NCR was the second largest office-machine supplier in the world with annual sales of $50 million, just behind Remington Rand at $60 million and comfortably ahead of number three Burroughs at $32 million.  All three companies were well ahead of the number four company, a small firm called International Business Machines, or IBM.

Computing, Tabulating, and Recording

IBM, which eventually rose to dominance in the office machine and data processing industries, cannot be traced back to a single origin, for it began as a holding company that brought together several firms specializing in measuring and processing information.  There were three key people responsible for shaping the company in its early years: Herman Hollerith, Charles Flint, and Tom Watson, Sr.


Herman Hollerith, whose tabulating machine laid the groundwork for the company that became IBM

Born in Buffalo, New York, in 1860, Herman Hollerith pursued an education as a mining engineer, culminating in a Ph.D from Columbia University in 1890.  One of Hollerith’s professors at Columbia also served as an adviser to the Bureau of the Census in Washington, introducing Hollerith to the largest data processing organization in the United States.  At the time, the Census Bureau was in crisis as traditional methods of processing census forms failed to keep pace with a growing population.  The 1880 census, processed entirely by hand using tally sheets, took the bureau seven years to complete.  With the population of the country continuing to expand rapidly, the 1890 census appeared poised to take even longer.  To attack this problem, the new superintendent of the census, Robert Porter, held a competition to find a faster and more efficient way to count the U.S. population.

Three finalists demonstrated solutions for Porter in 1889.  Two of them created systems using colored ink or cards to allow data to be sorted more efficiently, but these were still manual systems.  Hollerith, on the other hand, inspired by the ticket punches used by train conductors, developed a system in which the statistical information was recorded on punched cards that were quickly tallied by a tabulating machine of his own design.  Cards were placed in this machine one at a time and pressed with an apparatus containing 288 retractable pins.  Any pin that encountered a hole in the card would complete an electrical circuit and advance one of forty tallies.  Using Hollerith’s machines, the Census Bureau was able to complete its work in just two and a half years.
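
The tallying principle is simple enough to capture in a short sketch.  In the Python illustration below, each card is just the set of positions where it has been punched, and every hole that lines up with a mapped pin advances the corresponding counter by one, much as a closed circuit advanced one of the machine’s dials; the pin positions and field names here are hypothetical, as the real 1890 card carried a far richer layout.

```python
from collections import Counter

CARD_FIELDS = {          # pin position -> counter name (hypothetical layout)
    3: "male", 4: "female",
    17: "native_born", 18: "foreign_born",
}

def tabulate(cards):
    """Count one tally per mapped hole across a stack of punched cards."""
    tallies = Counter()
    for punched_positions in cards:          # one press of the card reader
        for pos in punched_positions:        # each hole closes one circuit
            if pos in CARD_FIELDS:
                tallies[CARD_FIELDS[pos]] += 1   # advance that dial by one
    return tallies

print(tabulate([{3, 17}, {4, 17}, {3, 18}]))
# Counter({'male': 2, 'native_born': 2, 'female': 1, 'foreign_born': 1})
```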

As the 1890 census began to wind down, Hollerith re-purposed his tabulating system for use by businesses and incorporated the Tabulating Machine Company in December 1896.  He remained focused on the census, however, until President McKinley’s assassination in 1901 resulted in the appointment of a new superintendent who chose to go with a different company for 1910.  In the meantime, Hollerith refined his system by implementing a three-machine setup consisting of a keypunch to put the holes in the cards, a tabulator to tally figures, and a sorting machine to place the cards in sequence.  By 1911, Hollerith had roughly one hundred customers and the business was continuing to expand, but his health was failing, leading him to entertain an offer to sell from an influential financier named Charles Flint.


Charles Ranlett Flint, the man who forged IBM

Charles Ranlett Flint, a self-made man born into a family of shipbuilders, started his first business at 18 on the docks of his hometown of Thomaston, Maine.  From there, he secured a job with a trader named William Grace by offering to work for free.  In 1872, Grace made Flint a partner in his new W.R. Grace & Co. shipping and trading firm, which still exists today as a chemical and construction materials conglomerate.  During this period, Flint acted as a commission agent in South America dealing in both arms and raw materials.  He also became keenly interested in new technologies such as the automobile, light bulb, and airplane.

In 1892, Flint leveraged his international trading contacts to pull together a number of rubber exporters into a trust called U.S. Rubber.  This began a period of intense monopoly building by Flint across a number of industries.  By 1901, Flint’s growing roster of trusts included the International Time Recording Company (ITR) of Endicott, New York, based around the recently invented time clock that allowed employers to easily track the hours worked by their employees, and the Computing Scale Company of America of Dayton, Ohio, based around scales that would both weigh items by the pound and compute their total cost.  While ITR proved modestly successful, the Computing Scale Company was an abject failure.  In an attempt to salvage his poorly performing concern, Flint decided to define a new larger market of information recording machines for businesses and merge ITR and Computing Scale under the umbrella of a single holding company.  Feeling Hollerith’s company fit well into this scheme, Flint purchased it as well in 1911 and folded the three companies into the new Computing-Tabulating-Recording Company (C-T-R).  The holding company approach did not work, however, as C-T-R was an unwieldy organization consisting of three subsidiaries spread across five cities with managers who ignored each other at best and actively plotted against each other at worst.  Furthermore, the company was saddled with a large debt, and its component parts could not leverage their positions in a trust to create superior integration or economies of scale because their products and customers were too different.  By 1914, C-T-R was worth only $3 million and carried a debt of $6.5 million.  Flint’s experiment had clearly failed, so he brought in a new general manager to turn the company around.  That man was Thomas Watson, Sr.


Thomas Watson, Sr., the man who built IBM into a corporate giant

By the time Flint hired him for C-T-R, Watson already had a reputation as a stellar salesman, but he was also tainted by a court case brought over monopolistic practices.  Born on a farm in south central New York State, Watson tried his hand as both a bookkeeper and a salesman with various outfits, but had trouble holding down steady employment.  After his latest venture, a butcher’s shop in Buffalo, failed in 1896, Watson trudged down to the local NCR office to transfer the installment payments on the store’s cash register to the new owner.  While there, he struck up a conversation with a salesman named John Range and kept pestering him periodically until Range finally offered him a job.  Within nine months, Watson went from sales apprentice to full sales agent as he finally seemed to find his calling.  Four years later, he was transferred to the struggling NCR branch in Rochester, New York, which he managed to turn around.  This brought him to the attention of John Patterson in Dayton, who tapped Watson for a special assignment.

By 1903, when Patterson summoned Watson, NCR was experiencing fierce competition from a growing second-hand cash register market.  NCR cash registers were both durable and long-lasting, so enterprising businessmen had begun buying up used cash registers from stores that were upgrading or going out of business and then undercutting NCR’s prices on new machines.  For the controlling monopolist Patterson, this was unacceptable.  His solution was to create his own used cash register business that would buy old machines for higher prices than other outlets and sell them cheaper, making up the lost profits through funding directly from NCR.  Once the competition had been driven out of business, prices could be raised and the business would start turning a profit.  Patterson tapped Watson to control this business.  For legal reasons, Patterson kept the connection between NCR and the new Watson business a secret.

Between 1903 and 1908, Watson slowly expanded his used cash register business across the country, creating an excellent new profit-center for NCR.  His reward was a posting back at headquarters in Dayton as an assistant sales manager, where he soon became Patterson’s protégé and absorbed his innovative sales techniques.  By 1910, Watson had been promoted to sales manager, where his personable and less-controlling management style created a welcome contrast to Patterson and encouraged flexibility and creativity among the 900-strong NCR sales force, helping to double the company’s 1909 sales within two years.

As quickly as Watson rose at NCR, however, he fell even faster.  In 1912 the Taft administration, amid a general crusade against corporate trusts, brought criminal charges against Patterson, Watson, and other high-ranking NCR executives for violations of the Sherman Anti-Trust Act.  At the end of a three-month trial, Watson was found guilty along with Patterson and all but one of their co-defendants on February 13, 1913, and now faced the prospect of jail time.  Worse, the ordeal appears to have soured the ever-changeable Patterson on the executives indicted with him, as they were all chased out of the company within a year.  Watson himself departed NCR in November 1913 after 17 years of service.  Some accounts state that Watson was fired, but it appears that the separation was more by mutual agreement.  Either way, it was a humbled and disgraced Watson whom Charles Flint tapped to save C-T-R in early 1914.  Things began looking up the next year, however, when an appeal resulted in an order for a new trial.  All the defendants save Watson settled with the government, which decided pursuing Watson alone was not worth the effort.  Thus cleared of all wrongdoing, Watson was elevated to the presidency of C-T-R.

Watson saved and reinvented C-T-R through a combination of Patterson’s techniques and his own charisma and personality.  He reinvigorated the sales force through quotas, generous commissions, and conventions much like Patterson.  A lover of the finer things in life, he insisted that C-T-R staff always be impeccably dressed and polite, shaping the popular image of the blue-suited IBM sales person that would last for decades.  He changed the company culture by emphasizing the importance of every individual in the corporation and building a sense of company pride and loyalty.  Finally, he was fortunate to take over at a time when the outbreak of World War I and a booming U.S. economy led to increased demand for tabulating machines both from businesses and the U.S. government.  Between 1914 and 1917, revenues doubled from $4.2 million to $8.3 million, and by 1920 they had reached $14 million.

What really set IBM apart, however, was the R&D operation Watson established based on the model of NCR’s innovations department.  At the time Watson arrived, C-T-R remained the leading seller of tabulating machines, but the competition was rapidly gaining market share on the back of superior products.  Hollerith, who remained as a consultant to C-T-R after Flint bought his company, showed little interest in developing new products, causing the company’s technology to fall further and further behind.  The company’s only other senior technical employee, Eugene Ford, occasionally came up with improvements, but he could not actually put them into practice without the approval of Hollerith, which was rarely forthcoming.  Watson moved Ford into a New York loft and ordered him to begin hiring additional engineers to develop new products.

Ford’s first hire, Clair Lake, developed the company’s first printing tabulator in the early 1920s, which gave the company a machine that could rival the competition in both technology and user friendliness.  Another early hire, Fred Carroll from NCR, developed the Carroll Press, which allowed C-T-R to cheaply mass produce the punched cards used in the tabulating machines and therefore enjoy a huge profit margin on the product.  In the late 1920s, Lake created a new patentable punched-card design that would only work in IBM machines, which locked in customers and made them unlikely to switch to a competing company and have to redo millions of cards.  Perhaps the most important hire was James Bryce, who joined the company in 1917, rose to chief engineer in 1922, and ended up with over four hundred patents to his name.

After a brief hiccup in 1921-22, when the U.S. endured a recession, C-T-R, which Watson renamed International Business Machines (IBM) in 1924, experienced rapid growth for the rest of the decade, reaching $20 million in revenue by 1928.  While this placed IBM behind Remington Rand, NCR, and Burroughs, the talented R&D group and highly effective sales force built by Watson left the company perfectly poised to rise to a dominant position in the 1930s and subsequently conquer the new computer market of the 1950s.