Historical Interlude: The Birth of the Computer Part 4, Real-Time Computing

By 1955, computers were well on their way to becoming fixtures at government agencies, defense contractors, academic institutions, and large corporations, but their function remained limited to a small number of activities revolving around data processing and scientific calculation.  Generally speaking, the former process involved taking a series of numbers and running them through a single operation, while the latter process involved taking a single number and running it through a series of operations.  In both cases, computing was done through batch processing — i.e. the user would enter a large data set from punched cards or magnetic tape and then leave the computer to process that information based on a pre-defined program housed in memory.  For companies like IBM and Remington Rand, which had both produced electromechanical tabulating equipment for decades, this was a logical extension of their preexisting business, and there was little impetus for them to discover novel applications for computers.

In some circles, however, there was a belief that computers could move beyond data processing and actually be used to control complex systems.  Doing so would require a completely different paradigm in computer design, based around a user interacting with the computer in real time — i.e. being able to give the computer a command and have it provide feedback nearly instantaneously.  The quest for real-time computing not only expanded the capabilities of the computer, but also led to important technological breakthroughs instrumental in lowering the cost of computing and opening computer access to a greater swath of the population.  Therefore, the development of real-time computers served as the crucial final step in transforming the computer into a device capable of delivering credible interactive entertainment.

Note: This is the fourth and final post in a series of “historical interludes” summarizing the evolution of computer technology between 1830 and 1960.   The information in this post is largely drawn from Computer: A History of the Information Machine by Martin Campbell-Kelly and William Aspray,  A History of Modern Computing by Paul Ceruzzi, Forbes Greatest Technology Stories: Inspiring Tales of Entrepreneurs and Inventors Who Revolutionized Modern Business by Jeffrey Young, IBM’s Early Computers by Charles Bashe, Lyle Johnson, John Palmer, and Emerson Pugh, and The Ultimate Entrepreneur: The Story of Ken Olsen and Digital Equipment Corporation by Glenn Rifkin and George Harrar.

Project Whirlwind


Jay Forrester (l), the leader of Project Whirlwind

The path to the first real-time computer began with a project that was never supposed to incorporate digital computing in the first place.  In 1943, the head of training at the United States Bureau of Aeronautics, a pilot and MIT graduate named Captain Luis de Florez, decided to explore the feasibility of creating a universal flight simulator for military training.  While flight simulators had been in widespread use since Edwin Link had introduced a system based around pneumatic bellows and valves called the Link Trainer in 1929 and subsequently secured an Army contract in 1934, these trainers could only simulate the act of flying generally and were not tailored to specific planes.  Captain de Florez envisioned using an analog computer to simulate the handling characteristics of any extant aircraft and turned to his alma mater to make this vision a reality.

At the time, MIT was already the foremost center in the United States for developing control systems thanks to the establishment of the Servomechanisms Laboratory in 1941, which worked closely with the military to develop electromechanical equipment for fire control, bomb sights, aircraft stabilizers, and similar projects.  The Bureau of Aeronautics therefore established Project Whirlwind within the Servomechanisms Laboratory in 1944 to create de Florez’s flight trainer.  Leadership of the Whirlwind project fell to an assistant director of the Servomechanisms Laboratory named Jay Forrester.  Born in Nebraska, Forrester had been building electrical systems since he was a teenager, when he constructed a 12-volt electrical system out of old car parts to provide his family’s ranch with electricity for the first time.  After graduating from the University of Nebraska, Forrester came to MIT as a graduate student in 1939 and joined the Servomechanisms Laboratory at its inception.  By 1944, Forrester was getting restless and considering establishing his own company, so he was given his choice of projects to oversee to prevent his defection.  Forrester chose Whirlwind.

In early 1945, Forrester drew up the specifications for a trainer consisting of a mock cockpit connected to an analog computer that would control a hydraulic transmission system to provide feedback to the cockpit.  Based on this preliminary work, MIT drafted a proposal in May 1945 for an eighteen-month project budgeted at $875,000, which was approved.  As work on Whirlwind began, the mechanical elements of the design came together quickly, but the computing element remained out of reach.  To create an accurate simulator, Forrester required a computer that updated dozens of variables constantly and reacted to user input instantaneously.  Bush’s Differential Analyzer, perhaps the most powerful analog computer of the time, was still far too slow to handle these tasks, and Forrester’s team could not figure out how to produce a more powerful machine solely through analog components.  In the summer of 1945, however, a fellow MIT graduate student named Perry Crawford, who had written a master’s thesis in 1942 on using a digital device as a control system, alerted Forrester to the breakthroughs being made in digital computing at the Moore School.  In October, Forrester and Crawford attended a Conference on Advanced Computational Techniques hosted by MIT and learned about the ENIAC and EDVAC in detail.  By early 1946, Forrester was convinced that the only way forward for Project Whirlwind was the construction of a digital computer that could operate in real time.

The shift from an analog computer to a digital computer for the Whirlwind project resulted in a threefold increase in cost to an estimated $1.9 million.  It also created an incredible technical challenge.  In a period when the most advanced computers under development were struggling to achieve 10,000 operations a second, Whirlwind would require the capability of performing closer to 100,000 operations per second for seamless real-time operation.  Furthermore, the first stored-program computers were still three years away, so Forrester’s team also faced the prospect of integrating cutting-edge memory technologies that were still under development.  By 1946, the size of the Whirlwind team had grown to over a hundred staff members spread across ten groups, each focused on a particular part of the system, in an attempt to meet these challenges.  All other aspects of the flight simulator were placed on hold as the entire team focused its attention on creating a working real-time computer.


The Whirlwind I, the first real-time computer

By 1949, Forrester’s team had succeeded in designing an architecture fast enough to support real-time operation, but the computer could not operate reliably for extended periods.  With costs escalating and no end to development in sight, continued funding for the project was placed in jeopardy.  After the war, responsibility for Project Whirlwind had transferred from the Bureau of Aeronautics to the Office of Naval Research (ONR), which felt the project was not providing much value relative to a cost that had by now far surpassed $1.9 million.  By 1948, Whirlwind was consuming twenty percent of ONR’s entire research budget with little to show for it, so ONR began slowly trimming the budget.  By 1950, ONR was ready to cut funding altogether, but just as the project appeared on the verge of death, it was revived to serve another function entirely.

On August 29, 1949, the Soviet Union detonated its first atomic bomb.  In the immediate aftermath of World War II, the United States had felt relatively secure from the threat of Soviet attack due to the distance between the two nations, but now the USSR had both a nuclear capability and a long-range bomber capable of delivering a payload on U.S. soil.  During World War II, the U.S. had developed a primitive radar early warning system to protect against conventional attack, but it was wholly insufficient to track and interdict modern aircraft.  The United States needed a new air defense system and needed it quickly.

In December 1949, the United States Air Force formed a new Air Defense System Engineering Committee (ADSEC) chaired by MIT professor George Valley to address the inadequacies in the country’s air-defense system.  In 1950, ADSEC recommended creating a series of computerized command-and-control centers that could analyze incoming radar signals, evaluate threats, and scramble interceptors as necessary to interdict Soviet aircraft.  Such a massive and complex undertaking would require a powerful real-time computer to coordinate.  Valley contacted several computer manufacturers with his needs, but they all replied that real-time computing was impossible.

Despite being a professor at MIT, Valley knew very little about the Whirlwind project, as he was not interested in analog computing and had no idea it had morphed into a digital computer.  Fortunately, a fellow professor at the university, Jerome Wiesner, pointed him towards the project.  By early 1950, the Whirlwind I computer’s basic architecture had been completed, and it was already running its first test programs, so Forrester was able to demonstrate its real-time capabilities to Valley.  Impressed by what he saw, Valley organized a field test of the Whirlwind as a radar control unit in September 1950 at Hanscom Field outside Bedford, Massachusetts, where a radar station connected to Whirlwind I via a phone line successfully delivered a radar signal from a passing aircraft.  Based on this positive result, the United States Air Force established Project Lincoln in conjunction with MIT in 1951 and moved Whirlwind to the new Lincoln Laboratory.

Project SAGE


A portion of an IBM AN/FSQ-7 Combat Direction Central, the heart of the SAGE system and the largest computer ever built

By April 1951, the Whirlwind I computer was operational, but still rarely worked properly due to faulty memory technology.  At Whirlwind’s inception, there were two primary forms of electronic memory in use, the delay-line storage pioneered for the EDVAC and CRT memory like the Williams Tube developed for the Manchester Mark I.  From his exposure to the EDVAC, Forrester was already familiar with delay-line memory early in Whirlwind’s development, but that medium functioned too slowly for a real-time design.  Forrester therefore turned his attention to CRT memory, which could theoretically operate at a sufficient speed, but he rejected the Williams Tube due to its low refresh rate.  Instead, Forrester incorporated an experimental tube memory under development at MIT, but this temperamental technology never achieved its promised capabilities and proved unreliable besides.  Clearly, a new storage method would be required for Whirlwind.

In 1949, Forrester saw an advertisement for a new magnetic alloy called Deltamax from the Arnold Engineering Company that could be magnetized or demagnetized by passing a large enough electric current through it.  Forrester believed the properties of this material could be used to create a fast and reliable form of computer memory, but he soon discovered that Deltamax could not switch states quickly at high temperatures, so he assigned a graduate student named William Papian to find an alternative.  In August 1950, Papian completed a master’s thesis entitled “A Coincident-Current Magnetic Memory Unit” laying out a system in which individual cores — small doughnut-shaped objects — with magnetic properties similar to Deltamax are threaded into a three-dimensional matrix of wires.  Two wires are passed through the center of each core to magnetize or demagnetize it by taking advantage of a property called hysteresis, in which an electrical current only changes the magnetization of the material if it is above a certain threshold.  Only when currents are run through both wires in the same direction does the combined current exceed that threshold and change the magnetization, making the cores a suitable form of computer memory.  A third wire is threaded through all of the cores in the matrix, allowing any portion of the memory to be read at any time.
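
The coincident-current scheme is easier to see with a small simulation.  The sketch below is purely illustrative — the grid size, current values, and threshold are hypothetical stand-ins rather than Whirlwind’s actual parameters — but it captures the key idea: only the core receiving half-currents on both of its wires crosses the hysteresis threshold and changes state, while a separate sense wire reports the flip during a (destructive) read.

```python
# Toy model of coincident-current core memory as described above.
# The values below are illustrative, not Whirlwind's actual figures.
HALF_CURRENT = 0.5   # current driven on one selected wire
THRESHOLD = 0.75     # minimum combined current that flips a core (hysteresis)

class CoreMemoryPlane:
    def __init__(self, rows, cols):
        # one magnetization state (0 or 1) per core in the plane
        self.cores = [[0] * cols for _ in range(rows)]

    def write(self, row, col, bit):
        # Drive a half-current on one row wire and one column wire.  Only the
        # core at their intersection sees a full current above THRESHOLD;
        # every other core on those wires sees half the current and holds.
        for r in range(len(self.cores)):
            for c in range(len(self.cores[r])):
                current = (HALF_CURRENT if r == row else 0.0) + \
                          (HALF_CURRENT if c == col else 0.0)
                if current >= THRESHOLD:
                    self.cores[r][c] = bit

    def read(self, row, col):
        # Reading is destructive: drive the selected core toward 0 and watch
        # the sense wire threaded through every core.  A pulse means the core
        # held a 1, which must then be written back.
        old = self.cores[row][col]
        self.write(row, col, 0)
        if old == 1:
            self.write(row, col, 1)   # restore the value after reading
        return old

plane = CoreMemoryPlane(16, 16)       # the size of Papian's 1951 array
plane.write(3, 7, 1)
assert plane.read(3, 7) == 1 and plane.read(0, 0) == 0
```

Stacking several such planes and threading the same row and column position in each is what gave real core memories their three-dimensional, word-organized matrices, with one plane per bit of the word.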

Papian built the first small core memory matrix in October 1950, and by the end of 1951 he was able to construct a 16 x 16 array of cores.  During this period, Papian tested a wide variety of materials for his cores and settled on a silicon-steel ribbon wrapped around a ceramic bobbin, but these cores still operated too slowly and also required an unacceptably high level of current.  At this point Forrester discovered that a German ceramicist in New Jersey named E. Albers-Schoenberg was attempting to create a transformer for televisions by mixing iron ore with certain oxides to create a compound called a ferrite that exhibited certain magnetic properties.  While ferrites generated a weaker output than the metallic cores Papian was experimenting with, they could switch up to ten times faster.  After experimenting with various chemical compositions, Papian finally constructed a ferrite-based core memory system in May 1952 that could switch between states in less than a microsecond and therefore serve the needs of a real-time computer.  First installed in the Whirlwind I in August 1953, ferrite core memory was smaller, cheaper, faster, and more reliable than delay-line, CRT, and magnetic drum memory and ultimately doubled the operating speed of the computer while reducing maintenance time from four hours a day to two hours a week.  Within five years, core memory had replaced all other forms of memory in mainframe computers, netting MIT a hefty profit in patent royalties.

With Whirlwind I finally fully functional, the Lincoln Laboratory turned its attention to transforming the computer into a commercial command-and-control system suitable for installation in the United States Air Force’s air defense system.  This undertaking was beyond the scope of the lab itself, as it would require fabrication of multiple components on a large scale.  Lincoln Labs evaluated three companies to take on this task: defense contractor Raytheon, which had recently established a computer division; Remington Rand, through both its EMCC and ERA subsidiaries; and IBM.  At the time, Remington Rand was still the powerhouse in the new commercial computer business, while IBM was only just preparing to bring its first products to market.  Nonetheless, Forrester and his team were impressed with IBM’s manufacturing facilities, service force, integration, and experience deploying electronic products in the field and therefore chose the new kid on the block over its more established competitor.  Originally designated Project High by IBM — due to its location on the third floor of a necktie factory on High Street in Poughkeepsie — and the Whirlwind II by Lincoln Laboratory, the project eventually went by the name Semi-Automatic Ground Environment, or SAGE.

The heart of the SAGE system was a new IBM computer derived from the Whirlwind design called the AN/FSQ-7 Combat Direction Central.  By far the largest computer system ever built, the AN/FSQ-7 weighed 250 tons, consumed three megawatts of electricity, and took up roughly half an acre of floor space.  Containing 49,000 vacuum tubes and a core memory capable of storing over 65,000 33-bit words, the computer was capable of performing roughly 75,000 operations per second.  In order to ensure uninterrupted operation, each SAGE installation actually consisted of two AN/FSQ-7 computers so that if one failed, the other could seamlessly assume control of the air defense center.  As the first deployed real-time computer system, it inaugurated a number of firsts in commercial computing such as the ability to generate text and vector graphics on a display screen, the ability to directly enter commands via a typewriter-style keyboard, and the ability to select or draw items directly on the display using a light pen, a technology developed specifically for Whirlwind in 1955.  In order to remain in constant contact with other segments of the air defense system, the computer was also the first outfitted with a new technology called a modem, developed by AT&T’s Bell Labs research division to allow data to be transmitted over a phone line.

The first SAGE system was deployed at McChord Air Force Base in November 1958, and the entire network of twenty-three Air Defense Direction Centers was online by 1963 at a total cost to the government of $8 billion.  While IBM agreed to do the entire project at cost as part of its traditional support for national defense, the project still brought the company $500 million in revenues in the late 1950s.  SAGE was perhaps the key project in IBM’s rise to dominance in the computer industry.  Through this massive undertaking, IBM became the most knowledgeable company in the world at designing, fabricating, and deploying both large-scale mainframe systems and their critical components such as core memory and computer software.  In 1954, IBM upgraded its 701 computer to replace Williams Tube memory with magnetic cores and released the system as the IBM 704.  The next year, a core-memory replacement for the 702, designated the IBM 705, followed.  These new computers were instrumental in vaulting IBM past Remington Rand in the late 1950s.  SAGE, meanwhile, remained operational until 1983.

The Transistor and the TX-0


Kenneth Olsen, co-designer of the TX-0 and co-founder of the Digital Equipment Corporation (DEC)

While building a real-time computer for the SAGE air-defense system was the primary purpose of Project Whirlwind, the scope of the project grew large enough by the middle of the 1950s that staff could occasionally indulge in other activities, such as a new computer design proposed by staff member Kenneth Olsen.  Born in Bridgeport, Connecticut, Olsen began experimenting with radios as a teenager and took an eleven-month electronics course after entering the Navy during World War II.  The war was over by the time his training was complete, so after a single deployment on an admiral’s staff in the Far East, Olsen left the Navy to attend MIT in 1947, where he majored in electrical engineering.  After graduating in 1950, Olsen decided to continue his studies at MIT as a graduate student and joined Project Whirlwind.  One of Olsen’s duties on the project was the design and construction of the Memory Test Computer (MTC), a smaller version of the Whirlwind I built to test various core memory solutions.  In creating the MTC, Olsen innovated with a modular design in which each group of circuits responsible for a particular function was placed on a single plug-in unit mounted in a rack, where it could be easily swapped out if it malfunctioned.  This was a precursor of the plug-in circuit boards still used in computers today.

One of the engineers who helped Olsen debug the MTC was Wes Clark, a physicist who came to Lincoln Laboratory in 1952 after working at the Hanford nuclear production site in Washington State.  Clark and Olsen soon bonded over their shared views on the future of computing and their desire to build a machine that would apply the lessons of the Whirlwind project and the construction of the MTC to the latest advances in electronics, demonstrating to the defense industry the potential of a fast and power-efficient computer.  Specifically, Olsen and Clark wanted to explore the potential of a relatively new electronic component called the transistor.


John Bardeen (l), William Shockley (seated), and Walter Brattain, the team that invented the transistor

For over forty years, the backbone of all electronic equipment was the vacuum tube pioneered by John Fleming in 1904.  While this device allowed for switching at electronic speeds, its limitations were numerous.  Vacuum tubes generated a great deal of heat during operation, which meant that they consumed power at a prodigious rate and were prone to burnout over extended periods of use.  Furthermore, they could not be miniaturized beyond a certain point and had to be spaced relatively far apart for heat management, guaranteeing that tube-based electronics would always be large and bulky.  Unless an alternative switching device could be found, the computer would never be able to shrink below a certain size.  The solution to the vacuum tube problem came not from one of the dozen or so computer projects being funded by the U.S. government, but from the telephone industry.

In the 1920s and 1930s, AT&T, which held a monopoly on telephone service in the United States, began constructing a series of large switching facilities in nearly every town in the country to allow telephone calls to be placed between any two phones in the United States.  These facilities relied on the same electromechanical relays that powered several of the early computers, which were bulky, slow, and wore out over time.  Vacuum tubes were sometimes used as well, but the problems articulated above made them particularly unsuited for the telephone network.  As AT&T continued to expand its network, the size and speed limitations of relays became increasingly unacceptable, so the company gave a mandate to its Bell Labs research arm, one of the finest corporate R&D organizations in the world, to discover a smaller, faster, and more reliable switching device.

In 1936, the new director of research at Bell Labs, Mervin Kelly, decided to form a group to explore the possibility of creating a solid-state switching device.  Both solid-state physics, which explores the properties of solids based on the arrangement of their sub-atomic particles, and the related field of quantum mechanics, in which physical phenomena are studied on a nanoscopic scale, were in their infancy and not widely understood, so Kelly scoured the universities for the smartest chemists, metallurgists, physicists, and mathematicians he could find.  His first hire was a brilliant but difficult physicist named William Shockley.  Born in London to a mining engineer and a geologist, William Bradford Shockley, Jr. grew up in Palo Alto, California, in the heart of the Santa Clara Valley, a region known as the “Valley of the Heart’s Delight” for its orchards and flowering plants.  Shockley’s father spent most of his time moving from mining camp to mining camp, so the boy grew especially close to his mother, May, who taught him the ins and outs of geology from a young age.  After earning his undergraduate degree from Caltech, Shockley received a Ph.D. from MIT in 1936 and went to work for Bell.  Gruff and self-centered, Shockley never got along with his colleagues anywhere he worked, but there was no questioning his brilliance or his ability to push colleagues towards making new discoveries.

Kelly’s group began educating itself on the field of quantum mechanics through informal sessions in which they would each take a chapter of the only quantum mechanics textbook in existence and teach the material to the rest of the group.  As their knowledge of the underlying science grew in the late 1930s, the group decided the most promising path to a solid-state switching device lay with a group of materials called semiconductors.  Generally speaking, most materials are either conductors of electricity, allowing electrons to flow through them, or insulators, halting the flow of electrons.  As early as 1826, however, Michael Faraday, the brilliant scientist whose work paved the way for electric power generation and transmission, had observed that a small number of compounds would not only act as conductors under some conditions and insulators under others, but could also serve as amplifiers under the right circumstances.  These properties allowed a semiconductor to behave like a triode, but for decades scientists remained unable to determine why changes in heat, light, or magnetic field would alter the conductivity of these materials and therefore could not harness this property.  It was not until the field of quantum mechanics became more developed in the 1930s that scientists gained a great enough understanding of electron behavior to attack the problem.  Kelly’s new solid-state group hoped to unlock the mystery of semiconductors once and for all, but their work was interrupted by World War II.

In 1945, Kelly revived the solid-state project under the joint supervision of William Shockley and chemist Stanley Morgan.  The key members of this new team were John Bardeen, a physicist from Wisconsin known as one of the best quantum mechanics theorists in the world, and Walter Brattain, a farm boy from Washington known for his prowess at crafting experiments.  During World War II, great progress had been made in creating crystals of the semiconducting element germanium for use in radar, so the group focused its activities on that element.  In late 1947, Bardeen and Brattain discovered that if they introduced impurities into just the right spot on a lump of germanium, the germanium could amplify a current in the same manner as a vacuum tube triode.  Shockley’s team gave an official demonstration of this phenomenon to other Bell Labs staff on December 23, 1947, which is often recognized as the official birthday of the transistor, so named because it effects the transfer of a current across a resistor — i.e. the semiconducting material.  Smaller, less power-hungry, and more durable than the vacuum tube, the transistor paved the way for the development of the entire consumer electronics and personal computer industries of the late twentieth century.


The TX-0, one of the earliest transistorized computers, designed by Wes Clark and Kenneth Olsen

Despite its revolutionary potential, the transistor was not incorporated into computer designs right away, as there were still several design and production issues that had to be overcome before it could be deployed in the field in large numbers (which will be covered in a later post).  By 1954, however, Bell Labs had deployed the first fully transistorized computer, the Transistor Digital Computer or TRADIC, while electronics giant Philco had introduced a new type of transistor called a surface-barrier transistor that was expensive, but much faster than previous designs and therefore the first practical transistor for use in a computer.  It was in this environment that Clark and Olsen proposed a massive transistorized computer called the TX-1 that would be roughly the same size as a SAGE system and deploy one of the largest core memory arrays ever built, but they were turned down because Forrester did not find their design practical.  Clark therefore went back to the drawing board to create as simple a design as he could that still demonstrated the merits of transistorized computing.  As this felt like a precursor to the larger TX-1, Olsen and Clark named this machine the TX-0.

Completed in 1955 and fully operational the next year, the TX-0 — often pronounced “Tixo” — incorporated 3,600 surface-barrier transistors and was capable of performing 83,000 operations per second.  Like the Whirlwind, the TX-0 operated in real time, and it also incorporated a display with a 512×512 resolution that could be manipulated by a light pen, and a core memory that could store over 65,000 words, though Clark and Olsen settled on a relatively short 18-bit word length.  Unlike the Whirlwind I, which occupied 2,500 square feet, the TX-0 took up a paltry 200 square feet.  Both Clark and Olsen realized that the small, fast, interactive TX-0 represented something new: a (relatively) inexpensive computer that a single user could interact with in real time.  In short, it exhibited many of the hallmarks of what would become the personal computer.

With the TX-0 demonstrating the merits of high-speed transistors, Clark and Olsen returned to their goal of creating a more complex computer with a larger memory, which they dubbed the TX-2.  Completed in 1958, the TX-2 could perform a whopping 160,000 operations per second and contained a core memory of 260,000 36-bit words, far surpassing the capability of the earlier TX-0.  Olsen once again designed much of the circuitry for this follow-up computer, but before it was completed he decided to leave MIT behind.

The Digital Equipment Corporation


The PDP-1, Digital Equipment Corporation’s First Computer

Despite what Olsen saw as the nearly limitless potential of transistorized computers, the world outside MIT remained skeptical.  It was one thing to create an abstract concept in a college laboratory, people said, but another thing entirely to actually deploy an interactive transistorized system under real-world conditions.  Olsen fervently desired to prove these naysayers wrong, so along with Harlan Anderson, a fellow student who had worked with him on the MTC, he decided to form his own computer company.  As a pair of academics with no practical real-world business experience, however, Olsen and Anderson faced difficulty securing financial backing.  They approached defense contractor General Dynamics first, but were flatly turned down.  Unsure how to proceed next, they visited the Small Business Administration office in Boston, which recommended they contact investor Georges Doriot.

Georges Doriot was a Frenchman who immigrated to the United States in the 1920s to earn an MBA from Harvard and then decided to stay on as a professor at the school.  In 1940, Doriot became an American citizen, and the next year he joined the United States Army as a lieutenant colonel and took on the role of director of the Military Planning Division for the Quartermaster General.  Promoted to brigadier general before the end of the war, Doriot returned to Harvard in 1946 and also established a private equity firm called the American Research and Development Corporation (ARD).  With a bankroll of $5 million raised largely from insurance companies and educational institutions, Doriot sought out startups in need of financial support in exchange for taking a large ownership stake in the company.  The goal was to work closely with the company founders to grow the business and then sell the stake at some point in the future for a high return on investment.  While many of the individual companies would fail, in theory the payoff from those companies that did succeed would more than make up the difference and return a profit to the individuals and groups that provided his firm the investment capital.  Before Doriot, the only outlets for a new business to raise capital were the banks, which generally required tangible assets to back a loan, or a wealthy patron like the Rockefeller or Whitney families.  After Doriot’s model proved successful, inexperienced entrepreneurs with big ideas now had a new outlet to bring their products to the world.  This outlet soon gained the name venture capital.

In 1957, Olsen and Anderson wrote a letter to Doriot detailing their plans for a new computer company.  After some back and forth and refinement of the business plan, ARD agreed to provide $70,000 to fund Olsen and Anderson’s venture in return for a 70% ownership stake, but the money came with certain conditions.  Olsen wanted to build a computer like the TX-0 for use by scientists and engineers that could benefit from a more interactive programming environment in their work, but ARD did not feel it was a good idea to go toe-to-toe with an established competitor like IBM.  Instead, ARD convinced Olsen and Anderson to produce components like power supplies and test equipment for core memory.  Olsen and Anderson had originally planned to call their new company the Digital Computer Corporation, but with their new ARD-mandated direction, they instead settled on the name Digital Equipment Corporation (DEC).

In August 1957, DEC moved into its new office space on the second floor of Building 12 of a massive woolen mill complex in Maynard, Massachusetts, originally built in 1845 and expanded many times thereafter.  At the time, the company consisted of just three people: Ken Olsen, Harlan Anderson, and Ken’s younger brother Stan, who had worked as a technician at Lincoln Lab.  Ken served as the leader and technical mastermind of the group, Anderson looked after administrative matters, and Stan focused on manufacturing.  In early 1958, the company released its first products.

DEC arrived on the scene at the perfect moment.  Core memory was in high demand and transistor prices were finally dropping, so all the major computer companies were exploring new designs, creating an insatiable demand for testing equipment.  As a result, DEC proved profitable from the outset.  In fact, Olsen and Anderson actually overpriced their products due to their business inexperience, but with equipment in such high demand, firms bought from DEC anyway, giving the company extremely high margins and allowing it to exceed its revenue goals.  Bolstered by this success, Olsen chose to revisit the computer project with ARD, so in 1959 DEC began work on a fully transistorized interactive computer.

Designed by Ben Gurley, who had developed the display for the TX-0 at MIT, the Programmed Data Processor-1, more commonly referred to as the PDP-1, was unveiled in December 1959 at the Eastern Joint Computer Conference in Boston.  It was essentially a commercialized version of the TX-0, though it was not a direct copy.  The PDP-1 incorporated a better display than its predecessor with a resolution of 1024 x 1024 and it was also faster, capable of 100,000 operations per second.  The base setup contained only 4,096 18-bit words of core memory, but this could be upgraded to 65,536.  The primary method of inputting programs was a punched tape reader, and it was hooked up to a typewriter as well.  While not nearly as powerful as the latest computers from IBM and its competitors in the mainframe space, the PDP-1 only cost $120,000, a stunningly low price in an era where buying a computer would typically set an organization back a million dollars or more.  Lacking developed sales, manufacturing, or service organizations, DEC sold only a handful of PDP-1 computers over its first two years on the market to organizations like Bolt, Beranek, and Newman and the Lawrence Livermore Labs.  A breakthrough occurred in late 1962 when the International Telegraph and Telephone Company (ITT) decided to order fifteen PDP-1 computers to form the heart of a new telegraph message switching system designated the ADX-7300.  ITT would continue to be DEC’s most important PDP-1 customer throughout the life of the system, ultimately purchasing roughly half of the fifty-three computers sold.

While DEC only sold around fifty PDP-1s over its lifetime, the revolutionary machine introduced interactive computing commercially and initiated the process of opening computer use to ever greater portions of the public, which culminated in the birth of the personal computer two decades later.  With its monitor and real-time operation, it also provided a perfect platform for creating engaging interactive games.  Even with these advances, the serious academics and corporate data handlers of the 1950s were unlikely to ever embrace the computer as an entertainment medium, but unlike the expensive and bulky mainframes reserved for official business, the PDP-1 and its successors soon found their way into the hands of students at college campuses around the country, beginning with the birthplace of the PDP-1 technology: MIT.


Historical Interlude: The Birth of the Computer Part 3, the Commercialization of the Computer

In the 1940s, the electronic digital computer was a new, largely unproven machine developed in response to specific needs like the code-breaking requirements of Bletchley Park or the ballistics calculations of the Aberdeen Proving Grounds.  Once these early computers proved their worth, projects like the Manchester Mark 1, EDVAC, and EDSAC implemented a stored program concept that allowed digital computers to become useful for a wide variety of scientific and business tasks.  In the early 1950s, several for-profit corporations built on this work to introduce mass-produced computers and offered them to businesses, universities, and government organizations around the world.  As previously discussed, Ferranti in the United Kingdom introduced the first such computer by taking the Manchester Mark 1 design, increasing the speed and storage capacity of the machine, and releasing it as the Ferranti Mark 1 in February 1951.  This would be one of the few times that the United Kingdom led the way in computing over the next several decades, however, as demand remained muted among the country’s conservative businesses, allowing companies in the larger U.S. market to grow rapidly and achieve world dominance in computing.

Note: This is the third of four posts in a series of “historical interludes” summarizing the evolution of computer technology between 1830 and 1960.   The information in this post is largely drawn from Computer: A History of the Information Machine by Martin Campbell-Kelly and William Aspray, The Maverick and His Machine: Thomas Watson, Sr. and the Making of IBM by Kevin Maney, A History of Modern Computing by Paul Ceruzzi, Computers and Commerce: A Study of Technology and Management at Eckert-Mauchly Computer Company, Engineering Research Associates, and Remington Rand, 1946-1957 by Arthur Norberg, and IBM’s Early Computers by Charles Bashe, Lyle Johnson, John Palmer, and Emerson Pugh.

UNIVAC


The UNIVAC I, the first commercially available computer in the United States

For a brief period from 1943 to 1946, the Moore School in Philadelphia was the center of the computer world as John Mauchly and J. Presper Eckert developed ENIAC and initiated the EDVAC project.  Unlike the more accommodating MIT and Stanford, however, which nurtured the Route 128 tech corridor and Silicon Valley respectively by encouraging professors and students to apply technologies developed in academia to the private sector, the Moore School believed commercial interests had no place in an academic institution and decided to quash them entirely.  In early 1946 the entire staff of the school was ordered to sign release forms giving up the rights to all patent royalties from inventions pioneered at the school.  This was intolerable to both Eckert and Mauchly, who formally resigned on March 31, 1946 to pursue commercial opportunities.

While still at the Moore School, Mauchly met with several organizations that might be interested in the new EDVAC computer.  One of these was the Census Bureau, which once again needed to migrate to new technologies as tabulating machines were no longer sufficient to count the U.S. population in a timely manner.  After leaving the school, Eckert and Mauchly attended a series of meetings with the Census Bureau and the National Bureau of Standards (NBS) between March and May devoted to the possibility of replacing tabulating machines with computers.  After further study, the NBS entered into an agreement with Eckert and Mauchly on September 25, 1946, for them to develop a computer for the Census Bureau in return for $300,000, which Eckert and Mauchly naively believed would cover a large portion of their R&D cost.

Census contract aside, Eckert and Mauchly experienced great difficulty attempting to fund the world’s first for-profit electronic computer company.  Efforts to raise capital commenced in the summer of 1946, but Philadelphia-area investors were focused on the older industries of steel and electric power that had driven the region for decades.  In New York, there was funding available for established electronics concerns, but the concept of venture capital did not yet exist and no investment houses were willing to take a chance on a startup.  The duo were finally forced to turn to friends and family, who provided enough capital in combination with the Census contract for Eckert and Mauchly to establish a partnership called the Electric Control Company in October 1946, which later incorporated as the Eckert-Mauchly Computer Corporation (EMCC) in December 1948.

As work began on the EDVAC II computer at the new Philadelphia offices of the Electric Control Company, the founders continued to seek new contracts to alleviate chronic undercapitalization.  In early 1947 Prudential, a forward-thinking company that had a reputation as an early adopter of new technology, agreed to pay the duo $20,000 to serve as consultants, but refused to commit to ordering a computer until it was completed.  Market research firm A.C. Nielsen placed an order in spring 1948 and Prudential changed its mind and followed suit late in the year, but both deals were for $150,000 as Eckert and Mauchly continued to underestimate the cost of building their computers.  To keep the company solvent, the duo completed a $100,000 deal with Northrop Aircraft in October 1947 for a smaller scientific computer called the Binary Automatic Computer (BINAC) for use in developing a new unmanned bomber.  Meanwhile, with contracts coming in Eckert and Mauchly realized that they needed a new name for their computer to avoid confusion with the EDVAC project at the Moore School and settled on UNIVAC, which stood for Universal Automatic Computer.

EMCC appeared to finally turn a corner in August 1948 when it received a $500,000 investment from the American Totalisator Company.  The automatic totalisator was a specialized counting machine originally invented by New Zealander George Julius in the early twentieth century to tally election votes and divide them properly among the candidates.  When the government rejected the device, he adapted it for use at the race track, where it could run a pari-mutuel betting system by totaling all bets and assigning odds to each horse.  American Totalisator came to dominate this market after one of its founders, Henry Strauss, invented and patented an electro-mechanical totalisator first used in 1933.  Strauss realized that electronic computing was the logical next step in the totalisator field, so he convinced the company board to invest $500,000 in EMCC in return for a 40% stake in the company.  With the funding from American Totalisator, EMCC completed BINAC and delivered it to Northrop in September 1949.  Although it never worked properly, BINAC was the first commercially sold computer in the world.  Work continued on UNIVAC as well, but disaster struck on October 25, 1949, when Henry Strauss died in a plane crash.  With EMCC’s chief backer at American Totalisator gone, the company withdrew its support and demanded that its loans be repaid.  Eckert and Mauchly therefore began looking for a buyer for their company.

On February 15, 1950, office equipment giant Remington Rand purchased EMCC for $100,000 while also paying off the $438,000 owed to American Totalisator.  James Rand, Jr., the president of the company, had become enamored with the scientific advances achieved during World War II and was in the midst of a post-war expansion plan centered on high technology and electronic products.  In 1946, Rand constructed a new high-tech R&D lab in Norwalk, Connecticut, to explore products as varied as microfilm readers, xerographic copiers, and industrial television systems.  In late 1947, he hired Leslie Groves, the general who oversaw the Manhattan Project, to run the operation.  EMCC therefore fit perfectly into Rand’s plans.  Though Eckert and Mauchly were required to give up their ownership stakes and take salaries as regular employees of Remington Rand, Groves allowed them to remain in Philadelphia and generally let them run their own affairs without interference.

With Remington Rand sorting out its financial problems, EMCC was finally able to complete its computer.  First accepted by the U.S. Census Bureau on March 31, 1951, the UNIVAC I contained 5,200 vacuum tubes and could perform 1,905 operations a second at a clock speed of 2.25 MHz.  Like the EDVAC and EDSAC, the UNIVAC I used delay line memory as its primary method of storing information, but it also pioneered the use of magnetic tape storage as a secondary memory, which was capable of storing up to a million characters.  The Census Bureau resisted attempts by Remington Rand to renegotiate the purchase price of the computer and spent only the $300,000 previously agreed upon, while both A.C. Nielsen and Prudential ultimately cancelled their orders when Remington Rand threatened to tie up delivery through a lawsuit to avoid selling the computers for $150,000; future customers were forced to pay a million dollars or more for a complete UNIVAC I.

By 1954, nineteen UNIVAC computers had been purchased and installed at such diverse organizations as the Pentagon, U.S. Steel, and General Electric.  Most of these organizations took advantage of the computer’s large tape storage capacity to employ the computer for data processing rather than calculations, where it competed with the tabulating machines that had brought IBM to prominence.


The UNIVAC 1101, Remington Rand’s first scientific computer

To serve the scientific community, Remington Rand turned to another early computer startup, Engineering Research Associates (ERA).  ERA grew out of the code-breaking activities of the United States Navy during World War II, which were carried out primarily through an organization called the Communications Supplementary Activity – Washington (CSAW).  Like Bletchley Park in the United Kingdom, CSAW constructed a number of sophisticated electronic devices to aid in codebreaking, and the Navy wanted to maintain this technological capability after the war.  Military budget cuts made this impractical, however, so to avoid losing the assembly of talent at CSAW, the Navy helped establish ERA in St. Paul, Minnesota, in January 1946 as a private corporation.  The company was led by John Parker, a former Navy lieutenant who had become intimately involved in the airline industry in the late 1930s and 1940s while working for the D.C. investment firm Auchincloss, Parker, and Redpath, and drew most of its important technical personnel from CSAW.

Unlike EMCC, which focused on building a machine for corporate data processing, ERA devoted its activities to intelligence analysis work for the United States Navy.  Like Eckert and Mauchly, the founders of ERA realized the greatest impediment to building a useful electronic computing device was the lack of suitable storage technology, so in its first two years of existence, the company concentrated on solving this problem, ultimately settling on magnetic drum memory, a technology invented by Austrian Gustav Tauschek in 1932 in which a large metal cylinder is coated with a ferromagnetic material.  As the drum is rotated, stationary write heads can generate an electrical pulse to change the magnetic orientation on any part of the surface of the drum, while a read head can detect the orientation and recognize it in binary as either a “1” or a “0,” therefore making it suitable for computer memory.  A series of specialized cryptanalytic machines followed with names like Goldberg and Demon, but these machines tended to become obsolete quickly since they were targeted at specific codes and were not programmable to take on new tasks.  Meanwhile, as both ERA and the Navy learned more about developments at the Moore School, they decided a general purpose computer would be a better method of addressing the Navy’s needs than specialized equipment and therefore initiated Task 13 in 1947 to build a stored program computer called Atlas.  Completed in December 1950, the Atlas contained 2,700 vacuum tubes and a drum memory that could hold just over 16,000 24-bit words.  The computer was delivered to the National Security Agency (NSA) for code-breaking operations, and the agency was so pleased with the computer that it accepted a second unit in 1953.  In December 1951, a modified version was made available as the ERA 1101 — a play on the original project name as “1101” is “13” in binary — but ERA did not furnish any manuals, so no businesses purchased the machine.
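
As a rough illustration of how such a drum behaves, the sketch below models a handful of tracks with one fixed head each.  The track count, sector count, and rotation time are invented for the example, but it shows the defining trait of drum storage: the cost of any access is the rotational latency, i.e. how far the drum must turn before the addressed spot passes under a head.

```python
# Toy model of magnetic drum storage as described above: a spinning cylinder
# coated with ferromagnetic material, one fixed read/write head per track.
# All sizes and timings here are invented for illustration.
class DrumMemory:
    def __init__(self, tracks, sectors_per_track, ms_per_revolution=17.0):
        self.bits = [[0] * sectors_per_track for _ in range(tracks)]
        self.sectors = sectors_per_track
        self.ms_per_rev = ms_per_revolution
        self.position = 0    # sector currently passing under the heads

    def _rotate_to(self, sector):
        # The drum spins continuously, so the cost of an access is the
        # rotational latency: how far the requested sector is from the heads.
        distance = (sector - self.position) % self.sectors
        self.position = sector
        return distance / self.sectors * self.ms_per_rev

    def write(self, track, sector, bit):
        latency_ms = self._rotate_to(sector)
        self.bits[track][sector] = bit   # head pulse sets the magnetization
        return latency_ms

    def read(self, track, sector):
        latency_ms = self._rotate_to(sector)          # wait for the sector
        return self.bits[track][sector], latency_ms   # head senses orientation

drum = DrumMemory(tracks=24, sectors_per_track=64)
drum.write(5, 40, 1)
bit, wait_ms = drum.read(5, 40)
print(bit, round(wait_ms, 2))   # prints "1 0.0" -- sector already under the head
```

This waiting time is also why drum-based machines of the era were often programmed with data and instructions placed at carefully chosen positions around the drum, so the next item needed arrived under the head with as little rotation as possible.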

The same month ERA announced the 1101, it was purchased by Remington Rand.  ERA president John Parker realized that fully entering the commercial world would require a significant influx of capital that the company would be unlikely to raise.  Furthermore, the close relationship between ERA and the Navy had piqued the interest of government auditors and threatened the company’s ability to secure future government contracts.  Therefore, Parker saw the Remington Rand purchase as essential to ERA’s continued survival.  Remington Rand, meanwhile, gained a foothold in a new segment of the computer market.  The company began marketing an improved version of ERA’s first computer as the UNIVAC 1103 in October 1953 and ultimately installed roughly twenty of them, mostly within the military-industrial complex.

In 1952, the American public was introduced to the UNIVAC in dramatic fashion when Mauchly developed a program to predict the results of the general election between Dwight Eisenhower and Adlai Stevenson based on the returns from the previous two elections.  The results were to be aired publicly on CBS, but UNIVAC predicted a massive landslide for Eisenhower in opposition to Gallup polls that indicated a close race.  CBS refused to deliver the results, opting instead to state that the computer predicted a close victory for Eisenhower.  When it became clear that Eisenhower would actually win in a landslide, the network owned up to its deception and aired the true results, which were within just a few electoral votes of the actual total.  Before long, the term “UNIVAC” became a generic word for all computers in the same way “Kleenex” has become synonymous with facial tissue and “Xerox” with photocopying.  For a time, it appeared that Remington Rand would be the clear winner in the new field of electronic computers, but only until IBM finally hit its stride.

IBM Enters the Computer Industry


Tom Watson, Sr. sits at the console of an IBM 701, the company’s first commercial computer

There is a story, oft-repeated, about Tom Watson, Sr. that claims he saw no value in computers.  According to this story, the aging president of IBM scoffed that there would never be a market for more than five computers and neglected to bring IBM into the new field.  Only after the debut of the UNIVAC I did IBM realize its mistake and hastily enter the computer market.  While there are elements of truth to this version of events, there is no truth to the claim that IBM was completely ignoring the computer market in the late 1940s.  Indeed, the company developed several electronic calculators and had no fewer than three computer projects underway when the UNIVAC I hit the market.

As previously discussed, IBM’s involvement with computers began when the company joined with Howard Aiken to develop the Automatic Sequence Controlled Calculator (ASCC).  That machine was first unveiled publicly on August 6, 1944, and Tom Watson traveled to Cambridge, Massachusetts, to speak at the dedication.  At the Boston train station, Watson was irked that no one from Harvard was there to welcome him.  Irritation turned to rage when he perused the Boston Post and saw that Harvard had not only issued a press release about the ASCC without consulting him, but also gave sole credit to Howard Aiken for inventing the machine.  When an angry and humiliated Watson returned to IBM, he ordered James Bryce and Clair Lake to develop a new machine that would make Aiken’s ASCC look like a toy.  Watson wanted to show the world that IBM could build computers without help from anyone else and to get revenge on the men he felt had wronged him.

With IBM seriously engaged in war work, Bryce and Lake felt they would be unable to achieve the breakthroughs in the lab necessary to best Aiken in a reasonable time frame, so instead argued for a simpler goal of creating the world’s first commercial electronic calculator.  To that end, an electronics enthusiast in the company named Halsey Dickinson was ordered to convert the company’s electro-mechanical Model 601 Multiplying Punch into a tube-based machine.  Unveiled in September 1946 as the IBM 603 Electronic Multiplier, the machine contained only 300 vacuum tubes and no storage, but it could multiply ten times faster than existing tabulating machines and soon became a sensation.  Embarrassed by the limitations of the machine, however, Watson halted production at 100 units and ordered his engineers to develop an improved model.  Ralph Palmer, an electronics expert who joined IBM in 1932 and had recently returned from a stint in the Navy, was asked to form a new laboratory in Poughkeepsie, New York, dedicated solely to electronics.  Palmer’s group delivered the IBM 604 Electronic Calculating Punch in 1948, which contained 1,400 tubes and could be programmed to solve simple equations.  Over the next ten years, the company leased 5,600 604s to customers, and Watson came to realize that the future of IBM’s business lay in electronics.

Meanwhile, as World War II neared its conclusion, Watson’s mandate to best Aiken’s ASCC gained momentum.  The man responsible for this project was Wallace Eckert (no relation to the ENIAC co-inventor), who as an astronomy professor at Columbia in the 1920s and 1930s had been one of the main beneficiaries of Watson’s relationship with the university in those years.  After directing the Nautical Almanac Office of the United States Naval Observatory during much of World War II, Eckert accepted an invitation from Watson in March 1945 to head a new division within IBM, called the Pure Science Department, specifically concerned with the computational needs of the scientific community.

Eckert remained at headquarters in New York while Frank Hamilton, who had been a project leader on the ASCC, took charge of defining the Aiken-beating machine’s capabilities in Endicott.  In summer 1945, Eckert made new hire Rex Seeber his personal representative to the project.  A Harvard graduate, Seeber had worked with Aiken, but fell out with him when Aiken refused to implement the stored program concept in his forthcoming update of the ASCC.  Seeber’s knowledge of computer theory and electronics perfectly complemented Hamilton’s electrical engineering skills and resulted in the completion of the Selective Sequence Electronic Calculator (SSEC) in 1947.  The SSEC was the first machine in the world to successfully implement the stored program concept, although it is often classified as a calculator rather than a stored program computer due to its limited memory and reliance on paper tape for program control.  The majority of the calculator remained electromechanical, but the arithmetic unit, adapted from the 603, operated at electronic speeds.  Built with 21,400 relays and 12,500 vacuum tubes and assembled at a cost of $950,000, the SSEC was a strange hybrid that exerted no influence over the future of computing, but it did accomplish IBM’s objectives: it operated 250 times faster than the Harvard ASCC and gained significant publicity for IBM’s computing endeavors while on public display on the ground floor of the company’s corporate headquarters from 1948 to 1952.


Tom Watson, Jr., son and successor of Tom Watson, Sr.

The success of the IBM 603 and 604 showed Watson that IBM needed to embrace electronics, but he remained cautious regarding electronic computing.  Indeed, when given the chance to bring Eckert and Mauchly into the IBM fold in mid-1946 after they left the Moore School, Watson ultimately turned them down not because he saw no value in their work but because he did not want to meet the price they demanded to buy out their business.  When he learned that the duo’s computer company was garnering interest from the National Bureau of Standards and Prudential in 1947, he told his engineers they should explore a competing design, but he was thinking in terms of a machine tailored to the needs of specific clients rather than a general-purpose computing device.  By now Watson was in his seventies and set in his ways, and while there is no evidence that he ever uttered the famous line about world demand reaching only five computers, he could simply not envision a world in which electronic computers replaced tabulating machines entirely.  As a result, the push for computing within the company came instead from his son and heir apparent, Tom Watson, Jr.

Thomas J. Watson, Jr. was born in Dayton, Ohio, in 1914, the same year his father accepted the general manager position at C-T-R.  His relationship with his father was strained for most of his life, as the elder Watson was prone to both controlling behavior and ferocious bursts of temper.  While incredibly bright, Watson suffered from anxiety and crippling depression as a child and felt incapable of living up to his father’s standards or of succeeding him at IBM one day, which he sensed was his father’s wish.  As a result, he rebelled and performed poorly in school, only gaining admittance to Brown University as a favor to his father.  After graduating with a degree in business in 1937, he became a salesman at IBM, but grew to hate working there due to the special treatment he received as the CEO’s son and the cult of personality that had grown up around his father.  Desperate for a way out, he joined the Air National Guard shortly before the United States entered World War II and became aide-de-camp to First Air Force Commander Major General Follett Bradley in 1942.  He had no intention of ever returning to IBM.

Working for General Bradley, Watson finally realized his own potential.  He became the general’s most trusted subordinate and gained experience managing teams undertaking difficult tasks.  With the encouragement of Bradley, his inner charisma surfaced for the first time, as did a remarkable ability to focus on and explain complex problems.  Near the end of the war, Bradley asked Watson about his plans for the future and was shocked when Watson said he might become a commercial pilot and would certainly never rejoin IBM.  Bradley stated that he always assumed Watson would return to run the company.  In that moment, Watson realized he was avoiding the company because he feared he would fail, but that his war experiences had prepared him to succeed his father.  On the first business day of 1946, he returned to the fold.

Tom Jr. was not promoted to a leadership position right away.  Instead, Tom Sr. appointed him personal assistant to Charley Kirk, the executive vice president of the company and Tom Sr.’s most trusted subordinate.  Kirk generously took Tom Jr. under his wing, but he also appeared to be first in line to take over the company upon Tom Sr.’s retirement, which Tom Jr. resented.  A potential power struggle was avoided when Kirk suffered a massive heart attack and died in 1947.  Tom Sr. did not feel his son was quite ready to assume the executive vice president position, but Tom Jr. did assume many of Kirk’s responsibilities while an older loyal Watson supporter named George Phillips took on the executive VP role on a short-term basis.  In 1952, Tom Sr. finally named Tom Jr. president of IBM.

ibm-650-drum

The IBM 650, IBM’s most successful early computer

Tom Jr. first learned of the advances being made in computing in 1946 when he and Kirk traveled to the Moore School to see the ENIAC.  He became a staunch supporter of electronics and computing from that day forward.  While there was no formal division of responsibilities drawn up between father and son, it was understood from the late forties until Tom Jr. succeeded his father as IBM CEO in 1956 that Tom Jr. would be given free rein to develop IBM's electronics and computing businesses, while Tom Sr. concentrated on the traditional tabulating machine business.  In this capacity, Tom Jr. played a significant role in overcoming bias against new technologies within IBM's engineering, sales, and future demands divisions and brought IBM fully into the computer age.

By 1950, IBM had two computer projects in progress.  The first had been started in 1948 when Tom Watson, Sr. ordered his engineers to adapt the SSEC into something cheaper that could be mass produced and sold to IBM's business customers.  With James Bryce incapacitated — he would die the next year — the responsibility of shaping the new machine fell to Wallace Eckert, Frank Hamilton, and John McPherson, an IBM vice president who had been instrumental in constructing two powerful relay calculators for the Aberdeen Proving Grounds during World War II.  The trio decided to create a machine focused on scientific and engineering applications, both because this was their primary area of expertise and because with the dawn of the Cold War the United States government was funding over a dozen scientific computing projects to maintain the technological edge it had built during World War II.  There was a real fear that if IBM did not stay relevant in this area, one of these projects could birth a company capable of challenging IBM's dominant position in business machines.

Hamilton acted as the chief engineer on the project and chose to increase the system’s memory capacity by incorporating magnetic drum storage, thus leading to the machine’s designation as the Magnetic Drum Calculator (MDC). While the MDC began life as a calculator essentially pairing an IBM 603 with a magnetic drum, the realization that drum memory was expansive enough that a paper tape reader could be discarded entirely and instructions could be read and modified directly from the drum itself caused the project to morph into a full-fledged computer.  By early 1950, engineering work had commenced on the MDC, but development soon stalled as it became the focus of fights between multiple engineering teams as well as the sales and future demands departments over its specifications, target audience, and potential commercial performance.

While work continued on the MDC in Endicott, several IBM engineers in the electronics laboratory in Poughkeepsie initiated their own experiments related to computer technology.  In 1948, an engineer named Philip Fox began studying memory technologies that could make a stored-program computer practical and, learning of the Williams Tube that same year, decided to focus his attention on CRT memory.  Fox created a machine called the Test Assembly on which he worked to improve the reliability of existing CRT memory solutions.  Meanwhile, in early 1949, a new employee named Nathaniel Rochester, dismayed that IBM did not already have a stored-program computer in production, began researching the capabilities of magnetic tape as a storage medium.  These disparate threads came together in October 1949 when a decision was made to focus on the development of a tape machine to challenge the UNIVAC, which appeared poised to grab a share of IBM's data processing business.  By March 1950, Rochester and Werner Buchholz had completed a technical outline of the Tape Processing Machine (TPM), which would incorporate both CRT and tape memory.  As with the MDC, however, the inability of sales and future demands to clearly define a market for the computer hindered its development.

A breakthrough in the stalemate between sales and engineering finally occurred with the outbreak of the Korean War.  As he had when the United States entered World War II, Tom Watson, Sr. placed the full capabilities of the company at the disposal of the United States government.  The United States Air Force quickly responded that it wanted help developing a new electro-mechanical bombsight for the B-47 Bomber, but Tom Watson, Jr., who already believed IBM was not embracing electronics fast enough, felt working on electro-mechanical projects to be a giant step backwards for the company.  Instead, he proposed developing an electronic computer suitable for scientific computation by government organizations and contractors.

Initially, IBM considered adapting the TPM for its new scientific computer project, but quickly abandoned the idea.  To save on cost, the engineering team of the TPM had decided to design the computer to process numbers serially rather than in parallel, which was sufficient for data processing, but made the machine too slow to meet the computational needs of the government.  Therefore, in September 1950 Ralph Palmer's engineers drew up preliminary plans for a floating-point decimal computer hooked up to an array of tape readers and other auxiliary devices that would be capable of well over 10,000 operations a second and of storing 2,000 thirteen-digit words in Williams Tube memory.  Watson Jr. approved this project in January 1951 under the moniker "Defense Calculator."  With a tight deadline of spring 1952 in place for the Defense Calculator so it would be operational in time to contribute to the war effort, Palmer realized the engineering team, led by Nathaniel Rochester and Jerrier Haddad, could not afford to start from scratch on the design of the new computer, so they decided to base the architecture on von Neumann's IAS Machine.

ibm_702

The IBM 702, IBM’s first computer targeted at businesses

On April 29, 1952, Tom Watson, Sr. announced the existence of the Defense Calculator to IBM’s shareholders at the company’s annual meeting.  In December, the first completed model was installed at IBM headquarters in the berth occupied until then by the SSEC.  On April 7, 1953, the company staged a public unveiling of the Defense Calculator under the name IBM 701 Electronic Data Processing Machine four days after the first production model had been delivered to the Los Alamos National Laboratory in New Mexico.  By April 1955, when production ceased, IBM had completed nineteen installations of the 701 — mostly at government organizations and defense contractors like Boeing and Lockheed — at a rental cost of $15,000 a month.

The success of the 701 finally broke the computing logjam at IBM.  The TPM, which had been on the back burner as the Defense Calculator project gained steam, was redesigned for faster operation and announced in September 1953 as the IBM 702, although the first model was not installed until July 1955.  Unlike the 701, which borrowed the binary numeral system from the IAS Machine, the 702 used the decimal system as befit its descent from the 603 and 604 electronic calculators.  It also shipped with a newly developed high speed printer capable of outputting 1,000 lines per minute.  IBM positioned the 702 as a business machine to compete with the UNIVAC I and ultimately installed fourteen of them.  Meanwhile, IBM also reinstated the MDC project — which had stalled almost completely — in November 1952, and the machine was released in 1954 as the IBM 650.  While the drum memory used in the 650 was slower than the Williams Tube memory of the 701 and 702, it was also more reliable and cheaper, allowing IBM to lease the 650 at the relatively low cost of $3,250 a month.  As a result, it became IBM's first breakout success in the computer field, with nearly 2,000 installed by the time the last one rolled off the assembly line in 1962.

IBM’s 700 series computers enjoyed several distinct advantages over the UNIVAC I and UNIVAC 1103 computers marketed by Remington Rand.  Technologically, Williams Tube memory was both more reliable and significantly faster than the mercury delay line memory and drum memory used in the UNIVAC machines, while the magnetic tape system developed by IBM was also superior to the one used by Remington Rand.  Furthermore, IBM designed its computers to be modular, making them far easier to ship and install than the monolithic UNIVAC system.  Finally, IBM had built one of the finest sales and product servicing organizations in the world, making it difficult for Remington Rand to compete for customers.  While UNIVAC models held a small 30 to 24 install base edge over the 700 series computers as late as August 1955, IBM continued to improve the 700 line through newly emerging technologies and just a year later moved into the lead with 66 700 series installations versus 46 UNIVAC installations.  Meanwhile, installations of the 650 far eclipsed any comparable model, giving IBM control of the low end of the computer market as well.  The company would remain the number one computer maker in the world throughout the mainframe era.

Historical Interlude: The Birth of the Computer Part 2, The Creation of the Electronic Digital Computer

In the mid-nineteenth century, Charles Babbage attempted to create a program-controlled universal calculating machine, but failed for lack of funding and the difficulty of creating the required mechanical components.  This failure spelled the end of digital computer research for several decades.  By the early twentieth century, however, fashioning small mechanical components no longer presented the same challenge, while the spread of electricity generating technologies provided a far more practical power source than the steam engines of Babbage’s day.  These advances culminated in just over a decade of sustained innovation between 1937 and 1949 out of which the electronic digital computer was born.  While both individual computer components and the manner in which the user interacts with the machine have continued to evolve, the desktops, laptops, tablets, smartphones, and video game consoles of today still function according to the same basic principles as the Manchester Mark 1, EDSAC, and EDVAC computers that first operated in 1949.  This blog post will chart the path to these three computers.

Note: This is the second of four “historical interlude” posts that will summarize the evolution of computer technology between 1830 and 1960.  The information in this post is largely drawn from Computer: A History of the Information Machine by Martin Campbell-Kelly and William Aspray, The Maverick and His Machine: Thomas Watson, Sr. and the Making of IBM by Kevin Maney, Reckoners: The Prehistory of the Digital Computer, From Relays to the Stored Program Concept, 1935-1945 by Paul Ceruzzi, The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution by Walter Isaacson, Forbes Greatest Technology Stories: Inspiring Tales of Entrepreneurs and Inventors Who Revolutionized Modern Business by Jeffrey Young, and the articles “Alan Turing: Father of the Modern Computer” by B. Jack Copeland and Diane Proudfoot, “Colossus: The First Large Scale Electronic Computer” by Jack Copeland, and “A Brief History of Computing,” also by Copeland.

Analog Computing

102680080-03-01

Vannevar Bush with his differential analyzer, an analog computer

While a digital computer after the example of Babbage would not appear until the early 1940s, specialized computing devices that modeled specific systems mechanically continued to be developed in the late nineteenth and early twentieth centuries.  These machines were labelled analog computers, a term derived from the word “analogy,” because each machine relied on a physical model of the phenomenon being studied to perform calculations, unlike a digital computer, which relies purely on numbers.  The key component of these machines was the wheel-and-disc integrator, first described by James Thomson, that allowed integral calculus to be performed mechanically.  Perhaps the most important analog computer of the nineteenth century was completed by James’s brother William, better known to history as Lord Kelvin, in 1876.  Called the tide predictor, Kelvin’s device relied on a series of mechanical parts such as pulleys and gears to simulate the gravitational forces that produce the tides and predicted the water depth of a harbor at any given time of day, printing the results on a roll of paper.  Before Lord Kelvin’s machine, creating tide tables was so time-consuming that only the most important ports were ever charted.  After Kelvin’s device entered general use, it was finally possible to complete tables for thousands of ports around the world.  Improved versions of Kelvin’s computer continued to be used until the 1950s.

In the United States, interest in analog computing began to take off in the 1920s as General Electric and Westinghouse raced to build regional electric power networks by supplying alternating-current generators to power plants.  At the time, the mathematical equations required to construct the power grids were both poorly understood and difficult to solve by hand, causing electrical engineers to turn to analog computing as a solution.  Using resistors, capacitors, and inductors, these computers could simulate how the network would behave in the real world.  One of the most elaborate of these computers, the AC Network Analyzer, was built at MIT in 1930 and took up an entire room.  With one of the finest electrical engineering schools in the country, MIT quickly became a center for analog computer research, which soon moved from highly specific models like the tide predictor and power grid machines to devices capable of solving a wider array of mathematical problems through the work of MIT professor Vannevar Bush.

One of the most important American scientists of the mid-twentieth century, Bush possessed a brilliant mind coupled with a folksy demeanor and strong administrative skills.  These traits served him well in co-founding the American Appliance Company in 1922 — which later changed its name to Raytheon and became one of the largest defense contractors in the world — and led to his appointment in 1941 to head the new Office of Scientific Research and Development, which oversaw and coordinated all wartime scientific research by the United States government during World War II and was instrumental to the Allied victory.

Bush built his first analog computer in 1912 while a doctoral student at Tufts College.  Called the “profile tracer,” it consisted of a box hung between two bicycle wheels and would trace the contours of the ground as it was rolled.  Moving on to MIT in 1919, Bush worked on problems involving electric power transmission and in 1924 developed a device with one of his students called the “product integraph” to simplify the solving and plotting of the first-order differential equations required for that work.  Another student, Harold Hazen, suggested this machine be extended to solve second-order differential equations as well, which would make the device useful for solving a wide array of physics problems.  Bush immediately recognized the potential of this machine and worked with Hazen to build it between 1928 and 1931.  Bush called the resulting machine the “differential analyzer.”

The differential analyzer improved the operation of Thomson’s wheel-and-disc integrator through a device called a torque amplifier, allowing it to mechanically model, solve, and plot a wider array of differential equations than any analog computer that came before, but it still fell short of the Babbage ideal of a general-purpose digital device.  Nevertheless, the machine was installed at several universities, corporations, and government laboratories and demonstrated the value of using a computing device to perform advanced scientific calculations.  It was therefore an important stepping stone on the path to the digital computer.

Electro-Mechanical Digital Computers

23593-004-D5156F2C

The Automatic Sequence Controlled Calculator (ASCC), also known as the Harvard Mark I, the first proposed electro-mechanical digital computer, though not the first completed

With problems like power network construction requiring ever more complex equations and the looming threat of World War II requiring world governments to compile large numbers of ballistics tables and engage in complex code-breaking operations, the demand for computing skyrocketed in the late 1930s and early 1940s.  This led to a massive expansion of human computing and the establishment of the first for-profit calculating companies, beginning with L.J. Comrie’s Scientific Computing Services Limited in 1937.  Even as computing services were expanding, however, the armies of human computers required for wartime tasks were woefully inadequate for completing necessary computations in a timely manner, while even more advanced analog computers like the differential analyzer were still too limited to carry out many important tasks.  It was in this environment that researchers in the United States, Great Britain, and Germany began attempting to address this computing shortfall by designing digital calculating machines that worked similarly to Babbage’s Analytical Engine but made use of more advanced components not available to the British mathematician.

The earliest digital calculating machines were based on electromechanical relay technology.  First developed in the mid-nineteenth century for use in the electric telegraph, a relay consists in its simplest form of a coil of wire, an armature, and a set of contacts.  When a current is passed through the coil, a magnetic field is generated that attracts the armature and therefore draws the contacts together, completing a circuit.  When the current is removed, a spring causes the armature to return to the open position.  Electromechanical relays played a crucial role in the telephone network in the United States, routing calls between different parts of the network.  Therefore, Bell Labs, the research arm of the telephone monopoly AT&T, served as a major hub for relay research and was one of the first places where the potential of relays and similar switching units for computer construction was contemplated.

The concept of the binary digital circuit, which continues to power computers to this day, was independently articulated and applied by several scientists and mathematicians in the late 1930s.  Perhaps the most influential of these thinkers — due to his work being published and widely disseminated — was Claude Shannon.  A graduate of the University of Michigan with degrees in electrical engineering and math, Shannon moved on to MIT, where he secured a job helping Bush run his differential analyzer.  In 1937, Shannon took a summer job at Bell Labs, where he gained hands-on experience with the relays used in the phone network and connected their function with another interest of his — the symbolic logic system created by mathematician George Boole in the 1840s.

Basically, Boole had discovered a way to represent formal logical statements mathematically by giving a true proposition a value of 1 and a false proposition a value of 0 and then constructing mathematical equations that could represent the basic logical operations such as “and,” “or,” and “not.”  Shannon realized that since a relay exists in either an “on” or an “off” state, a series of relays could be used to construct logic gates that emulated Boolean logic and therefore carry out complex instructions, which in their most basic form are a series of “yes” or “no,” “on” or “off,” “1” or “0” propositions.  When Shannon returned to MIT that fall, Bush urged him to include these findings in his master’s thesis, which he completed later that year under the title “A Symbolic Analysis of Relay and Switching Circuits.”  In November 1937, a Bell Labs researcher named George Stibitz, who was aware of Shannon’s theories, applied the concept of binary circuits to a calculating device for the first time when he constructed a small relay calculator he dubbed the “Model K” because he built it at his kitchen table.  Based on this prototype, Stibitz received permission to build a full-sized model at Bell Labs, which was named the Complex Number Calculator and completed in 1940.  While not a full-fledged programmable computer, Stibitz’s machine was the first to use relays to perform basic mathematical operations and demonstrated the potential of relays and binary circuits for computing devices.
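
To make the leap from Boole's algebra to working circuits concrete, here is a minimal sketch in Python (an illustration only; Shannon of course worked with relay hardware, not code) showing how the three basic gates can be composed into a half adder that sums two one-bit numbers — the kind of arithmetic-from-logic construction his thesis made possible.

```python
# A minimal sketch of Shannon's insight: treating a relay's "on"/"off" states as
# Boole's 1/0 lets simple switching circuits evaluate logic, and chains of such
# gates can perform arithmetic.

def AND(a, b): return a & b   # circuit closes only if both relays are energized
def OR(a, b):  return a | b   # circuit closes if either relay is energized
def NOT(a):    return 1 - a   # a normally-closed contact inverts its input

def XOR(a, b):
    # "a or b, but not both," built purely from the three basic gates
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    """Add two one-bit numbers using nothing but logic gates."""
    return XOR(a, b), AND(a, b)   # (sum bit, carry bit)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            s, c = half_adder(a, b)
            print(f"{a} + {b} = carry {c}, sum {s}")
```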

One of the earliest digital computers to use electromechanical relays was proposed by Howard Aiken in 1936.  A doctoral candidate in mathematics at Harvard University, Aiken needed to solve a series of non-linear differential equations as part of his dissertation, which was beyond the capabilities of Bush’s differential analyzer at neighboring MIT.  Unenthused by the prospect of solving these equations by hand, Aiken, who was already a skilled electrical engineer, proposed that Harvard build a large-scale digital calculator to do the work.  The university turned him down, so Aiken approached the Monroe Calculating Machine Company, which also failed to see any value in the project.  Monroe’s chief engineer felt the idea had merit, however, and urged Aiken to approach IBM.

When last we left IBM in 1928, the company was growing and profitable, but lagged behind several other companies in overall size and importance.  That all changed with the onset of the Great Depression.  Like nearly every other business in the country, IBM was devastated by the market crash of 1929, but Tom Watson decided to boldly soldier on without laying off workers or cutting production, keeping his faith that the economy could not continue in a tailspin for long.  He also increased the company’s emphasis on R&D, building one of the world’s first corporate research laboratories to house all his engineers in Endicott, New York in 1932-33 at a cost of $1 million.  As the Depression dragged on, machines began piling up in the factories and IBM’s growth flattened, threatening the solvency of the company.  Watson’s gambles increasingly appeared to be a mistake, but then President Franklin Roosevelt began enacting his New Deal legislation.

In 1935, the United States Congress passed the Social Security Act.  Overnight, every company in the country was required to keep detailed payroll records, while the Social Security Administration had to keep a file on every worker in the nation.  The data processing burden of the act was enormous, and IBM, with its large stock of tabulating machines and fully operational factories, was the only company able to begin filling the demand immediately.  Between 1935 and 1937, IBM’s revenues rose from $19 million to $31 million and then continued to grow for the next 45 years.  The company was never seriously challenged in tabulating equipment again.

Traditionally, data processing revolved around counting tangible objects, but by the time Aiken approached IBM, Watson had begun to realize that scientific computing was a natural extension of his company’s business activities.  The man who turned Watson on to this fact was Ben Wood, a Columbia professor who pioneered standardized testing and was looking to automate the scoring of his tests using tabulating equipment.  In 1928, Wood wrote ten companies to win support for his ideas, but only Watson responded, agreeing to grant him an hour to make his pitch.  The meeting began poorly as the nervous Wood failed to hold Watson’s interest with talk of test scoring, so the professor expanded his presentation to describe how nearly anything could be represented mathematically and therefore quantified by IBM’s machines.  One hour soon stretched to over five as Watson grilled Wood and came to see the value of creating machines for the scientific community.  Watson agreed to give Wood all the equipment he needed, dropped in frequently to monitor Wood’s progress, and made the professor an IBM consultant.  As a result of this meeting, IBM began supplying equipment to scientific labs around the world.

Aiken

Howard Aiken, designer of the Automatic Sequence Controlled Calculator

In 1937, Watson began courting Harvard, hoping to create the same kind of relationship he had long enjoyed with Columbia.  He dispatched an executive named John Phillips to meet with deans and faculty, and Aiken used the opportunity to introduce IBM to his calculating device.  He also wrote a letter to James Bryce, IBM’s chief engineer, who sold Watson on the concept.  Bryce assigned Clair Lake to oversee the project, which would be funded and built by IBM in Endicott according to Aiken’s design and then installed at Harvard.

Aiken’s initial concept basically stitched together a card reader, a multiplying punch, and a printer, removing human intervention in the process by connecting the components through electrical wiring and incorporating relays as switching units to control the passage of information through the parts of the machine.  Aiken drew inspiration from Babbage’s Analytical Engine, which he first learned about soon after proposing his device, when a technician informed him that the university actually owned a fragment of one of Babbage’s calculating machines that had been donated by the inventor’s son in 1886.  Unlike Babbage, however, Aiken did not employ separate memory and computing elements, as all calculations were performed across a series of 72 accumulators that both stored and modified the data transmitted to them by the relays.  Without something akin to a CPU, the machine was actually less advanced than the Analytical Engine in that it did not support conditional branching — the ability to modify a program on the fly to incorporate the results of previous calculations — and therefore required all calculations to be done in a set sequence, forcing complex programs to rely on large instruction sets and long runs of paper tape.

Work began on the Automatic Sequence Controlled Calculator (ASCC) Mark I in 1939, but the onset of World War II resulted in the project being placed on the back burner as IBM shifted its focus to more important war work and Aiken entered the Navy.  It was finally completed in January 1943 at a cost of $500,000 and subsequently installed at Harvard in early 1944 after undergoing a year of testing in Endicott.  Measuring 8 feet tall and 51 feet long, the machine was housed in a gleaming metal case designed by Norman Bel Geddes, the industrial designer known for his streamlined art deco work and his stage designs for the Metropolitan Opera in New York.  By the time of its completion, the ASCC already lagged behind several other machines technologically and therefore did not play a significant role in the further evolution of the computer.  It is notable, however, both as the earliest proposed digital computer to actually be built and as IBM’s introduction to the world of computing.

zuse

Konrad Zuse, designer of the Z1, the first completed digital computer

While Howard Aiken was still securing support for his digital computer, a German named Konrad Zuse was busy completing one of his own.  Born in Berlin, Zuse spent most of his childhood in Braunsberg, East Prussia (modern Braniewo, Poland).  Deciding on a career as an engineer, he enrolled at the Technical College of Berlin-Charlottenburg in 1927.  While not particularly interested in mathematics, Zuse did have to work with complex equations to calculate the load-bearing capability of structures, and like Aiken across the Atlantic he was not enthused at having to perform these calculations by hand.  Therefore, in 1935 Zuse began designing a universal automatic calculator consisting of a computing element, a storage unit, and a punched tape reader, independently arriving at the same basic design that Babbage had developed a century before.

While Zuse’s basic concept did not stray far from Babbage, however, he did incorporate one crucial improvement in his design that neither Babbage nor Aiken had considered, storing the numbers in memory according to a binary rather than a decimal system.  Zuse’s reason for doing so was practical — as an accomplished mechanical engineer he preferred keeping his components as simple as possible to make the computer easier to design and build — but the implications of this decision went far beyond streamlined memory construction.  Like Shannon, Zuse realized that by recognizing data in only two states, on and off, a computing device could represent not just numbers, but also instructions.  As a result, Zuse was able to use the same basic building blocks for both his memory and computing elements, simplifying the design further.
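
A tiny illustration of why this mattered (using a hypothetical instruction format, not Zuse's actual encoding): the same row of two-state elements can be read either as a number or as an instruction, so one kind of hardware suffices for both.

```python
# Illustrative sketch: a single row of on/off elements holds either a number or
# an instruction, depending only on how the machine chooses to interpret it.
# The 3-bit opcode / 5-bit operand split below is invented for the example.

bits = "00101101"             # eight relays or mechanical plates, each on/off

# Read as a binary number:
value = int(bits, 2)          # -> 45

# Read as an instruction: first 3 bits select an operation, last 5 an operand.
opcodes = {0b001: "LOAD", 0b010: "ADD", 0b011: "STORE"}
op, operand = int(bits[:3], 2), int(bits[3:], 2)

print(value)                              # 45
print(opcodes.get(op, "?"), operand)      # LOAD 13
```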

By 1938, Zuse had completed his first computer, a mechanical binary digital machine called the Z1. (Note: Originally, Zuse called this computer the V1 and continued to use the “V” designation on his subsequent computers.  After World War II, he began referring to these machines using the “Z” designation instead to avoid confusion with Germany’s V1 and V2 rockets.)  This first prototype was fairly basic, but it proved two things for Zuse: that he could create a working automatic calculating device and that the computing element could not be mechanical, as the components were just too unreliable.  The solution to this problem came from college friend Helmut Schreyer, an electrical engineer who convinced Zuse that the electrical relays used in telephone networks would provide superior performance.  Schreyer also worked as a film projectionist and convinced Zuse to switch from paper tape to punched film stock for program control.  These improvements were incorporated into the Z2 computer, completed in 1939, which never worked reliably, but was essential for securing funding for Zuse’s next endeavor.

Z3_1

A reconstruction of Konrad Zuse’s Z3, the world’s first programmable fully automatic digital computer

In 1941, Konrad Zuse completed the Z3 for the German government, the first fully operational digital computer in the world.  The computer consisted of two cabinets containing roughly 2,600 relays — 1,800 for memory, 600 for computing, and 200 for the tape reader — and a small display/keyboard unit for inputting programs.  With a memory of only 64 characters, the computer was too limited to carry out useful work, but it served as an important proof of concept and illustrated the potential of a programmable binary computer.

Unfortunately for Zuse, the German government proved uninterested in further research.  Busy fighting a war it was convinced would be over in just a year or two, the Third Reich limited its research activities to projects that could directly impact the war effort in the short term and ignored the potential of computing entirely.  While Zuse continued to work on the next evolution of his computer design, the Z4, between 1942 and 1945, he did so on his own without the support of the Reich, which also turned down a computer project by his friend Schreyer that would have replaced relays with electronics.  Isolated from the rest of the developed world by the war, Zuse’s theories would have little impact on subsequent developments in computing, while the Z3 itself was destroyed in an Allied bombing raid on Berlin in 1943 before it could be studied by other engineers.  That same year, Great Britain’s more enthusiastic support of computer research resulted in the next major breakthrough in computing technology.

The Birth of the Electronic Computer

Colossus

Colossus, the world’s first programmable electronic computer

Despite the best efforts of Aiken and Zuse, relays were never going to play a large role in computing, as they were both unreliable and slow due to a reliance on moving parts.  In order for complex calculations to be completed quickly, computers would need to transition from electro-mechanical components to electronic ones, which function instead by manipulating a beam of electrons.

The development of the first electronic components grew naturally out of Thomas Edison’s work with the incandescent light bulb.  In 1880, while conducting experiments to determine why the filament in his new incandescent lamps would sometimes break, Edison noticed that a current would flow from the hot filament to a metal plate sealed inside the bulb, but only when the plate was positively charged.  Although this effect had been observed by other scientists as early as 1873, Edison was the first to patent a voltage-regulating device based on this principle in 1883, which resulted in the phenomenon being named the “Edison effect.”

Edison, who did not have a solid grasp of the underlying science, did not follow up on his discovery.  In 1904, however, John Fleming, a consultant with the Marconi Company engaged in research relating to wireless telegraphy, realized that the Edison effect could be harnessed to create a device that would only allow electric current to flow in one direction and thus serve as a rectifier, converting the weak alternating current induced by incoming radio waves into a direct current that a detector could register.  This would in turn allow a receiver to be more sensitive to radio waves, thus making reliable trans-Atlantic wireless communication possible.  Based on his research, Fleming created the first diode, the Fleming Valve, in which electrons flowed in only one direction, from a heated cathode to a positively charged anode, inside a vacuum-sealed glass container.  The vacuum tube concept invented by Fleming remained the primary building block of electronic devices for the next fifty years.

In 1906, an American electrical engineer named Lee DeForest, working independently of Fleming, began creating his own series of electron tubes, which he called Audions.  DeForest’s major breakthrough was the development of the triode, which used a third electrode called a grid that could control the voltage of the current in the tube and therefore serve as an amplifier to boost the power of a signal.  DeForest’s tube contained gas at low pressure, which inhibited reliable operation, but by 1913 the first vacuum tube triodes had been developed.  In 1918, British physicists William Eccles and F.W. Jordan used two triodes to create the Eccles-Jordan circuit, which could flip between two states like an electrical relay and therefore serve as a switching device.

Even after the invention of the Eccles-Jordan circuit, few computer pioneers considered using vacuum tubes in their devices.  Conventional wisdom held they were unsuited for large-scale projects because a triode contains a filament that generates a great deal of heat and is prone to burnout.  Consequently, the failure rate would be unacceptable in a device requiring thousands of tubes.  One of the first people to challenge this view was a British electrical engineer named Thomas Flowers.

Tommy_Flowers

Tommy Flowers, the designer of Colossus

Born in London’s East End, Flowers, the son of a bricklayer, took an apprenticeship in mechanical engineering at the Royal Arsenal, Woolwich, while attending evening classes at the University of London.  After graduating with a degree in electrical engineering, Flowers took a job with the telecommunications branch of the General Post Office (GPO) in 1926.  In 1930, he was posted to the GPO Research Branch at Dollis Hill, where he established a reputation as a brilliant engineer and achieved rapid promotion.

In the early 1930s, Flowers began conducting research into the use of electronics to replace relays in telephone switchboards.  Counter to conventional wisdom, Flowers realized that vacuum tube burnout usually occurred when a device was switched on and off frequently.  In a switchboard or computer, the vacuum tubes could remain in continuous operation for extended periods once switched on, thus greatly increasing their longevity.  Before long, Flowers began experimenting with equipment containing as many as 3,000 vacuum tubes.  Flowers would make the move from switchboards to computing devices with the onset of World War II.

With the threat of Nazi Germany rising in the late 1930s, the United Kingdom began devoting more resources to cracking German military codes.  Previously, this work had been carried out in London at His Majesty’s Government Code and Cypher School, which was staffed with literary scholars rather than cryptographic experts.  In 1938, however, MI6, the British Intelligence Service, purchased a country manor called Bletchley Park, located near the intersection of the rail lines connecting Oxford with Cambridge and London with Birmingham, to serve as a cryptographic and code-breaking facility.  The next year, the government began hiring mathematicians to seriously engage in code-breaking activities.  The work conducted at the manor has been credited with shortening the war in Europe and saving countless lives.  It also resulted in the development of the first electronic computer.

Today, the Enigma Code, broken by a team led by Alan Turing, is the most celebrated of the German ciphers decrypted at Bletchley, but this was actually just one of several systems used by the Reich and was not even the most complicated.  In mid-1942, Germany initiated general use of the Lorenz Cipher, which was reserved for messages between the German High Command and high-level army commands, as the encryption machine — which the British code-named “Tunny” — was not easily portable like the Enigma Machine.  In 1942, Bletchley established a section dedicated to breaking the cipher, and by November a system called the “statistical method” had been developed by William Tutte to crack the code, which built on earlier work by Turing.  When Tutte presented his method, mathematician Max Newman decided to establish a new section — soon labelled the Newmanry — to apply the statistical method with electronic machines.  Newman’s first electronic codebreaking machine, the Heath Robinson, was both slow and unreliable, but it worked well enough to prove that Newman was on the right track.

Meanwhile, Flowers joined the code-breaking effort in 1941 when Alan Turing enlisted Dollis Hill to create some equipment for use in conjunction with the Bombe, his Enigma-cracking machine.  Turing was greatly impressed by Flowers, so when Dollis Hill encountered difficulty crafting a combining unit for the Heath Robinson, Turing suggested that Flowers be called in to help.  Flowers, however, doubted that the Heath Robinson would ever work properly, so in February 1943 he proposed the construction of an electronic computer to do the work instead.  Bletchley Park rejected the proposal based on existing prejudices over the unreliability of tubes, so Flowers began building the machine himself at Dollis Hill.  Once the computer was operational, Bletchley saw the value in it and accepted the machine.

Installed at Bletchley Park in January 1944, Flowers’s computer, dubbed Colossus, contained 1,600 vacuum tubes and processed 5,000 characters per second, a limit imposed not by the speed of the computer itself, but rather by the speed at which the reader could safely operate without risk of destroying the punched paper tape that carried the intercepted messages.  In June 1944, Flowers completed the first Colossus II computer, which contained 2,400 tubes and used an early form of shift register to perform five simultaneous operations, allowing it to run at a speed of 25,000 characters per second.  The Colossi were not general purpose computers, as they were dedicated solely to a single code-breaking operation, but they were program-controlled.  Unlike electro-mechanical computers, however, electronic computers process information too quickly to accept instructions from punched cards or paper tape, so the Colossus actually had to be rewired using plugs and switches to run a different program, a time-consuming process.

As the first programmable electronic computer, Colossus was an incredibly significant advance, but it ultimately exerted virtually no influence on future computer design.  By the end of the war, Bletchley Park was operating nine Colossus II computers alongside the original Colossus to break Tunny codes, but after Germany surrendered, Prime Minister Winston Churchill ordered the majority of the machines dismantled and kept the entire project classified.  It was not until the 1970s that most people knew that Colossus had even existed, and the full function of the machine remained unknown until 1996.  Therefore, instead of Flowers being recognized as the inventor of the electronic computer, that distinction was held for decades by a group of Americans working at the Moore School of the University of Pennsylvania.

ENIAC

ENIAC_Image_2

The Electronic Numerical Integrator and Computer (ENIAC), the first widely known electronic computer

In 1935, the United States Army established a new Ballistic Research Laboratory (BRL) at the Aberdeen Proving Grounds in Maryland dedicated to calculating ballistics tables for artillery.  With modern guns capable of lofting projectiles at targets many miles away, properly aiming them required the application of complex differential equations, so the BRL assembled a staff of thirty to create trajectory tables for various ranges, which would be compiled into books for artillery officers.  Aberdeen soon installed one of Bush’s differential analyzers to help compute the tables, but the onset of World War II overwhelmed the lab’s capabilities.  Therefore, it began contracting some of its table-making work with the Moore School, the closest institution with its own differential analyzer.

The Moore School of Electrical Engineering of the University of Pennsylvania enjoyed a fine reputation, but it carried nowhere near the prestige of MIT and therefore did not receive the same level of funding support from the War Department for military projects.  It did, however, place itself on a war footing by accelerating degree programs through the elimination of vacations and instituting a series of war-related training and research programs.  One of these was the Engineering, Science, Management, War Training (ESMWT) program, an intensive ten-week course designed to familiarize physicists and mathematicians with electronics to address a manpower shortfall in technical fields.  One of the graduates of this course was a physics instructor at a nearby college named John Mauchly.

Born in Cincinnati, Ohio, John William Mauchly grew up in Chevy Chase, Maryland, after his physicist father became the research chief for the Department of Terrestrial Magnetism of the Carnegie Institution, a foundation established in Washington, D.C. to support scientific research around the country.  Sebastian Mauchly specialized in recording atmospheric electrical conditions to further weather research, so John became particularly interested in meteorology.  After completing a Ph.D. at Johns Hopkins University in 1932, Mauchly took a position at Ursinus College, a small Philadelphia-area institution, where he studied the effects of solar flares and sunspots on long-range weather patterns.  Like Aiken and Zuse before him, Mauchly grew tired of solving the complex equations required for his research and began to dream of building a machine to automate this process.  After viewing an IBM electric calculating machine and a vacuum tube encryption machine at the 1939 World’s Fair, Mauchly felt electronics would provide the solution, so he began taking a night course in electronics and crafting his own experimental circuits and components.  In December 1940, Mauchly gave a lecture articulating his hopes of building a weather prediction computer to the American Association for the Advancement of Science.  After the lecture, he met an Iowa State College professor named John Atanasoff, who would play an important role in opening Mauchly to the potential of electronics by inviting him out to Iowa State to study a computer project he had been working on for several years.

atanasoff-berry-computer

The Atanasoff-Berry Computer (ABC), the first electronic computer project, which was never completed

A graduate of Iowa State College who earned a Ph.D. in theoretical physics from the University of Wisconsin-Madison in 1930, John Atanasoff, like Howard Aiken, was drawn to computing due to the frustration of solving equations for his dissertation.  In the early 1930s, Atanasoff experimented with tabulating machines and analog computing to make solving complex equations easier, culminating in a decision in December 1937 to create a fully automatic electronic digital computer.  Like Shannon and Zuse, Atanasoff independently arrived at binary digital circuits as the most efficient way to do calculations, remembering childhood lessons by his mother, a former school teacher, on calculating in base 2.  While he planned to use vacuum tubes for his calculating circuits, however, he rejected them for storage due to cost.  Instead, he developed a system in which paper capacitors would be attached to a drum that could be rotated by a bicycle chain.  By keeping the drums rotating so that the capacitors would sweep past electrically charged brushes once per second, Atanasoff believed he would be able to keep the capacitors charged and therefore create a low-cost form of electronic storage.  Input and output would be accomplished through punch cards or paper tape.  Unlike most of the other computer pioneers profiled so far, Atanasoff was only interested in solving a specific set of equations and therefore hardwired the instructions into the machine, meaning it would not be programmable.
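
This regeneration scheme is worth pausing on, because it anticipates the refresh cycle used in modern dynamic memory.  The sketch below (with made-up numbers purely for illustration, not Atanasoff's actual parameters) shows the principle: a capacitor's charge leaks away, so each stored bit must be re-read and re-written before it decays past the point of being distinguishable.

```python
# A rough sketch of regenerative capacitor memory: charge leaks off a capacitor,
# so each stored bit is re-read and re-written ("refreshed") on every pass under
# the brushes before it fades past the point of being readable.

LEAK_PER_TICK = 0.05      # fraction of remaining charge lost each time step
READ_THRESHOLD = 0.5      # below this, a stored 1 can no longer be told from 0
REFRESH_EVERY = 10        # ticks between passes under the charging brushes

def simulate(ticks=100, refresh=True):
    charge = 1.0                                     # a freshly written 1
    for t in range(1, ticks + 1):
        charge *= (1 - LEAK_PER_TICK)                # charge bleeds off the capacitor
        if refresh and t % REFRESH_EVERY == 0:
            # re-write whatever value the brushes just read
            charge = 1.0 if charge > READ_THRESHOLD else 0.0
        if charge < READ_THRESHOLD:
            return f"bit lost at tick {t}"
    return f"bit still readable after {ticks} ticks"

print(simulate(refresh=False))   # without regeneration the stored 1 fades away
print(simulate(refresh=True))    # periodic refresh keeps it readable indefinitely
```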

By May 1939, Atanasoff was ready to put his ideas into practice, but he lacked electrical engineering skills himself and therefore needed an assistant to actually build his computer.  After securing a $650 grant from the Iowa State College Research Council, Atanasoff hired a graduate student recommended by one of his colleagues named Clifford Berry.  A genius who graduated high school at sixteen, Berry had been an avid ham radio operator in his youth and worked his way through college at Iowa State as a technician for a local company called Gulliver Electric.  He graduated in 1939 at the top of his engineering school class.  The duo completed a small-scale prototype of Atanasoff’s concept in late 1939 and then secured $5,330 from a private foundation to begin construction of what they named the Atanasoff-Berry Computer (ABC), the first electronic computer to employ separate memory and computing elements and a binary system for processing instructions and storing data, predating Colossus by just a few years.  By 1942, the ABC was nearly complete, but it remained unreliable and was ultimately abandoned when Atanasoff left Iowa State for a wartime posting with the Naval Ordnance Laboratory.  With no other champion at the university, the ABC was cannibalized for parts for more important wartime projects, after which the remains were placed in a boiler room and forgotten.  Until a patent lawsuit brought renewed attention to the computer in the 1960s, few were aware the ABC had ever existed, but in June 1941 Mauchly visited Atanasoff and spent five days learning everything he could about the machine.  While there is still some dispute regarding how influential the ABC was on Mauchly’s own work, there is little doubt that at the very least the computer helped guide his own thoughts on the potential of electronics for computing.

Upon completing the ESMWT at the Moore School, Mauchly was offered a position on the school’s faculty, where he soon teamed with a young graduate student he met during the course to realize his computer ambitions.  John Presper Eckert was the only son of a wealthy real estate developer from Philadelphia and an electrical engineering genius who won a city-wide science fair at twelve years old by building a guidance system for model boats and made money in high school by building and selling radios, amplifiers, and sound systems.  Like Tommy Flowers in England, Eckert was a firm believer in the use of vacuum tubes in computing projects and worked with Mauchly to upgrade the differential analyzer by using electronic amplifiers to replace some of its components.  Meanwhile, Mauchly’s wife was running a training program for the human computers the university was employing to work on ballistics tables for the BRL.  Even with the differential analyzer working non-stop and over two hundred human computers doing calculations by hand, a complete table of roughly 3,000 trajectories took the BRL thirty days to complete.  Mauchly was uniquely positioned to understand both the demands being placed on the Moore School’s computing resources and the technology that could greatly increase the efficiency of that work.  He therefore drafted a memorandum in August 1942 entitled “The Use of High Speed Vacuum Tube Devices for Calculating” in an attempt to interest the BRL in greatly speeding up artillery table creation through use of an electronic computer.

Mauchly submitted his memorandum to both the Moore School and the Army Ordnance Department and was ignored by both, most likely due to continued skepticism over the use of vacuum tubes in large-scale computing projects.  The paper did catch the attention of one important person, however: Lieutenant Herman Goldstine, a mathematics professor from the University of Chicago then serving as the liaison between the BRL and the Moore School human computer training program.  While not one of the initial recipients of the memo, Goldstine became friendly with Mauchly in late 1942 and learned of the professor’s ideas.  Aware of the acute manpower crisis faced by the BRL in creating its ballistic tables, Goldstine urged Mauchly to resubmit his memo and promised he would use all his influence to aid its acceptance.  Therefore, in April 1943, Mauchly submitted a formal proposal for an electronic calculating machine that was quickly approved and given the codename “Project PX.”

g

John Mauchly (right) and J. Presper Eckert, the men behind ENIAC

Eckert and Mauchly began building the Electronic Numerical Integrator and Computer (ENIAC) in autumn 1943 with a team of roughly a dozen engineers.  Mauchly remained the visionary of the project and was largely responsible for defining its capabilities, while the brilliant engineer Eckert turned that vision into reality.  ENIAC was a unique construction that had more in common with tabulating machines than later electronic computers, as the team decided to store numbers in decimal rather than binary and stored and modified numbers in twenty accumulators, therefore failing to separate the memory and computing elements.  The machine was programmable, though like Colossus this could only be accomplished through rewiring, as the delay of waiting for instructions to be read from a tape reader was unacceptable in a machine operating at electronic speed.  The computer was powerful for its time, driven by 18,000 vacuum tubes, 70,000 resistors, 10,000 capacitors, 6,000 switches, and 1,500 relays, and could output a complete artillery table in just fifteen minutes.  The entire computer took up 1,800 square feet of floor space, consumed 150 kilowatts of power, and generated an enormous amount of heat.  Costing roughly $500,000, ENIAC was completed in November 1945 and successfully ran its first program the following month.

Unlike the previously discussed Z3, Colossus, and ABC computers, the ENIAC was announced to the general public with much fanfare in February 1946, was examined by many other scientists and engineers, and became the subject of a series of lectures held at the Moore School over eight weeks in the summer of 1946 in which other aspiring computer engineers could learn about the machine in detail.  While it was completed too late to have much impact on the war effort and exerted virtually no influence on future computers from a design perspective, the ENIAC stands as the most important of the early computers because it proved to the world at large that vacuum tube electronic computers were possible and served as the impetus for later computer projects.  Indeed, even before the ENIAC had been completed, Eckert and Mauchly were moving on to their next computer concept, which would finally introduce the last important piece of the computer puzzle: the stored program.

The First Stored Program Computers

Manchester_Mark2

The Manchester Small-Scale Experimental Machine (SSEM), the first stored-program computer to successfully run a program

As previously discussed, electronic computers like the Colossus and ENIAC were limited in their general utility because they could only be configured to run a different program by actually rewiring the machine, as there were no input devices capable of running at electronic speeds.  This bottleneck could be eliminated, however, if the programs themselves were also stored in memory alongside the numbers they were manipulating.  In theory, the binary numeral system made this feasible, since the instructions could be represented through symbolic logic as a series of “yes or no,” “on or off,” “1 or 0” propositions, but in reality the amount of storage needed would overwhelm the technology of the day.  The mighty ENIAC with its 18,000 vacuum tubes could only store 200 characters in memory.  This was fine if all you needed to store were a few five or ten digit numbers at a time, but instruction sets would require thousands of characters.  By the end of World War II, the early computer pioneers of both Great Britain and the United States had begun tackling this problem independently.

The brilliant British mathematician Alan Turing, who has already been mentioned several times in this blog for both his code breaking and early chess programming feats, first articulated the stored program concept.  In April 1936, Turing completed a paper entitled “On Computable Numbers, with an Application to the Entscheidungsproblem” as a response to a lecture by Max Newman he attended at Cambridge in 1935.  In a time when the central computing paradigm revolved around analog computers tailored to specific problems, Turing envisioned a device called the Universal Turing Machine consisting of a scanner reading an endless roll of paper tape. The tape would be divided into individual squares that could either be blank or contain a symbol.  By reading these symbols based on a simple set of hardwired instructions and following any coded instructions conveyed by the symbols themselves, the machine would be able to carry out any calculation possible by a human computer, output the results, and even incorporate those results into a new set of calculations.  This concept of a machine reacting to data in memory that could consist of both instructions and numbers to be manipulated encapsulates the basic operation of a stored program computer.
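
A minimal sketch (an assumed toy example, not drawn from Turing's paper) makes the idea tangible: a head moves over a tape of symbols, and a small table of rules tells it what to write, where to move, and what state to enter next.

```python
# A tiny Turing-machine simulator.  The example rule table appends a 1 to a run
# of 1s on the tape (a unary "add one") and then halts.

def run_turing_machine(tape, rules, state="start"):
    tape = dict(enumerate(tape))            # sparse tape; blank squares read as " "
    head = 0
    while state != "halt":
        symbol = tape.get(head, " ")        # scan the current square
        write, move, state = rules[(state, symbol)]
        tape[head] = write                  # write (possibly the same) symbol
        head += 1 if move == "R" else -1    # move the head one square
    return "".join(tape[i] for i in sorted(tape)).strip()

# Rules: (state, symbol read) -> (symbol to write, head move, next state)
rules = {
    ("start", "1"): ("1", "R", "start"),    # skip over the existing 1s
    ("start", " "): ("1", "R", "halt"),     # write one more 1, then stop
}

print(run_turing_machine("111", rules))     # -> "1111"
```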

Turing was unable to act on his theoretical machine with the technology available to him at the time, but when he first saw the Colossus computer in operation at Bletchley Park, he realized that electronics would make such a device possible.  In 1945, Turing moved from Bletchley Park to the National Physical Laboratory (NPL), where late in the year he outlined the first relatively complete design for a stored-program computer.  Called the Automatic Computing Engine (ACE), the computer defined by Turing was ambitious for its time, leading others at the NPL to fear it could not actually be built.  The organization therefore commissioned a smaller test model instead called the Pilot ACE.  Ultimately, Turing left the NPL in frustration over the slow progress of building the Pilot ACE, which was not completed until 1950 and was therefore preceded by several other stored program computers.  As a result, Turing, despite being the first to articulate the stored program concept, exerted little influence over how the stored program concept was implemented.

One of the first people to whom Turing gave a copy of his landmark 1936 paper was its principal inspiration, Max Newman.  Upon reading it, Newman became interested in building a Universal Turing Machine himself.  Indeed, he actually tried to interest Tommy Flowers in the paper while Flowers was building his Colossi for the Newmanry at Bletchley Park, but Flowers was an engineer, not a mathematician or logician, and by his own admission did not really understand Turing’s theories.  As early as 1944, however, Newman himself was expressing his enthusiasm about taking what had been learned about electronics during the war and establishing a project to build a Universal Turing Machine at the war’s conclusion.

In September 1945, Newman took the Fielden Chair of Mathematics at Manchester University and soon after applied for a grant from the Royal Society to establish the Computing Machine Laboratory at the university.  After the grant was approved in May 1946, Newman had portions of the dismantled Colossi shipped to Manchester for reference and began assembling a team to tackle a stored-program computer project.  Perhaps the most important members of the team were electrical engineers Freddie Williams and Tom Kilburn.  While working on radar during the war, the duo developed a storage method in which a cathode ray tube can “remember” a piece of information by firing an electron “dot” onto the surface of the tube, thus creating a persistent charge well.  By placing a metal plate against the surface of the tube, this data can be “read” in the form of a voltage pulse transferred to the plate whenever a charge well is created or eliminated by drawing or erasing a dot.  Originally developed to eliminate stationary background objects from a radar display, a Williams tube could also serve as computer memory and store 1,024 characters.  As any particular dot on the tube could be read at any given time, the Williams tube was an early form of random access memory (RAM).

In June 1948, Williams and Kilburn completed the Manchester Small-Scale Experimental Machine (SSEM), which was specifically built to test the viability of the Williams tube as a computer memory device.  While this computer contained only 550 tubes and was therefore not practical for actual computing projects, the SSEM was the first device in the world with all the characteristics of a stored-program computer and proved the viability of Williams tube memory.  Building on this work, the team completed the Manchester Mark 1 computer in October 1949, which contained 4,050 tubes and used custom-built CRTs from the industrial conglomerate General Electric Company (GEC) to improve the reliability of the memory.

978

John von Neumann stands next to the IAS Machine, which he developed based on his consulting work on the Electronic Discrete Variable Automatic Computer (EDVAC), the first stored-program computer in the United States

Meanwhile, at the Moore School Eckert and Mauchly were already beginning to ponder building a computer superior to the ENIAC by the middle of 1944.  The duo felt the most serious limitation of the computer was its paltry storage, and like Newman in England, they turned to radar technology for a solution.  Before joining the ENIAC project, Eckert had devised the mercury delay line, the first practical method of eliminating stationary objects from a radar display.  Basically, rather than displaying the result of a single pulse on the screen, the radar would compare two pulses, one of which was delayed by passing it through a column of mercury, allowing both pulses to arrive at the same time, with the radar screen displaying only those objects that were in different locations between the two pulses.  Eckert realized that using additional electronic components to keep the delayed pulse trapped in the mercury would allow it to function as a form of computer memory.

The effort to create a better computer received a boost when Herman Goldstine had a chance encounter with physicist John von Neumann at the Aberdeen railroad station.  A brilliant Hungarian emigre teaching at Princeton, von Neumann was consulting on several government war programs, including the Manhattan Project, but had not been aware of the ENIAC.  When Goldstine started discussing the computer on the station platform, von Neumann took an immediate interest and asked for access to the project.  Impressed by what he saw, von Neumann not only used his influence to help gain the BRL’s approval for Project PY to create the improved machine, he also held several meetings with Eckert and Mauchly in which he helped define the basic design of the computer.

The extent of von Neumann’s contribution to the Electronic Discrete Variable Automatic Computer (EDVAC) remains controversial.  Because the eminent scientist penned the first published general overview of the computer in May 1945, entitled “First Draft of a Report on the EDVAC,” the stored program concept articulated therein came to be called the “von Neumann architecture.”  In truth, the realization that the increased memory provided by mercury delay lines would allow both instructions and numbers to be stored in memory occurred during meetings between Eckert, Mauchly, and von Neumann, and his contributions were probably not definitive.  Von Neumann did, however, play a critical role in defining the five basic elements of the computer — the input, the output, the control unit, the arithmetic unit, and the memory — which remain the basic building blocks of the modern computer.  It is also through von Neumann, who was keenly interested in the human brain, that the term “memory” entered common use in a computing context.  Previously, everyone from Babbage forward had used the term “storage” instead.

The EDVAC project commenced in April 1946, but the departure of Eckert and Mauchly with most of their senior engineers soon after disrupted the project, so the computer was not completed until August 1949 and only became fully operational in 1951 after several problems with the initial design were solved.  It contained 6,000 vacuum tubes, 12,000 diodes, and two sets of 64 mercury delay lines capable of storing eight characters per line, for a total storage capacity of 1,024 characters.  Like the ENIAC, EDVAC cost roughly $500,000 to build.

cambridge

The Electronic Delay Storage Automatic Calculator (EDSAC)

Because of the disruptions caused by Eckert and Mauchly’s departures, the EDVAC was not actually the first completed stored-program computer conforming to von Neumann’s report.  In May 1946, computing entrepreneur L.J. Comrie visited the Moore School to view the ENIAC and came away with a copy of the von Neumann EDVAC report.  Upon his return to England, he brought the report to physicist Maurice Wilkes, who had established a computing laboratory at Cambridge in 1937 but had made little progress in computing before World War II.  Wilkes devoured the report in an evening and then paid his own way to the United States so he could attend the Moore School lectures.  Although he arrived late and only managed to attend the final two weeks of the course, Wilkes was inspired to initiate his own stored-program computer project at Cambridge, the Electronic Delay Storage Automatic Calculator (EDSAC).  Unlike the teams running the competing computer projects at the NPL and Manchester University, Wilkes decided that completing a computer was more important than advancing computer technology and therefore chose to create a machine of only modest capability and to use delay line memory rather than the newer Williams tubes developed at Manchester.  While this resulted in a less powerful computer than some of its contemporaries, it did allow the EDSAC to become the first practical stored-program computer when it was completed in May 1949.

Meanwhile, after concluding his consulting work at the Moore School, John von Neumann established his own stored-program computer project in late 1945 at the Institute for Advanced Study (IAS) in Princeton, New Jersey.  Primarily designed by Julian Bigelow, the IAS Machine employed 3,000 vacuum tubes and could hold 4,096 40-bit words in its Williams tube memory.  Although not completed until June 1952, the functional plan of the computer was published in the late 1940s and widely disseminated.  As a result, the IAS Machine became the template for many of the scientific computers built in the 1950s, including the MANIAC, JOHNNIAC, MIDAC, and MIDSAC machines that hosted some of the earliest computer games.

With the Moore lectures about the ENIAC and the publication of the IAS specifications helping to spread interest in electronic computers across the developed world and the EDSAC computer demonstrating that crafting a reliable stored program computer was possible, the stage was now set for the computer to spread beyond a few research laboratories at prestigious universities and become a viable commercial product.

Historical Interlude: The Birth of the Computer Part 1, the Mechanical Age

Before continuing the history of video gaming with the activities of the Tech Model Railroad Club and the creation of the first truly landmark computer game, Spacewar!, it is time to pause and present the first of what I referred to in my introductory post as “historical interludes.”  In order to understand why the video game finally began to spread in the 1960s, it is important to understand the evolution of computer technology and the spread of computing resources.  As we shall see, the giant mainframes of the 1940s and 1950s were neither particularly interactive nor particularly accessible outside of a small elite, which generally prevented the creation of programs that provided feedback quickly and seamlessly enough to create an engaging play experience while also generally discouraging projects not intended to aid serious research or corporate data processing.  By the time work on Spacewar! began in 1961, however, it was possible to occasionally divert computers away from more scholarly pursuits and design a program interesting enough to hold the attention of players for hours at a time.  The next four posts will describe how computing technology reached that point.

Note: Unlike my regular posts, historical interlude posts will focus more on summarizing events and less on critiquing sources or stating exactly where every last fact came from.  They are meant to provide context for developments in video game history, and the information within them will usually be drawn from a small number of secondary sources and not be researched as thoroughly as the video game history posts.  Much of the material in this post is drawn from Computer: A History of the Information Machine by Martin Campbell-Kelly and William Aspray, The Maverick and His Machine: Thomas Watson, Sr. and the Making of IBM by Kevin Maney, and The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution by Walter Isaacson.

Defining the Computer

766px-Human_computers_-_Dryden

Human computers working at the NACA High Speed Flight Station in 1949

Before electronics, before calculating machines, even before the Industrial Revolution there were computers, but the term did not mean the same thing it does today.  Before World War II and the emergence of the first electronic digital computers, a computer was a person who performed calculations, generally for a specialized purpose.  As we shall see, most of the early computing machines were created specifically to perform calculations, so as they grew to function with less need for human intervention, they naturally came to be called “computers” themselves after the profession they quickly replaced.

The computer profession originated after the development of the first mathematical tables in the 16th and 17th centuries such as the logarithmic tables designed to perform complex mathematical operations solely through addition and subtraction and the trigonometric tables designed to simplify the calculation of angles for fields like surveying and astronomy.  Computers were the people who would perform the calculations necessary to produce these tables.  The first permanent table-making project was established in 1766 by Nevil Maskelyne to produce navigational tables that were updated and published annually in the Nautical Almanac, which is still issued today.
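
To see why such tables were worth decades of drudgery, consider how a logarithm table turns multiplication into addition: look up the logs of the two factors, add them, and look the sum back up in the table.  The toy Python sketch below generates a small four-figure table on the fly purely for illustration; a real table was, of course, computed once by human computers and then reused for generations.

import math

# A miniature four-figure log table, generated here only for demonstration.
log_table = {x / 100: round(math.log10(x / 100), 4) for x in range(100, 1000)}

def table_multiply(a, b):
    """Multiply two numbers (product below 10) using only lookups and addition."""
    log_sum = log_table[a] + log_table[b]         # the only arithmetic is addition
    # "Antilog" lookup: find the entry whose logarithm is closest to the sum.
    return min(log_table, key=lambda x: abs(log_table[x] - log_sum))

print(table_multiply(2.34, 3.21))   # prints 7.51 (the exact product is 7.5114)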

Maskelyne relied on freelance computers to perform his calculations, but with the dawning of the Industrial Revolution a more organized approach emerged.  In 1791, a French mathematician named Gaspard de Prony established what was essentially a computing factory, modeled after the division of labor principles espoused by Adam Smith in The Wealth of Nations, to compile accurate logarithmic and trigonometric tables for a new survey of the entirety of France undertaken as part of a project to reform the property tax system.  De Prony relied on a small number of skilled mathematicians to define the mathematical formulas and a group of middle managers to organize the tables, so his computers needed only a knowledge of basic addition and subtraction to do their work, reducing the computer to an unskilled laborer.  As the Industrial Revolution progressed, unskilled workers in most fields moved from using simple tools to mechanical factory machinery to do their work, so it comes as no surprise that one enterprising individual would attempt to bring a mechanical tool to computing as well.

Charles Babbage and the Analytical Engine

charles_babbage

Charles Babbage, creator of the first computer design

Charles Babbage was born in 1791 in London.  The son of a banker, Babbage was a generally indifferent student who bounced between several academies and private tutors, but did gain a love of mathematics at an early age and attained sufficient marks to enter Trinity College, Cambridge, in 1810.  While Cambridge was the leading mathematics institution in England, the country as a whole had fallen behind the Continent in sophistication, and Babbage soon came to realize he knew more about math than his instructors.  In an attempt to rectify this situation, Babbage and a group of friends established the Analytical Society to reform the study of mathematics at the university.

After leaving Cambridge in 1814 with a degree in mathematics from Peterhouse, Babbage settled in London, where he quickly gained a reputation as an eminent mathematical philosopher but had difficulty finding steady employment.  He also made several trips to France beginning in 1819, which is where he learned of De Prony’s computer factory.  In 1820, he joined with John Herschel to establish the Astronomical Society and took work supervising the creation of star tables.  Frustrated by the tedious nature of fact-checking the calculations of the computers and preparing the tables for printing, Babbage decided to create a machine that would automate the task.

The Difference Engine would consist of columns of wheels and gears, each of which represented a single decimal place.  Once the initial values were set for each column — determined by choosing a polynomial for the first column and then using its successive differences to establish the values of the other columns — the machine would use the method of finite differences (hence its name) to perform addition and subtraction automatically, complete the tables, and then send them to a printing device.  Babbage presented his proposed machine to the Royal Society in 1822 and won government funding the next year by arguing that a maritime industrial nation required the most accurate navigational tables possible and that the Difference Engine would be both cheaper to operate and more accurate than an army of human computers.
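
A short sketch may make the principle clearer.  Once the top of each difference column is seeded from the chosen polynomial, every subsequent table entry falls out of nothing but repeated addition, which is exactly the operation Babbage’s wheels and gears could perform.  The Python below is only a modern illustration of the method, and the sample quadratic is arbitrary.

# Tabulation by finite differences: seed the top of each difference column,
# then generate every further table entry with repeated addition alone.
# The sample quadratic f(x) = x**2 + x + 41 is an arbitrary illustration.

def difference_columns(values, degree):
    """Return the first entry of each difference column for the given values."""
    columns, row = [], list(values)
    for _ in range(degree + 1):
        columns.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]   # next difference column
    return columns

def tabulate(columns, count):
    """Extend the table by `count` entries using addition alone, like the engine."""
    cols, table = list(columns), []
    for _ in range(count):
        table.append(cols[0])
        for i in range(len(cols) - 1):                # fold each column upward
            cols[i] += cols[i + 1]
    return table

def f(x):
    return x * x + x + 41

seed = difference_columns([f(x) for x in range(3)], degree=2)   # [41, 2, 2]
print(tabulate(seed, 6))   # prints [41, 43, 47, 53, 61, 71]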

The initial grant of £1,500 quickly proved insufficient for the task of creating the machine, which was at the very cutting edge of machine tool technology and therefore extremely difficult to fashion components for.  The government nevertheless continued to fund the project for over a decade, ultimately providing £17,000.  By 1833, Babbage was able to construct a miniature version of the Difference Engine that lacked sufficient capacity to actually create tables but did prove the feasibility of the project.  The next year, however, he unwittingly sabotaged himself by proposing an even grander device to the government, the Analytical Engine, thus undermining the government’s faith in Babbage’s ability to complete the original project and causing it to withdraw funding and support.  A fully working Difference Engine to Babbage’s specification would not be built until the late 1980s, by which time it was a historical curiosity rather than a useful machine.  In the meantime, Babbage turned his attention to the Analytical Engine, the first theorized device with the capabilities of a modern computer.

10303265

A portion of Charles Babbage’s Analytical Engine, which remained unfinished at his death

The Difference Engine was merely a calculating machine that performed addition and subtraction, but the proposed Analytical Engine was a different beast.  Equipped with an arithmetical unit called the “mill” that exhibited many of the features of a modern central processing unit (CPU), the machine would be capable of performing all four basic arithmetic operations.  It would also possess a memory, able to store 1,000 numbers of up to 40 digits each.  Most importantly, it would be program controlled, able to perform a wide variety of tasks based on instructions fed into the machine.  These programs would be entered using punched cards, a recording medium first developed in 1725 by Basile Bouchon and Jean-Baptiste Falcon to automate textile looms that was greatly improved and popularized by Joseph Marie Jacquard in 1801 for the loom that bears his name.  Results could be output to a printer or a curve plotter.  By employing separate memory and computing elements and establishing a method of program control, Babbage outlined the first machine to include all the basic hallmarks of the modern computer.

Babbage sketched out the design of his Analytical Engine between 1834 and 1846.  He then halted work on the project for a decade before returning to the concept in 1856 and continuing to tinker with it right up until his death in 1871.  Unlike with the Difference Engine, however, he was never successful in securing funding from a British Government that remained unconvinced of the device’s utility — as well as unimpressed by Babbage’s inability to complete the first project it had commissioned from him — and thus failed to build a complete working unit.  His project did attract attention in certain circles, however.  Luigi Menabrea, a personal friend and mathematician who later became Prime Minister of Italy, invited Babbage to give a presentation on his Analytical Engine at the University of Turin in 1840 and subsequently published a transcription of the lecture in French in 1842.  This account was translated into English over a nine-month period in 1842-43 by another friend of Babbage, Ada Lovelace, the daughter of the celebrated poet Lord Byron.

Ada Lovelace has been a controversial figure in computer history circles.  Born in 1815, she never knew her celebrated father, whom her mother fled shortly after Ada’s birth.  She possessed what appears to have been a decent mathematical mind, but suffered from mental instability and delusions of grandeur that caused her to perceive greater abilities in herself than she actually possessed.  She became a friend and student of noted mathematician Mary Somerville, who was also a friend of Babbage.  It was through this connection that she began attending Babbage’s regular Saturday evening salons in 1834 and came to know the man.  She tried unsuccessfully to convince him to tutor her, but they remained friends and he was happy to show off his machines to her.  Lovelace became a fervent champion of the Analytical Engine and attempted to convince Babbage to make her his partner and publicist for the machine.  It was in this context that she not only took on the translation of the Turin lecture in 1842, but at Babbage’s suggestion also decided to append her own description of how the Analytical Engine differed from the earlier Difference Engine alongside some sample calculations using the machine.

In a section entitled “Notes by the Translator,” which ended up being longer than the translation itself, Lovelace articulated several important general principles of computing, including the recognition that a computer could be programmed and reprogrammed to take on a variety of different tasks and that it could be set to tasks beyond basic math through the use of symbolic logic.  She also outlined a basic structure for programming on the Analytical Engine, becoming the first person to articulate common program elements such as recursive loops and subroutines.  Finally, she included a sample program to calculate a set of Bernoulli numbers using the Analytical Engine.  This last feat has led some people to label Lovelace the first computer programmer, though in truth it appears Babbage created most of this program himself.  Conversely, some people dismiss her contributions entirely, arguing that she was being fed all of her ideas directly by Babbage and had little personal understanding of how his machine worked.  The truth is probably somewhere in the middle.  While calling her the first programmer is probably too much of a stretch, as Babbage had already devised several potential programs himself by that point and contributed significantly to Lovelace’s as well, she still deserves recognition for being the first person to articulate several important elements of computer program structure.  Sadly, she had no chance to make any further mark on computer history, succumbing to uterine cancer in 1852 at the age of thirty-six.
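
For the curious, the computation Lovelace tabulated can be expressed in a few lines today.  The sketch below is emphatically not her Note G program, which was laid out as a table of Analytical Engine operations, but a modern rendering of the same task using the standard recurrence for the Bernoulli numbers; it at least shows the kind of looping calculation she was describing.

from fractions import Fraction
from math import comb

# The standard recurrence: B_0 = 1 and, for m >= 1,
#   sum over k from 0 to m of C(m+1, k) * B_k = 0,
# which can be solved for B_m at each step.  This is a modern sketch of the
# computation, not a transcription of Lovelace's Note G table.

def bernoulli_numbers(count):
    """Return the first `count` Bernoulli numbers as exact fractions."""
    numbers = [Fraction(1)]                     # B_0
    for m in range(1, count):
        total = sum(comb(m + 1, k) * numbers[k] for k in range(m))
        numbers.append(-total / (m + 1))        # solve the recurrence for B_m
    return numbers

print([str(b) for b in bernoulli_numbers(8)])
# prints ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0']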

Towards the Modern Office

cb000184_1907_Office_with_Burroughs_Model_6_OM

An office in 1907 equipped with a Burroughs Model 6, showcasing some of the mechanical equipment revolutionizing clerical work in the period.

Ultimately, the Analytical Engine proved too ambitious, and the ideas articulated by Babbage would have to wait for the dawn of the electronics era to become practical.  In the meantime, however, the Industrial Revolution resulted in great advances in office automation that would birth some of the most important companies of the early computer age.  Unlike the human computer industry and the innovative ideas of Babbage, however, the majority of these advances came not from Europe, but from the United States.

Several explanations have been advanced to explain why the US became the leader in office automation.  Certainly, the country industrialized later than the European powers, meaning businessmen were not burdened with outmoded theories and traditions that hindered innovations in the Old World.  Furthermore, the country had a long history of interest in manufacturing efficiency, dating back as far as Eli Whitney and his concept of using interchangeable parts in firearms in 1801 (Whitney’s role in the creation of interchangeable parts is usually exaggerated, as he was not the first person to propose the method and was never actually able to implement it himself, but he was responsible for introducing the concept to the US Congress and therefore still deserves some credit for its subsequent adoption in the United States).  By the 1880s, this fascination with efficiency had evolved into the “scientific management” principles of Frederick Taylor that aimed to identify best practices through rational, empirical study and employ standardization and training to eliminate waste and inefficiency on the production line.  Before long, these ideals had penetrated the domain of the white-collar worker through the concept of “office rationalization,” in which managers introduced new technologies and systems to maximize productivity in that setting as well.

The first major advance in the drive for office automation was the invention of a practical typewriter.  While several inventors created typing machines in the early nineteenth century, none of these designs gained any traction in the marketplace because using them was slower than writing out a document by hand.  In 1867, however, a retired newspaper editor named Christopher Latham Sholes was inspired by an article in Scientific American describing a mechanical typing device to create one of his own.  By the next year Sholes, with the help of amateur mechanic Carlos Glidden and printer Samuel Soule, had created a prototype for a typing machine using a keyboard and type-basket design that finally allowed typing at a decent speed.  After Soule left the project, Sholes sent typewritten notes to several financiers in an attempt to raise capital to refine the device and prepare for mass production.  A Pennsylvania businessman named James Densmore answered the call and provided the funding necessary to make important improvements such as replacing a frame to hold the paper with a rotating drum and changing the layout of the keyboard to the familiar QWERTY orientation — still used on computer keyboards to this day — to cut down on jamming by spacing out commonly used letters in the typing basket.

After several failed attempts to mass produce the typewriter through smaller companies in the early 1870s, Densmore was able to attract the interest of Philo Remington of the small-arms manufacturer E. Remington & Sons, which had been branching out into other fields such as sewing machines and fire engines in the aftermath of the U.S. Civil War.  First introduced by Remington in 1874, the typewriter sold slowly at first, but as office rationalization took hold in the 1880s, businesses started flocking to the machine.  By 1890 Remington had a virtual monopoly on the new industry and was producing 20,000 machines a year.  In addition to establishing the typewriter in the office, Remington also pioneered the idea of providing after-market service for office products, opening branch offices in major cities where people could not only buy typewriters, but also bring them in for repairs.

With typed loose-leaf pages replacing the traditional “letter book” for office correspondence, companies soon found it necessary to adopt new methods for storing and retrieving documents.  This led to the development of vertical filing using hanging folders stored in upright cabinets, which was first publicly demonstrated by Melville Dewey at the Chicago World’s Fair in 1893.  While vertical filing proved superior to the boxes and drawers previously employed in the workplace, however, it proved woefully inefficient once companies evolved from tracking hundreds of records to tens of thousands.  This time the solution came from James Rand, Sr., a clerk from Tonawanda, New York, who patented a visible index system in which colored signal strips and tabs would allow specific file folders to be found quickly and easily.  Based on this invention, the clerk established the Rand Ledger Company in 1898.  His son, James Rand, Jr., joined the business in 1908 and then split off from his father in 1915 after a dispute over advertising spending to market his own record retrieval system based around index cards called the Kardex System.  As the elder Rand neared retirement a decade later, his wife orchestrated a reconciliation between him and his son, and their companies merged to form the Rand Kardex Company in 1925.  Two years later, Rand Kardex merged with the Remington Typewriter Company to form Remington Rand,  which became the largest business machine company in the world.

burroughs

A Burroughs “adder-lister,” one of the first commercially successful mechanical calculators

A second important invention of the late nineteenth century was the first practical calculator.  Mechanical adding machines had existed as far back as the 17th century, when Blaise Pascal completed his Pascaline in 1645 and Gottfried Leibniz invented the first calculator capable of performing all four basic functions, the Stepped Reckoner, in 1692, but the underlying technology remained fragile and unreliable and therefore unsuited to regular use despite continued refinements over the next century.  In 1820, the calculator was commercialized for the first time by Thomas de Colmar, but production of his Arithmometer lasted only until 1822.  After making several changes, Thomas began offering his machine to the public again in 1851, but while the Arithmometer gained a reputation for both sturdiness and accuracy, production never exceeded a few dozen a year over the next three decades as the calculator remained too slow and impractical for use in a business setting.

The main speed bottleneck of the early adding machines was that they all required the setting of dials and levers to use, making them far more cumbersome for bookkeepers than just doing the sums by hand.  The man who first solved this problem was Dorr Felt, a Chicago machinist who replaced the dials with keys similar to those found on a typewriter.  Felt’s Comptometer, completed in 1885, arranged keys labelled 0 to 9 across ten columns that each corresponded to a single digit of a number, allowing figures to be entered rapidly with just one hand.  In 1887, Felt formed the Felt & Tarrant Manufacturing Company with a local manufacturer named Robert Tarrant to mass produce the Comptometer, and by 1900 they were selling over a thousand a year.

While Felt remained important in the calculator business throughout the early twentieth century, he was ultimately eclipsed by another inventor.  William S. Burroughs, the son of a St. Louis mechanic, was employed as a clerk at a bank but suffered from health problems brought on by spending hours hunched over columns adding figures.  Like Felt, he decided to create a mechanical adding machine using keys to improve this process, but he also added another key advance to his “adder-lister,” the ability to print the numbers as they were entered so there would be a permanent record of every financial transaction.  In 1886, Burroughs established the American Arithmometer Company to market his adding machine, which was specifically targeted at banks and clearing houses and was selling at a rate of several hundred a year by 1895.  Burroughs died in 1898, but the company lived on and relocated to Detroit in 1904 after it outgrew its premises in St. Louis, changing its name to the Burroughs Adding Machine Company in honor of its founder.  At the time of the move, Burroughs was selling 4,500 machines a year.  Just four years later, that number had risen to 13,000.

John H. Patterson

John H. Patterson, founder of the National Cash Register Company (NCR)

The adding machine was one of two important money management devices invented in this period, with the other being the mechanical cash register.  This device was invented in 1879 by James Ritty, a Dayton saloon owner who feared his staff was stealing from him, and constructed by his brother, John.  Inspired by a tool that counted the revolutions of the propeller on a steamship, “Ritty’s Incorruptible Cashier” required the operator to enter each transaction using a keypad, displayed each total entered for all to see, and printed the results on a roll of paper, allowing the owner to compare the cash taken in to the recorded amounts.  Ritty attempted to interest other business owners in his machine, but proved unsuccessful and ultimately sold the business to Jacob Eckert of Cincinnati in 1881.  Eckert added a cash drawer to the machine and established the National Manufacturing Company, but he was barely more successful than the Rittys.  Therefore, in 1884 he sold out to John Patterson, who established the National Cash Register Company (NCR).

John Henry Patterson was born on a farm outside Dayton, Ohio, and entered the coal trade after graduating from Dartmouth College.  While serving as the general manager of the Southern Coal and Iron Company, Patterson was tasked with running the company store and became one of Ritty’s earliest cash register customers.  After being outmaneuvered in the coal trade, Patterson sold his business interests and used the proceeds to buy NCR.  A natural salesman, Patterson created or popularized nearly every important modern sales practice while running NCR.  He established sales territories and quotas for his salesmen, paid them a generous commission, and rewarded those who met their quotas with an annual sales convention.  He also instituted formal sales training and produced sales literature that included sample scripts, creating the first known canned sales pitch.  Like Remington, he established a network of dealerships that provided after-market services to build customer loyalty, but he also advertised through direct mailings, another unusual practice.  Understanding that NCR could only stay on top of the business by continuing to innovate, Patterson also established an “innovations department” in 1888, one of the earliest permanent corporate research & development organizations in the world.  In an era when factory work was mostly still done in crowded “sweatshops,” Patterson constructed a glass-walled factory, set amid beautifully landscaped grounds, that let in ample light.

While Patterson seemed to genuinely care for the welfare of his workers, however, he also had a strong desire to control every aspect of their lives.  He manipulated subordinates constantly, hired and fired individuals for unfathomable reasons, instituted a strict physical fitness regimen that all employees were expected to follow, and established rules of conduct for everything from tipping waiters to buying neckties.  For all his faults, however, his innovative sales techniques created a juggernaut.  By 1900, the company was selling 25,000 cash registers a year, and by 1910 annual sales had risen to 100,000.  By 1928, six years after Patterson’s death, NCR was the second largest office-machine supplier in the world with annual sales of $50 million, just behind Remington Rand at $60 million and comfortably ahead of number three Burroughs at $32 million.  All three companies were well ahead of the number four company, a small firm called International Business Machines, or IBM.

Computing, Tabulating, and Recording

IBM, which eventually rose to dominance in the office machine and data processing industries, cannot be traced back to a single origin, for it began as a holding company that brought together several firms specializing in measuring and processing information.  There were three key people responsible for shaping the company in its early years: Herman Hollerith, Charles Flint, and Tom Watson, Sr.

416px-Hollerith

Herman Hollerith, whose tabulating machine laid the groundwork for the company that became IBM

Born in Buffalo, New York, in 1860, Herman Hollerith pursued an education as a mining engineer, culminating in a Ph.D from Columbia University in 1890.  One of Hollerith’s professors at Columbia also served as an adviser to the Bureau of the Census in Washington, introducing Hollerith to the largest data processing organization in the United States.  At the time, the Census Bureau was in crisis as traditional methods of processing census forms failed to keep pace with a growing population.  The 1880 census, processed entirely by hand using tally sheets, took the bureau seven years to complete.  With the population of the country continuing to expand rapidly, the 1890 census appeared poised to take even longer.  To attack this problem, the new superintendent of the census, Robert Porter, held a competition to find a faster and more efficient way to count the U.S. population.

Three finalists demonstrated solutions for Porter in 1889.  Two of them created systems using colored ink or cards to allow data to be sorted more efficiently, but these were still manual systems.  Hollerith, on the other hand, inspired by the ticket punches used by train conductors, developed a system in which the statistical information was recorded on punched cards that were quickly tallied by a tabulating machine of his own design.  Cards were placed in this machine one at a time and pressed with an apparatus containing 288 retractable pins.  Any pin that encountered a hole in the card would complete an electrical circuit and advance one of forty tallies.  Using Hollerith’s machines, the Census Bureau was able to complete its work in just two and a half years.

As the 1890 census began to wind down, Hollerith re-purposed his tabulating system for use by businesses and incorporated the Tabulating Machine Company in December 1896.  He remained focused on the census, however, until President McKinley’s assassination in 1901 resulted in the appointment of a new superintendent who chose to go with a different company for 1910.  In the meantime, Hollerith refined his system by implementing a three-machine setup consisting of a keypunch to put the holes in the cards, a tabulator to tally figures, and a sorting machine to place the cards in sequence.  By 1911, Hollerith had roughly one hundred customers and the business was continuing to expand, but his health was failing, leading him to entertain an offer to sell from an influential financier named Charles Flint.

Charles_Ranlett_Flint

Charles Ranlett Flint, the man who forged IBM

Charles Ranlett Flint was a self-made man born into a family of shipbuilders who started his first business at 18 on the docks of his hometown of Thomaston, Maine.  From there, he secured a job with a trader named William Grace by offering to work for free.  In 1872, Grace made Flint a partner in his new W.R. Grace & Co. shipping and trading firm, which still exists today as a chemical and construction materials conglomerate.  During this period, Flint acted as a commission agent in South America dealing in both arms and raw materials.  He also became keenly interested in new technologies such as the automobile, light bulb, and airplane.

In 1892, Flint leveraged his international trading contacts to pull together a number of rubber exporters into a trust called U.S. Rubber.  This began a period of intense monopoly building by Flint across a number of industries.  By 1901, Flint’s growing roster of trusts included the International Time Recording Company (ITR) of Endicott, New York, based around the recently invented time clock that allowed employers to easily track the hours worked by their employees, and the Computing Scale Company of America of Dayton, Ohio, based around scales that would both weigh items by the pound and compute their total cost.  While ITR proved modestly successful, the Computing Scale Company ended up an abject failure.  In an attempt to salvage his poorly performing concern, Flint decided to define a new, larger market of information recording machines for businesses and merge ITR and Computing Scale under the umbrella of a single holding company.  Feeling Hollerith’s company fit well into this scheme, Flint purchased it as well in 1911 and folded the three companies into the new Computing-Tabulating-Recording Company (C-T-R).  The holding company approach did not work, however, as C-T-R was an unwieldy organization consisting of three subsidiaries spread across five cities with managers that ignored each other at best and actively plotted against each other at worst.  Furthermore, the company was saddled with a large debt, and its component parts could not leverage their positions in a trust to create superior integration or economies of scale because their products and customers were too different.  By 1914, C-T-R was worth only $3 million and carried a debt of $6.5 million.  Flint’s experiment had clearly failed, so he brought in a new general manager to turn the company around.  That man was Thomas Watson, Sr.

thomas_watson

Thomas Watson, Sr., the man who built IBM into a corporate giant

By the time Flint hired him for C-T-R, Watson already had a reputation as a stellar salesman, but was also tainted by a court case brought over monopolistic practices.  Born on a farm in south central New York State, Watson tried his hand as both a bookkeeper and a salesman with various outfits, but had trouble holding down steady employment.  After his latest venture, a butcher’s shop in Buffalo, failed in 1896, Watson trudged down to the local NCR office to transfer the installment payments on the store’s cash register to the new owner.  While there, he struck up a conversation with a salesman named John Range and kept pestering him periodically until Range finally offered him a job.  Within nine months, Watson went from sales apprentice to full sales agent as he finally seemed to find his calling.  Four years later, he was transferred to the struggling NCR branch in Rochester, New York, which he managed to turn around.  This brought him to the attention of John Patterson in Dayton, who tapped Watson for a special assignment.

By 1903, when Patterson summoned Watson, NCR was experiencing fierce competition from a growing second-hand cash register market.  NCR cash registers were both durable and long-lasting, so enterprising businessmen had begun buying up used cash registers from stores that were upgrading or going out of business and then undercutting NCR’s prices on new machines.  For the controlling monopolist Patterson, this was unacceptable.  His solution was to create his own used cash register business that would buy old machines for higher prices than other outlets and sell them cheaper, making up the lost profits through funding directly from NCR.  Once the competition had been driven out of business, prices could be raised and the business would start turning a profit.  Patterson tapped Watson to control this business.  For legal reasons, Patterson kept the connection between NCR and the new Watson business a secret.

Between 1903 and 1908, Watson slowly expanded his used cash register business across the country, creating an excellent new profit-center for NCR.  His reward was a posting back at headquarters in Dayton as an assistant sales manager, where he soon became Patterson’s protégé and absorbed his innovative sales techniques.  By 1910, Watson had been promoted to sales manager, where his personable and less-controlling management style created a welcome contrast to Patterson and encouraged flexibility and creativity among the 900-strong NCR sales force, helping to double the company’s 1909 sales within two years.

As quickly as Watson rose at NCR, however, he fell even faster.  In 1912 the Taft administration, amid a general crusade against corporate trusts, brought criminal charges against Patterson, Watson, and other high-ranking NCR executives for violations of the Sherman Anti-Trust Act.  At the end of a three-month trial, Watson was found guilty along with Patterson and all but one of their co-defendants on February 13, 1913 and now faced the prospect of jail time.  Worse, the ordeal appears to have soured the ever-changeable Patterson on the executives indicted with him, as they were all chased out of the company within a year.  Watson himself departed NCR in November 1913 after 17 years of service.  Some accounts state that Watson was fired, but it appears that the separation was more by mutual agreement.  Either way, it was a humbled and disgraced Watson that Charles Flint tapped to save C-T-R in early 1914.  Things began looking up the next year, however, when an appeal resulted in an order for a new trial.  All the defendants save Watson settled with the government, which decided pursuing Watson alone was not worth the effort.  Thus cleared of all wrongdoing, Watson was elevated to the presidency of C-T-R.

Watson saved and reinvented C-T-R through a combination of Patterson’s techniques and his own charisma and personality.  He reinvigorated the sales force through quotas, generous commissions, and conventions much like Patterson.  A lover of the finer things in life, he insisted that C-T-R staff always be impeccably dressed and polite, shaping the popular image of the blue-suited IBM sales person that would last for decades.  He changed the company culture by emphasizing the importance of every individual in the corporation and building a sense of company pride and loyalty.  Finally, he was fortunate to take over at a time when the outbreak of World War I and a booming U.S. economy led to increased demand for tabulating machines both from businesses and the U.S. government.  Between 1914 and 1917, revenues doubled from $4.2 million to $8.3 million, and by 1920 they had reached $14 million.

What really set IBM apart, however, was the R&D operation Watson established based on the model of NCR’s innovations department.  At the time Watson arrived, C-T-R remained the leading seller of tabulating machines, but the competition was rapidly gaining market share on the back of superior products.  Hollerith, who remained as a consultant to C-T-R after Flint bought his company, showed little interest in developing new products, causing the company’s technology to fall further and further behind.  The company’s only other senior technical employee, Eugene Ford, occasionally came up with improvements, but he could not actually put them into practice without the approval of Hollerith, which was rarely forthcoming.  Watson moved Ford into a New York loft and ordered him to begin hiring additional engineers to develop new products.

Ford’s first hire, Clair Lake, developed the company’s first printing tabulator in the early 1920s, which gave the company a machine that could rival the competition in both technology and user friendliness.  Another early hire, Fred Carroll from NCR, developed the Carroll Press that allowed C-T-R to cheaply mass produce the punched cards used in the tabulating machines and therefore enjoy a huge profit margin on the product.  In the late 1920s, Lake created a new patentable punched-card design that would only work in IBM machines, which locked in customers and made them unlikely to switch to a competing company and have to redo millions of cards.  Perhaps the most important hire was James Bryce, who joined the company in 1917, rose to chief engineer in 1922, and ended up with over four hundred patents to his name.

After a small hiccup in 1921-22 as the U.S. endured a small recession, C-T-R, which Watson renamed International Business Machines (IBM) in 1924, experienced rapid growth for the rest of the decade, reaching $20 million in revenue by 1928.  While this placed IBM behind Remington Rand, NCR, and Burroughs, the talented R&D group and highly effective sales force built by Watson left the company perfectly poised to rise to a dominant position in the 1930s and subsequently conquer the new computer market of the 1950s.

Searching for Bobby Fischer

Before leaving the 1950s behind, we now turn to the most prolific computer game concept of the decade: chess.  While complex simulations drove the majority of AI research in the military-industrial complex during the decade, the holy grail for much of academia was a computer that could effectively play this venerable strategy game.  As Alex Bernstein and Michael de V. Roberts explained it for Scientific American in June 1958, chess is a perfect game to build an intelligent computer program around because the rules are straightforward and easy to implement, yet playing out every possible scenario at a rate of one million complete games per second would take a computer 10^108 years.  While modern computers are powerful enough to play chess at a high level anyway, the machines available in the 1950s and 1960s could never hope to complete a game of chess in a reasonable timeframe through brute force, meaning they actually needed to react and adapt to a human player to win rather than just drawing on a stock of stored knowledge.  Charting the complete course of the quest to create a perfect chess-playing computer is beyond the scope of this blog, but since chess computer games have been popular entertainment programs as well as platforms for AI research, it is worth taking a brief look at the path to the very first programs to successfully play a complete game of chess.  The Computer History Museum presents a brief history of computer chess on its website called Mastering the Game, which will provide the framework for most of this examination.

El Ajedrecista (1912)

torres03

Leonardo Torres y Quevedo (left) demonstrates his chess-playing automaton

According to scholar Nick Montfort in his monograph on interactive fiction, Twisty Little Passages (2005), credit for the first automated chess-playing machine goes to a Spanish engineer named Leonardo Torres y Quevedo, who constructed an electro-mechanical contraption in 1912 called El Ajedrecista (literally “the chessplayer”) that simulated a KRK chess endgame, in which the machine attempted to mate the player’s lone king with its own king and rook.  First demonstrated publicly in 1914 in Paris and subsequently described in Scientific American in 1915, El Ajedrecista not only calculated moves, but actually moved the pieces itself using a mechanical arm.  A second version constructed in 1920 eliminated the arm and moved pieces via magnets under the board instead.  Montfort believes this machine should qualify as the very first computer game, but a lack of any electronics, a key component of every modern definition of a computer game — though not a requirement for a machine to be classified as an analog computer — makes this contention problematic, though perhaps technically correct.  Regardless of how one chooses to classify Torres y Quevedo’s contraption, however, it would be nearly four decades before anyone took up the challenge of computer chess again.

Turochamp and Machiavelli (1948)

alan-turing-2

Alan Turing, father of computer science and computer chess pioneer

As creating a viable chess program became one of the long-standing holy grails of computer science, it is only fitting that the man considered the father of that field, Alan Turing, was also the first person to approach the problem.  Both the Computer History Museum and Replay state that in 1947 Turing became the first person to write a complete chess program, but it proved so complex that no existing computer possessed sufficient memory to run it.  While this account contains some truth, it does not appear to be fully accurate.

As recounted by Andrew Hodges in the definitive Turing biography Alan Turing: The Enigma (1983), Turing had begun fiddling around with chess as early as 1941, but he did not sketch out a complete program until later in the decade, when he and economist David Champernowne developed a set of routines they called Turochamp.  While it is likely that Turing and Champernowne were actively developing this program in 1947, Turing did not actually complete Turochamp until late 1948 after hearing about a rival chess-playing program called Machiavelli written by his colleagues Donald Michie and Shaun Wylie.  This is demonstrated by a letter Hodges reprinted in the book from September 1948 in which Turing directly states that he had never actually written out the complete chess program, but would be doing so shortly.  Copeland also gives a 1948 date for the completion of Turochamp in The Essential Turing.

This may technically make Machiavelli the first completed chess program, though Michie relates in Alan M. Turing (1959), a biography written by the subject’s own mother, that Machiavelli was inspired by the already-in-development Turochamp.  It is true that Turochamp — and presumably Machiavelli as well — never actually ran on a computer, but apparently Turing began implementing it on the Ferranti Mark 1 before his untimely death.  Donovan goes on to say that Turing tested out the program by playing the role of the computer himself in a single match in 1952 that the program lost, but Hodges records that the program played an earlier simulated game in 1948 against Champernowne’s wife, a chess novice, who lost to the program.

Programming a Computer for Playing Chess, by Claude Shannon (1950)

2-0 and 2-1.shannon_lasker.prior_1970.102645398.NEWBORN.lg

Claude Shannon (right) demonstrates a chess-playing automaton of his own design to chess champion Edward Lasker

While a fully working chess game would not arrive for another decade, key theoretical advances were made over 1949 and 1950 by another pioneer of computer science, Claude Shannon.  Shannon was keenly interested in the chess problem and actually built an “electric chess automaton” in 1949 — described in Vol. 12 No. 4 of the International Computer Chess Association (ICCA) Journal (1989) — that could handle six pieces and was used to test programming methods.

His critical contribution, however, was an article he wrote for Philosophical Magazine in 1950 entitled “Programming a computer for playing chess.” While Shannon’s paper did not actually outline a specific chess program, it was the first attempt to systematically identify some of the basic problems inherent in constructing such a program and proffered several solutions.  As Allen Newell, J.C. Shaw, and H.A. Simon relate in their chapter for the previously mentioned landmark AI anthology Computers and Thought, “Chess-Playing Programs and the Problem of Complexity,” Shannon was the first person to recognize that a chess game consists of a finite series of moves that will ultimately terminate in one of three states for a player: a win, a loss, or a draw.  As such, a game of chess can be viewed as a decision tree in which each node represents a specific board layout and each branch from that node represents a possible move.  By working backwards from the bottom of the tree, a player would know the best move to make at any given time.  This concept, called minimaxing in game theory, would conceivably allow a computer to play a perfect game of chess every time.

Of course, as we already discussed, chess may have a finite number of possible moves, but that number is still so large that no computer could conceivably work through every last move in time to actually play a game.  Shannon recognized this problem and proposed that a program should only track moves to a certain depth on the tree and then choose the best alternative under the circumstances, which would be determined by evaluating a series of static factors such as the value and mobility of pieces — weighted based on their importance in the decision-making process of actual expert chess players — and combining these values with a minimaxing procedure to pick a move.  The concept of evaluating the decision tree to a set depth and then using a combination of minimaxing and best value would inform all the significant chess programs that followed in the next decade.
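
Shannon’s scheme is easier to grasp with a small example.  The sketch below searches a toy game to a fixed depth, scores the resulting positions with a static value, and minimaxes those scores back up to choose a move.  The toy Position class is an invented stand-in so the routine can actually run; it is not chess, and the code is only an illustration of the principle rather than anything from Shannon’s paper.

# Depth-limited minimax over a toy game, so the routine can actually run.
# The Position class is an invented stand-in, not chess: players alternately
# add 1 or 2 to a running total capped at 10, and the static value is simply
# the total, which the first player tries to maximize.

class Position:
    def __init__(self, total=0):
        self.total = total
    def legal_moves(self):
        return [n for n in (1, 2) if self.total + n <= 10]
    def play(self, move):
        return Position(self.total + move)
    def static_value(self):
        return self.total             # chess would use material, mobility, etc.

def minimax(position, depth, maximizing):
    """Return the minimax value of a position searched to `depth` plies."""
    moves = position.legal_moves()
    if depth == 0 or not moves:
        return position.static_value()
    values = [minimax(position.play(m), depth - 1, not maximizing) for m in moves]
    return max(values) if maximizing else min(values)

def best_move(position, depth=2):
    """Pick the move whose resulting position minimaxes best for the mover."""
    return max(position.legal_moves(),
               key=lambda m: minimax(position.play(m), depth - 1, maximizing=False))

print(best_move(Position()))   # prints 2 under this toy scoring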

Partial Chess-Playing Programs (1951-1956)

Chapter_5-154

Paul Stein (seated) plays chess against a program written for the MANIAC computer

The complexities inherent in programming a working chess-playing AI that adhered to Shannon’s principles guaranteed it would be nearly another decade before a fully working chess program emerged, but in the meantime researchers were able to implement more limited chess programs by focusing on specific scenarios or by removing specific aspects of the game.  Dr. Dietrich Prinz, a follower of Turing who led the development of the Ferranti Mark 1, created the first such program to actually run on a computer.  According to Copeland and Diane Proudfoot in their online article Alan Turing: Father of the Modern Computer, Prinz’s program first ran in November 1951.  As the Computer History Museum explains, however, this program could not actually play a complete game of chess and instead merely solved the “mate-in-two problem,” that is, it could identify the best move to make when two moves away from a checkmate.

In The Video Game Explosion, Ahl recognizes a 1956 program written for the MANIAC I at the Los Alamos Atomic Energy Laboratory by James Kister, Paul Stein, Stanislaw Ulam, William Walden, and Mark Wells as the first chess-playing program, apparently missing the Prinz game.  Los Alamos had been at the forefront of digital computing almost from its inception, as the lab had used the ENIAC, one of the first Turing-complete digital computers, to perform calculations and run simulations for research relating to the atomic bomb.  As a result, Los Alamos personnel kept a close watch on advances in stored program computers in the late 1940s and early 1950s and decided to construct their own as they raced to complete the first thermonuclear weapon, colloquially known as a “hydrogen bomb.”  Designed by a team led by Nicholas Metropolis, the Mathematical Analyzer, Numerical Integrator, and Computer, or MANIAC, ran its first program in March 1952 and was put to a wide variety of physics experiments over the next five years.

While MANIAC was primarily used for weapons research, the scientists at Los Alamos implemented game programs on more than one occasion.  According to a brief memoir published by Jeremy Bernstein in 2012 in the London Review of Books, many of the Los Alamos scientists were drawn to the card tables of the casinos of nearby Las Vegas, Nevada.  Therefore, when they heard that four soldiers at the Aberdeen Proving Ground had published an article called “The Optimum Strategy in Blackjack” in the Journal of the American Statistical Association in 1956, they immediately created a program on the MANIAC to run tens of thousands of Blackjack hands to see if the strategy actually worked. (Note: Ahl and a small number of other sources allude to a Blackjack game being created at Los Alamos on an IBM 701 computer in 1954, but I have been unable to substantiate this claim in primary sources, leading me to wonder if these authors have confused some other experiment with the 1956 blackjack program on the MANIAC.)  Given that history, it is no surprise that scientists at the lab decided to create a chess program as well.

Unlike Prinz’s program, the MANIAC program could play a complete game of chess, but the programmers were only able to accomplish this feat by using a simplified 6×6 board without bishops.  The program did, however, implement Shannon’s system of calculating all possible moves over two levels of the decision tree and then using static factors and minimaxing to determine its next move.  Running on a machine capable of roughly 11,000 operations per second, the program played only three games and, according to Shaw, exhibited the skill of a human player with about twenty games’ worth of experience.  By the time Shaw’s article was published in 1961, the program apparently no longer existed; presumably it was lost when the original MANIAC was retired in favor of the MANIAC II in 1957.

The Bernstein Program (1957)

2-1.Bernstein-alex.1958.L02645391.IBM_ARCHIVES.lg

Alex Bernstein with his chess program in 1958

A complete chess-playing program finally emerged in 1957 from IBM, implemented by Alex Bernstein with the help of Michael de V. Roberts, Timothy Arbuckle, and Martin Belsky.  Like the MANIAC game, Bernstein’s program only examined two levels of moves, but rather than exploring every last possibility, his team programmed the computer to examine only the seven most plausible moves at each step, determined by running a series of what Shaw labels “plausible move generators” that identified the best moves based on specific goals such as king safety or prioritizing attack or defense.  After cycling through these generators, the program picked seven plausible continuations and then made a decision based on minimaxing and static factors just like the MANIAC program.  It did so far more efficiently, however, considering only about 2,500 of over 800,000 possible permutations.  The program ran on the faster IBM 704 computer, capable of roughly 42,000 operations per second, though according to Shaw the added complexity of using the full 8×8 board rendered much of this speed advantage moot, and the program still took about eight minutes to make a move compared to twelve for the MANIAC program.  According to Shaw, Bernstein’s program played at the level of a “passable amateur,” but exhibited surprising blind spots due to the limitations of its move analysis.  It apparently never defeated a human opponent.

The NSS Chess Program (1958)

2-3a.Carnegie_Mellon_University.Newell-Allen_Simon-Herbert.19XX.L062302007.CMU.lg

Herbert Simon (left) and Allen Newell (right), two-thirds of the team that created the NSS program

We end our examination of 1950s computer chess with the NSS chess program that emerged from Carnegie Mellon University (known at the time as the Carnegie Institute of Technology).  Allen Newell and Herbert Simon, professors at the university who consulted for the RAND Corporation, were keenly interested in AI and joined with a RAND employee named Cliff Shaw in 1955 to fashion a chess program of their own.  According to their essay in Computers and Thought, the trio actually abandoned the project within a year to focus on writing programs for discovering symbolic logic proofs, but subsequently returned to their chess work and completed the program in 1958 on the JOHNNIAC, a stored program computer built by the RAND Corporation and operational between 1953 and 1966.  According to an essay by Edward Feigenbaum called “What Hath Simon Wrought?” in the 1989 anthology Complex Information Processing: The Impact of Herbert A. Simon, Newell and Shaw handled most of the actual development work, while Simon immersed himself in the game of chess itself in order to imbue the program with as much chess knowledge as possible.

The resulting program, with a name derived from the authors’ initials, improved upon both the MANIAC and Bernstein programs. Like the Bernstein program, the NSS program used a combination of minimaxing, static values, and plausible move generators to determine the best move to make, but Newell, Simon, and Shaw added an important new wrinkle to the process through a “branch and bounds” method similar to the technique that later researchers termed “alpha-beta pruning.”  Using this method, the program maintained two values as it searched, alpha and beta, representing the best outcomes already guaranteed to each side in the branches explored so far, and it abandoned any branch whose value could no longer fall between those bounds.  In this way, the program was able to consider far fewer moves than previous minimaxing-based programs while discarding mostly poor lines of play rather than valuable ones.  While this still resulted in a program that played at an amateur level, the combination of minimaxing and alpha-beta pruning provided a solid base for computer scientists to carry chess research into the 1960s.
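
In modern terms, the “branch and bounds” idea is alpha-beta pruning layered on top of minimax.  The sketch below is a generic modern illustration of the technique, not the NSS program itself; it reuses the hypothetical position interface and evaluation function from the earlier minimax sketch.

```python
# A minimal sketch of alpha-beta pruning: alpha is the best score the maximizer
# is already guaranteed, beta the best the minimizer is guaranteed.  Once a
# branch cannot produce a value between the two bounds, it is abandoned
# ("pruned") without examining its remaining children.

def alphabeta(position, depth, alpha, beta, maximizing):
    if depth == 0 or position.game_over():
        return evaluate(position)
    if maximizing:
        value = float("-inf")
        for move in position.legal_moves():
            value = max(value, alphabeta(position.apply(move), depth - 1,
                                         alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:        # the minimizer will never allow this line
                break
        return value
    else:
        value = float("inf")
        for move in position.legal_moves():
            value = min(value, alphabeta(position.apply(move), depth - 1,
                                         alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:        # the maximizer will never allow this line
                break
        return value
```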

Tennis Anyone?

So now we turn to the most discussed of all the 1950s computer games: Tennis for Two, designed by Willy Higinbotham and largely built by Robert Dvorak at the Brookhaven National Laboratory (BNL) in 1958.  Unlike the games discussed previously, Tennis for Two was built specifically to entertain the public rather than just to demonstrate the power of a computer or train a group of students, giving it some claim as the first true computer “game” from a philosophical standpoint.  That is certainly the contention of BNL itself, which dismisses NIMROD and OXO as programming demonstrations rather than entertainment.  Ultimately, this debate matters little, as Tennis for Two only existed briefly and did not influence later developments in the industry.

While Tennis for Two did not inspire later designers, it did gain new notoriety in the 1970s when lawyers for arcade companies defending against a patent lawsuit brought by Magnavox discovered the existence of the game and unsuccessfully attempted to portray it as an example of prior art that invalidated Ralph Baer’s television gaming patents.  Higinbotham was called to testify on multiple occasions during various patent suits that continued into the 1980s, which is one reason the game is far better documented than most of its contemporaries.  The game also received public recognition after Creative Computing ran a feature devoted to it in October 1982 because the magazine’s editor, David Ahl, had actually played the game at Brookhaven back in 1958.  As a result of this article, Tennis for Two was considered the first computer game until more in-depth research in the late 2000s uncovered some of the earlier games listed in the previous post, and the early monographs such as Phoenix, High Score!, and The Ultimate History of Video Games accord the game pioneering status.  Even though newer works like Replay and All Your Base Are Belong to Us acknowledge earlier programs, they continue to give Tennis for Two pride of place in the early history of video games because it was arguably the first pure entertainment product created on a computer.

higinbotham-300px

William A. Higinbotham

Before diving into the game itself, we should examine the man who created it.  According to an unpublished account, now hosted at the BNL site, that he wrote in the early 1980s, supplemented by a deposition he gave in 1985, William A. Higinbotham graduated from Williams College in 1932 with a bachelor’s degree in physics and spent eight years working on a Ph.D. at Cornell that he ultimately abandoned due to a lack of money.  Higinbotham first worked with an oscilloscope during a senior honors project at Williams and spent his last six years at Cornell working as a technician in the physics department, which gave him the opportunity to learn a great deal about electronics.  As a result, he was invited to MIT in December 1940 to work on radar at the university’s Radiation Laboratory, where he concentrated on CRT radar displays.  In December 1943, Higinbotham transferred to Los Alamos to work on the Manhattan Project, where he was quickly promoted to lead the electronics division and, according to Replay, worked on timing circuits.  He left Los Alamos for Washington, DC, in December 1945, where he spent two years doing education and PR work for the Federation of American Scientists, a group that worked to stem nuclear proliferation.  In 1947, he came to BNL, where he became the head of instrumentation in 1951 or 1952.

The above provides a solid overview of Higinbotham the scientist, but Harold Goldberg in All Your Base Are Belong to Us also gives us a portrait of Higinbotham’s less serious side.  According to Goldberg, who drew his information from a profile in Parade, Willy was a natural entertainer who called square dances, played the accordion, and led a Dixieland band named the Isotope Stompers.  He also exhibited a penchant for making technology fun, once attaching a sulky and two wagons to the family lawnmower so he could drive his kids around the yard.  Seeing this side of the eminent physicist, it’s no surprise that he would find a way to make a computer entertaining as well.

hqdefault

The original Tennis for Two display

Higinbotham created Tennis for Two as a public relations vehicle.  Every year, BNL held three visitors’ days in the fall — one each for high school students, college students, and the general public — in which the scientists gave tours of the facilities and built exhibits related to the lab’s work in the staff gymnasium.  Most accounts of the exhibits emphasize that they consisted of unengaging static displays, but in his 1976 deposition for the first Magnavox patent lawsuit, Higinbotham states that the staff always tried to include something with “action,” though he does not specify whether this ever included games. Higinbotham may not have been the first person to liven up the event through audience participation, then, but he was certainly the first to set out to entertain the public with a computer game.

As the BNL website and his notes indicate, Higinbotham was inspired to create Tennis for Two after reading through the instruction manual for the lab’s Donner Model 30, a vacuum tube analog computer.  The manual described how the system could be hooked up to an oscilloscope to display curves to model a missile trajectory or a bouncing ball complete with an accurate simulation of gravity and wind resistance.  The bouncing ball reminded Higinbotham of tennis, so he sketched out a system to interface an oscilloscope with the computer and then gave the diagram to technician Robert Dvorak to implement.  Laying out the initial design only took Higinbotham a couple of hours, after which he spent a couple of days putting together a final spec based on the components available in the lab.  Dvorak then built the system over three weeks and spent a day or two debugging it with Higinbotham.  The game was largely driven by the vacuum tubes and relays that had defined electronics for decades, but in order to render graphics on the oscilloscope, which required rapidly switching between several different elements, Higinbotham and Dvorak incorporated transistors, which were just beginning to transform the electronics industry.

Tennis for Two‘s graphics consisted of a side-view image of a tennis court — rendered as a long horizontal line to represent the court itself and a small vertical line to represent the net — and a ball with a trajectory arc displayed on the oscilloscope.  Each player used a controller consisting of a knob and a button.  To start a volley, one player would use the knob to select an angle at which to hit the ball and then press the button.  At that point, the ball could either hit the net, hit the other side of the court, or sail out of bounds.  Once the ball made it over the net, the other player could hit it either on the fly or after the bounce by selecting his own angle and pressing the button to return it.  Originally, the velocity of the ball could be chosen by the player as well, but Higinbotham decided that three controls would make the game too complicated and therefore left the velocity fixed.
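
For readers curious about what the analog circuitry was actually computing, the core of the game is projectile motion under gravity with a damped bounce at the court line.  The sketch below is a rough digital illustration of that physics under assumed units and constants; it is not a reconstruction of Higinbotham’s circuit, which solved the equations continuously rather than in discrete time steps.

```python
# A rough digital sketch of the physics behind Tennis for Two: a ball launched
# at a player-chosen angle, pulled down by gravity, and bouncing with some
# energy loss when it reaches the court line.  All constants and units here
# are illustrative assumptions.

import math

GRAVITY = 9.8      # downward acceleration
DAMPING = 0.7      # fraction of vertical speed kept after a bounce
SPEED = 12.0       # fixed launch speed (the velocity knob Higinbotham dropped)
DT = 0.01          # simulation time step

def volley(x, y, angle_degrees, steps=2000):
    """Trace the arc of a shot hit from (x, y) at the chosen angle."""
    vx = SPEED * math.cos(math.radians(angle_degrees))
    vy = SPEED * math.sin(math.radians(angle_degrees))
    arc = []
    for _ in range(steps):
        vy -= GRAVITY * DT          # gravity bends the trajectory downward
        x += vx * DT
        y += vy * DT
        if y <= 0.0:                # ball meets the court: bounce with loss
            y = 0.0
            vy = -vy * DAMPING
        arc.append((x, y))          # points traced out on the display
    return arc
```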

800px-Tennis_for_Two_-_Modern_recreation

A modern recreation of the Tennis for Two controller

According to Higinbotham, Tennis for Two was a great success, with long lines of eager players quickly forming to play the game.  Based on this positive reception, Higinbotham brought the game back in 1959 on a larger monitor and with more sophisticated gravity modeling that allowed players to simulate the low gravity of the Moon or the high gravity environment of Jupiter.  After the second round of visitors’ days, the game was dismantled so its components could be put to other uses.  Higinbotham never patented the device because he felt at the time that he was just adapting the bouncing ball program already described in the manual and had created no real breakthrough.  While he appears to have been proud of creating the game, he stated in his notes that he considered it a “minor achievement” at best and wanted to be remembered as a scientist who fought the spread of nuclear weapons rather than as the inventor of a computer game.

The Priesthood at Play: Computer Games in the 1950s

And now with all the introductions and definitions out of the way, it is finally time to start talking history.  First, a note of caution.  This post on computer games in the 1950s will at times refer to this or that program as the “first” to model a particular game or mechanic, but this should be read as the first that we know of rather than as an absolute statement of origination.  While many interesting games from this time period have been unearthed — and in some cases even been recreated to play on modern hardware — the games of the 1950s were largely confined to research labs run by universities, large corporations, and national governments and were not intended for mass distribution and/or public consumption.  As a result, there is a high degree of likelihood that researchers created logic puzzles, board games, card games, military simulations, etc., that never received larger exposure and have long since been lost.  With that one caveat in mind, here is a look at the first decade of video gaming.

The first digital computers were completed in the early 1940s, but it would take nearly another decade for the first computer games to appear.  A subsequent blog post will summarize the evolution of the mainframe in the 1940s and 1950s in more detail, but for the moment I will just say that the early computers were extremely few in number, tended to be dedicated to highly specific functions, were difficult to reprogram (if they could be reprogrammed at all), and lacked the capability to execute a stored program.  Consequently, even if a researcher had felt a “simulation” or a “game” might have been useful in his work, there would have been little opportunity to create one.  By the 1950s, however, computers had been commercialized and had become sophisticated enough to be set to a variety of tasks.  As we shall see, one of those tasks was playing games.

The 1950s have largely been ignored by the monographs covering the history of video games.  Phoenix, High Score!, and The Ultimate History of Video Games all skip the decade entirely with the exception of brief mentions of Tennis for Two, while the more recent All Your Base Are Belong to Us devotes a rather substantial prologue to that game, but again ignores most of the developments of the period.  Only Tristan Donovan’s Replay and Mark Wolf’s The Video Game Explosion — through a chapter by David Ahl — cover the 1950s in any depth.

This omission is understandable for two reasons.  First, the authors of these books are primarily interested in the growth of the video game as an entertainment product and pop culture phenomenon, while the majority of the programs from this period were research projects that were not made available to the general public.  Tennis for Two actually served as a public spectacle, making it a more suitable topic for such a narrative.  Second, because these projects stayed locked up in research labs and were usually dismantled or discarded once they had served their purpose, these early games rarely spread beyond a few academics and therefore exerted little to no influence on subsequent games.  Between them, however, Donovan and Ahl identify several games from this period, which I will examine here.

Bertie the Brain (1950)

tumblr_mihcxqe5qr1qcma3zo1_1280

Entertainer Danny Kaye celebrates a victory over Bertie the Brain at the Canadian National Exhibition in 1950

For the moment, Bertie the Brain, a custom-built computer financed by Rogers Majestic (a prominent vacuum tube manufacturer and one of the forerunners of Canadian media giant Rogers Communications), is the earliest known computer game actually implemented — although a small number of game programs may have been described and/or written earlier, most notably several early chess programs.  (Note: The development of computer chess over the 1950s is a long enough tale on its own and will therefore be covered in its own blog post.)  Unlike most subsequent games in this post, Bertie was not a research project, but was intended specifically to impress the public as to the potential of computers.

Bertie grew out of a project at the University of Toronto to build an early mainframe computer, the University of Toronto Electronic Computer, or UTEC.  One of the earliest university-led electronic computer projects, UTEC design commenced in 1948, and a prototype was completed in 1950.  The project was ultimately abandoned, however, when the university decided to purchase a computer from British firm Ferranti — a pioneering computer maker described in more detail below — instead.

One critical member of the UTEC team was a University of Toronto Ph.D. candidate named Josef Kates.  According to a profile on Bertie written by Chris Bateman for Canadian magazine Spacing in August 2014, Kates was born into a large Austrian Jewish family in 1921, but fled Nazi persecution in 1938, ultimately winding up in the United Kingdom.  Kates enlisted in the British Army as an optician’s apprentice, but on the outbreak of World War II he was shipped to an internment camp in Canada due to his nationality.  Finishing high school in New Brunswick, Kates eked out a living cutting wood, sewing socks, and repairing fishing nets until his previous lens experience landed him a job with Imperial Optical in 1941.  He soon moved to Rogers Majestic to build radar tubes before joining the UTEC project after the war.

One of Kates’s key contributions to the project was a device he devised in 1949 called the Additron, an electron tube that was smaller, less complex, and less power-hungry than the typical vacuum tubes of the day.  The Additron ultimately never found its way into the UTEC, nor did it enter mass production, as by the time a long and difficult patent process concluded in 1956, the technology was already obsolete.  Rogers wanted to promote the technology during this delay, however, so Kates proposed creating a tic-tac-toe game using Additron tubes for display at the Canadian National Exhibition (CNE).

Unveiled at the CNE in the summer of 1950, Kates’s machine, dubbed “Bertie the Brain,” stood just over thirteen feet high and consisted of a custom-built computer, a lighted keypad for the player to input his move, and a lighted panel that showed the layout of the board and announced the winner.  Once the player entered his move, Bertie countered nearly instantaneously and was virtually unbeatable, at least on the highest difficulty.  Kates would often turn down the difficulty for children and then crank it back up for adults.  The machine proved immensely popular, but it was dismantled at the end of the exhibition after serving its purpose.

Bertie the Brain is not discussed in any video game history monograph written to date, and why it has been overlooked is not exactly clear.  It was heavily publicized at the time, with Life Magazine even running a feature on Bertie that involved famed entertainer Danny Kaye challenging the machine and finally winning after the difficulty had been turned down multiple times.  Kates himself went on to a long and distinguished career in the Canadian scientific community, and at least one newspaper article written in 1975 to discuss his appointment to a new post referenced the creation of the game.  A 2001 monograph by John Vardalas entitled The Computer Revolution in Canada: Building National Technological Competence also devotes a paragraph to the game.  The recent article in Spacing has brought new attention to the game and its inventor, however, so this omission will hopefully be corrected in the future.

NIMROD (1951)

nimrod_1

An artist’s rendering of NIMROD as it would have appeared at the Berlin Industrial Show

With Bertie so far overlooked by the various monographs written about video game history, Donovan and Ahl both identify the NIMROD, a custom-built computer from British engineering firm Ferranti, as the first computer game.  Like Bertie, NIMROD was built to impress upon the public the great potential of computers.

Ferranti has the distinction of being one of the oldest companies to become involved in the nascent computer industry after World War II.  According to a timeline made available by the Museum of Science and Industry in Manchester, Siemens engineer Sebastian de Ferranti established the company in 1882 to market a dynamo (an early electrical generator) of his own design.  By the early twentieth century, Ferranti had become one of the most important power companies in the United Kingdom and was instrumental in the establishment of a national power grid.  Defense work during and after World War II in fields ranging from gun sights to radar to guided missiles led naturally into first electronics and then computers, culminating in the launch of the Ferranti Mark 1 in 1951.  While the UNIVAC I from Remington Rand is often identified as the first commercially available computer (see, for example, About.com), the first UNIVAC I was delivered in June 1951 (Source: University of Pennsylvania, though some sources claim the commercial deal was struck as early as March 31), while the first Ferranti Mark 1 was delivered to the University of Manchester in February (Source: University of Manchester School of Computer Science), making the Ferranti machine the first commercially available computer by a matter of months.  (If we want to get really technical, the BINAC delivered to Northrop in 1949 was the first commercially sold computer, but it was a one-off, unlike the Ferranti and UNIVAC computers that were produced in quantity and made available to any interested party.)

According to Donovan, who apparently drew most of his material from the personal website of one Peter Goodeve, Ferranti found itself in a difficult position in late 1950: the company had promised to exhibit a computer at the upcoming Festival of Britain but proved unable to honor the commitment.  The Festival had been conceived by deputy prime minister Herbert Morrison as a “tonic for the nation,” a demonstration to the people of the United Kingdom that British art, technology, and ingenuity would heal the gaping wounds left by World War II and help lead the world into a better tomorrow, so this was an important public relations event for Ferranti.  An Australian engineer named John Bennett, who had worked on the pioneering EDSAC computer in the late 1940s, ultimately provided a solution: Ferranti should demonstrate the mathematical capabilities of the modern computer and the fundamentals of computer programming to the public by displaying a custom-built machine that played the strategy game nim.  According to Donovan, Bennett was inspired by the Nimatron, an electro-mechanical contraption displayed by Westinghouse at the 1940 World’s Fair.  Donovan appears to have drawn this claim from a 2001 German article linked at Mr. Goodeve’s website; while this is a logical claim for the article to make, its author does not provide any proof from primary sources, and Bennett himself appears to be silent on the point, at least in this excerpt from his autobiography.

Again from Donovan and Goodeve, the NIMROD was built by Raymond Stuart-Williams between December 1950 and April 1951 and first exhibited at the festival on May 5, 1951.  After the festival ended, the NIMROD was displayed for three weeks in October at the Berlin Industrial Show and subsequently dismantled.  Lacking a monitor, the NIMROD used a series of lights as a display, which represented the individual pegs of the game. The human player chose which “pegs” to remove by pressing the corresponding buttons on a control panel situated in front of the machine. While NIMROD gave the public one of its first opportunities to play a game on and against a computer, Bennett was, in his own words, more interested in demonstrating programming algorithms and principles than in entertaining anyone.  As such, neither Bennett nor Ferranti followed up on this ground-breaking machine.
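
Nim is one of the few games with a complete, easily computed winning strategy, which is part of what made it attractive for demonstrating programming principles: in the standard last-object-wins form of the game, the player to move is in a losing position exactly when the binary XOR of the heap sizes (the “nim-sum”) is zero, and otherwise there is always a move that makes it zero.  The sketch below shows that strategy in modern code; it is an illustration of the mathematics, not of NIMROD’s lamp-and-button hardware.

```python
# A sketch of the optimal strategy for standard (last-object-wins) nim:
# compute the XOR ("nim-sum") of all heap sizes and move so that it becomes
# zero, leaving the opponent in a losing position whenever possible.
from functools import reduce
from operator import xor

def best_nim_move(heaps):
    """Return (heap_index, new_size), or a minimal fallback move if losing."""
    nim_sum = reduce(xor, heaps)
    if nim_sum == 0:
        # Every move loses against perfect play; take one object from any heap.
        i = next(i for i, h in enumerate(heaps) if h > 0)
        return i, heaps[i] - 1
    for i, h in enumerate(heaps):
        target = h ^ nim_sum        # heap size that zeroes the overall nim-sum
        if target < h:
            return i, target
    raise ValueError("no legal move")  # unreachable for a non-empty position

# Example: from heaps of 3, 4, and 5 the winning reply shrinks the first heap
# to 1, after which the nim-sum of (1, 4, 5) is zero.
print(best_nim_move([3, 4, 5]))     # -> (0, 1)
```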

Checkers Programs (1951-1952)

res60f

Christopher Strachey’s Draughts Program on the Ferranti Mark 1

Although not mentioned by Donovan or Ahl, it appears that a checkers program created by Englishman Christopher Strachey may well be the first computer game executed in software to run on a Turing-complete computer — in contrast to the custom hardware of Bertie and NIMROD. The program’s significance extends far beyond the realm of computer games, however, as it may also have been the first program of any kind to exhibit artificial intelligence (AI). According to a biography hosted by the IEEE Computer Society, Strachey was a mathematician and physicist serving as a master at Harrow School in 1951 when he was introduced to the Pilot ACE computer at Britain’s National Physical Laboratory (NPL).  As a programming exercise to familiarize himself with the machine, Strachey created a draughts game (checkers to Americans), taking inspiration from an article in the June 1950 issue of Penguin Science News written by NPL physicist Donald Davies called “A Theory of Chess and Naughts and Crosses.”

In The Essential Turing (2004), a collection of Alan Turing’s papers compiled by B.J. Copeland, Copeland writes that Strachey completed a preliminary version of his program by May 1951 and first tried to run it on the Pilot ACE in July, but was unsuccessful due to program errors.  According to the IEEE Paper, that same month Strachey traveled to the University of Manchester to see the first Ferranti Mark 1 and consult with Turing, who had recently completed a Programmers’ Handbook for the machine.  According to Copeland, Turing’s encouragement was crucial in Strachey finally getting the draughts program in working order on the Mark 1.  According to an article by David Link entitled “Programming ENTER: Christopher Strachey’s Draughts Program” that appeared in issue 60 of Resurrection, the official publication of the Computer Conservation Society, the game was finally completed in July 1952.  That same year, Strachey described his game at a computer conference in Toronto, which, according to Copeland, directly inspired a programmer named Arthur Samuel to create his own version.

Samuel

Arthur Samuel playing with his checkers program

An electrical engineer with a master’s degree from the Massachusetts Institute of Technology (MIT), Arthur Samuel was one of the more important pioneers of AI research in the United States.  According to an article penned by John McCarthy and hosted by the Stanford Artificial Intelligence Laboratory, Samuel experienced his first brush with computing at the relatively early date of 1946 when he joined a team at the University of Illinois that began an ultimately unsuccessful project to build an electronic computer.  According to McCarthy, it was during this project that Samuel first conceived of writing a checkers program that could defeat a champion player of the game to show just how powerful a tool a computer could be. In 1949, Samuel accepted a job at IBM’s Poughkeepsie Laboratory, where he was part of the team that designed the landmark IBM 701 computer.  He remained at the company until retiring from the corporate world in 1966.

According to Copeland, it was Strachey’s presentation in 1952 that rekindled Samuel’s interest in devising a checkers program, and indeed it was near the end of that year that Samuel completed an initial version on the 701, which Copeland surmises was the first AI program created in the United States.  As related by both Copeland and Donovan, Samuel continued to refine this program over the next several years and accomplished a major milestone in 1955 when the program became capable of analyzing its own play and learning from its mistakes.  As the program continued to gain notoriety, it was actually demonstrated on national television on February 24, 1956 (Source: IBM100, a centennial website created by IBM).  According to McCarthy, IBM stock rose 15 points on the back of this demonstration.
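
Broadly speaking, one of Samuel’s techniques amounted to adjusting the weights of the features in a scoring polynomial whenever the static evaluation of a position disagreed with the value obtained by deeper search and subsequent play.  The sketch below is a heavily simplified, modern illustration of that general idea, not Samuel’s actual procedure; the feature names, the update rule, and the learning rate are all assumptions.

```python
# A heavily simplified illustration of learning an evaluation function from the
# program's own play: compare the static score of a position with the value
# backed up from deeper search, then nudge the feature weights to shrink the
# difference.  Feature names and the learning rate are illustrative assumptions.

LEARNING_RATE = 0.01

def static_score(features, weights):
    """Linear evaluation: a weighted sum of board features (material, mobility, ...)."""
    return sum(weights[name] * value for name, value in features.items())

def update_weights(features, weights, searched_value):
    """Move the static evaluation toward the deeper search's verdict."""
    error = searched_value - static_score(features, weights)
    for name, value in features.items():
        weights[name] += LEARNING_RATE * error * value
    return weights
```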

In Replay, Donovan claims that by 1961 Samuel’s program was “defeating US Checkers champions,” but this claim is exaggerated. According to McCarthy, in 1961 Samuel answered a call for submissions to the first anthology devoted to AI research, Computers and Thought, with a paper describing his checkers program.  The editors of the collection, Ed Feigenbaum and Julian Feldman, suggested that Samuel include an account of his program’s best game as part of the article, so Samuel decided to challenge a champion player to a match.  McCarthy states that the chosen player was “the Connecticut state checker champion, the number four ranked player in the nation,” but alas this also appears to be an exaggeration.  The IBM 100 website identifies the player, one Robert W. Nealey, as a “self-proclaimed checkers champion.”  Further digging shows that Mr. Nealey was indeed a checkers player from Connecticut, but his champion status derived from a tournament for blind checkers players held in Peoria, Illinois, in which he claimed the title of “world blind checker champion” by default when no one else showed up to compete (Source: St. Petersburg Times, February 26, 1980).  Therefore, while the program, now running on an IBM 7094, did win its match against Nealey in 1962, this was not as impressive a victory as Donovan and McCarthy make it sound.  Indeed, according to a webpage devoted to Samuel’s program maintained by the Department of Computing Science at the University of Alberta, Nealey actually won a rematch the very next year, while the program later lost eight out of eight games against two actual world-class checkers players at a world championship match held in 1966.

Early Simulation Games (1952-1958)

RAND SPT

The Air Defense Direction Center replica constructed by RAND Corporation for “Project Simulator”

Unlike the purely civilian research projects above, computer games simulating complex systems and interactions originated almost entirely within the military-industrial complex, which already had a long history of war gaming and systems simulation that predated computers.  As this blog is focused on games created for entertainment rather than training, I will largely avoid discussing military and defense contractor projects, but I feel an overview of the earliest such games is appropriate as part of tracing the origins of computer gaming generally.

In The Video Game Explosion, Ahl identifies the earliest military simulations as coming out of the RAND Corporation beginning in 1952.  According to the company’s own website, Project RAND — short for “Research ANd Development” — originated as an outgrowth of advanced weapons research conducted during World War II and began in December 1945 as a collaboration between the Douglas Aircraft Company and the United States Army Air Forces (USAAF).  As explained in RAND and the Information Evolution (2008) by Willis Ware, USAAF commander General “Hap” Arnold recognized that some of the most important military advances of World War II had been the result of collaboration between the military, academia, and industry, and therefore felt a joint project like RAND that was not completely under military control was essential to carrying this spirit of collaboration forward. In 1948, the newly established US Air Force chose to spin the project out as its own non-profit corporation.

RAND employed mathematicians and scientists across a wide array of disciplines and contributed numerous breakthroughs in fields ranging from artificial intelligence to networking to space travel, but for now we turn our attention to the organization’s Systems Research Laboratory.  As related in a RAND paper from October 1956 by F.N. Marzocco entitled The Story of SDD, the Systems Research Laboratory was the brainchild of RAND consultant John Kennedy, a psychologist who believed that RAND could not fully understand the impact of technology on the modern battlefield without also studying the “human factors” present in any interaction between man and machine.  At Kennedy’s recommendation, RAND established the laboratory in May 1951 to undertake such studies, staffing it with a mix of psychologists and mathematicians.

At the time of the lab’s foundation, RAND’s Electronics Division was expending a great deal of energy on the Air Force’s Air Defense System, a network of radar stations that would feed data on incoming aircraft to a series of Air Defense Direction Centers (ADDCs) where personnel would compile data, evaluate threats, and scramble interceptors as necessary to protect the integrity of US air space.  With the Cold War entering its nuclear phase, perfecting the country’s Air Defense System was vital to ensure long-range Soviet bombers could be intercepted before delivering a nuclear payload on American soil.  According to Ware, Kennedy and his colleagues, most notably Robert Chapman, William Biel, Bogusław Boghosian, and Milton Weiner, were particularly interested in cognitive learning and how best to train organized groups that were required to coordinate their activities to carry out a larger task.  As such, they were naturally drawn to the ADDCs and, with the encouragement of Melvin Kappler of the Electronics Division, chose as their first major study “Project Simulator,” an ADDC computer training program.

According to Marzocco, the Systems Research Laboratory constructed a nearly exact physical replica of the ADDC located in Tacoma, Washington, along with partial replicas of three associated early warning stations. Ware relates that the system was constructed in a warehouse at 4th and Broadway in Santa Monica and was based around an IBM 701 that would run a simulation of incoming aircraft that the trainees had to identify, pinpoint, and interdict.  The display consisted of a faux radar screen drawn by an IBM 407 printer that produced a new paper readout each time the radar changed.  Marzocco tells us that the first exercise using the system, code-named “Casey,” ran from February 4 to June 8, 1952, and involved twenty-eight students from UCLA.  A second run, code-named “Cowboy,” followed in early 1953 with actual military personnel.

According to Ware, the Air Force was so pleased by the initial results of Project Simulator that it decided to deploy the system across the service and funded a variety of improvements such as a high resolution camera from Mitchell Camera Company and a cathode ray tube (CRT) display for the 701 from IBM so a film strip could replace the paper readouts of the original system.  According to Marzocco, RAND responded to the growing scope of the project by establishing a new System Training and Programming Division (quickly shortened to System Development Division) under Kappler in September 1955 to deploy the training system, which now went by the name “System Training Project” (STP).  By May 1956, the division had installed the STP at seven Air Divisions.  In December 1957, the division was spun off as its own corporation, System Development Corporation (SDC).  While the STP appears to be the first military simulation run primarily by a computer, it cannot really be classified as a complete computer game, as the 701 merely plays the film and traces flight paths over a two-hour training session.  A human team was apparently still required to actually administer the exercise and interpret the results.

The Army, meanwhile, followed the lead of the Air Force in establishing a research think tank in 1948, the Operations Research Office (ORO), which operated as an adjunct to Johns Hopkins University.  Researchers at ORO created what may have been the first true computer war game, Hutspiel.  According to a 1964 technical report by Joseph Harrison, Jr. entitled Computer-Aided Information Systems for Gaming, Hutspiel was a 1955 theater-level war game written for an analog computer called the Goodyear Electronic Differential Analyzer (GEDA) intended to study the use of tactical nuclear weapons and conventional air support in Western Europe in the event of a Soviet invasion.  In this game, which built on previous GEDA simulations devoted to exercises such as allocating artillery and missile fire among competing targets, one player would control NATO forces in France, Belgium, and West Germany, while the other player would control a Soviet force attempting to penetrate the region across a frontage of roughly 150 miles.  At the start of the game, each player would allocate his forces across the sectors he controlled and choose targets for his planes and nukes, which could consist of enemy troops, airfields, supply depots, and transportation facilities.  GEDA would then determine the results.  In the original version of the game, the simulation would continue without human intervention until a player paused to issue new orders, but in subsequent versions the game was divided into turns of fixed time increments.  While the game modeled both reinforcement and resupply, it did not model troop movement other than by rail, nor did it account for weather and terrain.  By 1964, the Research Analysis Corporation, an organization founded to continue the work of the ORO after the Army terminated its contract with Johns Hopkins in 1961, was working on a more complex version of Hutspiel called Theaterspiel that ran on an IBM 7094 computer. (Note: In The Video Game Explosion, Ahl incorrectly attributes Hutspiel to the Research Analysis Corporation, which did not yet exist in 1955.)

Military simulation projects also led directly to the first business simulation game.  In the early 1950s, RAND Corporation developed a pen-and-paper simulation called MONOPLOGS in which the players would learn the principles of logistics by running a portion of the Air Force supply system.  In October 1956, the company inaugurated a new Logistics Systems Laboratory to create computer simulations to train logistics personnel, the first of which was LP-I in 1957.  Meanwhile, according to an article entitled “U.S. Wargaming Grows Up” by Sharon Ghamari-Tabrizi, MONOPLOGS so impressed the American Management Association (AMA) that it assembled a team in 1956 that included consultants from both RAND and IBM to create a business management simulation called The Top Management Decision Simulation, which was programmed on an IBM 650 and delivered in May 1957. As with military simulations, the primary purpose of the early business simulations was training.  As such, these games quickly spread to business schools, as evidenced by The Management Game, a 1958 program devised by Kalman Cohen, Richard Cyert, and William Dill at the Carnegie Institute of Technology in Pittsburgh — renamed Carnegie Mellon University in 1967 after a merger with the Mellon Institute of Industrial Research — that Ahl incorrectly identifies as the first business simulation (though it may very well have been the first one implemented in the classroom, since the AMA game was targeted at current executives).  According to Ahl, this game is still used in business schools to this day — though as the CMU website points out, it received a major overhaul in 1986 — and runs over the course of two semesters as the player takes control of one of three competing detergent companies and makes decisions over a three-year period regarding everything from R&D to marketing.  According to a 1962 article by Paul Greenlaw, Lowell Herron, and Richard Rawdon entitled “Business Simulation in Industrial and University Education,” there were already at least eighty-nine business simulations in use by the end of 1961.

Early Graphical Games (1952-1954)

tictactoe

A recreation of the OXO display from a software emulator

To this point, I have examined several programs that first delivered entertainment and/or artificial intelligence to the world of mainframe computers, but little that puts the “video” in video game.  This is because the computers of the 1950s were largely machines of punched cards, paper tape, and printed results, not display screens and CRTs.  Perhaps the first computer game to render images on a display was OXO by Alexander Douglas, which he deployed in 1952.  (NOTE: I have been unable to locate a source that gives the month in 1952 that Douglas implemented his program, so it is possible that the aforementioned draughts game by Strachey, completed in July, actually came first.)

While a doctoral candidate in mathematics at the University of Cambridge in 1952, Douglas decided to develop a thesis concerning human-computer interaction. (NOTE: Some sources claim that this research formed the basis of his dissertation, but a quick look at the catalog of the Newton Library at Cambridge shows this is not the case.  The program was, however, described in the dissertation, which is why we have such complete knowledge of it today.)  Needing a platform to test the theories of his thesis, Douglas chose to program a noughts and crosses (tic-tac-toe to Americans) game for the EDSAC computer at the University of Cambridge.  As will be discussed later, the EDSAC represented a landmark in computer history that pioneered several innovations.  For the purposes of this post, however, the most important advance was the machine’s ability to display a map of its memory on a CRT as a 35 x 16 dot matrix. (Source: “A Tutorial Guide to the EDSAC Simulator” by Martin Campbell-Kelly)  This feature was primarily used to help debug programs, but it could also be used to create images on the monitor by manipulating specific dots on the grid.  Douglas took full advantage of this capability for OXO.  To play the game, one used a rotary phone dial connected to the computer.  Each space on the tic-tac-toe grid corresponded to one of the numbers on the dial, so the player simply dialed that number to make his mark.
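
To give a sense of how modest the display machinery was, the sketch below maps a dialed digit to a cell of the board and plots each mark as a dot on a small monochrome matrix standing in for the EDSAC’s 35 x 16 display.  It is a modern illustration under assumed cell positions and glyphs, not Douglas’s EDSAC code.

```python
# A modern sketch of the heart of OXO: a rotary-dial digit (1-9) selects a cell
# of the noughts-and-crosses board, and the board is redrawn as dots on a small
# matrix standing in for the EDSAC's 35 x 16 CRT display.  The cell layout and
# glyphs are illustrative assumptions.

WIDTH, HEIGHT = 35, 16

def dial_to_cell(digit):
    """Map a dialed digit 1-9 to (row, column) on the 3 x 3 board."""
    return (digit - 1) // 3, (digit - 1) % 3

def render(board):
    """Return the display as text, one character per dot of the matrix."""
    screen = [[" "] * WIDTH for _ in range(HEIGHT)]
    for row in range(3):
        for col in range(3):
            mark = board[row][col]
            if mark != " ":
                # Plot each mark as a single dot near the middle of its cell.
                screen[row * 5 + 2][col * 11 + 5] = mark
    return "\n".join("".join(line) for line in screen)

board = [[" "] * 3 for _ in range(3)]
row, col = dial_to_cell(5)          # the player dials "5" for the center cell
board[row][col] = "X"
print(render(board))
```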

midsac-pool

A pool game created at the University of Michigan’s Willow Run facility on the MIDSAC computer

Both OXO and Strachey’s draughts program sported relatively static graphics to convey the current state of the game board, but a more dynamic graphical game soon emerged from the University of Michigan.  One of the lesser known hotspots for computer research in the 1950s, the university developed several analog and digital computers at an offsite laboratory located at the Willow Run manufacturing complex.  Established by the Ford Motor Company in 1941, Willow Run initially built aircraft components before producing roughly half of the B-24 Liberator bombers that flew in World War II.  After the war, an airfield built as part of the complex passed to civilian control, and the University of Michigan established a research facility there.  This lab became involved in computer research in aid of defense projects ranging from air traffic control systems to the BOMARC (Boeing-Michigan Air Research Center) guided missile.

According to a pamphlet by Norman Scott entitled “Computing at the University of Michigan: The Early Years Through 1960,” Willow Run built two digital computers in 1952, the Michigan Digital Automatic Computer (MIDAC) and the more advanced Michigan Digital Special Automatic Computer (MIDSAC).  According to Scott, both computers were built by an engineer named John DeTurk and derived from the Standards Eastern Automatic Computer (SEAC) built by the National Bureau of Standards in 1950.  According to the June 27, 1954, edition of the Chicago Tribune, the Willow Run facility publicly debuted the two computers on June 26, 1954, and both were programmed to play games for the occasion.  The MIDAC hosted a craps game that declared its “box point” and then rolled simulated dice until it won or lost, with the results printed on an automatic typewriter attached to the machine.  MIDAC also hosted a tic-tac-toe game that pitted a human against a hardware-controlled opponent.  If a player attempted to cheat by placing multiple symbols on his turn, the computer would call him out for it.

Unlike MIDAC, MIDSAC was hooked up to a 13-inch CRT display, allowing it to host a far more impressive game.  According to a deposition given by a research associate at the lab named William Brown, he and a colleague named Ted Lewis were approached by DeTurk in early 1954 to create a demonstration program for the forthcoming event and suggested a pool game because they were both avid players and felt some form of game would be particularly interesting for the audience.  Developed over the course of roughly six months, the program simulated a standard pool table and a full rack of fifteen balls, which two players would take shots at by controlling a two-inch cue stick.  The controls consisted of a joystick, which moved the cue stick around the table, a knob, which rotated the cue stick to choose the angle of the shot, and a button to actually strike the cue ball.  The computer then performed some 25,000 operations per second to determine the speed, trajectory, and bounce of every ball as the balls collided with one another and the sides of the table.  Any ball that entered a pocket would disappear.  Due to limited processing power, the sides of the table and the pockets were not actually displayed on the CRT, but were instead drawn in grease pencil on a transparent overlay.  According to both the Chicago Tribune article and Brown’s deposition, the graphics updated seamlessly and gave the illusion of continuous movement, making the MIDSAC pool game perhaps the first computer game to feature real-time graphics.  Despite its pioneering features, however, this game has yet to appear in any monograph of video game history.
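
The collision handling at the heart of such a simulation reduces to standard equal-mass elastic collisions: when two balls meet, they exchange the components of their velocities along the line joining their centers, while the perpendicular components are unchanged.  The sketch below shows that calculation in modern code as a physics illustration; it is not the MIDSAC program, and the function names are assumptions.

```python
# A sketch of the core physics of a pool simulation: two equal-mass balls
# colliding elastically swap the components of their velocities along the line
# connecting their centers, while the perpendicular components are unchanged.
import math

def collide(pos_a, vel_a, pos_b, vel_b):
    """Return the post-collision velocities of two equal-mass balls."""
    dx, dy = pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]
    dist = math.hypot(dx, dy)
    nx, ny = dx / dist, dy / dist            # unit vector between centers
    # Speed of each ball along the line of centers before impact.
    a_along = vel_a[0] * nx + vel_a[1] * ny
    b_along = vel_b[0] * nx + vel_b[1] * ny
    # Equal masses: the balls simply swap those components.
    swap = b_along - a_along
    new_vel_a = (vel_a[0] + swap * nx, vel_a[1] + swap * ny)
    new_vel_b = (vel_b[0] - swap * nx, vel_b[1] - swap * ny)
    return new_vel_a, new_vel_b

# Example: a moving cue ball strikes a stationary ball dead on and stops,
# transferring all of its speed to the object ball.
print(collide((0.0, 0.0), (1.0, 0.0), (1.0, 0.0), (0.0, 0.0)))
```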

In his autobiography, Memoirs of a Computer Pioneer (1985), EDSAC designer Maurice Wilkes briefly describes another early CRT game created on his pioneering machine by an enterprising programmer.  The program manipulated the display to create a vertical fence with a single hole in it that the player could position on either the top half or the bottom half of the screen.  Periodically, a horizontal row of dots would appear and attempt to pass through the fence.  If the hole was on the same half of the screen as the dots, they would pass through; otherwise they would retreat.  Initially, the movement of the dots was random, but the program was capable of learning: if the player moved the hole according to a consistent pattern, the row of dots would eventually figure out the sequence and appear in the correct place every time.  Wilkes provides no details on when this program was implemented or who designed it, but he is quick to point out that “no one took this program very seriously.”  This once again reinforces the idea that 1950s computer researchers like John Bennett and Alexander Douglas might have occasionally found games useful to prove a point, but had no real interest in harnessing computers purely for entertainment.  Indeed, there is only one known computer game created in the entire decade that was designed specifically to entertain the public rather than test or demonstrate computer theories or train students and personnel, and it will be the subject of my next post.

By Any Other Name

So, in my first two posts, I have explained my goals, my methodologies, and my sources of information, so that only leaves one final introductory matter: What exactly are these “video game” things I say I will be writing about?  In today’s world, where over one billion people are estimated to be playing video games (Source: PC Gaming Alliance Research), this may seem like a needless exercise, but the truth is the term “video game” is too often thrown around with wild abandon without a clear idea of what the term actually means and where it comes from.  Therefore, in the next few paragraphs, I will take a look at a few attempts to define the term and try to piece together a workable definition for this blog.

First, a historical note on the origin of the term “video game.”  From a legal standpoint, the concept of a “video game” first manifested in Ralph Baer’s 1972 patent for a “Television Gaming Apparatus” (U.S. Patent 3,659,285) and his 1973 patent for a “Television Gaming and Training Apparatus” (U.S. Patent 3,728,480).  While the term “video game” does not appear in either of these patents, they set out a basic game system in which a control unit is attached to a television receiver and then generates a video signal to create symbols on the TV screen.  In the first landmark patent case in video game law, The Magnavox Co., et al. v. Chicago Dynamic Industries, et al., 201 U.S.P.Q. 25 (N.D. Ill. 1977), Judge Grady named the ‘480 patent the “pioneer patent” in the field, making the Magnavox Odyssey technology the progenitor of video gaming in the home from a legal standpoint. (Source: Patent Arcade blog post)

Technically, the “video” in “video game” is derived from the idea of manipulating a video signal as described in the ‘480 patent.  By this narrow definition, a true video game would be one in which “electronic signals are converted to images on a screen using a raster pattern, a series of horizontal lines composed of individual pixels.” (Source: Brookhaven National Laboratory History of Tennis for Two) This narrow definition would eliminate any game that uses a teletype, oscilloscope, vector monitor, LCD screen, plasma display, etc., since they do not make use of a video signal.  From a technical standpoint, these games would be more properly characterized as “computer games” or “electronic games” rather than video games.  Popular sources aimed at the layman have almost never bothered with this technical distinction, however, so it serves more as an intellectual curiosity than a workable modern definition of the term.

Early video games were called by a variety of names before that term became well established.  In the arcades, for instance, it was common to refer to the products as “TV games,” highlighting the main feature that set these games apart from earlier coin-operated amusement products. (Source: Replay by Tristan Donovan, 2010)  Perhaps the earliest reference to a “video game” appeared in the March 17, 1973 issue of Cash Box magazine, which used the term “video game” in a headline, though it appears to be an abbreviation in this case for the longer “video skill game” used in the article body. (Source: All in Color for a Quarter blog)  By late 1974, it appears the term had gained at least some acceptance (see, for example, the September 17, 1974 edition of the Lakeland Ledger).  By the late 1970s, the term had become standard. (Source: Replay)

As one would expect for such a new concept that is still evolving rapidly, there is no clear consensus yet in authoritative sources as to what a video game actually is.  Let’s start with two gold standards for English-language knowledge: the Oxford English Dictionary and Encyclopedia Britannica. The Oxford English Dictionary defines a video game thus:

a game played by electronically manipulating images produced by a computer program on a television screen or other display screen. (http://www.oxforddictionaries.com/us/definition/american_english/video-game)

Right away, two elements stand out as problematic: a computer program (i.e. software) needs to be involved, and the results need to be rendered on a screen. This definition is largely workable for today’s games, but it would actually exclude most of the important progenitors of the industry.  Both the Magnavox Odyssey and the Syzygy/Atari Computer Space and PONG units were created entirely through hardware designed to control a CRT to generate and move dots on a screen.  Arcade games did not start incorporating software until 1975, and it was not until 1978 that software began to displace Transistor-Transistor Logic (TTL) circuits entirely.  In the home, dedicated hardware was not complemented by programmability until 1976 and not displaced fully until a couple of years after that.  As for displays, most early mainframe computer games did not incorporate them, as displays were extremely rare on most systems until the early 1970s; instead, these systems tended to print results on paper via teletype.  As a result, this definition is not completely satisfactory, but we can draw three key concepts from it: manipulating images (i.e. interactivity), the presence of a computer, and some form of display.

Here is how Encyclopedia Britannica tackles the subject:

any interactive game operated by computer circuitry (http://library.eb.com/eb/article-9001562 [subscription required])

Note that Britannica lumps both “computer games” and “video games” under the catch-all header of “electronic games.”  As for computer circuitry, Britannica also has an article on that concept and defines it thus:

Complete path or combination of interconnected paths for electron flow in a computer. Computer circuits are binary in concept, having only two possible states. They use on-off switches (transistors) that are electrically opened and closed in nanoseconds and picoseconds (billionths and trillionths of a second). (http://library.eb.com/eb/article-9472097 [subscription required])

This general definition works a lot better.  Rather than identifying software as the key element, it identifies computer circuitry, which in this context means circuits incorporating transistors and logic gates.  This means that TTL games like PONG and games executed in software are both covered.  Again, the key elements of interactivity and a computer appear in this definition.

Here are a couple of additional definitions from reputable dictionaries just to paint a more complete picture of how video games are perceived today.  The Merriam-Webster Dictionary goes with “an electronic game in which players control images on a television or computer screen” (http://www.merriam-webster.com/dictionary/video%20game), while the American Heritage Dictionary claims a video game is “An electronic game played by manipulating moving figures on a display screen, often designed for play on a special gaming console rather than a personal computer.” (http://www.ahdictionary.com/word/search.html?q=video+game&submit.x=0&submit.y=0)  Both of these definitions emphasize that a video game is electronic, that is, it relies on parts such as transistors that manipulate electrons in order to function.  Like the Oxford Dictionary, these definitions also emphasize a screen.

So where does that leave us?  Clearly, an object that satisfies the modern definition of a video game requires three core components: interactivity, a program run by hardware containing electronic logic circuits, and objects rendered on a display.  Therefore, these are the types of games this blog will cover, whether they be on mainframes, personal computers, arcade hardware, consoles, handheld systems, or mobile devices.  The only exceptions will be the simpler electronic games found in toy aisles, which are generally considered “toys” rather than “video games” from a commercial and marketing standpoint, and certain systems aimed at young children that are used primarily for education rather than pure entertainment.

Facts and Figures

Before I actually start spooling out some history, I feel I should take a moment to explain where all of this history is coming from.  My research has focused on acquiring primary sources, mostly newspaper and magazine articles, trade publications, and interviews.  Some of these interviews I have conducted personally with executives involved in the industry, while others are drawn from the Internet or from excerpts in books and magazines.  I have interviewed around two dozen people myself and am still actively collecting more.  When interviews conducted by others are added into the mix, I am drawing from several hundred accounts of the video game industry.  Interviews are one of the less reliable types of primary source, due both to memory fading with the passage of time and to interview subjects having their own biases and agendas (sometimes only subconsciously), but they represent some of the best “insider info” currently available given the difficulty of accessing corporate archival material.

Ideally, my work would be based almost exclusively on internal company documents and personal papers, but except in a few rare cases where documents have appeared online or as illustrations in books, these materials are not available to me.  This is partly due to an inability to travel to where these documents are located, but largely due to a simple lack of availability.  Because the video game as a commercial product is only around forty years old, most of the major players are still alive and active in the industry, so there have been few donations to academic institutions as yet.  Stanford, the Computer History Museum, The Strong Museum, and a few other institutions are beginning to acquire important collections, but there is still a long way to go.  In time, wider access to such collections will probably completely alter what we think we know about the industry, but for now recollections and published sources will have to do.

I would also like to use this post to make a brief comment about sales figures.  Sales figures are naturally a useful tool for ascertaining commercial success and are therefore of great interest for a business history such as this.  If this were a blog about the music or movie industries, finding such figures would be a relatively straightforward process, as there has been reliable and transparent sales tracking in those industries for decades.  Unfortunately, the video game industry is not so lucky.  There is no single source that compiles worldwide sales data with any degree of accuracy.  VGChartz is the only one that even appears to try, but the organization does not directly track retail data for the most part, so its estimates are usually unreliable.  In the United States, the NPD Group has tracked sales in the video game industry since at least the early 1980s, but few figures were recorded in public sources until the late 1990s, and NPD has now stopped reporting specific sales figures altogether.  Furthermore, while the company has a sound methodology for estimating retail sales, its numbers are still estimates.  Japan has more dependable sales reporting through Media Create, but again these are estimates rather than actual sales figures.

So where does that leave this blog?  I will report sales figures for games and systems whenever I can, but I will make it clear where each figure came from (publisher press release, interview, retail tracking agency, analyst estimate, etc.).  Because these numbers come from various sources and many of them are estimates, they will not be accurate enough to establish an absolute sales ranking or to chart with precision the growth and contraction of the industry over time, particularly in the 1970s and 1980s.  They will, however, give a general idea of how the industry was doing at any given time and which games, genres, and hardware systems were particularly popular.

Well, I hope this post helps to explain where my info will be coming from and how reliable it will be.  For my next post, I will probably take a moment to pin down the definition of “video game” and how it has changed over time before diving into the history of the earliest computer games in the 1950s.

Introduction

So here it is, my very own blog on video game history.  This blog will be slightly different from others that have tackled the subject in the past, as I am using it as a research companion for a three-volume history of the industry I have been researching since 2006.  I first became interested in writing about video game history when I realized how many omissions and inaccuracies had crept into most of the books and articles written on the subject by the middle of the last decade.  This blog will therefore serve primarily as a source critique, pulling from every book, article, and recollection I can bring to bear on a particular topic.  While the state of video game history scholarship has improved mightily since about 2009, there are still a lot of facts to be uncovered and stories to be debunked.

This blog will unfold a comprehensive history of the video game industry on all formats (coin, console, computer, handheld, mobile, etc.) across all major markets (US, Japan, Europe, East Asia, etc.) from the earliest beginnings of electronic games in the 1950s until the present day.  In addition, I will post what I call “historical interludes” from time to time tackling important developments in other fields that relate to video games, such as the history of the coin-op industry between the 1880s and the 1970s, advancements in computer hardware, and the birth of the World Wide Web.  Each post will cover a particular topic in video game history, but will not unfold strictly in narrative form.  Instead, I will present a historiography tracing the original sources for particular facts, any disagreements between the sources, and commentary on what I feel was the most likely sequence of events.  For many topics, this will be a straightforward task of stitching together primary sources to tell a narrative; for others, it will involve sifting through contradictory accounts and information.  I do not have a particular posting schedule in mind, so we shall see how this goes.

One final caveat to bear in mind: my research into the history of video games is ongoing, so there are still gaps in my knowledge.  Each post will present as comprehensive a treatment of its subject as I can craft with the sources available to me, but some gaps are inevitable, and there are sure to be errors born of incomplete data as well.  I will endeavor to report only what the sources say, but that is no guarantee of correctness.  I will, of course, correct any errors that come to my attention.

Well, I think that does a pretty good job of covering all the bases.  I really have no idea whether my ramblings will be of interest to anyone else, but at the very least this will serve as a place to organize my own thoughts and research as I continue wrestling with my manuscript.  Comments and feedback are welcome.  I hope you enjoy this journey through recent history.