
Worldly Wednesdays: The Father of Video Games

This post is part of an ongoing series annotating my book They Create Worlds: The Story of the People and Companies That Shaped the Video Game Industry, Vol. I. It covers material found in the prologue on pages xviii-xx. It is not necessary to have read the book to comprehend and appreciate the post.

Choosing where exactly to start They Create Worlds was a challenge. My goal was to document most of the early experiments using a television and/or a digital computer to play a game, but starting at the chronological beginning of these efforts does not make for a compelling opening. It's great that in the late 1940s Alan Turing and Donald Michie wrote chess programs that were never implemented or that Thomas Goldsmith and Estle Ray Mann liked to pretend in the lab that a cathode ray beam might be the arc of a missile, but there is no throughline from either of these experiments to the $150 billion industry that exists today. These primordial works are examined in the book, of course, but they did not feel appropriate as a hook to draw the reader in. Clearly, then, the book should not start at the beginning, but it still needed to focus on a beginning.

So I opened on a bus station in New York City on August 31, 1966, when Ralph Baer thought to himself it might be neat to control objects displayed on a television set rather than passively consuming network programming. It's nice to have a firm date like that to commence the narrative, though it's not nearly so firm as one might think. Ralph Baer was a careful record keeper, as befit the meticulous, detail-oriented personality that shines through in his various interviews and in his autobiographical examination of his work in the video game business. For this reason, we do know that he transformed his crazy bus station idea into a formal memo on September 1, 1966. In interviews, Baer usually stated he did so back at his office the day after his brainstorm, but the timeframe may not line up. After all, he was down from Nashua, New Hampshire, to meet a business client, and Google tells me that's a good four-hour trip in the modern day by car. A 1966 bus was probably taking it even slower than that. Did he really have a meeting with a client in New York City that afternoon and then immediately scurry back up to Nashua? It's not impossible, but maybe a tad improbable. Nevertheless, that's his story, so we are sticking to it.

Whether this brainstorm happened on August 30 or August 31, though, is really a matter of little consequence. A more important statement to analyze is the claim I make at the end of this little vignette: “But Baer was the first person to suggest creating an interactive entertainment experience by conveying game data to a display through use of a video signal, so even though he never used the term in any of his subsequent documentation or patents, he is nevertheless the progenitor of what we now call the video game.”

So there it is, right? Extra, extra, read all about it! Alexander Smith says Ralph Baer invented the video game! Baer himself would have certainly been pleased to see those words in print had he lived long enough to see this book published, as he always claimed the mantle “Father of Video Games” and defended that title against all comers. Repeatedly. And in detail. I don’t begrudge him any of that: the man was absolutely a key cog in the transformation of video games from backroom lab experiments to mass market entertainment, and he lived in the shadow of Nolan Bushnell much longer than he deserved. But did he really, truly invent the video game or have a valid claim to its paternity? Well, despite my glib pronouncement in the prologue of the book, the answer is a little more complicated.

Ralph Baer (L) and Bill Harrison demonstrate their video game prototype. Are they the proud parents of the video game?

Before ruling on Baer’s case, we must decide what the heck constitutes a video game anyway. Ralph Baer would tell us there is a simple technical definition we can go by: if you are playing a game on a screen and that game data was conveyed to said screen by a video signal, then you have a video game; otherwise you don’t.

So then what is a video signal? A video signal is a modulated electromagnetic wave that conveys image data, with the frequency of the wave determining the chrominance, or color, of the image and the amplitude defining the luma, or brightness. This signal provides instructions to the cathode ray tube (CRT) of a television, which focuses a stream of electrons on a single point on a phosphor-coated screen, causing a sustained glow at that point. A magnetic field generated by coils within the CRT allows this beam of electrons to sweep back and forth across the entire screen, one horizontal line at a time, to create a complete picture from this collection of individual dots according to the parameters of the incoming signal. This is the process Baer is describing when he tells us a video game must, by definition, use a video signal.
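
To make that concrete, here is a toy sketch of my own — nothing from Baer's documentation, just an illustration — of the essence of what a raster display does with such a signal: a single stream of brightness samples, delivered in scanline order, is all it takes to rebuild a two-dimensional picture. The screen dimensions and sample values are invented for the example.

```python
# Toy illustration (mine, not Baer's): a raster display rebuilds a picture from a
# single stream of brightness samples delivered one scanline at a time.
WIDTH, HEIGHT = 8, 4                      # a hypothetical 8x4 "screen"

# Pretend these are the sampled amplitudes of the incoming signal, read left to
# right, top to bottom (0 = dark, 9 = bright).
signal = [0]*8 + [0,9,9,0,0,9,9,0] + [0,9,9,9,9,9,9,0] + [0]*8

# The sweep of the electron beam amounts to this reshaping: every WIDTH samples
# become one horizontal scanline of the frame.
frame = [signal[row*WIDTH:(row+1)*WIDTH] for row in range(HEIGHT)]
for scanline in frame:
    print("".join("#" if sample > 4 else "." for sample in scanline))
```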

Right away, Baer’s definition presents a problem by excluding a set of early products that were widely defined by the public and within the industry as video games in their own time: coin-operated vector games like Atari’s Asteroids (1979). The graphics in these games are drawn by a vector generator that takes direct control of the CRT and instructs it to aim at a specific point on the screen and then move on a specific vector until a command is given to deflect the beam in a different direction. The CRT is still receiving and responding to a signal, but it’s not a video signal. There is no doubt, however, that even the most conservative modern definition of a video game would include Asteroids, so Baer’s simple straightforward technical definition must already be set aside.

According to Baer, this is not a video game.

But once we open up the definition, where do we stop? Well, we have to include the vector games clearly, so it logically follows that any time a player interacts with images drawn by a CRT, it counts as a video game. But why stop at a CRT? Modern video games played on a high-definition television or monitor hooked up to a PlayStation 5 or an IBM PC compatible certainly must count too. While Baer hews to an old-school definition of a video signal that presumes an analog system, digital displays are also driven by a video signal, albeit in a slightly different way. The prime difference between the two is that a digital signal consists of a series of ones and zeroes that provide instructions to draw a complete bitmapped image all at once rather than the analog method in which the image is drawn one scanline at a time. These digital images are pulsed to the television at a specific, constant frequency that determines how many times a second the display will be updated with a new image, the so-called “frame rate,” measured in frames per second (FPS), that is the obsession of high-end graphics connoisseurs. It's still video, so it counts.

So now we know we have a video game whenever someone interacts with objects on a screen, right? Well, not exactly. One problem with merely focusing on the screen is that in the coin-op world, games with “screens” of one form or another existed as early as the 1920s through the use of film projectors. What do we do with driving simulators like Auto Test (1954), shooting games like Nintendo’s Wild Gunman (1974), or even the Nutting Associates Computer Quiz (1967), all of which use a film projector to display images with which the player interacts?

Is Computer Quiz a video game? It has a screen.

Furthermore, what do we do with old computer games that outputted data to a teletype or some other display that does not incorporate a screen? Baer would tell us these are “computer games” rather than “video games” and that these are overlapping, but not identical, categories. Practically speaking, this feels like a meaningless distinction. After all, if one plays Adventure (1977) on a teletype instead of a CRT terminal, is this truly a different experience considering the computer executes the same code and the game proceeds in an identical manner either way?

Fellow video game historian Ethan Johnson and I pondered this topic at length, and he came up with a critical discriminating element. He did a whole blog post about this, but the relevant portion is as follows:

“[R]ather than needing a certain sort of signal, a display for a video game must be arbitrary. This means the display as a whole has to have a direct relation to the program underlying it and is able to achieve a number of different states rather than just “on” or “off”. In the early tic-tac-toe games for instance, while an individual state of a square only has a boolean value, the board as a whole has hundreds of different possible outcomes which are ultimately not pre-determined. The individual state of a screen in Computer Quiz only has two possible variables: Light on or light off, and therefore can not be said to be using a display in the same way as video games.”

Well that does for Computer Quiz at least, but it does not necessarily answer the question for a more complex game like Wild Gunman, and it only gets us a little closer to solving the conundrum of games on early computer systems that lacked a CRT. Furthermore, by opening up our definition to include all computer games with an arbitrary display, we are forced to address how to treat analog computing devices like the Nimatron displayed in 1940 at the World’s Fair, or Claude Shannon’s chess-playing Caissac machine from 1949. These are unquestionably both computers that play games, but does that really make them video games too? Do we now extend the history of video games all the way back to 1912 and the Spanish El Ajedrecista chess-playing automaton? Clearly, we need to establish some other limiting criteria.

El Ajedrecista, the analog computer that could figure out how to mate a lone king with a king and a rook. Is this the beginning of video games?

The easiest way to separate these edge cases from video games is to look at the internal components that generate the game elements. A game like Wild Gunman uses electro-mechanical components, that is, wipers, switches, and contacts that facilitate the completion of electrical circuits to create playfield action by powering relays, steppers, and other mechanical parts. All video games by the Baer definition, including his own Magnavox Odyssey and Atari’s Pong (1972), use electronic components instead, with streams of electrons directed through a series of logic gates determining what happens over the course of the game. This allows us not only to distinguish electro-mechanical coin-operated games with screens from video games, but also to remove early electro-mechanical analog computing devices from the equation.

Now that we have defined two critical technological elements, we need to add a couple of conceptual components to complete our working definition of the term video game. First, we need to define the user’s place in this interplay between logic circuits and a display. The easiest way to do this is to focus on the commonly accepted definition of “playing a game,” which requires active participants rather than passive viewers. For video games, this means the game needs to unfold through direct user interaction via a control scheme that allows the players to directly manipulate objects on the display. There is really no need to elaborate on this element any further: so long as this interaction is happening, the how of it is unimportant.

Finally, we need to define exactly what interactions between a user, some electronic logic, and a display constitute playing a game. If we don’t, then a word processor or a DVD menu is just as much a video game as Pong. The best we can do here is define a video game, which is generally understood to be a leisure activity, as a product intended to provide entertainment. This is the most subjective part of our definition because different people find different things entertaining, and even a DVD menu could be turned into a game by a particularly bored individual. The most practical test is to point to the intrinsic purpose of the product as determined by authorial intent. If the program was produced or marketed with the primary goal of entertaining a person, then it's a game. If the entertainment value is secondary to serving some other function, then it’s not. This is not a perfect test. For example, a product primarily designed to educate might also be deliberately crafted to entertain to hold the user’s interest. More work needs to be done on this element of the definition to clarify gray areas, but I will leave that for others to work out.

Now we have a serviceable, though still imperfect, definition of a video game that eliminates many, though not all, of the edge cases: A video game is an entertainment product played on a device containing electronic logic circuits in which the players interact with objects rendered on an arbitrary display.

Sorry Nimatron, you are not a video game.

So now that we have identified the child, who is the father? There are a few ways to look at this. One is to employ our newly articulated definition and look for the first product that meets all these criteria. That might lead us to 1947 and the prototype cathode-ray tube amusement device (CRTAD) patented by Estle Ray Mann and Thomas Goldsmith. I personally feel CRTAD does not really hold together under our definition of a video game, but that is a discussion for another time and another annotation. Regardless, I feel comfortable ruling that these two engineers are not the fathers because they probably never built a finished product, certainly never displayed the system publicly, and did not influence any of the projects that came afterwards. By the same logic, we can also dismiss the dueling chess AIs created by Michie and Turing in 1948, which were complete computer programs on paper, but were never implemented on an actual computer.

So if the first conceived games do not make the cut, what about the first fully operational and publicly displayed product? That would lead us to Bertie the Brain, a custom tic-tac-toe computer built by Josef Kates and demoed in 1950. There is no doubt that this is the earliest known publicly played device that meets all our criteria for a video game, but is being first really all it's cracked up to be? Bertie was displayed at a single Canadian trade show and received virtually no press. It may have been played by a decent number of people — the show draws over one million attendees every year — but it did not stick in the collective memory and was only rediscovered by scholars in the 2010s. Furthermore, it was solely intended to demonstrate the workings of a new type of vacuum tube and was not marketed as a new form of entertainment. Once again, I think our father — an appropriate term only because all our early pioneers in this field were men — needs to do more than bring a simulation into the world; he needs to understand he is creating something that could change the face of entertainment. Clearly, Kates wanted the attendees to be entertained while using his computer, but that is not quite the same thing.

Sorry Messieurs Goldsmith, Turing, Michie, and Kates. You are not the father.

So how about that master of physics and entertainment, “Wonderful Willie” Higinbotham? There is a solid case to be made that his tennis game, retroactively dubbed Tennis for Two (1958) by historians, marked the first time a video game was created solely to entertain the public. Therefore, he is our first real contender for the title “father of video games.” Once again though, I believe we need to exclude him because he did not start a wider movement. Our father is no good to us if his child failed to have children of its own.

So what about the first entertainment program that could be acquired by the general public? Right now, the earliest known game to fit that definition is a baseball simulation created by IBM employee John Burgeson in 1960-61 and briefly requestable as part of the program library for the IBM 1620 computer before being withdrawn from the catalog in 1963. There are a couple of problems here. First, this program only barely meets our definition of a video game because the only player interaction happens at the beginning when creating a team by selecting from a pool of players. More importantly, it appeared and vanished so quickly that it failed to have any sort of impact.

Then maybe it's Steve Russell et al. and their Spacewar! (1962), which certainly achieved popularity across a select group of universities and research institutions and itself birthed the first commercial video game, Computer Space (1971)? Now I think we are getting closer. Baer would discount this game because it uses a point-plotting display, which functions in essentially the same manner as a vector monitor except that instead of drawing lines it draws each point individually. As Baer might say, “no video signal, no video game.” But we have already moved past that narrow definition. The main strike against the game is that it remained confined to a small number of locations and was not commercialized. One could argue that since video games did not capture the imagination of the general public until commercial models anyone could access at a reasonable price became available, our father needs to be someone who brought video games to the mass market. I find that argument flimsy, but it can be made.

Sorry Willie, you are not the father. Steve, we’ll get back to you…

So now we come at last to the final two contenders, Ralph Baer and Nolan Bushnell. Among the general public, I think the debate really comes down to just these two. The controversy over which of them birthed the video game has existed for as long as people have written about video game history, with Steve Bloom’s monograph Video Invaders debating this very topic as early as 1982. Both have strong claims to the title. Nolan Bushnell came to market first with Computer Space, but Ralph Baer started work on his system earlier and had largely completed it by 1968. Bushnell also debuted the first successful product, Pong, but the game only came about because he saw the table tennis game on the Odyssey.

Which person you prefer really depends on how you define the parameters. Is it first conceived that matters or first released? Is it enough to dream up a system, or does said system also need to capture the public’s imagination? Certainly Baer and Bushnell themselves expended most of their energy trying to prove who came up with the idea of commercializing video games first during a series of patent lawsuits in the 1970s. Baer, with that meticulous streak, was able to provide a plethora of verified documentation from 1966-72 elucidating every step along the way from initial spark to final product. Bushnell could only counter this by claiming he wrote a paper in college in the 1960s about playing games on a computer after he saw Spacewar!. When asked to submit the paper as evidence, he proved unable to do so. The courts rightly sided with Baer, but winning a patent suit is not quite the same thing as winning a paternity suit.

Clash of the Titans. Ralph Baer and Nolan Bushnell duke it out for the title Father of Video Games in this drawing by Howard Cruse found in the book Video Invaders by Steve Bloom.

So now that we have a video game definition and a list of the major contenders for our parental figure, is Ralph Baer the “progenitor of what we now call the video game”? Not really. I feel the video game really has two sets of parents: Russell and friends, who created the first video game to gain a significant following across multiple installations, and Bushnell and his partner Ted Dabney, who were inspired by the work of the Russell group to engineer the first commercial video game product. This leaves Baer the odd man out despite the pride of place I gave him in the book. Baer himself would certainly not have been pleased to see these words in print had he lived long enough to see this blog post published. That said, he really was the first person to follow through on the idea that manipulating objects on a standard television set could be fun; he was the first person to realize it was possible to create a hardware system to do so cheaply enough that it could be commercialized for home use; and he worked out how to interface this hardware system with a television set using an RF modulator and a video signal. These were the building blocks upon which the entire home video game industry was built, and that in itself is a monumental achievement. So while I am not entirely comfortable calling Baer the “father of video games,” I will gladly cede him the title “father of the video game console” and give him pride of place at the beginning of my three-volume history. Baer’s bus station brainstorm may not have been the beginning, but there is no doubt it was quite a beginning.

They Create Worlds: The Story of the People and Companies That Shaped the Video Game Industry, Vol. I 1971-1982 is available in print or electronically direct from the publisher, CRC Press, as well as through Amazon and other major online retailers.


Historical Interlude: The Birth of the Computer Part 2, The Creation of the Electronic Digital Computer

In the mid-nineteenth century, Charles Babbage attempted to create a program-controlled universal calculating machine, but failed for lack of funding and the difficulty of creating the required mechanical components.  This failure spelled the end of digital computer research for several decades.  By the early twentieth century, however, fashioning small mechanical components no longer presented the same challenge, while the spread of electricity generating technologies provided a far more practical power source than the steam engines of Babbage’s day.  These advances culminated in just over a decade of sustained innovation between 1937 and 1949 out of which the electronic digital computer was born.  While both individual computer components and the manner in which the user interacts with the machine have continued to evolve, the desktops, laptops, tablets, smartphones, and video game consoles of today still function according to the same basic principles as the Manchester Mark 1, EDSAC, and EDVAC computers that first operated in 1949.  This blog post will chart the path to these three computers.

Note: This is the second of four “historical interlude” posts that will summarize the evolution of computer technology between 1830 and 1960.  The information in this post is largely drawn from Computer: A History of the Information Machine by Martin Campbell-Kelly and William Aspray, The Maverick and His Machine: Thomas Watson, Sr. and the Making of IBM by Kevin Maney, Reckoners: The Prehistory of the Digital Computer, From Relays to the Stored Program Concept, 1935-1945 by Paul Ceruzzi, The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution by Walter Isaacson, Forbes Greatest Technology Stories: Inspiring Tales of Entrepreneurs and Inventors Who Revolutionized Modern Business by Jeffrey Young, and the articles “Alan Turing: Father of the Modern Computer” by B. Jack Copeland and Diane Proudfoot, “Colossus: The First Large Scale Electronic Computer” by Jack Copeland, and “A Brief History of Computing,” also by Copeland.

Analog Computing


Vannevar Bush with his differential analyzer, an analog computer

While a digital computer after the example of Babbage would not appear until the early 1940s, specialized computing devices that modeled specific systems mechanically continued to be developed in the late nineteenth and early twentieth centuries.  These machines were labelled analog computers, a term derived from the word “analogy” because each machine relied on a physical model of the phenomenon being studied to perform calculations, unlike a digital computer that relied purely on numbers.  The key component of these machines was the wheel-and-disc integrator, first described by James Thomson, that allowed integral calculus to be performed mechanically.  Perhaps the most important analog computer of the nineteenth century was completed by James’s brother William, better known to history as Lord Kelvin, in 1876.  Called the tide predictor, Kelvin’s device relied on a series of mechanical parts such as pulleys and gears to simulate the gravitational forces that produce the tides and predict the water depth of a harbor at any given time of day, printing the results on a roll of paper.  Before Lord Kelvin’s machine, creating tide tables was so time-consuming that only the most important ports were ever charted.  After Kelvin’s device entered general use, it was finally possible to complete tables for thousands of ports around the world.  Improved versions of Kelvin’s computer continued to be used until the 1950s.
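
For the technically curious, the arithmetic those pulleys and gears performed is easy to sketch in a few lines of modern code. This is purely illustrative — the constituent amplitudes, periods, and phases below are invented rather than taken from any real harbor.

```python
# Rough sketch of the tide predictor's job: sum a handful of sinusoidal "constituents."
# All numbers here are hypothetical, chosen only to illustrate the method.
import math

constituents = [
    # (amplitude in metres, period in hours, phase in radians)
    (1.2, 12.42, 0.0),    # a lunar semidiurnal term
    (0.6, 12.00, 1.0),    # a solar semidiurnal term
    (0.3, 25.82, 2.0),    # a lunar diurnal term
]

def predicted_depth(hour, mean_level=2.0):
    """Water depth above datum at a given hour, as the machine would chart it."""
    return mean_level + sum(a * math.cos(2 * math.pi * hour / period + phase)
                            for a, period, phase in constituents)

for hour in range(0, 25, 6):
    print(f"hour {hour:2d}: {predicted_depth(hour):.2f} m")
```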

In the United States, interest in analog computing began to take off in the 1920s as General Electric and Westinghouse raced to build regional electric power networks by supplying alternating-current generators to power plants.  At the time, the mathematical equations required to construct the power grids were both poorly understood and difficult to solve by hand, causing electrical engineers to turn to analog computing as a solution.  Using resistors, capacitors, and inductors, these computers could simulate how the network would behave in the real world.  One of the most elaborate of these computers, the AC Network Analyzer, was built at MIT in 1930 and took up an entire room.  With one of the finest electrical engineering schools in the country, MIT quickly became a center for analog computer research, which soon moved from highly specific models like the tide predictor and power grid machines to devices capable of solving a wider array of mathematical problems through the work of MIT professor Vannevar Bush.

One of the most important American scientists of the mid-twentieth century, Bush possessed a brilliant mind coupled with a folksy demeanor and strong administrative skills.  These traits served him well in co-founding the American Appliance Company in 1922 — which later changed its name to Raytheon and became one of the largest defense contractors in the world — and led to his appointment in 1941 to head the new Office of Scientific Research and Development, which oversaw and coordinated all wartime scientific research by the United States government during World War II and was instrumental to the Allied victory.

Bush built his first analog computer in 1912 while a doctoral student at Tufts College.  Called the “profile tracer,” it consisted of a box hung between two bicycle wheels and would trace the contours of the ground as it was rolled.  Moving on to MIT in 1919, Bush worked on problems involving electric power transmission and in 1924 developed a device with one of his students called the “product integraph” to simplify the solving and plotting of the first-order differential equations required for that work.  Another student, Harold Hazen, suggested this machine be extended to solve second-order differential equations as well, which would make the device useful for solving a wide array of physics problems.  Bush immediately recognized the potential of this machine and worked with Hazen to build it between 1928 and 1931.  Bush called the resulting machine the “differential analyzer.”

The differential analyzer improved the operation of Thomson’s wheel-and-disc integrator through a device called a torque amplifier, allowing it to mechanically model, solve, and plot a wider array of differential equations than any analog computer that came before, but it still fell short of the Babbage ideal of a general-purpose digital device.  Nevertheless, the machine was installed at several universities, corporations, and government laboratories and demonstrated the value of using a computing device to perform advanced scientific calculations.  It was therefore an important stepping stone on the path to the digital computer.

Electro-Mechanical Digital Computers


The Automatic Sequence Controlled Calculator (ASCC), also known as the Harvard Mark I, the first proposed electro-mechanical digital computer, though not the first completed

With problems like power network construction requiring ever more complex equations and the looming threat of World War II requiring world governments to compile large numbers of ballistics tables and engage in complex code-breaking operations, the demand for computing skyrocketed in the late 1930s and early 1940s.  This led to a massive expansion of human computing and the establishment of the first for-profit calculating companies, beginning with L.J. Comrie’s Scientific Computing Services Limited in 1937.  Even as computing services were expanding, however, the armies of human computers required for wartime tasks were woefully inadequate for completing necessary computations in a timely manner, while even more advanced analog computers like the differential analyzer were still too limited to carry out many important tasks.  It was in this environment that researchers in the United States, Great Britain, and Germany began attempting to address this computing shortfall by designing digital calculating machines that worked similarly to Babbage’s Analytical Engine but made use of more advanced components not available to the British mathematician.

The earliest digital calculating machines were based on electromechanical relay technology.  First developed in the mid-nineteenth century for use in the electric telegraph, a relay consists in its simplest form of a coil of wire, an armature, and a set of contacts.  When a current is passed through the coil, a magnetic field is generated that attracts the armature and therefore draws the contacts together, completing a circuit.  When the current is removed, a spring causes the armature to return to the open position.  Electromechanical relays played a crucial role in the telephone network in the United States, routing calls between different parts of the network.  Therefore, Bell Labs, the research arm of the telephone monopoly AT&T, served as a major hub for relay research and was one of the first places where the potential of relays and similar switching units for computer construction was contemplated.

The concept of the binary digital circuit, which continues to power computers to this day, was independently articulated and applied by several scientists and mathematicians in the late 1930s.  Perhaps the most influential of these thinkers — due to his work being published and widely disseminated — was Claude Shannon.  A graduate of the University of Michigan with degrees in electrical engineering and math, Shannon matriculated at MIT, where he secured a job helping Bush run his differential analyzer.  In 1937, Shannon took a summer job at Bell Labs, where he gained hands-on experience with the relays used in the phone network and connected their function with another interest of his — the symbolic logic system created by mathematician George Boole in the 1840s.

Basically, Boole had discovered a way to represent formal logical statements mathematically by giving a true proposition a value of 1 and a false proposition a value of 0 and then constructing mathematical equations that could represent the basic logical operations such as “and,” “or” and “not.”  Shannon realized that since a relay either existed in an “on” or an “off” state, a series of relays could be used to construct logic gates that emulated Boolean logic and therefore carry out complex instructions, which in their most basic form are a series of “yes” or “no,” “on” or “off,” “1” or “0” propositions.  When Shannon returned to MIT that fall, Bush urged him to include these findings in his master’s thesis, which was published later that year under the name “A Symbolic Analysis of Relay and Switching Circuits.”  In November 1937, a Bell Labs researcher named George Stibitz, who was aware of Shannon’s theories, applied the concept of binary circuits to a calculating device for the first time when he constructed a small relay calculator he dubbed the K-Model because he built it at his kitchen table.  Based on this prototype, Stibitz received permission to build a full-sized model at Bell Labs, which was named the Complex Number Calculator and completed in 1940.  While not a full-fledged programmable computer, Stibitz’s machine was the first to use relays to perform basic mathematical operations and demonstrated the potential of relays and binary circuits for computing devices.
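
Shannon's insight is simple enough to restate in code. The sketch below is my own illustration of the principle rather than anything Shannon or Stibitz actually wrote: treat each relay as a switch that is either on or off, wire those switches into Boolean gates, and simple arithmetic falls out.

```python
# Illustrative sketch: relays as on/off switches wired into Boolean logic gates.
def relay(coil_energized) -> int:
    """Contacts close (1) when current flows through the coil, otherwise open (0)."""
    return 1 if coil_energized else 0

def AND(a, b):   # two relays in series: current only flows if both close
    return relay(a) & relay(b)

def OR(a, b):    # two relays in parallel: current flows if either closes
    return relay(a) | relay(b)

def NOT(a):      # a normally-closed contact: opens when the coil is energized
    return 1 - relay(a)

def half_adder(a, b):
    """Adding two one-bit numbers is just a pair of gate networks: (carry, sum)."""
    return AND(a, b), OR(AND(a, NOT(b)), AND(NOT(a), b))

for a in (0, 1):
    for b in (0, 1):
        carry, total = half_adder(a, b)
        print(f"{a} + {b} -> carry {carry}, sum {total}")
```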

One of the earliest digital computers to use electromechanical relays was proposed by Howard Aiken in 1936.  A doctoral candidate in mathematics at Harvard University, Aiken needed to solve a series of non-linear differential equations as part of his dissertation, which was beyond the capabilities of Bush’s differential analyzer at neighboring MIT.  Unenthused by the prospect of solving these equations by hand, Aiken, who was already a skilled electrical engineer, proposed that Harvard build a large-scale digital calculator to do the work.  The university turned him down, so Aiken approached the Monroe Calculating Machine Company, which also failed to see any value in the project.  Monroe’s chief engineer felt the idea had merit, however, and urged Aiken to approach IBM.

When last we left IBM in 1928, the company was growing and profitable, but lagged behind several other companies in overall size and importance.  That all changed with the onset of the Great Depression.  Like nearly every other business in the country, IBM was devastated by the market crash of 1929, but Tom Watson decided to boldly soldier on without laying off workers or cutting production, keeping his faith that the economy could not continue in a tailspin for long.  He also increased the company’s emphasis on R&D, building one of the world’s first corporate research laboratories to house all his engineers in Endicott, New York in 1932-33 at a cost of $1 million.  As the Depression dragged on, machines began piling up in the factories and IBM’s growth flattened, threatening the solvency of the company.  Watson’s gambles increasingly appeared to be a mistake, but then President Franklin Roosevelt began enacting his New Deal legislation.

In 1935, the United States Congress passed the Social Security Act.  Overnight, every company in the country was required to keep detailed payroll records, while the Social Security Administration had to keep a file on every worker in the nation.  The data processing burden of the act was enormous, and IBM, with its large stock of tabulating machines and fully operational factories, was the only company able to begin filling the demand immediately.  Between 1935 and 1937, IBM’s revenues rose from $19 million to $31 million and then continued to grow for the next 45 years.  The company was never seriously challenged in tabulating equipment again.

Traditionally, data processing revolved around counting tangible objects, but by the time Aiken approached IBM, Watson had begun to realize that scientific computing was a natural extension of his company’s business activities.  The man who turned Watson on to this fact was Ben Wood, a Columbia professor who pioneered standardized testing and was looking to automate the scoring of his tests using tabulating equipment.  In 1928, Wood wrote ten companies to win support for his ideas, but only Watson responded, agreeing to grant him an hour to make his pitch.  The meeting began poorly as the nervous Wood failed to hold Watson’s interest with talk of test scoring, so the professor expanded his presentation to describe how nearly anything could be represented mathematically and therefore quantified by IBM’s machines.  One hour soon stretched to over five as Watson grilled Wood and came to see the value of creating machines for the scientific community.  Watson agreed to give Wood all the equipment he needed, dropped in frequently to monitor Wood’s progress, and made the professor an IBM consultant.  As a result of this meeting, IBM began supplying equipment to scientific labs around the world.


Howard Aiken, designer of the Automatic Sequence Controlled Calculator

In 1937, Watson began courting Harvard, hoping to create the same kind of relationship he had long enjoyed with Columbia.  He dispatched an executive named John Phillips to meet with deans and faculty, and Aiken used the opportunity to introduce IBM to his calculating device.  He also wrote a letter to James Bryce, IBM’s chief engineer, who sold Watson on the concept.  Bryce assigned Clair Lake to oversee the project, which would be funded and built by IBM in Endicott according to Aiken’s design and then installed at Harvard.

Aiken’s initial concept basically stitched together a card reader, a multiplying punch, and a printer, removing human intervention in the process by connecting the components through electrical wiring and incorporating relays as switching units to control the passage of information through the parts of the machine.  Aiken drew inspiration from Babbage’s Analytical Engine, which Aiken first learned about soon after proposing his device, when a technician informed him that Harvard actually owned a fragment of one of Babbage’s calculating machines donated by the inventor’s son in 1886.  Unlike Babbage, however, Aiken did not employ separate memory and computing elements, as all calculations were performed across a series of 72 accumulators that both stored and modified the data transmitted to them by the relays.  Without something akin to a CPU, the machine was actually less advanced than the Analytical Engine in that it did not support conditional branching — the ability to modify a program on the fly to incorporate the results of previous calculations — and therefore required all calculations to be done in a set sequence, forcing complex programs to use large instruction sets and long lengths of paper tape.

Work began on the Automatic Sequence Controlled Calculator (ASCC) Mark I in 1939, but the onset of World War II resulted in the project being placed on the back burner as IBM shifted its focus to more important war work and Aiken entered the Navy.  It was finally completed in January 1943 at a cost of $500,000 and subsequently installed at Harvard in early 1944 after undergoing a year of testing in Endicott.  Measuring 8 feet tall and 51 feet long, the machine was housed in a gleaming metal case designed by Norman Bel Geddes, known for his art deco works such as the Metropolitan Opera House in New York.  By the time of its completion, the ASCC already lagged behind several other machines technologically and therefore did not play a significant role in the further evolution of the computer.  It is notable, however, both as the earliest proposed digital computer to actually be built and as IBM’s introduction to the world of computing.


Konrad Zuse, designer of the Z1, the first completed digital computer

While Howard Aiken was still securing support for his digital computer, a German named Konrad Zuse was busy completing one of his own.  Born in Berlin, Zuse spent most of his childhood in Braunsberg, East Prussia (modern Braniewo, Poland).  Deciding on a career as an engineer, he enrolled at the Technical College of Berlin-Charlottenburg in 1927.  While not particularly interested in mathematics, Zuse did have to work with complex equations to calculate the load-bearing capability of structures, and like Aiken across the Atlantic he was not enthused at having to perform these calculations by hand.  Therefore, in 1935 Zuse began designing a universal automatic calculator consisting of a computing element, a storage unit, and a punched tape reader, independently arriving at the same basic design that Babbage had developed a century before.

While Zuse’s basic concept did not stray far from Babbage, however, he did incorporate one crucial improvement in his design that neither Babbage nor Aiken had considered: storing the numbers in memory according to a binary rather than a decimal system.  Zuse’s reason for doing so was practical — as an accomplished mechanical engineer he preferred keeping his components as simple as possible to make the computer easier to design and build — but the implications of this decision went far beyond streamlined memory construction.  Like Shannon, Zuse realized that by recognizing data in only two states, on and off, a computing device could represent not just numbers, but also instructions.  As a result, Zuse was able to use the same basic building blocks for both his memory and computing elements, simplifying the design further.
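
That point is worth a tiny illustration of my own (the opcode table here is invented): because every storage cell holds only an on or an off, the very same pattern of switches can be read as a number or as an instruction, depending on what the machine is told to do with it.

```python
# Illustrative sketch: one four-bit pattern, two readings. The opcodes are hypothetical.
OPCODES = {0b00: "ADD", 0b01: "SUB", 0b10: "LOAD", 0b11: "STORE"}

word = 0b1001                                   # four switches: on, off, off, on

print("read as a number:      ", word)                                        # -> 9
print("read as an instruction:", OPCODES[word >> 2], "address", word & 0b11)  # -> LOAD address 1
```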

By 1938, Zuse had completed his first computer, a mechanical binary digital machine called the Z1. (Note: Originally, Zuse called this computer the V1 and continued to use the “V” designation on his subsequent computers.  After World War II, he began referring to these machines using the “Z” designation instead to avoid confusion with Germany’s V1 and V2 rockets.)  This first prototype was fairly basic, but it proved two things for Zuse: that he could create a working automatic calculating device and that the computing element could not be mechanical, as the components were just too unreliable.  The solution to this problem came from college friend Helmut Schreyer, an electrical engineer who convinced Zuse that the electrical relays used in telephone networks would provide superior performance.  Schreyer also worked as a film projectionist and convinced Zuse to switch from paper tape to punched film stock for program control.  These improvements were incorporated into the Z2 computer, completed in 1939, which never worked reliably, but was essential for securing funding for Zuse’s next endeavor.


A reconstruction of Konrad Zuse’s Z3, the world’s first programmable fully automatic digital computer

In 1941, Konrad Zuse completed the Z3 for the German government, the first fully operational digital computer in the world.  The computer consisted of two cabinets containing roughly 2,600 relays — 1,800 for memory, 600 for computing, and 200 for the tape reader — and a small display/keyboard unit for inputting programs.  With a memory of only 64 numbers, the computer was too limited to carry out useful work, but it served as an important proof of concept and illustrated the potential of a programmable binary computer.

Unfortunately for Zuse, the German government proved uninterested in further research.  Busy fighting a war it was convinced would be over in just a year or two, the Third Reich limited its research activities to projects that could directly impact the war effort in the short term and ignored the potential of computing entirely.  While Zuse continued to work on the next evolution of his computer design, the Z4, between 1942 and 1945, he did so on his own without the support of the Reich, which also turned down a computer project by his friend Schreyer that would have replaced relays with electronics.  Isolated from the rest of the developed world by the war, Zuse’s theories would have little impact on subsequent developments in computing, while the Z3 itself was destroyed in an Allied bombing raid on Berlin in 1943 before it could be studied by other engineers.  That same year, Great Britain’s more enthusiastic support of computer research resulted in the next major breakthrough in computing technology.

The Birth of the Electronic Computer


Colossus, the world’s first programmable electronic computer

Despite the best efforts of Aiken and Zuse, relays were never going to play a large role in computing, as they were both unreliable and slow due to a reliance on moving parts.  In order for complex calculations to be completed quickly, computers would need to transition from electro-mechanical components to electronic ones, which function instead by manipulating a beam of electrons.

The development of the first electronic components grew naturally out of Thomas Edison’s work with the incandescent light bulb.  In 1880, Edison was conducting experiments to determine why the filament in his new incandescent lamps would sometimes break and noticed that an electrical current would not flow to a negatively charged plate.  Although this effect had been observed by other scientists as early as 1873, Edison was the first to patent a voltage-regulating device based on this principle in 1883, which resulted in the phenomenon being named the “Edison effect.”

Edison, who did not have a solid grasp of the underlying science, did not follow up on his discovery.  In 1904, however, John Fleming, a consultant with the Marconi Company engaged in research relating to wireless telegraphy, realized that the Edison effect could be harnessed to create a device that would only allow the flow of electric current in one direction and thus serve as a rectifier that turned a weak alternating current into a direct current.  This would in turn allow a receiver to be more sensitive to radio waves, thus making reliable trans-Atlantic wireless communication possible.  Based on his research, Fleming created the first diode, the Fleming Valve, in which an electric current was passed in one direction from a negatively-charged cathode to a positively-charged anode through a vacuum-sealed glass container.  The vacuum tube concept invented by Fleming remained the primary building block of electronic devices for the next fifty years.

In 1906, an American electrical engineer named Lee DeForest working independently of Fleming began creating his own series of electron tubes, which he called Audions.  DeForest’s major breakthrough was the development of the triode, which used a third electrode called a grid: a small voltage applied to the grid could control a much larger current flowing through the tube, allowing the triode to serve as an amplifier that boosted the power of a signal.  DeForest’s tube contained gas at low pressure, which inhibited reliable operation, but by 1913 the first vacuum tube triodes had been developed.  In 1918, British physicists William Eccles and F.W. Jordan used two triodes to create the Eccles-Jordan circuit, which could flip between two states like an electrical relay and therefore serve as a switching device.
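
The behavior that made the Eccles-Jordan circuit matter for computing can be sketched as what we would now call a set-reset latch. This is a behavioral model of my own, not a tube-level one; the point is simply that the circuit rests in one of two stable states and remembers it until told to flip.

```python
# Behavioral sketch of an Eccles-Jordan style flip-flop: a one-bit memory.
def flip_flop(stored_bit, set_input=False, reset_input=False):
    """Return the new stored bit after applying the inputs."""
    if set_input and reset_input:
        raise ValueError("asserting set and reset together is not a valid input")
    if set_input:
        return 1
    if reset_input:
        return 0
    return stored_bit          # no input asserted: the circuit holds its state

state = 0
for s, r in [(True, False), (False, False), (False, True), (False, False)]:
    state = flip_flop(state, set_input=s, reset_input=r)
    print(f"set={s!s:5} reset={r!s:5} -> stored bit {state}")
```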

Even after the invention of the Eccles-Jordan circuit, few computer pioneers considered using vacuum tubes in their devices.  Conventional wisdom held they were unsuited for large-scale projects because a triode contains a filament that generates a great deal of heat and is prone to burnout.  Consequently, the failure rate would be unacceptable in a device requiring thousands of tubes.  One of the first people to challenge this view was a British electrical engineer named Thomas Flowers.


Tommy Flowers, the designer of Colossus

Born in London’s East End, Flowers, the son of a bricklayer, simultaneously took an apprenticeship in mechanical engineering at the Royal Arsenal, Woolwich, while attending evening classes at the University of London.  After graduating with a degree in electrical engineering, Flowers took a job with the telecommunications branch of the General Post Office (GPO) in 1926.  In 1930, he was posted to the GPO Research Branch at Dollis Hill, where he established a reputation as a brilliant engineer and achieved rapid promotion.

In the early 1930s, Flowers began conducting research into the use of electronics to replace relays in telephone switchboards.  Counter to conventional wisdom, Flowers realized that vacuum tube burnout usually occurred when a device was switched on and off frequently.  In a switchboard or computer, the vacuum tubes could remain in continuous operation for extended periods once switched on, thus greatly increasing their longevity.  Before long, Flowers began experimenting with equipment containing as many as 3,000 vacuum tubes.  Flowers would make the move from switchboards to computing devices with the onset of World War II.

With the threat of Nazi Germany rising in the late 1930s, the United Kingdom began devoting more resources to cracking German military codes.  Previously, this work had been carried out in London at His Majesty’s Government Code and Cypher School, which was staffed with literary scholars rather than cryptographic experts.  In 1938, however, MI6, the British Intelligence Service, purchased a country manor called Bletchley Park, near the intersection of the rail lines connecting Oxford and Cambridge and London and Birmingham, to serve as a cryptographic and code-breaking facility.  The next year, the government began hiring mathematicians to seriously engage in code-breaking activities.  The work conducted at the manor has been credited with shortening the war in Europe and saving countless lives. It also resulted in the development of the first electronic computer.

Today, the Enigma Code, broken by a team led by Alan Turing, is the most celebrated of the German ciphers decrypted at Bletchley, but this was actually just one of several systems used by the Reich and was not even the most complicated.  In mid-1942, Germany initiated general use of the Lorenz Cipher, which was reserved for messages between the German High Command and high-level army commands, as the encryption machine — which the British code-named “Tunny” — was not easily portable like the Enigma Machine.  In 1942, Bletchley established a section dedicated to breaking the cipher, and by November a system called the “statistical method” had been developed by William Tutte to crack the code, which built on earlier work by Turing.  When Tutte presented his method, mathematician Max Newman decided to establish a new section — soon labelled the Newmanry — to apply the statistical method with electronic machines.  Newman’s first electronic codebreaking machine, the Heath Robinson, was both slow and unreliable, but it worked well enough to prove that Newman was on the right track.

Meanwhile, Flowers joined the code-breaking effort in 1941 when Alan Turing enlisted Dollis Hill to create some equipment for use in conjunction with the Bombe, his Enigma-cracking machine.  Turing was greatly impressed by Flowers, so when Dollis Hill encountered difficulty crafting a combining unit for the Heath Robinson, Turing suggested that Flowers be called in to help.  Flowers, however, doubted that the Heath Robinson would ever work properly, so in February 1943 he proposed the construction of an electronic computer to do the work instead.  Bletchley Park rejected the proposal based on existing prejudices over the unreliability of tubes, so Flowers began building the machine himself at Dollis Hill.  Once the computer was operational, Bletchley saw the value in it and accepted the machine.

Installed at Bletchley Park in January 1944, Flowers’s computer, dubbed Colossus, contained 1,600 vacuum tubes and processed 5,000 characters per second, a limit imposed not by the speed of the computer itself, but rather by the speed at which the reader could safely operate without risk of destroying the paper tape.  In June 1944, Flowers completed the first Colossus II computer, which contained 2,400 tubes and used an early form of shift register to perform five simultaneous operations and therefore operated at a speed of 25,000 characters per second.  The Colossi were not general-purpose computers, as they were dedicated solely to a single code-breaking operation, but they were program-controlled.  Unlike electro-mechanical computers, however, electronic computers process information too quickly to accept instructions from punched cards or paper tape, so the Colossus actually had to be rewired using plugs and switches to run a different program, a time-consuming process.

As the first programmable electronic computer, Colossus was an incredibly significant advance, but it ultimately exerted virtually no influence on future computer design.  By the end of the war, Bletchley Park was operating nine Colossus II computers alongside the original Colossus to break Tunny codes, but after Germany surrendered, Prime Minister Winston Churchill ordered the majority of the machines dismantled and kept the entire project classified.  It was not until the 1970s that most people knew that Colossus had even existed, and the full function of the machine remained unknown until 1996.  Therefore, instead of Flowers being recognized as the inventor of the electronic computer, that distinction was held for decades by a group of Americans working at the Moore School of the University of Pennsylvania.

ENIAC


The Electronic Numerical Integrator and Computer (ENIAC), the first widely known electronic computer

In 1935, the United States Army established a new Ballistic Research Laboratory (BRL) at the Aberdeen Proving Grounds in Maryland dedicated to calculating ballistics tables for artillery.  With modern guns capable of lofting projectiles at targets many miles away, properly aiming them required the application of complex differential equations, so the BRL assembled a staff of thirty to create trajectory tables for various ranges, which would be compiled into books for artillery officers.  Aberdeen soon installed one of Bush’s differential analyzers to help compute the tables, but the onset of World War II overwhelmed the lab’s capabilities.  Therefore, it began contracting out some of its table-making work to the Moore School, the closest institution with its own differential analyzer.
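
To give a sense of what computing a single trajectory entailed, here is a rough sketch of the underlying calculation — a toy model of my own with an invented drag constant and muzzle velocity, not the BRL's actual equations. Every entry in a firing table required integrating something like this, one small time step at a time.

```python
# Toy ballistics calculation: Euler integration of a shell with simple quadratic drag.
# The muzzle velocity and drag constant are invented for illustration only.
import math

def impact_range(muzzle_velocity, elevation_deg, drag=1e-4, dt=0.01):
    vx = muzzle_velocity * math.cos(math.radians(elevation_deg))
    vy = muzzle_velocity * math.sin(math.radians(elevation_deg))
    x = y = 0.0
    while y >= 0.0:                      # step forward until the shell comes down
        speed = math.hypot(vx, vy)
        vx -= drag * speed * vx * dt
        vy -= (9.81 + drag * speed * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

# A firing table is this computation repeated for many elevations, charges, and winds.
for elevation in (15, 30, 45, 60, 75):
    print(f"{elevation:2d} degrees: about {impact_range(450.0, elevation):,.0f} m")
```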

The Moore School of Electrical Engineering of the University of Pennsylvania enjoyed a fine reputation, but it carried nowhere near the prestige of MIT and therefore did not receive the same level of funding support from the War Department for military projects.  It did, however, place itself on a war footing by accelerating degree programs through the elimination of vacations and instituting a series of war-related training and research programs.  One of these was the Engineering, Science, Management, War Training (ESMWT) program, an intensive ten-week course designed to familiarize physicists and mathematicians with electronics to address a manpower shortfall in technical fields.  One of the graduates of this course was a physics instructor at a nearby college named John Mauchly.

Born in Cincinnati, Ohio, John William Mauchly grew up in Chevy Chase, Maryland, after his physicist father became the research chief for the Department of Terrestrial Magnetism of the Carnegie Institution, a foundation established in Washington, D.C. to support scientific research around the country.  Sebastian Mauchly specialized in recording atmospheric electrical conditions to further weather research, so John became particularly interested in meteorology.  After completing a Ph.D. at Johns Hopkins University in 1932, Mauchly took a position at Ursinus College, a small Philadelphia-area institution, where he studied the effects of solar flares and sunspots on long-range weather patterns.  Like Aiken and Zuse before him, Mauchly grew tired of solving the complex equations required for his research and began to dream of building a machine to automate this process.  After viewing an IBM electric calculating machine and a vacuum tube encryption machine at the 1939 World’s Fair, Mauchly felt electronics would provide the solution, so he began taking a night course in electronics and crafting his own experimental circuits and components.  In December 1940, Mauchly gave a lecture articulating his hopes of building a weather prediction computer to the American Association for the Advancement of Science.  After the lecture, he met an Iowa State College professor named John Atanasoff, who would play an important role in opening Mauchly to the potential of electronics by inviting him out to Iowa State to study a computer project he had been working on for several years.


The Atanasoff-Berry Computer (ABC), the first electronic computer project, which was never completed

A graduate of Iowa State College who earned a Ph.D. in theoretical physics from the University of Wisconsin-Madison in 1930, John Atanasoff, like Howard Aiken, was drawn to computing due to the frustration of solving equations for his dissertation.  In the early 1930s, Atanasoff experimented with tabulating machines and analog computing to make solving complex equations easier, culminating in a decision in December 1937 to create a fully automatic electronic digital computer.  Like Shannon and Zuse, Atanasoff independently arrived at binary digital circuits as the most efficient way to do calculations, remembering childhood lessons by his mother, a former schoolteacher, on calculating in base 2.  While he planned to use vacuum tubes for his calculating circuits, however, he rejected them for storage due to cost.  Instead, he developed a system in which paper capacitors would be attached to a drum that could be rotated by a bicycle chain.  By keeping the drums rotating so that the capacitors would sweep past electrically charged brushes once per second, Atanasoff believed he would be able to keep the capacitors charged and therefore create a low-cost form of electronic storage.  Input and output would be accomplished through punch cards or paper tape.  Unlike most of the other computer pioneers profiled so far, Atanasoff was only interested in solving a specific set of equations and therefore hardwired the instructions into the machine, meaning it would not be programmable.
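
The trick behind that rotating drum — what we would now call regenerative, or refreshed, memory — is simple enough to mock up. The leak rate and read threshold below are invented for illustration; the point is only that a leaky capacitor can hold a bit indefinitely as long as something tops it up on schedule.

```python
# Toy model of Atanasoff's regenerative capacitor memory (all numbers hypothetical).
LEAK_PER_SECOND = 0.4      # fraction of charge lost every second
READ_THRESHOLD = 0.5       # minimum charge still readable as a "1"

def charge_after(seconds, refresh_every=None):
    charge = 1.0
    for t in range(1, seconds + 1):
        charge *= 1.0 - LEAK_PER_SECOND             # the capacitor leaks
        if refresh_every and t % refresh_every == 0:
            charge = 1.0                            # the brush tops it back up
    return charge

def readable(charge):
    return charge >= READ_THRESHOLD

print("unrefreshed after 5 s: ", round(charge_after(5), 3),
      "readable:", readable(charge_after(5)))
print("refreshed every second:", charge_after(5, refresh_every=1),
      "readable:", readable(charge_after(5, refresh_every=1)))
```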

By May 1939, Atanasoff was ready to put his ideas into practice, but he lacked electrical engineering skills himself and therefore needed an assistant to actually build his computer.  After securing a $650 grant from the Iowa State College Research Council, Atanasoff hired a graduate student named Clifford Berry on the recommendation of one of his colleagues.  A genius who graduated high school at sixteen, Berry had been an avid ham radio operator in his youth and worked his way through college at Iowa State as a technician for a local company called Gulliver Electric.  He graduated in 1939 at the top of his engineering school class.  The duo completed a small-scale prototype of Atanasoff’s concept in late 1939 and then secured $5,330 from a private foundation to begin construction of what they named the Atanasoff-Berry Computer (ABC), the first electronic computer to employ separate memory and computing elements and a binary system for processing instructions and storing data, predating Colossus by just a few years.  By 1942, the ABC was nearly complete, but it remained unreliable and was ultimately abandoned when Atanasoff left Iowa State for a wartime posting with the Naval Ordnance Laboratory.  With no other champion at the university, the ABC was cannibalized for parts for more important wartime projects, after which the remains were placed in a boiler room and forgotten.  Until a patent lawsuit brought renewed attention to the computer in the 1960s, few were aware the ABC had ever existed, but in June 1941 Mauchly visited Atanasoff and spent five days learning everything he could about the machine.  While there is still some dispute regarding how influential the ABC was on Mauchly’s own work, there is little doubt that at the very least the computer helped guide his own thoughts on the potential of electronics for computing.

Upon completing the ESMWT at the Moore School, Mauchly was offered a position on the school’s faculty, where he soon teamed with a young graduate student he met during the course to realize his computer ambitions.  John Presper Eckert was the only son of a wealthy real estate developer from Philadelphia and an electrical engineering genius who won a city-wide science fair at twelve years old by building a guidance system for model boats and made money in high school by building and selling radios, amplifiers, and sound systems.  Like Tommy Flowers in England, Eckert was a firm believer in the use of vacuum tubes in computing projects and worked with Mauchly to upgrade the differential analyzer by using electronic amplifiers to replace some of its components.  Meanwhile, Mauchly’s wife was running a training program for human computers, which the university was employing to work on ballistics tables for the BRL.  Even with the differential analyzer working non-stop and over two hundred human computers doing calculations by hand, a complete table of roughly 3,000 trajectories took the BRL thirty days to complete.  Mauchly was uniquely positioned in the organization to understand both the demands being placed on the Moore School’s computers and the technology that could greatly increase the efficiency of their work.  He therefore drafted a memorandum in August 1942 entitled “The Use of High Speed Vacuum Tube Devices for Calculating” in an attempt to interest the BRL in greatly speeding up artillery table creation through use of an electronic computer.

Mauchly submitted his memorandum to both the Moore School and the Army Ordnance Department and was ignored by both, most likely due to continued skepticism over the use of vacuum tubes in large-scale computing projects.  The paper did catch the attention of one important person, however: Lieutenant Herman Goldstine, a University of Michigan mathematics professor then serving as the liaison between the BRL and the Moore School human computer training program.  While not one of the initial recipients of the memo, Goldstine became friendly with Mauchly in late 1942 and learned of the professor’s ideas.  Aware of the acute manpower crisis faced by the BRL in creating its ballistic tables, Goldstine urged Mauchly to resubmit his memo and promised he would use all his influence to aid its acceptance.  Therefore, in April 1943, Mauchly submitted a formal proposal for an electronic calculating machine that was quickly approved and given the codename “Project PX.”


John Mauchly (right) and J. Presper Eckert, the men behind ENIAC

Eckert and Mauchly began building the Electronic Numerical Integrator and Computer (ENIAC) in autumn 1943 with a team of roughly a dozen engineers.  Mauchly remained the visionary of the project and was largely responsible for defining its capabilities, while the brilliant engineer Eckert turned that vision into reality.  ENIAC was a unique construction that had more in common with tabulating machines than later electronic computers, as the team decided to store numbers in decimal rather than binary and both stored and modified them in twenty accumulators, therefore failing to separate the memory and computing elements.  The machine was programmable, though like Colossus this could only be accomplished through rewiring, as the delay of waiting for instructions to be read from a tape reader was unacceptable in a machine operating at electronic speed.  The computer was powerful for its time, driven by 18,000 vacuum tubes, 70,000 resistors, 10,000 capacitors, 6,000 switches, and 1,500 relays, and could output a complete artillery table in just fifteen minutes.  The entire computer took up 1,800 square feet of floor space, consumed 150 kilowatts of power, and generated an enormous amount of heat.  Costing roughly $500,000, ENIAC was completed in November 1945 and successfully ran its first program the following month.

Unlike the previously discussed Z3, Colossus, and ABC computers, the ENIAC was announced to the general public with much fanfare in February 1946, was examined by many other scientists and engineers, and became the subject of a series of lectures held at the Moore School over eight weeks in the summer of 1946 in which other aspiring computer engineers could learn about the machine in detail.  While it was completed too late to have much impact on the war effort and exerted virtually no influence on future computers from a design perspective, the ENIAC stands as the most important of the early computers because it proved to the world at large that vacuum tube electronic computers were possible and served as the impetus for later computer projects.  Indeed, even before the ENIAC had been completed, Eckert and Mauchly were moving on to their next computer concept, which would finally introduce the last important piece of the computer puzzle: the stored program.

The First Stored Program Computers


The Manchester Small-Scale Experimental Machine (SSEM), the first stored-program computer to successfully run a program

As previously discussed, electronic computers like the Colossus and ENIAC were limited in their general utility because they could only be configured to run a different program by actually rewiring the machine, as there were no input devices capable of running at electronic speeds.  This bottleneck could be eliminated, however, if the programs themselves were also stored in memory alongside the numbers they were manipulating.  In theory, the binary numeral system made this feasible since the instructions could be represented through symbolic logic as a series of “yes or no,” “on or off,” “1 or 0” propositions, but in reality the amount of storage needed would overwhelm the technology of the day.  The mighty ENIAC with its 18,000 vacuum tubes could only store 200 characters in memory.  This was fine if all you needed to store were a few five or ten digit numbers at a time, but instruction sets would require thousands of characters.  By the end of World War II, the early computer pioneers of both Great Britain and the United States had begun tackling this problem independently.

The brilliant British mathematician Alan Turing, who has already been mentioned several times in this blog for both his code breaking and early chess programming feats, first articulated the stored program concept.  In April 1936, Turing completed a paper entitled “On Computable Numbers, with an Application to the Entscheidungsproblem” as a response to a lecture by Max Newman he attended at Cambridge in 1935.  In a time when the central computing paradigm revolved around analog computers tailored to specific problems, Turing envisioned a device called the Universal Turing Machine consisting of a scanner reading an endless roll of paper tape. The tape would be divided into individual squares that could either be blank or contain a symbol.  By reading these symbols based on a simple set of hardwired instructions and following any coded instructions conveyed by the symbols themselves, the machine would be able to carry out any calculation possible by a human computer, output the results, and even incorporate those results into a new set of calculations.  This concept of a machine reacting to data in memory that could consist of both instructions and numbers to be manipulated encapsulates the basic operation of a stored program computer.
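
To make the idea a little more concrete, here is a minimal sketch in Python of a machine of this kind: a table of rules drives a head that reads and writes symbols on an unbounded tape until it halts.  This is only an illustration, not anything from Turing's paper; the `run_turing_machine` helper and the example rule table (which simply adds one to a binary number) are hypothetical.

```python
# Hypothetical sketch of a tape-scanning machine: a rule table maps the
# current (state, symbol) pair to a symbol to write, a direction to move,
# and the next state.

def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """rules maps (state, symbol) -> (symbol to write, move, next state)."""
    cells = dict(enumerate(tape))            # a sparse, effectively endless tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")        # "_" marks a blank square
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example rule table: walk to the right end of the number, then carry 1s
# leftward to add one in binary.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "R", "halt"),
    ("carry", "_"): ("1", "R", "halt"),
}

print(run_turing_machine(rules, "1011"))     # prints 1100 (11 + 1 = 12)
```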

Turing was unable to act on his theoretical machine with the technology available to him at the time, but when he first saw the Colossus computer in operation at Bletchley Park, he realized that electronics would make such a device possible.  In 1945, Turing moved from Bletchley Park to the National Physical Laboratory (NPL), where late in the year he outlined the first relatively complete design for a stored-program computer.  Called the Automatic Computing Engine (ACE), the computer defined by Turing was ambitious for its time, leading others at the NPL to fear it could not actually be built.  The organization therefore commissioned a smaller test model instead called the Pilot ACE.  Ultimately, Turing left the NPL in frustration over the slow progress of building the Pilot ACE, which was not completed until 1950 and was therefore preceded by several other stored program computers.  As a result, Turing, despite being the first to articulate the stored program concept, exerted little influence over how it was actually implemented.

One of the first people to whom Turing gave a copy of his landmark 1936 paper was its principal inspiration, Max Newman.  Upon reading it, Newman became interested in building a Universal Turing Machine himself.  Indeed, he actually tried to interest Tommy Flowers in the paper while Flowers was building his Colossi for the Newmanry at Bletchley Park, but Flowers was an engineer, not a mathematician or logician, and by his own admission did not really understand Turing’s theories.  As early as 1944, however, Newman himself was expressing his enthusiasm about taking what had been learned about electronics during the war and establishing a project to build a Universal Turing Machine at the war’s conclusion.

In September 1945, Newman took the Fielden Chair of Mathematics at Manchester University and soon after applied for a grant from the Royal Society to establish the Computing Machine Laboratory at the university.  After the grant was approved in May 1946, Newman had portions of the dismantled Colossi shipped to Manchester for reference and began assembling a team to tackle a stored-program computer project.  Perhaps the most important members of the team were electrical engineers Freddie Williams and Tom Kilburn.  Building on their wartime radar work, the duo developed a storage method in which a cathode ray tube could “remember” a piece of information by firing an electron “dot” onto the surface of the tube, thus creating a persistent charge well.  By placing a metal plate against the surface of the tube, this data could be “read” in the form of a voltage pulse transferred to the plate whenever a charge well was created or eliminated by drawing or erasing a dot.  Originally conceived to eliminate stationary background objects from a radar display, a Williams tube could also serve as computer memory and store 1,024 characters.  Because any particular dot on the tube could be read at any given time, the Williams tube was an early form of random access memory (RAM).

In June 1948, Williams and Kilburn completed the Manchester Small-Scale Experimental Machine (SSEM), which was specifically built to test the viability of the Williams tube as a computer memory device.  While this computer contained only 550 tubes and was therefore not practical for actual computing projects, the SSEM was the first device in the world with all the characteristics of a stored program computer and proved the viability of Williams tube memory.  Building on this work, the team completed the Manchester Mark 1 computer in October 1949, which contained 4,050 tubes and improved the reliability of its memory by using custom-built CRTs from the British industrial conglomerate General Electric Company (GEC).


John von Neumann stands next to the IAS Machine, which he developed based on his consulting work on the Electronic Discrete Variable Automatic Computer (EDVAC), the first stored-program computer design produced in the United States

Meanwhile, at the Moore School, Eckert and Mauchly were already beginning to ponder building a computer superior to the ENIAC by the middle of 1944.  The duo felt the most serious limitation of the computer was its paltry storage, and like Newman in England, they turned to radar technology for a solution.  Before joining the ENIAC project, Eckert had devised the first practical method of eliminating stationary objects from a radar display using a device called a delay line.  Rather than displaying the result of a single pulse on the screen, the radar would compare two pulses, one of which was delayed by passing it through a column of mercury so that both pulses arrived at the same time; the screen then displayed only those objects that had changed location between the two pulses.  Eckert realized that using additional electronic components to keep the delayed pulse trapped in the mercury would allow the delay line to function as a form of computer memory.

The effort to create a better computer received a boost when Herman Goldstine had a chance encounter with mathematician John von Neumann at the Aberdeen railroad station.  A brilliant Hungarian emigre based in Princeton, von Neumann was consulting on several government war programs, including the Manhattan Project, but had not been aware of the ENIAC.  When Goldstine started discussing the computer on the station platform, von Neumann took an immediate interest and asked for access to the project.  Impressed by what he saw, von Neumann not only used his influence to help gain the BRL’s approval for Project PY to create the improved machine, he also held several meetings with Eckert and Mauchly in which he helped define the basic design of the computer.

The extent of von Neumann’s contribution to the Electronic Discrete Variable Automatic Computer (EDVAC) remains controversial.  Because the eminent scientist penned the first published general overview of the computer in May 1945, entitled “First Draft of a Report on the EDVAC,” the stored program concept articulated therein came to be called the “von Neumann architecture.”  In truth, the realization that the increased memory provided by mercury delay lines would allow both instructions and numbers to be stored in memory occurred during meetings between Eckert, Mauchly, and von Neumann, and his contributions were probably not definitive.  Von Neumann did, however, play a critical role in defining the five basic elements of the computer — the input, the output, the control unit, the arithmetic unit, and the memory — which remain the basic building blocks of the modern computer.  It is also through von Neumann, who was keenly interested in the human brain, that the term “memory” entered common use in a computing context.  Previously, everyone from Babbage forward had used the term “storage” instead.

The EDVAC project commenced in April 1946, but the departure of Eckert and Mauchly with most of their senior engineers soon after disrupted the project, so the computer was not completed until August 1949 and only became fully operational in 1951 after several problems with the initial design were solved.  It contained 6,000 vacuum tubes, 12,000 diodes, and two sets of 64 mercury delay lines capable of storing eight characters per line, for a total storage capacity of 1,024 characters.  Like the ENIAC, EDVAC cost roughly $500,000 to build.


The Electronic Delay Storage Automatic Calculator (EDSAC)

Because of the disruptions caused by Eckert and Mauchly’s departures, the EDVAC was not actually the first completed stored program computer conforming to von Neumann’s report.  In May 1946, computing entrepreneur L.J. Comrie visited the Moore School to view the ENIAC and came away with a copy of the von Neumann EDVAC report.  Upon his return to England, he brought the report to physicist Maurice Wilkes, who had helped establish a computing laboratory at Cambridge in 1937 but had made little progress in computing before World War II.  Wilkes devoured the report in an evening and then paid his own way to the United States so he could attend the Moore School lectures.  Although he arrived late and only managed to attend the final two weeks of the course, Wilkes was inspired to initiate his own stored-program computer project at Cambridge, the Electronic Delay Storage Automatic Calculator (EDSAC).  Unlike the competing computer projects at the NPL and Manchester University, Wilkes decided that completing a computer was more important than advancing computer technology and therefore chose to create a machine of only modest capability and to use delay line memory rather than the newer Williams tubes developed at Manchester.  While this resulted in a less powerful computer than some of its contemporaries, it did allow the EDSAC to become the first practical stored-program computer when it was completed in May 1949.

Meanwhile, after concluding his consulting work at the Moore School, John von Neumann established his own stored-program computer project in late 1945 at the Institute for Advanced Study (IAS) in Princeton, New Jersey.  Primarily designed by Julian Bigelow, the IAS Machine employed 3,000 vacuum tubes and could hold 4,096 40-bit words in its Williams tube memory.  Although not completed until June 1952, the functional plan of the computer was published in the late 1940s and widely disseminated.  As a result, the IAS Machine became the template for many of the scientific computers built in the 1950s, including the MANIAC, JOHNNIAC, MIDAC, and MIDSAC machines that hosted some of the earliest computer games.

With the Moore lectures about the ENIAC and the publication of the IAS specifications helping to spread interest in electronic computers across the developed world and the EDSAC computer demonstrating that crafting a reliable stored program computer was possible, the stage was now set for the computer to spread beyond a few research laboratories at prestigious universities and become a viable commercial product.

Searching for Bobby Fischer

Before leaving the 1950s behind, we now turn to the most prolific computer game concept of the decade: chess.  While complex simulations drove the majority of AI research in the military-industrial complex during the decade, the holy grail for much of academia was a computer that could effectively play this venerable strategy game.  As Alex Bernstein and Michael de V. Roberts explained in Scientific American in June 1958, chess is a perfect game to build an intelligent computer program around because the rules are straightforward and easy to implement, yet playing out every possible scenario at a rate of one million complete games per second would take a computer 10^108 years.  While modern computers play chess at a high level despite this combinatorial explosion, the machines available in the 1950s and 1960s could never hope to work through more than a tiny fraction of these possibilities in a reasonable timeframe, meaning they actually needed to learn to react and adapt to a human player to win rather than just drawing on a stock of stored knowledge.  Charting the complete course of the quest to create a perfect chess-playing computer is beyond the scope of this blog, but since chess computer games have been popular entertainment programs as well as platforms for AI research, it is worth taking a brief look at the path to the very first programs to successfully play a complete game of chess.  The Computer History Museum presents a brief history of computer chess on its website called Mastering the Game, which will provide the framework for most of this examination.

El Ajedrecista (1912)


Leonardo Torres y Quevedo (left) demonstrates his chess-playing automaton

According to scholar Nick Montfort in his monograph on interactive fiction, Twisty Little Passages (2005), credit for the first automated chess-playing machine goes to a Spanish engineer named Leonardo Torres y Quevedo, who constructed an electro-mechanical contraption in 1912 called El Ajedrecista (literally “the chessplayer”) that simulated a KRK chess endgame, in which the machine attempted to mate the player’s lone king with its own king and rook.  First demonstrated publicly in 1914 in Paris and subsequently described in Scientific American in 1915, El Ajedrecista not only calculated moves, but actually moved the pieces itself using a mechanical arm.  A second version constructed in 1920 eliminated the arm and moved pieces via magnets under the board instead.  Montfort believes this machine should qualify as the very first computer game, but the lack of any electronics, a key component of every modern definition of a computer game (though not a requirement for a machine to be classified as an analog computer), makes this contention problematic, though perhaps technically correct.  Regardless of how one chooses to classify Torres y Quevedo’s contraption, however, it would be nearly four decades before anyone took up the challenge of computer chess again.

Turochamp and Machiavelli (1948)


Alan Turing, father of computer science and computer chess pioneer

As creating a viable chess program became one of the long-standing holy grails of computer science, it is only fitting that the man considered the father of that field, Alan Turing, was also the first person to approach the problem.  Both the Computer History Museum and Replay state that in 1947 Turing became the first person to write a complete chess program, but that it proved so complex that no existing computer possessed sufficient memory to run it.  While this account contains some truth, it does not appear to be fully accurate.

As recounted by Andrew Hodges in the definitive Turing biography Alan Turing: The Enigma (1983), Turing had begun fiddling around with chess as early as 1941, but he did not sketch out a complete program until later in the decade, when he and economist David Champernowne developed a set of routines they called Turochamp.  While it is likely that Turing and Champernowne were actively developing this program in 1947, Turing did not actually complete Turochamp until late 1948, after hearing about a rival chess-playing program called Machiavelli written by his colleagues Donald Michie and Shaun Wylie.  This is demonstrated by a letter from September 1948 reprinted by Hodges in which Turing directly states that he had never actually written out the complete chess program, but would be doing so shortly.  Copeland also gives a 1948 date for the completion of Turochamp in The Essential Turing.

This may technically make Machiavelli the first completed chess program, though Michie relates in Alan M. Turing (1959), a biography written by the subject’s own mother, that Machiavelli was inspired by the Turochamp program already under development.  It is true that Turochamp (and presumably Machiavelli as well) never actually ran on a computer, but apparently Turing began implementing it on the Ferranti Mark 1 before his untimely death.  Donovan goes on to say that Turing tested out the program by playing the role of the computer himself in a single match in 1952 that the program lost, but Hodges records that the program played an earlier simulated game in 1948 against Champernowne’s wife, a chess novice, who lost to the program.

Programming a Computer for Playing Chess, by Claude Shannon (1950)


Claude Shannon (right) demonstrates a chess-playing automaton of his own design to chess champion Edward Lasker

While a fully working chess game would not arrive for another decade, key theoretical advances were made over 1949 and 1950 by another pioneer of computer science, Claude Shannon.  Shannon was keenly interested in the chess problem and actually built an “electric chess automaton” in 1949 — described in Vol. 12 No. 4 of the International Computer Chess Association (ICCA) Journal (1989) — that could handle six pieces and was used to test programming methods.

His critical contribution, however, was an article he wrote for Philosophical Magazine in 1950 entitled “Programming a computer for playing chess.” While Shannon’s paper did not actually outline a specific chess program, it was the first attempt to systematically identify some of the basic problems inherent in constructing such a program and proffered several solutions.  As Allen Newell, J.C. Shaw, and H.A. Simon relate in their chapter for the previously mentioned landmark AI anthology Computers and Thought, “Chess-Playing Programs and the Problem of Complexity,” Shannon was the first person to recognize that a chess game consists of a finite series of moves that will ultimately terminate in one of three states for a player: a win, a loss, or a draw.  As such, a game of chess can be viewed as a decision tree in which each node represents a specific board layout and each branch from that node represents a possible move.  By working backwards from the bottom of the tree, a player would know the best move to make at any given time.  This concept, called minimaxing in game theory, would conceivably allow a computer to play a perfect game of chess every time.

Of course, as we already discussed, chess may have a finite number of possible moves, but that number is still so large that no computer could conceivably work through every last move in time to actually play a game.  Shannon recognized this problem and proposed that a program should only track moves to a certain depth on the tree and then choose the best alternative available under the circumstances.  That choice would be made by evaluating a series of static factors, such as the value and mobility of pieces, weighted based on their importance in the decision-making process of actual expert chess players, and then combining these values with a minimaxing procedure to pick a move.  The concept of evaluating the decision tree to a set depth and then using a combination of minimaxing and best value would inform all the significant chess programs that followed in the next decade.
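
As a rough illustration of the idea, the sketch below minimaxes over a small game tree whose leaves hold static evaluation scores for the positions reached at the search horizon.  The tree and the scores are made up for illustration and do not come from Shannon's paper or any actual chess position.

```python
# Minimax over a hypothetical game tree: interior nodes are lists of the
# replies available after each candidate move, and leaves hold the static
# evaluation (material, mobility, and so on, boiled down to one number).

def minimax(node, maximizing=True):
    if not isinstance(node, list):          # a leaf: return its static value
        return node
    values = [minimax(child, not maximizing) for child in node]
    # The side to move picks its best outcome; the opponent picks our worst.
    return max(values) if maximizing else min(values)

# Two plies of lookahead: we choose among three candidate moves, then the
# opponent replies.
tree = [
    [ 3, 12,  8],    # replies available after our first candidate move
    [ 2,  4,  6],    # after our second candidate move
    [14,  5,  2],    # after our third candidate move
]
# The opponent will answer each candidate with its own best reply (3, 2, 2),
# so minimaxing tells us the first candidate is the strongest choice.
print(minimax(tree))   # prints 3
```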

Partial Chess-Playing Programs (1951-1956)


Paul Stein (seated) plays chess against a program written for the MANIAC computer

The complexities inherent in programming a working chess-playing AI that adhered to Shannon’s principles guaranteed it would be nearly another decade before a fully working chess program emerged, but in the meantime researchers were able to implement more limited chess programs by focusing on specific scenarios or by removing specific aspects of the game.  Dr. Dietrich Prinz, a follower of Turing who led the development of the Ferranti Mark 1, created the first such program to actually run on a computer.  According to Copeland and Diane Proudfoot in their online article Alan Turing: Father of the Modern Computer, Prinz’s program first ran in November 1951.  As the Computer History Museum explains, however, this program could not actually play a complete game of chess and instead merely simulated the “mate-in-two” problem; that is, it could identify the best move to make when two moves away from a checkmate.

In The Video Game Explosion, Ahl recognizes a 1956 program written for the MANIAC I at the Los Alamos Scientific Laboratory by James Kister, Paul Stein, Stanislaw Ulam, William Walden, and Mark Wells as the first chess-playing program, apparently missing the Prinz game.  Los Alamos had been at the forefront of digital computing almost from its inception, as the lab had used the ENIAC, one of the first Turing-complete digital computers, to perform calculations and run simulations for research relating to the atomic bomb.  As a result, Los Alamos personnel kept a close watch on advances in stored program computers in the late 1940s and early 1950s and decided to construct their own as they raced to complete the first thermonuclear weapon, colloquially known as a “hydrogen bomb.”  Designed by a team led by Nicholas Metropolis, the Mathematical Analyzer, Numerical Integrator, and Computer, or MANIAC, ran its first program in March 1952 and was put to work on a wide variety of physics experiments over the next five years.

While MANIAC was primarily used for weapons research, the scientists at Los Alamos implemented game programs on more than one occasion.  According to a brief memoir published by Jeremy Bernstein in 2012 in the London Review of Books, many of the Los Alamos scientists were drawn to the card tables of the casinos of Las Vegas, Nevada.  Therefore, when they heard that four soldiers at the Aberdeen Proving Ground had published an article called “The Optimum Strategy in Blackjack” in the Journal of the American Statistical Association in 1956, they immediately created a program on the MANIAC to run tens of thousands of blackjack hands to see if the strategy actually worked.  (Note: Ahl and a small number of other sources allude to a blackjack game being created at Los Alamos on an IBM 701 computer in 1954, but I have been unable to substantiate this claim in primary sources, leading me to wonder if these authors have confused some other experiment with the 1956 blackjack program on the MANIAC.)  It is thus no surprise that scientists at the lab decided to create a chess program as well.

Unlike Prinz’s program, the MANIAC program could play a complete game of chess, but the programmers were only able to accomplish this feat using a simplified 6×6 board without bishops.  The program did, however, implement Shannon’s system of calculating all possible moves over two levels of the decision tree and then using static factors and minimaxing to determine its next move.  Running on a machine capable of performing roughly 11,000 operations per second, the program played only three games and was estimated by Shaw to have the skill of a human player with about twenty games’ experience.  By the time Shaw’s article was published in 1961, the program apparently no longer existed.  Presumably it was lost when the original MANIAC was retired in favor of the MANIAC II in 1957.

The Bernstein Program (1957)


Alex Bernstein with his chess program in 1958

A complete chess playing program finally emerged in 1957 from IBM, implemented by Alex Bernstein with the help of Michael de V. Roberts, Timothy Arbuckle, and Martin Belsky.  Like the MANIAC game, Bernstein’s program only examined two levels of moves, but rather than exploring every last possibility, his team instead programmed the computer to examine only the seven most plausible moves, determined by operating a series of what Shaw labels “plausible move generators” that identified the best moves based on specific goals such as king safety or prioritizing attack or defense.  After cycling through these generators, the program picked seven plausible continuations and then made a decision based on minimaxing and static factors just like the MANIAC program.  It did so much more efficiently, however, as it considered only about 2,500 of over 800,000 possible permutations.  Running on the faster IBM 704, which could handle 42,000 operations per second, the program was nonetheless slowed by the added complexity of the full 8×8 board, which rendered much of this speed advantage moot, and it still took about eight minutes to make a move compared to twelve for the MANIAC program.  According to Shaw, Bernstein’s program played at the level of a “passable amateur,” but exhibited surprising blind spots due to the limitations of its move analysis.  It apparently never actually defeated a human opponent.
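
The sketch below illustrates that selective idea: a handful of hypothetical generator functions each nominate candidate moves for one goal, and only the first seven distinct nominations go forward to the deeper minimax and static-factor evaluation.  The `plausible_moves` helper, the generator names, and the move labels are illustrative assumptions, not Bernstein's actual code.

```python
# Hypothetical "plausible move generators": each chases one goal and
# proposes candidate moves; only the first seven distinct suggestions
# are searched further, instead of every legal move.

def plausible_moves(position, generators, limit=7):
    candidates = []
    for generate in generators:
        for move in generate(position):
            if move not in candidates:
                candidates.append(move)
            if len(candidates) == limit:
                return candidates
    return candidates

# In the real program these generators encoded goals such as king safety,
# development, or attack; here they just return fixed move labels.
king_safety = lambda pos: ["O-O", "Kh1"]
development = lambda pos: ["Nf3", "Nc3", "Bc4", "Bg5"]
center_play = lambda pos: ["e4", "d4", "Nf3"]

moves = plausible_moves(None, [king_safety, development, center_play])
print(moves)   # seven candidates kept out of the many legal moves
```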

The NSS Chess Program (1958)


Herbert Simon (left) and Allen Newell (right), two-thirds of the team that created the NSS program

We end our examination of 1950s computer chess with the NSS chess program that emerged from Carnegie-Mellon University.  Allen Newell and Herbert Simon, professors at the university who consulted for the RAND Corporation, were keenly interested in AI and joined with a RAND employee named Cliff Shaw in 1955 to fashion a chess program of their own.  According to their essay in Computers and Thought, the trio actually abandoned the project within a year to focus on writing programs for discovering symbolic logic proofs, but subsequently returned to their chess work and completed the program in 1958 on the JOHNNIAC, a stored program computer built by the RAND Corporation and operational between 1953 and 1966.  According to an essay by Edward Feigenbaum called “What Hath Simon Wrought?” in the 1989 anthology Complex Information Processing: The Impact of Herbert A. Simon, Newell and Shaw handled most of the actual development work, while Simon immersed himself in the game of chess itself in order to imbue the program with as much chess knowledge as possible.

The resulting program, with a name derived from the authors’ initials, improved upon both the MANIAC and Bernstein programs.  Like the Bernstein program, the NSS program used a combination of minimaxing, static value, and a plausible move generator to determine the best move to make, but Newell, Simon, and Shaw added an important new wrinkle to the process through a “branch and bounds” method similar to the technique that later researchers termed “alpha-beta pruning.”  Using this method, the program tracked two running values as it searched, alpha and beta, representing the best outcomes each side was already guaranteed elsewhere in the tree, and it stopped exploring any branch that could not improve on those bounds.  In this way, the program was able to consider far fewer moves than previous minimaxing-based programs, yet it mostly ignored poor lines of play rather than valuable ones.  While this still resulted in a program that played at an amateur level, the combination of minimaxing and alpha-beta pruning provided a solid base for computer scientists to carry chess research into the 1960s.
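
As a rough sketch of the pruning idea, the code below layers alpha-beta bounds onto the same kind of nested-list game tree used in the minimax example earlier.  This is a standard textbook formulation, not the NSS program's actual "branch and bounds" bookkeeping, which differed in detail.

```python
# Alpha-beta pruning over a hypothetical game tree: alpha is the best
# result the maximizing side is already assured of, beta the best result
# the minimizing side is assured of; branches outside that window are
# skipped because they cannot change the final choice.

def alpha_beta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if not isinstance(node, list):                 # a leaf: return its static value
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alpha_beta(child, alpha, beta, False))
            alpha = max(alpha, value)              # best result we are already assured of
            if alpha >= beta:                      # the opponent would never allow this line
                break                              # so skip the remaining siblings
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alpha_beta(child, alpha, beta, True))
            beta = min(beta, value)                # best result the opponent is assured of
            if alpha >= beta:
                break
        return value

# Same illustrative tree as before: the answer matches plain minimax (3),
# but the replies 4 and 6 under the second candidate move are never
# examined, because its first reply (2) already guarantees us less than
# the 3 we are assured of by the first candidate.
tree = [
    [ 3, 12,  8],
    [ 2,  4,  6],
    [14,  5,  2],
]
print(alpha_beta(tree))   # prints 3
```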