
Searching for Bobby Fischer

Before leaving the 1950s behind, we now turn to the most prolific computer game concept of the decade: chess.  While complex simulations drove the majority of AI research in the military-industrial complex during the decade, the holy grail for much of academia was a computer that could effectively play this venerable strategy game.  As Alex Bernstein and Michael de V. Roberts explained in Scientific American in June 1958, chess is a perfect game to build an intelligent computer program around: the rules are straightforward and easy to implement, but playing out every possible scenario at a rate of one million complete games per second would take a computer 10^108 years.  While playing chess well poses no real challenge for modern computers, the machines available in the 1950s and 1960s could never hope to complete such an analysis in a reasonable timeframe, meaning they actually needed to learn to react and adapt to a human player to win rather than just drawing on a stock of stored knowledge.  Charting the complete course of the quest to create a perfect chess-playing computer is beyond the scope of this blog, but since chess computer games have been popular entertainment programs as well as platforms for AI research, it is worth taking a brief look at the path to the very first programs to successfully play a complete game of chess.  The Computer History Museum presents a brief history of computer chess on its website called Mastering the Game, which will provide the framework for most of this examination.

El Ajedrecista (1912)


Leonardo Torres y Quevedo (left) demonstrates his chess-playing automaton

According to scholar Nick Montfort in his monograph on interactive fiction, Twisty Little Passages (2005), credit for the first automated chess-playing machine goes to a Spanish engineer named Leonardo Torres y Quevedo, who constructed an electro-mechanical contraption in 1912 called El Ajedrecista (literally “the chessplayer”) that simulated a KRK chess endgame, in which the machine attempted to mate the player’s lone king with its own king and rook.  First demonstrated publicly in 1914 in Paris and subsequently described in Scientific American in 1915, El Ajedrecista not only calculated moves, but actually moved the pieces itself using a mechanical arm.  A second version constructed in 1920 eliminated the arm and moved pieces via magnets under the board instead.  Montfort believes this machine should qualify as the very first computer game, but a lack of any electronics, a key component of every modern definition of a computer game — though not a requirement for a machine to be classified as an analog computer — makes this contention problematic, though perhaps technically correct.  Regardless of how one chooses to classify Torres y Quevedo’s contraption, however, it would be nearly four decades before anyone took up the challenge of computer chess again.

Turochamp and Machiavelli (1948)


Alan Turing, father of computer science and computer chess pioneer

As creating a viable chess program became one of the long-standing holy grails of computer science, it is only fitting that the man considered the father of that field, Alan Turing, was also the first person to approach the problem.  Both the Computer History Museum and Replay state that in 1947 Turing became the first person to write a complete chess program, but it proved so complex that no existing computer possessed sufficient memory to run it.  This account contains some truth, but it does not appear to be fully accurate.

As recounted by Andrew Hodges in the definitive Turing biography Alan Turing: The Enigma (1983), Turing had begun fiddling around with chess as early as 1941, but he did not sketch out a complete program until later in the decade, when he and economist David Champernowne developed a set of routines they called Turochamp. While it is likely that Turing and Champernowne were actively developing this program in 1947, Turing did not actually complete Turochamp until late 1948, after hearing about a rival chess-playing program called Machiavelli written by his colleagues Donald Michie and Shaun Wylie.  This is demonstrated by a letter from September 1948 reprinted in the book, in which Turing directly states that he had never actually written out the complete chess program but would be doing so shortly.  B. Jack Copeland also gives a 1948 date for the completion of Turochamp in The Essential Turing.

This may technically make Machiavelli the first completed chess program, though Michie relates in Alan M. Turing (1959), a biography written by the subject’s own mother, that Machiavelli was inspired by the already-in-development Turochamp.  It is true that Turochamp — and presumably Machiavelli as well — never actually ran on a computer, but apparently Turing began implementing it on the Ferranti Mark 1 before his untimely death.  Donovan states in Replay that Turing tested out the program by playing the role of the computer himself in a single match in 1952 that the program lost, but Hodges records that the program played an earlier simulated game in 1948 against Champernowne’s wife, a chess novice, who lost to the program.

Programming a Computer for Playing Chess, by Claude Shannon (1950)


Claude Shannon (right) demonstrates a chess-playing automaton of his own design to chess champion Edward Lasker

While a fully working chess game would not arrive for another decade, key theoretical advances were made over 1949 and 1950 by another pioneer of computer science, Claude Shannon.  Shannon was keenly interested in the chess problem and actually built an “electric chess automaton” in 1949 — described in Vol. 12 No. 4 of the International Computer Chess Association (ICCA) Journal (1989) — that could handle six pieces and was used to test programming methods.

His critical contribution, however, was an article he wrote for Philosophical Magazine in 1950 entitled “Programming a computer for playing chess.” While Shannon’s paper did not actually outline a specific chess program, it was the first attempt to systematically identify some of the basic problems inherent in constructing such a program and proffered several solutions.  As Allen Newell, J.C. Shaw, and H.A. Simon relate in their chapter for the previously mentioned landmark AI anthology Computers and Thought, “Chess-Playing Programs and the Problem of Complexity,” Shannon was the first person to recognize that a chess game consists of a finite series of moves that will ultimately terminate in one of three states for a player: a win, a loss, or a draw.  As such, a game of chess can be viewed as a decision tree in which each node represents a specific board layout and each branch from that node represents a possible move.  By working backwards from the bottom of the tree, a player would know the best move to make at any given time.  This concept, called minimaxing in game theory, would conceivably allow a computer to play a perfect game of chess every time.

Of course, as we already discussed, chess may have a finite number of possible moves, but that number is still so large that no computer could conceivably work through every last move in time to actually play a game.  Shannon recognized this problem and proposed that a program should only track moves to a certain depth on the tree and then choose the best alternative under the circumstances, which would be determined by evaluating a series of static factors such as the value and mobility of pieces — weighted based on their importance in the decision-making process of actual expert chess players — and combining these values with a minimaxing procedure to pick a move.  The concept of evaluating the decision tree to a set depth and then using a combination of minimaxing and best value would inform all the significant chess programs that followed in the next decade.
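Shannon's depth-limited scheme can be sketched in a few lines of code.  This is an illustrative toy, not Shannon's actual program: the game tree is an abstract nested list in which interior nodes are lists of possible continuations and leaves are numeric static evaluations, and the stand-in evaluator for positions cut off at the depth limit is entirely invented.

```python
def static_eval(node):
    # Stand-in static evaluation: a real chess program would weigh material,
    # mobility, and other positional factors; here numeric leaves simply
    # carry their score, and a subtree cut off at the depth limit scores 0.
    return node if not isinstance(node, list) else 0

def minimax(node, depth, maximizing=True):
    """Search a nested-list game tree to a fixed depth, then minimax the
    static evaluations back up: the mover picks the score best for itself,
    while the opponent is assumed to pick the reply worst for the mover."""
    if depth == 0 or not isinstance(node, list):
        return static_eval(node)
    scores = [minimax(child, depth - 1, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)
```

For example, `minimax([[3, 5], [2, 9]], depth=2)` returns 3: the opponent would hold the first candidate move to a score of 3 and the second to 2, so the mover chooses the first.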

Partial Chess-Playing Programs (1951-1956)


Paul Stein (seated) plays chess against a program written for the MANIAC computer

The complexities inherent in programming a working chess-playing AI that adhered to Shannon’s principles guaranteed it would be nearly another decade before a fully working chess program emerged, but in the meantime researchers were able to implement more limited chess programs by focusing on specific scenarios or by removing specific aspects of the game. Dr. Dietrich Prinz, a follower of Turing who led the development of the Ferranti Mark 1, created the first such program to actually run on a computer.  According to Copeland and Diane Proudfoot in their online article Alan Turing: Father of the Modern Computer, Prinz’s program first ran in November 1951.  As the Computer History Museum explains, however, this program could not actually play a complete game of chess and instead merely simulated the “mate-in-two problem,” that is, it could identify the best move to make when two moves away from a checkmate.

In The Video Game Explosion, Ahl recognizes a 1956 program written for the MANIAC I at the Los Alamos Atomic Energy Laboratory by James Kister, Paul Stein, Stanislaw Ulam, William Walden, and Mark Wells as the first chess-playing program, apparently missing the Prinz game.  Los Alamos had been at the forefront of digital computing almost from its inception, as the lab had used the ENIAC, one of the first Turing-complete digital computers, to perform calculations and run simulations for research relating to the atomic bomb.  As a result, Los Alamos personnel kept a close watch on advances in stored program computers in the late 1940s and early 1950s and decided to construct their own as they raced to complete the first thermonuclear weapon, colloquially known as a “hydrogen bomb.”  Designed by a team led by Nicholas Metropolis, the Mathematical Analyzer, Numerical Integrator, and Computer, or MANIAC, ran its first program in March 1952 and was put to a wide variety of physics experiments over the next five years.

While MANIAC was primarily used for weapons research, the scientists at Los Alamos implemented game programs on more than one occasion.  According to a brief memoir published by Jeremy Bernstein in 2012 in the London Review of Books, many of the Los Alamos scientists were drawn to the card tables of the casinos of nearby Las Vegas, Nevada.  Therefore, when they heard that four soldiers at the Aberdeen Proving Ground had published an article called “The Optimum Strategy in Blackjack” in the Journal of the American Statistical Association in 1956, they immediately created a program on the MANIAC to run tens of thousands of Blackjack hands to see if the strategy actually worked. (Note: Ahl and a small number of other sources allude to a Blackjack game being created at Los Alamos on an IBM 701 computer in 1954, but I have been unable to substantiate this claim in primary sources, leading me to wonder if these authors have confused some other experiment with the 1956 blackjack program on the MANIAC.)  It is therefore no surprise that scientists at the lab would decide to create a chess program as well.

Unlike Prinz’s program, the MANIAC program could play a complete game of chess, but the programmers were only able to accomplish this feat using a simplified 6×6 board without bishops.  The program did, however, implement Shannon’s system of calculating all possible moves over two levels of the decision tree and then using static factors and minimaxing to determine its next move.  Running on a machine capable of performing roughly 11,000 operations per second, the program played only three games and was estimated to have the skill of a human player with about twenty games of experience according to Shaw.  By the time Shaw’s article was published in 1961, the program apparently no longer existed.  Presumably it was lost when the original MANIAC was retired in favor of the MANIAC II in 1957.

The Bernstein Program (1957)


Alex Bernstein with his chess program in 1958

A complete chess playing program finally emerged in 1957 from IBM, implemented by Alex Bernstein with the help of Michael de V. Roberts, Timothy Arbuckle, and Martin Belsky.  Like the MANIAC game, Bernstein’s program only examined two levels of moves, but rather than exploring every last possibility, his team instead programmed the computer to examine only the seven most plausible moves, determined by operating a series of what Shaw labels “plausible move generators” that identified the best moves based on specific goals such as king safety or prioritizing attack or defense.  After cycling through these generators, the program picked seven plausible continuations and then made a decision based on minimaxing and static factors just like the MANIAC program.  It did so much more efficiently, however, as it considered only about 2,500 of over 800,000 possible permutations.  Running on the faster IBM 704 computer, the program could handle 42,000 operations per second, though according to Shaw the added complexity of using the full 8×8 board rendered much of this speed advantage moot and the program still took about eight minutes to make a move compared to twelve for the MANIAC program.  According to Shaw, Bernstein’s program played at the level of a “passable amateur,” but exhibited surprising blind spots due to the limitations of its move analysis.  It apparently never actually defeated a human opponent.
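The plausible-move-generator idea can be sketched as a simple filtering pass.  Everything concrete here is a hypothetical stand-in — the goal-directed generators, the move notation, and the candidate limit are invented for illustration, not taken from Bernstein's actual program.

```python
def plausible_moves(legal_moves, generators, limit=7):
    """Cycle through goal-directed generators (king safety, attack, and so
    on), collecting the moves each one nominates, preserving order and
    dropping duplicates, until the candidate limit is reached.  Only these
    candidates would then be handed to the minimaxing machinery."""
    candidates = []
    for generate in generators:
        for move in generate(legal_moves):
            if move not in candidates:
                candidates.append(move)
            if len(candidates) == limit:
                return candidates
    return candidates

# Two toy generators: one nominates castling (king safety), one captures.
GENERATORS = [
    lambda moves: [m for m in moves if m.startswith("O-O")],
    lambda moves: [m for m in moves if "x" in m],
]
```

With these stand-ins, `plausible_moves(["e4", "Nxf6", "O-O", "a3"], GENERATORS, limit=2)` keeps only the castling move and the capture, discarding the rest of the move list before any deeper search begins.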

The NSS Chess Program (1958)


Herbert Simon (left) and Allen Newell (right), two-thirds of the team that created the NSS program

We end our examination of 1950s computer chess with the NSS chess program that emerged from Carnegie-Mellon University.  Allen Newell and Herbert Simon, professors at the university who consulted for RAND Corporation, were keenly interested in AI and joined with a RAND employee named Cliff Shaw in 1955 to fashion a chess program of their own.  According to their essay in Computers and Thought, the trio actually abandoned the project within a year to focus on writing programs for discovering symbolic logic proofs, but subsequently returned to their chess work and completed the program in 1958 on the JOHNNIAC, a stored program computer built by the RAND Corporation and operational between 1953 and 1966.  According to an essay by Edward Feigenbaum called “What Hath Simon Wrought?” in the 1989 anthology Complex Information Processing: The Impact of Herbert A. Simon, Newell and Shaw handled most of the actual development work, while Simon immersed himself in the game of chess itself in order to imbue the program with as much chess knowledge as possible.

The resulting program, with a name derived from the authors’ initials, improved upon both the MANIAC and Bernstein programs. Like the Bernstein program, the NSS program used a combination of minimaxing, static value, and a plausible move generator to determine the best move to make, but Newell, Simon, and Shaw added an important new wrinkle to the process through a “branch and bounds” method similar to the technique that later researchers termed “alpha-beta pruning.”  Using this method, the program tracked two bounds as it searched: a lower bound, alpha, representing the best outcome it had already guaranteed for itself, and an upper bound, beta, representing the best outcome the opponent would allow, and it only explored branches that could still produce a result between the bounds established by previously explored branches.  In this way, the program was able to consider far fewer moves than previous minimaxing-based programs, yet it mostly ignored poor solutions rather than valuable ones.  While this still resulted in a program that played at an amateur level, the combination of minimaxing and alpha-beta pruning provided a solid base for computer scientists to carry chess research into the 1960s.
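The pruning idea can be sketched as follows, again over an abstract nested-list game tree (interior nodes are lists of children, leaves are numeric static scores).  This is modern textbook alpha-beta pruning offered as an illustration of the general technique, not a reconstruction of the NSS program's actual "branch and bounds" code.

```python
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Minimax with alpha-beta pruning.  alpha is the best score the
    maximizer can already guarantee, beta the best the minimizer can;
    once alpha meets or exceeds beta, the remaining siblings cannot
    affect the result and are skipped without being explored."""
    if not isinstance(node, list):          # leaf: static evaluation
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:               # opponent would never allow this line
                break
        return best
    best = float("inf")
    for child in node:
        best = min(best, alphabeta(child, alpha, beta, True))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best
```

The pruned search returns exactly the same value plain minimax would — `alphabeta([[3, 5], [2, 9]])` still yields 3 — but in the second branch it stops after seeing the leaf scoring 2, since the maximizer already has a line guaranteeing 3.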


Tennis Anyone?

So now we turn to the most discussed of all the 1950s computer games: Tennis for Two, designed by Willy Higinbotham and largely built by Robert Dvorak at the Brookhaven National Laboratory (BNL) in 1958.  Unlike the games discussed previously, Tennis for Two was built specifically to entertain the public rather than just to demonstrate the power of a computer or train a group of students, giving it some claim as the first true computer “game” from a philosophical standpoint.  That is certainly the contention of BNL itself, which dismisses NIMROD and OXO as programming demonstrations rather than entertainment.  Ultimately, this debate matters little, as Tennis for Two only existed briefly and did not influence later developments in the industry.

While Tennis for Two did not inspire later designers, however, it did gain a new notoriety in the 1970s when lawyers for arcade companies defending against a patent lawsuit brought by Magnavox discovered the existence of the game and unsuccessfully attempted to portray it as an example of prior art that invalidated Ralph Baer’s television gaming patents.  Higinbotham was called to testify on multiple occasions during various patent suits that continued into the 1980s, which is one reason the game is far better documented than most of its contemporaries.  The game also received public recognition after Creative Computing ran a feature devoted to it in October 1982 because the magazine’s editor, David Ahl, had actually played the game at Brookhaven back in 1958.  As a result of this article, Tennis for Two was considered the first computer game until more in-depth research in the late 2000s uncovered some of the earlier games listed in the previous post, and the early monographs such as Phoenix, High Score!, and The Ultimate History of Video Games accord the game pioneering status.  Even though newer works like Replay and All Your Base Are Belong to Us acknowledge earlier programs, however, they continue to give Tennis for Two pride of place in the early history of video games due to it arguably being the first pure entertainment product created on a computer.


William A. Higinbotham

Before diving into the game itself, we should examine the man who created it.  According to an unpublished account now hosted at the BNL site that he wrote in the early 1980s, supplemented by a deposition he gave in 1985, William A. Higinbotham graduated from Williams College in 1932 with a bachelor’s degree in physics and spent eight years working on a Ph.D. at Cornell that he ultimately abandoned due to a lack of money.  Higinbotham first worked with an oscilloscope during a senior honors project at Williams and spent his last six years at Cornell working as a technician in the physics department, which gave him the opportunity to learn a great deal about electronics.  As a result, he was invited to MIT in December 1940 to work on radar at the university’s Radiation Laboratory, where he concentrated on CRT radar displays.  In December 1943, Higinbotham transferred to Los Alamos to work on the Manhattan Project, where he was quickly promoted to lead the electronics division and, according to Replay, worked on timing circuits.  He left Los Alamos for Washington, DC, in December 1945, where he spent two years doing education and PR work for the Federation of American Scientists, a group that worked to stem nuclear proliferation.  In 1947, he came to BNL, where he became the head of instrumentation in 1951 or 1952.

The above provides a solid overview of Higinbotham the scientist, but Harold Goldberg in All Your Base Are Belong to Us also gives us a portrait of Higinbotham’s less serious side.  According to Goldberg, who drew his information from a profile in Parade, Willy was a natural entertainer who called square dances, played the accordion, and led a Dixieland band named the Isotope Stompers.  He also exhibited a penchant for making technology fun, once attaching a sulky and two wagons to the family lawnmower so he could drive his kids around the yard.  Seeing this side of the eminent physicist, it’s no surprise that he would find a way to make a computer entertaining as well.


The original Tennis for Two display

Higinbotham created Tennis for Two as a public relations vehicle.  Every year, BNL held three visitors’ days in the fall — one each for high school students, college students, and the general public — in which the scientists gave tours of the facilities and built exhibits related to the lab’s work in the staff gymnasium.  Most accounts of the exhibits emphasize that they consisted of unengaging static displays, but in his 1976 deposition for the first Magnavox patent lawsuit, Higinbotham states that the staff always tried to include something with “action,” though he does not specify whether this included games. Therefore, Higinbotham may not have been the first person to liven up the event through audience participation, but he was still definitely the first person to entertain the public with a computer game.

As the BNL website and his notes indicate, Higinbotham was inspired to create Tennis for Two after reading through the instruction manual for the lab’s Donner Model 30, a vacuum tube analog computer.  The manual described how the system could be hooked up to an oscilloscope to display curves to model a missile trajectory or a bouncing ball complete with an accurate simulation of gravity and wind resistance.  The bouncing ball reminded Higinbotham of tennis, so he sketched out a system to interface an oscilloscope with the computer and then gave the diagram to technician Robert Dvorak to implement.  Laying out the initial design only took Higinbotham a couple of hours, after which he spent a couple of days putting together a final spec based on the components available in the lab.  Dvorak then built the system over three weeks and spent a day or two debugging it with Higinbotham.  The game was largely driven by the vacuum tubes and relays that had defined electronics for decades, but in order to render graphics on the oscilloscope, which required rapidly switching between several different elements, Higinbotham and Dvorak incorporated transistors, which were just beginning to transform the electronics industry.

Tennis for Two‘s graphics consisted of a side-view image of a tennis court — rendered as a long horizontal line to represent the court itself and a small vertical line to represent the net — and a ball with a trajectory arc displayed on the oscilloscope.  Each player used a controller consisting of a knob and a button.  To start a volley, one player would use the knob to select an angle to hit the ball and then press the button.  At that point, the ball could either hit the net, hit the other side of the court, or sail out of bounds.  Once the ball made it over the net, the other player could hit the ball either on the fly or on the bounce by selecting his own angle and pressing the button to return it.  Originally, the velocity of the ball could be chosen by the player as well, but Higinbotham decided that three controls would make the game too complicated and therefore left the velocity fixed.
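The Donner analog computer solved the ball's equations of motion continuously in hardware; the same physics can be sketched digitally by stepping a projectile under gravity and damping its vertical speed on each bounce.  All the constants here (gravity, step size, damping factor, launch speed) are illustrative assumptions, not values from Higinbotham's circuit.

```python
import math

GRAVITY = -9.8   # m/s^2, downward acceleration
DAMPING = 0.7    # fraction of vertical speed kept after each bounce
DT = 0.01        # integration time step, in seconds

def simulate_shot(angle_deg, speed, steps=2000):
    """Trace a ball launched at the given angle and speed, bouncing with
    damping whenever it reaches the court surface at y = 0.  Returns the
    list of (x, y) points that would be drawn on the display."""
    vx = speed * math.cos(math.radians(angle_deg))
    vy = speed * math.sin(math.radians(angle_deg))
    x, y, path = 0.0, 0.0, []
    for _ in range(steps):
        vy += GRAVITY * DT          # gravity pulls the ball down
        x += vx * DT
        y += vy * DT
        if y < 0:                   # ball hits the court: bounce, losing energy
            y = 0.0
            vy = -vy * DAMPING
        path.append((x, y))
    return path
```

Changing `GRAVITY` is all it takes to model the Moon or Jupiter, which is essentially the enhancement Higinbotham added for the 1959 showing.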


A modern recreation of the Tennis for Two controller

According to Higinbotham, Tennis for Two was a great success, with long lines of eager players quickly forming to play the game.  Based on this positive reception, Higinbotham brought the game back in 1959 on a larger monitor and with more sophisticated gravity modelling that allowed the player to simulate the low gravity of the Moon or the high gravity environment of Jupiter.  After the second round of visitor days, the game was dismantled so its components could be put to other uses.  Higinbotham never patented the device because he felt at the time that he was just adapting the bouncing ball program already discussed in the manual and had created no real breakthrough.  While he appears to have been proud of creating the game, he stated in his notes that he considered it a “minor achievement” at best and wanted to be remembered as a scientist who fought the spread of nuclear weapons rather than as an inventor of a computer game.

By Any Other Name

In my first two posts, I explained my goals, my methodologies, and my sources of information, which leaves only one final introductory matter: what exactly are these “video game” things I say I will be writing about?  In today’s world, where over one billion people are estimated to be playing video games (Source: PC Gaming Alliance Research), this may seem like a needless exercise, but the truth is the term “video game” is too often thrown around with wild abandon without a clear idea of what it actually means and where it comes from.  Therefore, in the next few paragraphs, I will take a look at a few attempts to define the term and try to piece together a workable definition for this blog.

First, a historical note on the origin of the term “video game.”  From a legal standpoint, the concept of a “video game” first manifested in Ralph Baer’s 1972 patent for a “Television Gaming Apparatus” (U.S. Patent 3,659,285) and his 1973 patent for a “Television Gaming and Training Apparatus” (U.S. Patent 3,728,480).  While the term “video game” does not appear in either of these patents, they set out a basic game system in which a control unit is attached to a television receiver and then generates a video signal to create symbols on the TV screen.  In the first landmark patent case in video game law, The Magnavox Co., et al. v. Chicago Dynamic Industries, et al., 201 U.S.P.Q. 25 (N.D. Ill. 1977), Judge Grady named the ‘480 patent the “pioneer patent” in the field, making the Magnavox Odyssey technology the progenitor of video gaming in the home from a legal standpoint. (Source: Patent Arcade blog post)

Technically, the “video” in “video game” is derived from the idea of manipulating a video signal as described in the ‘480 patent.  By this narrow definition, a true video game would be one in which “electronic signals are converted to images on a screen using a raster pattern, a series of horizontal lines composed of individual pixels.” (Source: Brookhaven National Laboratory History of Tennis for Two) This narrow definition would eliminate any game that uses a teletype, oscilloscope, vector monitor, LCD screen, plasma display, etc., since they do not make use of a video signal.  From a technical standpoint, these games would be more properly characterized as “computer games” or “electronic games” rather than video games.  Popular sources aimed at the layman have almost never bothered with this technical distinction, however, so it serves more as an intellectual curiosity than a workable modern definition of the term.

Early video games were called by a variety of names before that term became well established.  In the arcades, for instance, it was common to refer to the products as “TV games,” highlighting the main feature that set these games apart from earlier coin-operated amusement products. (Source: Replay by Tristan Donovan, 2010)  Perhaps the earliest reference to a “video game” appeared in the March 17, 1973 issue of Cash Box magazine, which uses the term “video game” in a headline, though it appears to be an abbreviation in this case for the longer “video skill game” used in the article body. (Source: All in Color for a Quarter Blog)  By late 1974, it appears the term had gained at least some acceptance (see, for example, the September 17, 1974 edition of the Lakeland Ledger).  By the late 1970s, the term became standard. (Source: Replay)

As one would expect for such a new concept that is still evolving rapidly, there is no clear consensus yet in authoritative sources as to what a video game actually is.  Let’s start with two gold standards for English-language knowledge: the Oxford English Dictionary and Encyclopedia Britannica. The Oxford English Dictionary defines a video game thus:

a game played by electronically manipulating images produced by a computer program on a television screen or other display screen. (http://www.oxforddictionaries.com/us/definition/american_english/video-game)

Right away, two elements stand out as problematic: a computer program (i.e., software) needs to be involved, and the results need to be rendered on a screen. This definition is largely workable for today’s games, but it would actually exclude most of the important progenitors of the industry.  Both the Magnavox Odyssey and the Syzygy/Atari Computer Space and PONG units were created entirely through hardware designed to control a CRT to generate and move dots on a screen.  Arcade games did not start incorporating software until 1975, and it was not until 1978 that software began to displace Transistor-Transistor Logic (TTL) circuits entirely.  In the home, dedicated hardware was not complemented by programmability until 1976 and not displaced fully until a couple of years after that.  As for displays, most early mainframe computer games did not incorporate them, as displays were extremely rare on most systems until the early 1970s.  Instead, these systems tended to print results on paper via teletype.  As a result, this definition is not completely satisfactory, but we can draw three key concepts from it: manipulating images (i.e., interactivity), the presence of a computer, and some form of display.

Here is how Encyclopedia Britannica tackles the subject:

any interactive game operated by computer circuitry (http://library.eb.com/eb/article-9001562 [subscription required])

Note that Britannica lumps both “computer games” and “video games” under the catch-all header of “electronic games.”  As for computer circuitry, Britannica also has an article on that concept and defines it thus:

Complete path or combination of interconnected paths for electron flow in a computer. Computer circuits are binary in concept, having only two possible states. They use on-off switches (transistors) that are electrically opened and closed in nanoseconds and picoseconds (billionths and trillionths of a second). (http://library.eb.com/eb/article-9472097 [subscription required])

This general definition works a lot better.  Rather than identifying software as the key element, it identifies computer circuitry, which in this context means circuits incorporating transistors and logic gates.  This means that TTL games like PONG and games executed in software are both covered.  Again, the key elements of interactivity and a computer appear in this definition.

Here are a couple of additional definitions from reputable dictionaries just to paint a more complete picture of how video games are perceived today.  The Merriam-Webster Dictionary goes with “an electronic game in which players control images on a television or computer screen” (http://www.merriam-webster.com/dictionary/video%20game), while the American Heritage Dictionary claims a video game is “An electronic game played by manipulating moving figures on a display screen, often designed for play on a special gaming console rather than a personal computer.” (http://www.ahdictionary.com/word/search.html?q=video+game&submit.x=0&submit.y=0).  Both of these definitions emphasize that a video game is electronic, that is, it relies on parts such as transistors that manipulate electrons in order to function.  Like the Oxford definition, they also emphasize a screen.

So where does that leave us?  Clearly an object that satisfies the modern definition of a video game requires three core components: interactivity, a program run by hardware containing electronic logic circuits, and objects rendered on a display.  Therefore, these are the types of games this blog will cover, whether they be on mainframes, personal computers, arcade hardware, consoles, handheld systems, or mobile devices.  The only exceptions will be the simpler electronic games found in toy aisles that are generally considered “toys” rather than “video games” from a commercial and marketing standpoint and certain systems aimed at young children primarily used for education rather than pure entertainment.