IBM 704

People Get Ready, There’s a Train A-Coming

Before World War II, MIT had not been much of a digital computer hotspot.  While Howard Aiken at neighboring Harvard and John Atanasoff at Iowa State College were exploring digital machines for solving complex equations, MIT remained firmly planted in the analog world with Vannevar Bush’s differential analyzer.  During the war, however, the university became one of the primary centers for war-related scientific research.  From the development of fire control systems at the Servomechanisms Laboratory to the breakthroughs in radar delivered by the Radiation Laboratory, MIT secured its place in the military-industrial complex as a critical research hub and became deeply involved in digital computer design through Projects Whirlwind and SAGE.

As Project Whirlwind gathered steam in 1950, MIT provost Julius Stratton formed a committee chaired by physics professor Philip Morse to study the question of whether and how MIT should introduce a computer for general use by faculty and staff at the university.  In 1954, the committee returned a recommendation that MIT should build a Computation Center on campus “to aid faculty in keeping up to date on computer use within their fields and to assist them in introducing the use of computers into their courses; to educate all MIT students in computer use; and to explore and develop new ways of using computers in engineering and scientific research.” (Source: Guide to the Records of the Massachusetts Institute of Technology Computation Center)  After considering whether to re-purpose the Whirlwind I or invest in a commercial machine, Morse decided in July 1955 to recommend MIT acquire an IBM 704 computer — which he convinced the company to provide free of charge, though the machine would not be ready until 1957.  Formally announced on September 23, 1955, the Computation Center was incorporated into the forthcoming Building 26 as an 18,000 square foot area near the northwest corner of the building dedicated solely to housing the 704 computer. (Source: A Century of Electrical Engineering and Computer Science at MIT, 1882-1982 by Karl Wildes and Nilo Lindgren)  The center came online with the installation of the 704 in 1957 just as a new generation of college students who had received limited exposure to computers in the mid-1950s matriculated to MIT bound and determined to learn everything they could about the new machines.  The interaction of these students with MIT’s new computing resources ultimately resulted in the creation of the first widely disseminated computer game.

The Tech Model Railroad Club


Alan Kotok (seated right with glasses), TMRC member and early computer hacker

In September 1946, a group of 26 students (according to the membership rolls maintained by TMRC on its website) established a new organization on the MIT campus called the Tech Model Railroad Club (TMRC).  Located in Building 20, which had been built during World War II to house the Radiation Laboratory, TMRC dedicated itself to building and operating what quickly became an immense model railroad system.  As discussed in Steven Levy’s book Hackers: Heroes of the Computer Revolution, this work attracted two distinct types of students: the train and modelling buffs who would meticulously construct accurate railroad cars and elaborate scenery, and the electrical engineering buffs of the Signals and Power (S&P) Subcommittee who would constantly update and refine a track control system of impressive complexity described by Levy as appearing like “a collaboration between Rube Goldberg and Wernher von Braun.”  Spending long hours together under the train layout installing parts donated by Western Electric or scrounged from Eli Heffron’s junkyard in nearby Somerville, members of the S&P quickly bonded over shared interests and even developed their own lexicon.  For example, a person who studied instead of joining in the fun was called a “tool,” garbage was called “cruft,” and a clever project undertaken just for the fun of it was called a “hack.”  Ultimately, this group of tinkerers would launch the computer revolution referenced in the title of Levy’s book.

Hackers paints portraits of the key TMRC members who matriculated to MIT in 1958.  Foremost among them were Alan Kotok and Peter Samson.  According to Levy, Kotok grew up in the New Jersey suburbs of Philadelphia, where his parents realized he was an electrical engineering prodigy when he began building and wiring lamps at the age of six.  As touched on in Hackers and elaborated on in an oral history Kotok conducted with the Computer History Museum, Kotok’s first exposure to a computer was a high school field trip to a Socony-Mobil research laboratory in Paulsboro, NJ (Note: Hackers claims the facility was in nearby Haddonfield, but Kotok’s contention in his oral history that it was in Paulsboro appears to be accurate), where the students not only viewed a mainframe computer, but actually ran through a programming exercise using punched cards.  From that day forward, Kotok knew his future lay with computers, which is why he applied to MIT.  Interested in model railroads, Kotok quickly gravitated to TMRC, where according to Levy he was soon accounted one of the best electrical engineers in S&P.

Samson, on the other hand, was a local boy who grew up just thirty miles away from the university in Lowell, Massachusetts.  His first exposure to computers was a television program on the Boston public TV channel WGBH that gave a basic introduction to computer programming.  Inspired, he learned everything he could about computing and actually tried to build his own computer using relays pried out of pinball machines.  He also viewed computers on trips to MIT, where he resolved to continue his education after high school.  Samson joined TMRC on the first day of Freshman orientation in Fall 1958 and was instantly hooked when he beheld the complex system of wires, relays, and switches that kept the track running.  TMRC members received their own key to the club room after putting in forty hours on the layout: Samson earned his key in less than three days.

From available evidence, it appears few TMRC upperclassmen shared the same interest in computers as the class of 1962.  One who did was Bob Saunders, who joined TMRC in 1956 and by 1958 had become the president of the S&P Subcommittee.  Unlike Kotok and Samson, Saunders appears not to have received exposure to computers before matriculating to the school.  Levy does describe several engineering exploits he undertook as a boy in the suburbs of Chicago, however, including constructing a six-foot-tall high-frequency transformer that Saunders claimed blew out television reception for miles around and working a summer job at the phone company installing central office equipment.  Indeed, it was the telephone parts used in the train control system that first attracted Saunders to TMRC.

Samson, Kotok, and several other TMRC students gained exposure to the IBM 704 in the Computation Center in Spring 1959 through the first computer course MIT had ever offered to Freshmen, and Kotok even became intimately involved in a chess project being implemented on the computer (which will be discussed in detail in a later post), but Levy recounts that this experience did not satisfy the bright and curious TMRC members.  As a batch processing computer, the 704 required trained IBM staff to actually run programs and provided little feedback to the students and professors who would bring their punched cards to Building 26 and return hours later to see the results, all the while hoping no serious errors had prevented the program from running.  Levy, echoing the words of Ted Nelson in his seminal 1974 work Computer Lib, compared these interactions to acolytes (the programmers) asking for divine aid from a fickle god (the computer) through a dedicated priesthood (the operators).  This metaphor of a computer priesthood remains an oft-invoked image to this day when discussing batch processing mainframes.  Frustrated by their limited access to the 704, TMRC students searched for alternative means to scratch their computing itch.

As described by Levy, Peter Samson particularly enjoyed stalking the hallways of Building 26 at all hours looking for new activities to feed his insatiable curiosity.  He would trace wiring, examine telephone switching equipment, and look for unguarded technology to fiddle with.  One of these excursions led him to the Electronic Accounting Machinery (EAM) room in the basement, where the university had installed several IBM accounting machines, including an IBM 407.  These were electromechanical tabulators of limited capability, but they could read and sort cards and print out the results.  Even better, they were only guarded during the day, making the 407 the closest thing to a computer to which TMRC members could secure direct access.  Before long, Samson and other TMRC members could be found clustered around the 407 late into the night using the machine to keep track of the expanding array of switches under their train layout and seeing just how far they could push the technology.  This work on the 407 represented one of the earliest manifestations of a new computer-centric culture within TMRC.

Hacking the TX-0


Jack Dennis, the former TMRC member and MIT professor who introduced TMRC to the TX-0

In July 1958, Lincoln Laboratory decided it had no further need for the TX-0 computer built by Ken Olsen and Wes Clark and therefore placed it on semi-permanent loan to MIT, which housed it in the Research Laboratory of Electronics (RLE) in Building 26, located, according to Levy, just one floor above the 704 in the Computation Center.  As the computer was coming online, a new MIT instructor by the name of Jack Dennis was just settling into his office down the hall.  An MIT alum, Dennis, according to a TX-0 retrospective in the Spring 1984 issue of the Computer Museum Report, had recently completed his dissertation and accepted the instructor position in the fall of 1958, but he was uninterested in pursuing his dissertation topic further.  Dennis was soon drawn to the nearby TX-0 and began writing programs for the computer, the most important of which were FLIT, a debugger he wrote with Thomas Stockham, and MACRO, an assembler.  These programs allowed a programmer to work in assembly language rather than the more difficult machine language and more easily identify and correct bad code, therefore opening TX-0 programming to a larger user base.  About a year and a half after the TX-0 arrived, Dennis was placed in charge of the machine.
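
What MACRO provided can be illustrated with a toy example: an assembler’s essential job is translating human-readable mnemonics and symbolic addresses into the numeric machine words a programmer would otherwise compose by hand.  The Python sketch below is purely illustrative; its two-bit opcode plus 16-bit address layout loosely echoes the TX-0’s 18-bit instruction word, but the mnemonics and encodings here are assumptions, not MACRO’s actual behavior.

```python
# A toy assembler: translate "mnemonic address" lines into numeric
# machine words.  The opcode values are invented for illustration.
OPCODES = {"sto": 0b00, "add": 0b01, "trn": 0b10, "opr": 0b11}

def assemble(line: str) -> int:
    """Pack a 2-bit opcode and a 16-bit octal address into an 18-bit word."""
    mnemonic, address = line.split()
    return (OPCODES[mnemonic] << 16) | (int(address, 8) & 0xFFFF)

word = assemble("add 1234")
print(f"{word:018b}")  # the raw bit pattern a machine-language coder once wrote by hand
```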

Unlike the 704 in the Computation Center, which was operated by trained staff, the TX-0 was generally available for faculty and graduate student research: all a person needed to do was sign up for a block of time.   Jack Dennis, however, wanted to go a step further.  As an undergraduate, Dennis had the opportunity to program on the Whirlwind, and he believed that interested undergraduate students were a valuable resource that should be encouraged to run their own computer experiments.  Dennis had also joined TMRC as a freshman in 1949 and still had contacts within the group, so he knew exactly where to go to recruit his cadre of interested programmers.  In his oral history, Alan Kotok remembers Dennis approaching TMRC members in Fall 1958 and asking if they would like to learn to program the TX-0.  He took aside an interested group of students that included S&P president Bob Saunders and freshmen Kotok, Samson, Dick Wagner, and Dave Gross and delivered a crash course on the TX-0.  The students were amazed to discover a computer that allowed them to program directly and fix their code on the fly.  With Dennis’s support, they negotiated with the people in charge of the computer, Earl Pugh and John McKenzie, who agreed to allow them access to the computer during blocks of time not already committed to official research.

During the day, the TX-0 was usually being put to serious use, but few projects were ever run overnight.  Therefore, the TMRC members became nocturnal creatures, ignoring both their classes and any semblance of a social life to maximize the amount of time they could spend programming the machine.  The young coders derived great joy from pushing the computer to its limits and mastering its capabilities.  Like the work they did on the railroad in Building 20, the projects they undertook on the TX-0 purely for the fun and the challenge came to be called “hacks,” and the programmers began referring to themselves as “hackers.”

Few of the programs created by the TMRC coders did anything useful — or at least nothing useful enough to justify employing a multi-million-dollar computer.  Hackers and the Computer Museum Report describe several of these programs.  Peter Samson created a program to convert Arabic numerals to Roman numerals and then puzzled out a way to manipulate the primitive built-in audio speaker to play simple, single-voice melodies using a square wave.  Kotok discovered a way to interface an FM receiver with the analog-to-digital converter on the computer to create a program he called the Expensive Tape Recorder, while Wagner, who had been using an electro-mechanical calculator in a numerical analysis class, was inspired to write a program called Expensive Desk Calculator.
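
Samson’s converter ran on the TX-0 itself, but the logic behind any such program is the classic greedy algorithm: repeatedly emit the largest numeral (or subtractive pair) that still fits, then continue with the remainder.  A minimal sketch of that algorithm in Python, offered as an illustration rather than a reconstruction of Samson’s code:

```python
# Greedy Arabic-to-Roman conversion: take each symbol as many times as
# it fits, working from the largest value down to the smallest.
ROMAN_VALUES = [
    (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
    (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
    (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
]

def to_roman(n: int) -> str:
    """Convert a positive integer to a Roman numeral string."""
    digits = []
    for value, symbol in ROMAN_VALUES:
        count, n = divmod(n, value)   # how many times this symbol fits
        digits.append(symbol * count)
    return "".join(digits)

print(to_roman(1959))  # MCMLIX
```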

The Demo Scene


A screenshot of an emulated recreation of MOUSE

In addition to the experiments of the TMRC hackers, the TX-0 also became home to a number of demos.  As explained by J.M. Graetz in his August 1981 article for Creative Computing, “The Origin of Spacewar,” getting the general public interested in early computers was rarely easy.  While many people were attracted by the high technology on display, they were soon bored watching a computer work, as there were no manifestations of its activity save for blinking lights and whirring tape.  This quickly led programmers to create programs that were visually striking and/or interactive in order to generate interest in computer use.  The previously discussed Bertie, NIMROD, MIDSAC pool, and Tennis for Two were all essentially interactive demos created for this purpose, and TX-0 programmers were soon crafting their own demos to achieve the same result.

The TX-0 demo programmers most likely took some inspiration from the program recognized as the earliest computer demo, a bouncing ball program created on the Whirlwind I by Charles Adams in 1950.  As described by Graetz, this simple program began with a single dot falling from the top of the screen and bouncing when it hit the bottom of the screen, accompanied by a sound from the Whirlwind speaker.  The ball would continue to bounce around all four sides of the screen until finally running out of momentum and rolling off through a hole in the floor.  While the program was simple, the effect proved stunning in a time when no other computer could actually update a CRT display in real time.
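
The logic Graetz describes reduces to a simple physics loop: apply gravity every frame, reflect and damp the vertical velocity when the ball meets the floor, and let the ball disappear once it lacks the energy to keep bouncing and reaches the hole.  A minimal sketch of that loop follows; the constants and the hole’s position are assumptions for illustration, and the Whirlwind original naturally drew each frame to a CRT rather than collecting coordinates.

```python
# A minimal bouncing-ball loop: gravity, damped bounces, and an exit
# "hole" in the floor once the ball has lost most of its momentum.
GRAVITY, DAMPING, HOLE_X = 0.5, 0.8, 40.0   # illustrative constants

def simulate(x=0.0, y=50.0, vx=1.0, vy=0.0, floor=0.0, steps=500):
    path = []
    for _ in range(steps):
        vy -= GRAVITY                 # constant downward acceleration
        x, y = x + vx, y + vy
        if y <= floor:                # floor contact: bounce and lose energy
            y, vy = floor, -vy * DAMPING
            if abs(vy) < GRAVITY and x >= HOLE_X:
                break                 # out of momentum: roll off through the hole
        path.append((x, y))
    return path

print(len(simulate()), "frames before the ball disappears")
```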

Graetz describes several demos on the TX-0.  One, called HAX, would generate an ever-changing array of shapes to show off the capabilities of the TX-0’s CRT.  Another was a Tic-Tac-Toe game — played against the computer by typing commands using the flexowriter — designed to show off the computer’s interactivity.  Perhaps the most impressive hack, combining the visual interest of HAX and the interactivity of Tic-Tac-Toe, was the MOUSE program developed by Doug Ross and John Ward and first publicized in January 1959.  As described in the Spring 1984 Computer Museum Report, Ward had observed people programming on the Whirlwind at Lincoln Labs but had never had the opportunity to program the machine himself.  Therefore, when the TX-0 became available, he decided to sign up for time but did not know what type of program to write.  Remembering a program he had developed while working with a UNIVAC 1103 at Eglin Air Force Base alongside Ross, the head of MIT’s Computer Applications Group and the person who coined the term “computer-aided design” (CAD), Ward convinced Ross to help him create a similar program on the TX-0.  In the finished product, with logic by Ross and a display by Ward, the user would create a maze directly on the CRT by erasing lines from an 8×8 grid of squares using the light pen and then place pieces of cheese throughout the maze.  A mouse would then traverse the maze while eating all the cheese.  The mouse would run out of energy if it did not reach a piece of cheese within a certain amount of time, but it would remember the paths taken in each attempt and therefore develop a more efficient route over time.  A variant replaced the cheese with martini glasses and had the mouse stagger the more it drank.
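
One simple way to realize the learning behavior described above is to let the mouse wander depth-first while remembering dead ends across attempts, so each new run wastes fewer moves before reaching the cheese.  The sketch below takes that approach; the maze encoding, search order, and memory scheme are illustrative assumptions, since the original MOUSE logic is not documented at this level of detail.

```python
# A mouse that explores depth-first, counts every move it makes, and
# remembers dead ends between runs so later attempts are shorter.
MAZE = [
    "#########",
    "#S..#...#",
    "#.#.#.#.#",
    "#.#...#C#",
    "#########",
]

def run(maze, dead_ends):
    rows = [list(r) for r in maze]
    start = next((r, c) for r, row in enumerate(rows)
                 for c, ch in enumerate(row) if ch == "S")
    path, visited, moves = [start], {start}, 0
    while path:
        r, c = path[-1]
        if rows[r][c] == "C":
            return moves                      # reached the cheese
        for cell in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (rows[cell[0]][cell[1]] != "#"
                    and cell not in visited and cell not in dead_ends):
                visited.add(cell)
                path.append(cell)
                moves += 1                    # step forward
                break
        else:
            dead_ends.add(path.pop())         # back out, remember the dead end
            moves += 1
    return moves                              # no cheese reachable

dead_ends = set()                             # memory that persists across runs
for attempt in (1, 2):
    print(f"attempt {attempt}: {run(MAZE, dead_ends)} moves")  # 16, then 12
```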

MOUSE and Tic-Tac-Toe highlighted the potential of an interactive computer as a device for playing games, but the TX-0 display remained too limited to create a truly engaging interactive visual experience.  In 1961, however, DEC donated one of the first PDP-1 computers to MIT, which was placed in the RLE in the room next to the TX-0.  Sporting a more sophisticated display than the TX-0, the PDP-1 was the perfect platform for the TMRC hackers to take the lessons learned through programming the TX-0 to create the first truly influential computer game, Spacewar!


Historical Interlude: The Birth of the Computer Part 4, Real-Time Computing

By 1955, computers were well on their way to becoming fixtures at government agencies, defense contractors, academic institutions, and large corporations, but their function remained limited to a small number of activities revolving around data processing and scientific calculation.  Generally speaking, the former process involved taking a series of numbers and running them through a single operation, while the latter process involved taking a single number and running it through a series of operations.  In both cases, computing was done through batch processing — i.e. the user would enter a large data set from punched cards or magnetic tape and then leave the computer to process that information based on a pre-defined program housed in memory.  For companies like IBM and Remington Rand, which had both produced electromechanical tabulating equipment for decades, this was a logical extension of their preexisting business, and there was little impetus for them to discover novel applications for computers.
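
A toy contrast makes this distinction concrete: data processing maps one operation over many records, while scientific calculation threads a single input through a chain of operations.  Neither pattern needs feedback from a user mid-run, which is why both fit batch processing so naturally.  (The sample data and function chain below are invented for illustration.)

```python
import math

# Data processing: a long series of records run through a single operation.
payroll_records = [1200.00, 950.00, 1430.00, 800.00]
withheld = [round(gross * 0.2, 2) for gross in payroll_records]

# Scientific calculation: a single input run through a series of operations.
x = 2.0
for operation in (math.sqrt, math.log, math.sin):
    x = operation(x)

print(withheld)  # many inputs, one operation each
print(x)         # one input, many operations
```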

In some circles, however, there was a belief that computers could move beyond data processing and actually be used to control complex systems.  This would require a completely different paradigm in computer design, however, based around a user interacting with the computer in real-time — i.e. being able to give the computer a command and have it provide feedback nearly instantaneously.  The quest for real-time computing not only expanded the capabilities of the computer, but also led to important technological breakthroughs instrumental in lowering the cost of computing and opening computer access to a greater swath of the population.  Therefore, the development of real-time computers served as the crucial final step in transforming the computer into a device capable of delivering credible interactive entertainment.

Note: This is the fourth and final post in a series of “historical interludes” summarizing the evolution of computer technology between 1830 and 1960.   The information in this post is largely drawn from Computer: A History of the Information Machine by Martin Campbell-Kelly and William Aspray,  A History of Modern Computing by Paul Ceruzzi, Forbes Greatest Technology Stories: Inspiring Tales of Entrepreneurs and Inventors Who Revolutionized Modern Business by Jeffrey Young, IBM’s Early Computers by Charles Bashe, Lyle Johnson, John Palmer, and Emerson Pugh, and The Ultimate Entrepreneur: The Story of Ken Olsen and Digital Equipment Corporation by Glenn Rifkin and George Harrar.

Project Whirlwind


Jay Forrester (l), the leader of Project Whirlwind

The path to the first real-time computer began with a project that was never supposed to incorporate digital computing in the first place.  In 1943, the head of training at the United States Bureau of Aeronautics, a pilot and MIT graduate named Captain Luis de Florez, decided to explore the feasibility of creating a universal flight simulator for military training.  While flight simulators had been in widespread use since Edwin Link had introduced a system based around pneumatic bellows and valves called the Link Trainer in 1929 and subsequently secured an Army contract in 1934, these trainers could only simulate the act of flying generally and were not tailored to specific planes.  Captain de Florez envisioned using an analog computer to simulate the handling characteristics of any extant aircraft and turned to his alma mater to make this vision a reality.

At the time, MIT was already the foremost center in the United States for developing control systems thanks to the establishment of the Servomechanisms Laboratory in 1941, which worked closely with the military to develop electromechanical equipment for fire control, bomb sights, aircraft stabilizers, and similar projects.  The Bureau of Aeronautics therefore established Project Whirlwind within the Servomechanisms Laboratory in 1944 to create de Florez’s flight trainer.  Leadership of the Whirlwind project fell to an assistant director of the Servomechanisms Laboratory named Jay Forrester.  Born in Nebraska, Forrester had been building electrical systems since he was a teenager, when he constructed a 12-volt electrical system out of old car parts to provide his family’s ranch with electricity for the first time.  After graduating from the University of Nebraska, Forrester came to MIT as a graduate student in 1939 and joined the Servomechanisms Laboratory at its inception.  By 1944, Forrester was getting restless and considering establishing his own company, so he was given his choice of projects to oversee to prevent his defection.  Forrester chose Whirlwind.

In early 1945, Forrester drew up the specifications for a trainer consisting of a mock cockpit connected to an analog computer that would control a hydraulic transmission system to provide feedback to the cockpit.  Based on this preliminary work, MIT drafted a proposal in May 1945 for an eighteen-month project budgeted at $875,000, which was approved.  As work on Whirlwind began, the mechanical elements of the design came together quickly, but the computing element remained out of reach.  To create an accurate simulator, Forrester required a computer that updated dozens of variables constantly and reacted to user input instantaneously.  Bush’s Differential Analyzer, perhaps the most powerful analog computer of the time, was still far too slow to handle these tasks, and Forrester’s team could not figure out how to produce a more powerful machine solely through analog components.  In the summer of 1945, however, a fellow MIT graduate student named Perry Crawford, who had written a master’s thesis in 1942 on using a digital device as a control system, alerted Forrester to the breakthroughs being made in digital computing at the Moore School.  In October, Forrester and Crawford attended a Conference on Advanced Computational Techniques hosted by MIT and learned about the ENIAC and EDVAC in detail.  By early 1946, Forrester was convinced that the only way forward for Project Whirlwind was the construction of a digital computer that could operate in real time.

The shift from an analog computer to a digital computer for the Whirlwind project resulted in a threefold increase in cost to an estimated $1.9 million.  It also created an incredible technical challenge.  In a period when the most advanced computers under development were struggling to achieve 10,000 operations a second, Whirlwind would require the capability of performing closer to 100,000 operations per second for seamless real-time operation.  Furthermore, the first stored-program computers were still three years away, so Forrester’s team also faced the prospect of integrating cutting-edge memory technologies that were still under development.  By 1946, the Whirlwind team had grown to over a hundred staff members spread across ten groups, each focused on a particular part of the system, in an attempt to meet these challenges.  All other aspects of the flight simulator were placed on hold as the entire team focused its attention on creating a working real-time computer.


The Whirlwind I, the first real-time computer

By 1949, Forrester’s team had succeeded in designing an architecture fast enough to support real-time operation, but the computer could not operate reliably for extended periods.  With costs escalating and no end to development in sight, continued funding for the project was placed in jeopardy.  After the war, responsibility for Project Whirlwind had transferred from the Bureau of Aeronautics to the Office of Naval Research (ONR), which felt the project was not providing much value relative to a cost that had by now far surpassed $1.9 million.  By 1948, Whirlwind was consuming twenty percent of ONR’s entire research budget with little to show for it, so ONR began slowly trimming the budget.  By 1950, ONR was ready to cut funding altogether, but just as the project appeared on the verge of death, it was revived to serve another function entirely.

On August 29, 1949, the Soviet Union detonated its first atomic bomb.  In the immediate aftermath of World War II, the United States had felt relatively secure from the threat of Soviet attack due to the distance between the two nations, but now the USSR had both a nuclear capability and a long-range bomber capable of delivering a payload on U.S. soil.  During World War II, the U.S. had developed a primitive radar early warning system to protect against conventional attack, but it was wholly insufficient to track and interdict modern aircraft.  The United States needed a new air defense system and needed it quickly.

In December 1949, the United States Air Force formed a new Air Defense System Engineering Committee (ADSEC) chaired by MIT professor George Valley to address the inadequacies in the country’s air-defense system.  In 1950, ADSEC recommended creating a series of computerized command-and-control centers that could analyze incoming radar signals, evaluate threats, and scramble interceptors as necessary to interdict Soviet aircraft.  Such a massive and complex undertaking would require a powerful real-time computer to coordinate.  Valley contacted several computer manufacturers with his needs, but they all replied that real-time computing was impossible.

Despite being a professor at MIT, Valley knew very little about the Whirlwind project, as he was not interested in analog computing and had no idea it had morphed into a digital computer.  Fortunately, a fellow professor at the university, Jerome Wiesner, pointed him towards the project.  By early 1950, the Whirlwind I computer’s basic architecture had been completed, and it was already running its first test programs, so Forrester was able to demonstrate its real-time capabilities to Valley.  Impressed by what he saw, Valley organized a field test of the Whirlwind as a radar control unit in September 1950 at Hanscom Field outside Bedford, Massachusetts, where a radar station connected to Whirlwind I via a phone line successfully delivered a radar signal from a passing aircraft.  Based on this positive result, the United States Air Force established Project Lincoln in conjunction with MIT in 1951 and moved Whirlwind to the new Lincoln Laboratory.

Project SAGE


A portion of an IBM AN/FSQ-7 Combat Direction Central, the heart of the SAGE system and the largest computer ever built

By April 1951, the Whirlwind I computer was operational, but still rarely worked properly due to faulty memory technology.  At Whirlwind’s inception, there were two primary forms of electronic memory in use: the delay-line storage pioneered for the EDVAC and CRT memory like the Williams Tube developed for the Manchester Mark I.  From his exposure to the EDVAC, Forrester was already familiar with delay-line memory early in Whirlwind’s development, but that medium functioned too slowly for a real-time design.  Forrester therefore turned his attention to CRT memory, which could theoretically operate at a sufficient speed, but he rejected the Williams Tube due to its low refresh rate.  Instead, Forrester incorporated an experimental tube memory under development at MIT, but this temperamental technology never achieved its promised capabilities and proved unreliable besides.  Clearly, a new storage method would be required for Whirlwind.

In 1949, Forrester saw an advertisement for a new ceramic material called Deltamax from the Arnold Engineering Company that could be magnetized or demagnetized by passing a large enough electric current through it.  Forrester believed the properties of this material could be used to create a fast and reliable form of computer memory, but he soon discovered that Deltamax could not switch states quickly at high temperatures, so he assigned a graduate student named William Papian to find an alternative.  In August 1950, Papian completed a master’s thesis entitled “A Coincident-Current Magnetic Memory Unit” laying out a system in which individual cores — small doughnut-shaped objects with magnetic properties similar to Deltamax — are threaded into a three-dimensional matrix of wires.  Two wires pass through the center of each core to magnetize or demagnetize it, exploiting a property called hysteresis in which an electric current changes the magnetization of the material only if it exceeds a certain threshold.  Each wire carries only about half the threshold current, so only the core at the intersection of the two energized wires, where the currents run in the same direction and combine, receives enough current to change its magnetization, making each core in the matrix individually addressable and therefore a suitable form of computer memory.  A third wire threaded through every core in the matrix serves as a sense line, allowing any portion of the memory to be read at any time.
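
The selection trick at the heart of Papian’s design can be sketched in a few lines of code: drive each of the two selected wires at only about half the switching threshold, and the single core at their intersection is the only one that receives enough combined current to change state.  The simulation below illustrates that rule; the numbers and class layout are assumptions for illustration, and details such as current polarity and the sense wire are omitted.

```python
# Coincident-current selection: a core flips only if the total current
# through it exceeds the hysteresis threshold, so driving one X line
# and one Y line at half-strength switches exactly one core.
THRESHOLD = 1.0          # current needed to change a core's magnetization
HALF = 0.6 * THRESHOLD   # half-select current: one line alone is not enough

class CoreMatrix:
    def __init__(self, size):
        self.size = size
        self.bits = [[0] * size for _ in range(size)]

    def write(self, x, y, bit):
        """Drive X line x and Y line y; only core (x, y) sees full current."""
        for i in range(self.size):
            for j in range(self.size):
                current = (HALF if i == x else 0) + (HALF if j == y else 0)
                if current >= THRESHOLD:      # below threshold, hysteresis holds
                    self.bits[i][j] = bit

mem = CoreMatrix(16)     # a 16x16 array like Papian's 1951 prototype
mem.write(3, 7, 1)
print(mem.bits[3][7], mem.bits[3][8])  # prints "1 0": only the selected core flipped
```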

Papian built the first small core memory matrix in October 1950, and by the end of 1951 he was able to construct a 16×16 array of cores.  During this period, Papian tested a wide variety of materials for his cores and settled on a silicon-steel ribbon wrapped around a ceramic bobbin, but these cores still operated too slowly and also required an unacceptably high level of current.  At this point, Forrester discovered that a German ceramicist in New Jersey named E. Albers-Schoenberg was attempting to create a transformer for televisions by mixing iron ore with certain oxides to create a compound called a ferrite that exhibited certain magnetic properties.  While ferrites generated a weaker output than the metallic cores Papian was experimenting with, they could switch up to ten times faster.  After experimenting with various chemical compositions, Papian finally constructed a ferrite-based core memory system in May 1952 that could switch between states in less than a microsecond and therefore serve the needs of a real-time computer.  First installed in the Whirlwind I in August 1953, ferrite core memory was smaller, cheaper, faster, and more reliable than delay-line, CRT, and magnetic drum memory and ultimately doubled the operating speed of the computer while reducing maintenance time from four hours a day to two hours a week.  Within five years, core memory had replaced all other forms of memory in mainframe computers, netting MIT a hefty profit in patent royalties.

With Whirlwind I finally fully functional, Lincoln Laboratory turned its attention to transforming the computer into a commercial command-and-control system suitable for installation in the United States Air Force’s air defense system.  This undertaking was beyond the scope of the lab itself, as it would require fabrication of multiple components on a large scale.  Lincoln Labs evaluated three companies to take on this task: defense contractor Raytheon, which had recently established a computer division; Remington Rand, through both its EMCC and ERA subsidiaries; and IBM.  At the time, Remington Rand was still the powerhouse in the new commercial computer business, while IBM was only just preparing to bring its first products to market.  Nonetheless, Forrester and his team were impressed with IBM’s manufacturing facilities, service force, integration, and experience deploying electronic products in the field and therefore chose the new kid on the block over its more established competitor.  Originally designated Project High by IBM — due to its location on the third floor of a necktie factory on High Street in Poughkeepsie — and the Whirlwind II by Lincoln Laboratory, the project eventually went by the name Semi-Automatic Ground Environment, or SAGE.

The heart of the SAGE system was a new IBM computer derived from the Whirlwind design called the AN/FSQ-7 Combat Direction Central.  By far the largest computer system ever built, the AN/FSQ-7 weighed 250 tons, consumed three megawatts of electricity, and took up roughly half an acre of floor space.  Containing 49,000 vacuum tubes and a core memory capable of storing over 65,000 33-bit words, the computer was capable of performing roughly 75,000 operations per second.  In order to ensure uninterrupted operation, each SAGE installation actually consisted of two AN/FSQ-7 computers so that if one failed, the other could seamlessly assume control of the air defense center.  As the first deployed real-time computer system, it inaugurated a number of firsts in commercial computing such as the ability to generate text and vector graphics on a display screen, the ability to directly enter commands via a typewriter-style keyboard, and the ability to select or draw items directly on the display using a light pen, a technology developed specifically for Whirlwind in 1955.  In order to remain in constant contact with other segments of the air defense system, the computer was also the first outfitted with a new technology called a modem, developed by AT&T’s Bell Labs research division to allow data to be transmitted over a phone line.

The first SAGE system was deployed at McChord Air Force Base in November 1958, and the entire network of twenty-three Air Defense Direction Centers was online by 1963 at a total cost to the government of $8 billion.  While IBM agreed to do the entire project at cost as part of its traditional support for national defense, the project still brought the company $500 million in revenues in the late 1950s.  SAGE was perhaps the key project in IBM’s rise to dominance in the computer industry.  Through this massive undertaking, IBM became the most knowledgeable company in the world at designing, fabricating, and deploying both large-scale mainframe systems and their critical components such as core memory and computer software.  In 1954, IBM upgraded its 701 computer to replace Williams Tube memory with magnetic cores and released the system as the IBM 704.  The next year, a core-memory replacement for the 702 followed, designated the IBM 705.  These new computers were instrumental in vaulting IBM past Remington Rand in the late 1950s.  SAGE, meanwhile, remained operational until 1983.

The Transistor and the TX-0


Kenneth Olsen, co-designer of the TX-0 and co-founder of the Digital Equipment Corporation (DEC)

While building a real-time computer for the SAGE air-defense system was the primary purpose of Project Whirlwind, the scope of the project grew large enough by the middle of the 1950s that staff could occasionally indulge in other activities, such as a new computer design proposed by staff member Kenneth Olsen.  Born in Bridgeport, Connecticut, Olsen began experimenting with radios as a teenager and took an eleven-month electronics course after entering the Navy during World War II.  The war was over by the time his training was complete, so after a single deployment on an admiral’s staff in the Far East, Olsen left the Navy to attend MIT in 1947, where he majored in electrical engineering.  After graduating in 1950, Olsen decided to continue his studies at MIT as a graduate student and joined Project Whirlwind.  One of Olsen’s duties on the project was the design and construction of the Memory Test Computer (MTC), a smaller version of the Whirlwind I built to test various core memory solutions.  In creating the MTC, Olsen innovated with a modular design in which each group of circuits responsible for a particular function was mounted on a single plug-in unit in a rack, where it could be easily swapped out if it malfunctioned.  This was a precursor of the plug-in circuit boards still used in computers today.

One of the engineers who helped Olsen debug the MTC was Wes Clark, a physicist who came to Lincoln Laboratory in 1952 after working at the Hanford nuclear production site in Washington State.  Clark and Olsen soon bonded over their shared views on the future of computing, and together they resolved to create a machine that would apply the lessons learned during the Whirlwind project and the construction of the MTC to the latest advances in electronics, demonstrating the potential of a fast and power-efficient computer to the defense industry.  Specifically, Olsen and Clark wanted to explore the potential of a relatively new electronic component called the transistor.


John Bardeen (l), William Shockley (seated), and Walter Brattain, the team that invented the transistor

For over forty years, the backbone of all electronic equipment was the vacuum tube pioneered by John Fleming in 1904.  While this device allowed for switching at electronic speeds, its limitations were numerous.  Vacuum tubes generated a great deal of heat during operation, which meant that they consumed power at a prodigious rate and were prone to burnout over extended periods of use.  Furthermore, they could not be miniaturized beyond a certain point and had to be spaced relatively far apart for heat management, guaranteeing that tube-based electronics would always be large and bulky.  Unless an alternative switching device could be found, the computer would never be able to shrink below a certain size.  The solution to the vacuum tube problem came not from one of the dozen or so computer projects being funded by the U.S. government, but from the telephone industry.

In the 1920s and 1930s, AT&T, which held a monopoly on telephone service in the United States, began constructing a series of large switching facilities in nearly every town in the country to allow telephone calls to be placed between any two phones in the United States.  These facilities relied on the same electromechanical relays that powered several of the early computers, which were bulky, slow, and wore out over time.  Vacuum tubes were sometimes used as well, but the problems articulated above made them particularly unsuited for the telephone network.  As AT&T continued to expand its network, the size and speed limitations of relays became increasingly unacceptable, so the company gave a mandate to its Bell Labs research arm, one of the finest corporate R&D organizations in the world, to discover a smaller, faster, and more reliable switching device.

In 1936, the new director of research at Bell Labs, Mervin Kelly, decided to form a group to explore the possibility of creating a solid-state switching device.  Both solid-state physics, which explores the properties of solids based on the arrangement of their sub-atomic particles, and the related field of quantum mechanics, in which physical phenomena are studied on a nanoscopic scale, were in their infancy and not widely understood, so Kelly scoured the universities for the smartest chemists, metallurgists, physicists, and mathematicians he could find.  His first hire was a brilliant but difficult physicist named William Shockley.  Born in London to a mining engineer and a geologist, William Bradford Shockley, Jr. grew up in Palo Alto, California, in the heart of the Santa Clara Valley, a region known as the “Valley of the Heart’s Delight” for its orchards and flowering plants.  Shockley’s father spent most of his time moving from mining camp to mining camp, so the boy grew especially close to his mother, May, who taught him the ins and outs of geology from a young age.  After attending Stanford to stay close to his mother, Shockley received a Ph.D. from MIT in 1936 and went to work for Bell.  Gruff and self-centered, Shockley never got along with his colleagues anywhere he worked, but there was no questioning his brilliance or his ability to push colleagues towards making new discoveries.

Kelly’s group began educating itself on the field of quantum mechanics through informal sessions where they would each take a chapter of the only quantum mechanics textbook in existence and teach the material to the rest of the group.  As their knowledge of the underlying science grew in the late 1930s, the group decided the most promising path to a solid-state switching device lay with a group of materials called semiconductors.   Generally speaking, most materials are either a conductor of electricity, allowing electrons to flow through them, or an insulator, halting the flow of electrons.  As early as 1826, however, Michael Faraday, the brilliant scientist whose work paved the way for electric power generation and transmission, had observed that a small number of compounds would not only act as a conductor under certain conditions and an insulator in others, but would also serve as amplifiers under certain conditions as well.  These properties allowed a semiconductor to behave like a triode under the right conditions, but for decades scientists remained unable to determine why changes in heat, light, or magnetic field would alter the conductivity of these materials and therefore could not harness this property.  It was not until the field of quantum mechanics became more developed in the 1930s that scientists gained a great enough understanding of electron behavior to attack the problem.  Kelly’s new solid-state group hoped to unlock the mystery of semiconductors once and for all, but their work was interrupted by World War II.

In 1945, Kelly revived the solid-state project under the joint supervision of William Shockley and chemist Stanley Morgan.  The key members of this new team were John Bardeen, a physicist from Wisconsin known as one of the best quantum mechanics theorists in the world, and Walter Brattain, a farm boy from Washington known for his prowess at crafting experiments.  During World War II, great progress had been made in creating crystals of the semiconducting element germanium for use in radar, so the group focused its activities on that element.  In late 1947, Bardeen and Brattain discovered that if they introduced impurities into just the right spot on a lump of germanium, the germanium could amplify a current in the same manner as a vacuum tube triode.  Shockley’s team gave an official demonstration of this phenomenon to other Bell Labs staff on December 23, 1947, which is often recognized as the official birthday of the transistor, so named because it effects the transfer of a current across a resistor — i.e. the semiconducting material.  Smaller, less power-hungry, and more durable than the vacuum tube, the transistor paved the way for the development of the entire consumer electronics and personal computer industries of the late twentieth century.


The TX-0, one of the earliest transistorized computers, designed by Wes Clark and Kenneth Olsen

Despite its revolutionary potential, the transistor was not incorporated into computer designs right away, as there were still several design and production issues that had to be overcome before it could be deployed in the field in large numbers (which will be covered in a later post).  By 1954, however, Bell Labs had deployed the first fully transistorized computer, the Transistor Digital Computer or TRADIC, while electronics giant Philco had introduced a new type of transistor called a surface-barrier transistor that was expensive, but much faster than previous designs and therefore the first practical transistor for use in a computer.  It was in this environment that Clark and Olsen proposed a massive transistorized computer called the TX-1 that would be roughly the same size as a SAGE system and deploy one of the largest core memory arrays ever built, but they were turned down because Forrester did not find their design practical.  Clark therefore went back to the drawing board to create as simple a design as he could that still demonstrated the merits of transistorized computing.  As this felt like a precursor to the larger TX-1, Olsen and Clark named this machine the TX-0.

Completed in 1955 and fully operational the next year, the TX-0 — often pronounced “Tixo” — incorporated 3,600 surface-barrier transistors and was capable of performing 83,000 operations per second.  Like the Whirlwind, the TX-0 operated in real time; it also incorporated a 512×512 display that could be manipulated with a light pen and a core memory that could store over 65,000 words, though Clark and Olsen settled on a relatively short 18-bit word length.  Unlike the Whirlwind I, which occupied 2,500 square feet, the TX-0 took up a paltry 200 square feet.  Both Clark and Olsen realized that the small, fast, interactive TX-0 represented something new: a (relatively) inexpensive computer that a single user could interact with in real time.  In short, it exhibited many of the hallmarks of what would become the personal computer.

With the TX-0 demonstrating the merits of high-speed transistors, Clark and Olsen returned to their goal of creating a more complex computer with a larger memory, which they dubbed the TX-2.  Completed in 1958, the TX-2 could perform a whopping 160,000 operations per second and contained a core memory of 260,000 36-bit words, far surpassing the capability of the earlier TX-0.  Olsen once again designed much of the circuitry for this follow-up computer, but before it was completed he decided to leave MIT behind.

The Digital Equipment Corporation


The PDP-1, Digital Equipment Corporation’s First Computer

Despite what Olsen saw as the nearly limitless potential of transistorized computers, the world outside MIT remained skeptical.  It was one thing to create an abstract concept in a college laboratory, people said, but another thing entirely to actually deploy an interactive transistorized system under real-world conditions.  Olsen fervently desired to prove these naysayers wrong, so he decided to form his own computer company along with Harlan Anderson, who had worked with him on the MTC.  As a pair of academics with no practical business experience, however, Olsen and Anderson faced difficulty securing financial backing.  They approached defense contractor General Dynamics first, but were flatly turned down.  Unsure how to proceed next, they visited the Small Business Administration office in Boston, which recommended they contact investor Georges Doriot.

Georges Doriot was a Frenchman who immigrated to the United States in the 1920s to earn an MBA from Harvard and then decided to stay on as a professor at the school.  In 1940, Doriot became an American citizen, and the next year he joined the United States Army as a lieutenant colonel and took on the role of director of the Military Planning Division for the Quartermaster General.  Promoted to brigadier general before the end of the war, Doriot returned to Harvard in 1946 and also established a private equity firm called the American Research and Development Corporation (ARD).  With a bankroll of $5 million raised largely from insurance companies and educational institutions, Doriot sought out startups in need of financial support in exchange for taking a large ownership stake in the company.  The goal was to work closely with the company founders to grow the business and then sell the stake at some point in the future for a high return on investment.  While many of the individual companies would fail, in theory the payoff from those companies that did succeed would more than make up the difference and return a profit to the individuals and groups that provided his firm the investment capital.  Before Doriot, the only outlets for a new business to raise capital were the banks, which generally required tangible assets to back a loan, or a wealthy patron like the Rockefeller or Whitney families.  After Doriot’s model proved successful, inexperienced entrepreneurs with big ideas now had a new outlet to bring their products to the world.  This outlet soon gained the name venture capital.

In 1957, Olsen and Anderson wrote a letter to Doriot detailing their plans for a new computer company.  After some back and forth and refinement of the business plan, ARD agreed to provide $70,000 to fund Olsen and Anderson’s venture in return for a 70% ownership stake, but the money came with certain conditions.  Olsen wanted to build a computer like the TX-0 for use by scientists and engineers that could benefit from a more interactive programming environment in their work, but ARD did not feel it was a good idea to go toe-to-toe with an established competitor like IBM.  Instead, ARD convinced Olsen and Anderson to produce components like power supplies and test equipment for core memory.  Olsen and Anderson had originally planned to call their new company the Digital Computer Corporation, but with their new ARD-mandated direction, they instead settled on the name Digital Equipment Corporation (DEC).

In August 1957, DEC moved into its new office space on the second floor of Building 12 of a massive woolen mill complex in Maynard, Massachusetts, originally built in 1845 and expanded many times thereafter.  At the time, the company consisted of just three people: Ken Olsen, Harlan Anderson, and Ken’s younger brother Stan, who had worked as a technician at Lincoln Lab.  Ken served as the leader and technical mastermind of the group, Anderson looked after administrative matters, and Stan focused on manufacturing.  In early 1958, the company released its first products.

DEC arrived on the scene at the perfect moment.  Core memory was in high demand and transistor prices were finally dropping, so all the major computer companies were exploring new designs, creating an insatiable demand for testing equipment.  As a result, DEC proved profitable from the outset.  In fact, Olsen and Anderson actually overpriced their products due to their business inexperience, but with equipment in such high demand, firms bought from DEC anyway, giving the company extremely high margins and allowing it to exceed its revenue goals.  Bolstered by this success, Olsen chose to revisit the computer project with ARD, so in 1959 DEC began work on a fully transistorized interactive computer.

Designed by Ben Gurley, who had developed the display for the TX-0 at MIT, the Programmed Data Processor-1, more commonly referred to as the PDP-1, was unveiled in December 1959 at the Eastern Joint Computer Conference in Boston.  It was essentially a commercialized version of the TX-0, though it was not a direct copy.  The PDP-1 incorporated a better display than its predecessor with a resolution of 1024×1024, and it was also faster, capable of 100,000 operations per second.  The base setup contained only 4,096 18-bit words of core memory, but this could be upgraded to 65,536.  The primary method of inputting programs was a punched tape reader, and the computer was hooked up to a typewriter as well.  While not nearly as powerful as the latest computers from IBM and its competitors in the mainframe space, the PDP-1 cost only $120,000, a stunningly low price in an era when buying a computer would typically set an organization back a million dollars or more.  Lacking developed sales, manufacturing, or service organizations, DEC sold only a handful of PDP-1 computers over its first two years on the market to organizations like Bolt Beranek and Newman and the Lawrence Livermore Laboratory.  A breakthrough occurred in late 1962 when the International Telegraph and Telephone Company (ITT) decided to order fifteen PDP-1 computers to form the heart of a new telegraph message switching system designated the ADX-7300.  ITT would continue to be DEC’s most important PDP-1 customer throughout the life of the system, ultimately purchasing roughly half of the fifty-three computers sold.

While DEC sold only around fifty PDP-1s over the machine’s lifetime, this revolutionary computer introduced interactive computing commercially and initiated the process of opening computer use to ever greater portions of the public, which culminated in the birth of the personal computer two decades later.  With its monitor and real-time operation, it also provided a perfect platform for creating engaging interactive games.  Even with these advances, the serious academics and corporate data handlers of the 1950s were unlikely to ever embrace the computer as an entertainment medium, but unlike the expensive and bulky mainframes reserved for official business, the PDP-1 and its successors soon found their way into the hands of students at college campuses around the country, beginning with the birthplace of the PDP-1 technology: MIT.