Computer History

Historical Interlude: From the Mainframe to the Minicomputer Part 3, DEC and Data General

While IBM was crushing its competition in the mainframe space, another computer market began opening up that IBM virtually ignored.  Following the success of the PDP-1, Ken Olsen and his Digital Equipment Corporation (DEC) continued their work in real-time computing and cultivated a new market for computerized control systems for scientific and engineering projects.  After stumbling in its attempts to build larger systems in the IBM mold, the company decided to create machines even smaller and cheaper than low-end mainframes like the 1401 and H200.  These so-called “minicomputers” could not hope to compete with mainframe systems on power and were often more difficult to program due to a comparatively limited memory, but DEC’s new line of computers was also far cheaper and more interactive than any system on the market and opened up computer use to a larger swath of the population than ever before.  Building on these advances, by the end of the 1960s a DEC competitor established by a disgruntled former employee was able to introduce a minicomputer that in its most basic configuration cost just under $4,000, bringing computers tantalizingly close to a mass-market product.  The combination of lower prices and real-time operation offered by the minicomputer provided the final key element necessary to introduce computer entertainment programs like Spacewar! to the general public.

Note: Once again we have a historical interlude post discussing the technological breakthroughs in computing in the 1960s that culminated in the birth of the electronic entertainment industry.  The material in this section is largely drawn from Computer: A History of the Information Machine by Martin Campbell-Kelly and William Aspray, A History of Modern Computing by Paul Ceruzzi, The Ultimate Entrepreneur: The Story of Ken Olsen and Digital Equipment Corporation by Glenn Rifkin and George Harrar, and oral histories conducted by the Computer History Museum with Gordon Bell, Ed de Castro, Alan Kotok, and Harlan Anderson.

The Matrix


Ken Olsen poses outside The Mill, DEC corporate headquarters

When last we left DEC, the company had just introduced its first computer, the PDP-1, to a favorable response.  Buoyed by continuing demand for system modules and test equipment and the success of the PDP-1, DEC’s profits rose to $807,000 on sales of $6.5 million for the 1962 fiscal year.  Growing financial success, however, could not compensate for serious underlying structural problems at the company.  From his time serving as a liaison between Project Whirlwind and IBM, Ken Olsen had developed an extreme loathing for bureaucracy and the trappings of corporate culture and preferred to encourage individual initiative and experimentation more in line with practices in the academic sector.  This atmosphere suited most of DEC’s employees, many of them transplants from MIT and Lincoln Labs eager — like Olsen — to continue their academic work in a private setting.  DEC headquarters, affectionately called “The Mill,” practically became an extension of the MIT campus as students traveled back and forth between Cambridge and Maynard to work part time or just hang out with DEC engineers and learn how the company’s computers operated.  There were no set engineering teams, so employees would organically form groups around specific projects.  While this freedom and lack of oversight spurred creative thinking, however, it left DEC without a coherent product strategy or well-developed sales, manufacturing, and servicing organizations.

In 1963, DEC revenues soared to $10 million, while profits jumped to $1.2 million.  The next year, however, revenues flattened and earnings declined, coming in at $11 million and $900,000 respectively.  With little management guidance, DEC engineering teams tended to overcommit and underdeliver on products, while a lack of communication between sales, order processing, and manufacturing resulted in difficulties delivering the company’s existing product line to customers in sufficient quantities.  Clearly, DEC needed to implement a more rigorous corporate structure to remain viable.  The struggle to reform DEC ultimately pitted the company’s two founders against each other as Olsen steadfastly refused to implement a rigid hierarchy, while Harlan Anderson backed Jay Forrester, the Whirlwind project leader turned MIT Sloan School of Management professor who served as a director of DEC, in his efforts to implement some of his own management theories at the company.  Georges Doriot, the most important director of the company due to ARD’s large stake in DEC, remained a staunch supporter of and adviser to Olsen, but preferred to stay out of the conflict, feeling directors should not tell management what to do unless a company is in dire straits.

While struggling to operate efficiently, DEC also experienced difficulty creating a successor to the PDP-1.  Initial plans to create 24- and 36-bit versions of the computer, designated the PDP-2 and PDP-3 respectively, floundered due to technical hurdles and a lack of customer interest and never entered production.  Worse, PDP-1 designer Ben Gurley announced his resignation in December 1962 to join a new startup before being tragically murdered less than a year later by a former co-worker.  With Gurley’s departure, DEC’s primary computer designer became a young engineer named Gordon Bell.


Gordon Bell, DEC’s principal computer designer after the departure of Ben Gurley

Born in Kirksville, Missouri, Gordon Bell exhibited an aptitude for electrical engineering at an early age and was earning $6/hour as an electrician by the time he was about twelve years old.  Matriculating at MIT in 1952, Bell earned his B.S. in electrical engineering from the school in 1956 and his M.S. in the same field the next year.  Originally interested in being a power engineer, Bell worked for American Electric Power and GE through a co-op program while attending MIT, but he ultimately decided not to pursue that path further.  Unsure what to do after graduation, he accepted an offer to travel to Australia to set up a new computer lab in the electrical engineering department of the University of New South Wales.  After a brief stint in the Speech Computation Laboratory at MIT, Bell joined DEC in 1960 and did some work on the I/O subsystem of the PDP-1.  After helping with the aborted PDP-3, which had been an attempt to enter the scientific market served by the 36-bit IBM 7090, Bell initiated a project to create a cheaper but more limited version of the PDP-1 intended for process control.  Dubbed the PDP-4, the computer sold for just $65,000 and included some updated features such as auto index registers, but a lack of compatibility with the PDP-1 coupled with reduced capabilities compared to DEC’s original computer ultimately killed interest in the product.  While DEC managed to sell fifty-four PDP-4s, one more unit than the PDP-1, it was considered a commercial disappointment.

In early 1963, Olsen and Anderson decided to return to the PDP-3 concept of a large scientific computer that could challenge IBM in the mainframe space and tapped Bell for the project, assisted by Alan Kotok, the noted MIT hacker who had joined DEC upon graduating in 1962.  Dubbed the PDP-6, Bell’s computer was capable of performing 250,000 operations per second and came equipped with a core memory with a capacity of 32,768 36-bit words.  While not quite on par with the industry-leading IBM 7094, the computer was capable of real-time operation and, unlike the IBM model, incorporated native support for time sharing, and it was also far cheaper, retailing for just $300,000.  Unfortunately, the computer was poorly engineered and not thoroughly tested, leading to serious technical defects only discovered once the first computers began shipping to customers in 1964.  As a result, the computer turned out to be a disaster, with only twenty-three units sold.  Harlan Anderson, who had championed the computer heavily, bore the brunt of the blame for its failure from his co-founder Olsen.  Combined with their ongoing fight over the future direction of the company, the stigma of the PDP-6 fiasco ultimately drove Anderson from the company in 1966.  The failure of the PDP-6 was the clearest indicator yet that DEC needed to reform its corporate structure to survive.

In 1965, Olsen finally hit upon a solution to the company’s organizational woes.  Rather than a divisional structure, Olsen reorganized DEC along product lines.  Each computer sold by the company, along with the company’s module and memory test equipment lines, would become its own business unit run by a single senior executive with full profit and loss responsibility and complete independence to define, develop, and market his product as he saw fit.  To actually execute their visions, these senior executives would have to present their plans to a central Operations Committee composed of Olsen and his most trusted managers, where they would bid for resources from the company’s functional units such as sales, manufacturing, and marketing.  In effect, each project manager became an entrepreneur and the functional managers became investors, allocating their resources based on which projects the Operations Committee felt deserved the most backing.  While DEC was not the first company to try this interconnected corporate structure — which soon gained the moniker “matrix management” — the ensuing financial success of DEC caused the matrix to become closely associated with Ken Olsen in subsequent decades.

The Minicomputer


The PDP-8, the first widely sold minicomputer

One of DEC’s oldest computer customers was Atomic Energy of Canada, which had purchased one of the first PDP-1 computers for its Chalk River facility.  The company proceeded to buy a PDP-4 to control the reactor at Chalk River, but the computer was not quite able to handle all the duties it had been assigned.  To solve this problem, Gordon Bell proposed in early 1963 that rather than create custom circuitry to meet Atomic Energy’s needs, DEC should build a smaller computer that could serve as a front end to interface with the PDP-4 and provide the needed functionality.  Rather than just create a system limited to Atomic Energy’s needs, however, Bell decided to design the machine so it could also function as an independent general-purpose computer.  DEC named this new computer the PDP-5.

Bell was not the first person to create a small front-end computer: in 1960 Control Data released the Seymour Cray-designed CDC 160 to serve as an I/O device to interface with its 1604 mainframe.  Soon after, CDC repurposed the machine as a stand-alone device and marketed it as the CDC 160A.  The brilliant Cray employed bank switching and other techniques to allow the relatively limited 12-bit computer to address almost as much memory as a large mainframe, though not as easily or efficiently.  While not as powerful as a full-scale mainframe, the 160A provided most of the same functionality — albeit scaled down at a speed of only 67,000 operations per second — at a price of only $60,000 and a footprint the size of a metal desk.  CDC experienced some success with the 160A, but as the company was primarily focused on supercomputers, it paid little attention to the low-end market.

While Bell planned for the 12-bit PDP-5 to be a general-purpose computer, DEC essentially treated the computer as a custom solution for Atomic Energy and not as a key part of its future product line, which was then focused around the large-scale PDP-6.  As a result, DEC planned to sell only roughly ten computers, just enough to recoup its development costs.  Just as IBM had underestimated demand for the relatively cheap 1401, however, DEC did not realize how interested the market would be in a fully functional computer that sold for just $27,000, by far the cheapest core-memory computer on the market.  Orders soon began pouring in, and the company ultimately sold roughly 1,000 PDP-5s, making it the company’s best-selling computer by a factor of twenty.  With the PDP-6 floundering, Ken Olsen decided to champion smaller computers, and the company began considering a more advanced followup to the PDP-5.


Edson de Castro, the engineer who designed the PDP-8 and later established Data General

Just as Harlan Anderson was forced out of DEC due to the failure of the PDP-6, so too did Gordon Bell decide it was time to move on.  While he did not officially leave the company, he took a sabbatical in 1966 that lasted six years, during which he worked in academia and continued to serve as a DEC consultant.  In his place, the task of developing a followup to the PDP-5 fell to another engineer named Edson de Castro.

Born in Plainfield, New Jersey, Ed de Castro spent the majority of his childhood in Newton, Massachusetts.  The son of a chemical engineer, de Castro had a fascination with mechanical devices from a young age and always knew he wanted to be an engineer.  Accepted into MIT, de Castro opted instead to attend the much smaller and less prestigious Lowell Technological Institute, where he felt he would receive more attention from the school faculty.  Interested in business, de Castro applied to Harvard Business School after graduation, but the school said it would only accept him after the next academic year.  He therefore needed a job in the short term and was recruited by Stan Olsen as a systems engineer for DEC in late 1960, where he worked with customers to develop applications for DEC’s systems modules.  After just under a year at DEC, de Castro left to attend Harvard, but his grades were insufficient to qualify for the second year of the program, so he returned to DEC to work in the custom products division, which focused on memory test equipment.

After Gordon Bell and Alan Kotok outlined the PDP-5, de Castro became the primary engineer responsible for building it.  The original design called for the machine to be a 10-bit computer, but de Castro upped this to 12 bits — multiples of 6 being the standard in the industry at the time — so it could address more memory and be more useful.  When the PDP-5 became successful, de Castro went back to working as a systems engineer and helped install the computers in the field.  Soon after, he turned his attention to the computer’s successor, the PDP-8.

The PDP-8 had several advantages over the small computers that preceded it.  First of all, it used a transistor from Philco, the germanium micro-alloy diffused transistor, that operated particularly quickly and allowed the computer to perform 500,000 operations per second.  Furthermore, DEC harnessed its expertise in core memory to lower the memory cycle time to 1.6 microseconds, slightly faster than an IBM 7090 and much faster than the CDC 160A.  While the 12-bit computer’s instructions carried only a 7-bit address field and thus could directly address just a small slice of memory at a time, DEC employed several techniques to allow the computer to indirectly reach the full 12-bit address space and perform virtually any operation a larger computer could, albeit sometimes much more slowly.  While complex calculations might take a long time, however, many simpler operations could be performed just as quickly on a PDP-8 as on a much larger and more expensive computer.  The PDP-8 was also incredibly small, as de Castro employed an especially efficient board design that allowed the entire computer to fit into a case that occupied only eight cubic feet of volume, meaning it was small enough to place on top of a standard workbench.
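
For readers curious how a machine with 12-bit words but only a 7-bit address field in each instruction could still make use of all of its memory, the short Python sketch below models the general idea.  It is a simplified illustration only, not DEC’s actual hardware logic; the bit layout assumed here (one indirect bit, one current-page bit, and seven offset bits) is included purely for demonstration.

# A minimal sketch of paged plus indirect addressing on a hypothetical
# 12-bit machine with 4,096 words of memory, in the spirit of the PDP-8.
MEMORY = [0] * 4096     # 4K words, each notionally 12 bits wide
PAGE_SIZE = 128         # 2**7 words reachable with a 7-bit offset

def effective_address(pc, page_bit, indirect_bit, offset):
    """Resolve the memory location an instruction actually refers to."""
    if page_bit:
        base = pc & ~(PAGE_SIZE - 1)  # the same 128-word page as the instruction
    else:
        base = 0                      # page zero, a shared scratch area
    addr = base | (offset & (PAGE_SIZE - 1))
    if indirect_bit:
        # The addressed word holds a full 12-bit pointer, so one extra
        # memory access can reach any of the 4,096 words.
        addr = MEMORY[addr] & 0o7777
    return addr

# An instruction at location 0o1200 with the page bit set and offset 0o05
# refers to word 0o1205; with the indirect bit also set, it refers to
# whatever 12-bit address happens to be stored at 0o1205.
MEMORY[0o1205] = 0o3777
print(oct(effective_address(0o1200, page_bit=1, indirect_bit=0, offset=0o05)))  # 0o1205
print(oct(effective_address(0o1200, page_bit=1, indirect_bit=1, offset=0o05)))  # 0o3777

In this model, direct references stay within a single 128-word page (or a shared page zero), while setting the indirect bit trades one extra memory access for the ability to point anywhere in the 4,096-word store, which is the kind of technique the text describes as slower but functionally complete.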

In 1965, DEC introduced the PDP-8 with 4,000 words of memory and a teletype for user input for just $18,000.  Within just a few years, the price fell to under $10,000 as DEC continued to reduce the cost of the computer through new technologies like integrated circuits, which were first used in the PDP-8 in 1969.  Thanks to de Castro, organizations could now purchase a computer that fit on top of a desk yet provided nearly all the same functionality at nearly the same speed (for most operations, at least) as a million-dollar computer taking up half a room.  The limitations of the PDP-8 guaranteed it would not displace mainframes entirely, but the low price helped it become a massive success with over 50,000 units sold over a fifteen-year period.  Many of these machines were sold under a new business model in which DEC would act as an original equipment manufacturer (OEM) by selling a PDP-8 to another company that would add its own software and peripheral hardware.  This company would then sell the package under its own name and take responsibility for service and maintenance.  Before long, OEM arrangements grew to represent fifty percent of DEC’s computer sales while allowing DEC to keep its costs down by farming out labor-intensive tasks like software creation.  As DEC rode the success of the PDP-8, revenues climbed from $15 million in 1965 to almost $23 million in 1966 to $39 million in 1967, while profits increased sixfold between 1965 and 1967 to $4.5 million.

The Nova


The Data General Nova, a minicomputer that combined an incredibly small size with an incredibly low price

The success of the PDP-8 opened up a whole new market for small, cheap machines that soon gained the designation “minicomputers.”  With IBM and most of its competitors remaining focused on full-sized mainframes, however, this market was largely populated by newcomers to the computer industry.  Hewlett-Packard, the large West Coast electronics firm, first offered to buy DEC and then went into competition with its own minicomputer line.  Another West Coast electronics firm, Varian Associates, also entered the fray, as did an array of start-ups like Wang Laboratories and Computer Control Company, which was quickly purchased by Honeywell.  By 1970, over seventy companies were manufacturing minicomputers, and a thriving high-technology sector had emerged along Route 128 in the suburbs of Boston.  DEC continued to be the leader in the field, but soon faced some of its most serious competition from within the company itself.

Ed de Castro had brought great success to DEC by designing the PDP-8, but he was not particularly happy at the company.  The Silicon Valley concept of rewarding engineering talent with generous stock options did not yet exist, so while DEC had gone public in 1966, only senior executives reaped the benefits, and de Castro, for all the value he added to the company, had to make do with an engineer’s salary of around $12,000 a year.  Furthermore, de Castro had hoped to be placed in charge of the PDP-8 product line, but Ken Olsen refused him.  Sensing de Castro was unhappy and not wanting to lose such a talent, DEC executive Nick Mazzarese hoped to placate de Castro by giving him charge of a new project to define the company’s next-generation successor to the PDP-8.

Although the PDP-8 was only two years old by the time de Castro turned to designing a followup in 1967, the computer market had changed drastically.  The integrated circuit was by now well established and promised significant increases in performance alongside simultaneous reductions in size and cost.  Furthermore, the dominance of the System/360 had caused a shift from a computer architecture based on multiples of six bits to one based on multiples of the 8-bit byte, which remains the standard in the computer industry to this day.  DEC’s competitors in the minicomputer space were therefore focusing on creating 16-bit machines, and the 12-bit PDP-8 looked increasingly obsolete in comparison.

In late 1967, de Castro and fellow engineers Henry Burkhardt and Dick Sogge unveiled an ambitious computer architecture designed to keep DEC on top of the minicomputer market well into the 1970s.  Dubbed the PDP-X, de Castro’s system was built around medium-scale integration circuits and — like the System/360 — would offer a range of power and price options all enjoying software and peripheral compatibility.  Furthermore, while the base architecture would be 16-bit, the PDP-X was designed to be easily configurable for 32-bit technology, allowing customers to upgrade as their needs grew over time without having to redo all their software or buy all new hardware.  Rather than being just a replacement for the PDP-8, the PDP-X was positioned as a product that could supplant DEC’s entire existing computer line.

But the PDP-X was too ambitious for DEC.  Olsen still remembered the failure of the PDP-6 project, and he was horrified when de Castro told him that the PDP-X would be an even bigger undertaking than that computer.  Worse, de Castro was known for bucking DEC management practices and doing things his own way, so he had butted heads with nearly everyone on the company’s Operations Committee while simultaneously alienating nearly every product line manager by proposing to replace all of their products.  Unlike Tom Watson Jr., who bet his company on an integrated product line and came to dominate the mainframe industry as a result, Olsen could not bring himself to pledge so many resources to a single project.  DEC turned the PDP-X down.

This was the last straw for de Castro.  He had long been interested in business — witness his brief stint at Harvard — and he had long chafed under DEC management.  He had also toyed with the idea of establishing his own company in the past, and with the Route 128 tech corridor taking off, there was plenty of venture money to be had for a computer startup.  Therefore, de Castro brought in his former boss in custom products, Pat Greene, to run his prospective company, recruited Herb Richman, a Fairchild salesman from whom he had purchased circuits, to run marketing, and began designing a new 8-bit computer with Burkhardt and Sogge before actually leaving DEC.  After the group initially garnered little interest from venture capitalists, Richman placed de Castro in touch with George Cogar, co-founder of a company called Mohawk Data Sciences, who agreed to become the lead investor in what turned out to be $800,000 in financing.

In early 1968, the group was finally ready to leave DEC, but Pat Greene got cold feet and appeared ready to back out, uncomfortable with the work the group was doing behind Ken Olsen’s back.  Therefore, de Castro, Burkhardt, and Sogge waited until April 15, when Greene was out of the country on a business trip to Japan, to resign and officially establish Data General.  When Greene returned from Japan, he turned over all materials he had related to the new company to Olsen, including the plans for the 8-bit computer the three engineers had been secretly building at DEC.  Olsen felt betrayed and carried an enmity for Data General for decades, convinced de Castro had stolen DEC technology when he departed.  Despite this belief, however, DEC never sued.

In 1969, de Castro, Burkhardt, and Sogge released their first computer, the Data General Nova.  Quickly abandoning their 8-bit plans after leaving DEC, the trio designed the Nova using medium-scale integration circuits so that the entire computer fit on just two printed circuit boards: one containing the 16-bit CPU and the other containing various support systems.  By fitting all the circuitry on only two boards with minimal wiring, Data General was able to significantly undercut the PDP-8 on cost while simultaneously making the system easier to manufacture and therefore more reliable.  With these savings, Data General was able to offer the Nova at the extremely low price of $3,995, though practically speaking, the computer was essentially useless without also buying a 4K core memory expansion, which pushed the price up to around $7,995.  Still, this was an unheard-of price for a fully functional computer and spurred brisk sales.  It also piqued the interest of a young engineer recently graduated from the University of Utah who thought it just might be possible to use the Nova to introduce the Spacewar! game so popular in certain university computer labs to the wider world.


Historical Interlude: From the Mainframe to the Minicomputer Part 2, IBM and the Seven Dwarfs

The computer began life in the 1940s as a scientific device designed to perform complex calculations and solve difficult equations.  In the 1950s, the United States continued to fund scientific computing projects at government organizations, defense contractors, and universities, many of them based around the IAS architecture derived from the EDVAC and created by John von Neumann’s team at Princeton.  Some of the earliest for-profit computer companies emerged out of this scientific work, such as the previously discussed Engineering Research Associates, the Hawthorne, California-based Computer Research Corporation, which spun out of a Northrop Aircraft project to build a computer for the Air Force in 1952, and the Pasadena-based ElectroData Corporation, which spun out of the Consolidated Engineering Corporation that same year.  All of these companies remained fairly small and did not sell many computers.

Instead, it was Remington Rand that identified the future path of computing when it launched the UNIVAC I, which was adopted by businesses to perform data processing.  Once corporate America understood the computer to be a capable business machine and not just an expensive calculator, a wide array of office equipment and electronics companies entered the computer industry in the mid 1950s, often buying out the pioneering computer startups to gain a foothold.  Remington Rand dominated this market at first, but as discussed previously, IBM soon vaulted ahead as it acquired computer design and manufacturing expertise participating in the SAGE project and unleashed its world-class sales and service organizations.  Remington Rand attempted to compensate by merging with Sperry Gyroscope, which had both a strong relationship with the military and a more robust sales force, to form Sperry Rand in 1955, but the company never seriously challenged IBM again.

While IBM maintained its lead in the computer industry, however, by the beginning of the 1960s the company faced threats to its dominance at both the low end and the high end of the market from innovative machines based around new technologies like the transistor.  Fearing these new challengers could significantly damage IBM, Tom Watson Jr. decided to bet the company on an expensive and technically complex project to offer a complete line of compatible computers that could not only be tailored to a customer’s individual needs, but could also be easily modified or upgraded as those needs changed over time.  This gamble paid off handsomely, and by 1970 IBM controlled well over seventy percent of the market, with most of the remainder split among a group of competitors dubbed the “seven dwarfs” due to their minuscule individual market shares.  In the process, IBM succeeded in transforming the computer from a luxury item only operated by the largest firms into a necessary business appliance as computers became an integral part of society.

Note: Yet again we have a historical interlude post that summarizes key events outside of the video game industry that nevertheless had a significant impact upon it.  The information in this post is largely drawn from Computer: A History of the Information Machine by Martin Campbell-Kelly and William Aspray, A History of Modern Computing by Paul Ceruzzi, Forbes Greatest Technology Stories: Inspiring Tales of the Entrepreneurs and Inventors Who Revolutionized Modern Business by Jeffrey Young, IBM’s Early Computers by Charles Bashe, Lyle Johnson, John Palmer, and Emerson Pugh, and Building IBM: Shaping an Industry and Its Technology by Emerson Pugh.

IBM Embraces the Transistor


The IBM 1401, the first mainframe to sell over 10,000 units

Throughout most of its history in computers, IBM has been known more for evolution than revolution.  Rarely first with a new concept, IBM excelled at building designs based around proven technology and then turning its sales force loose to overwhelm the competition.  Occasionally, however, IBM engineers have produced important breakthroughs in computer design.  Perhaps none of these were more significant than the company’s invention of the disk drive.

On the earliest computers, mass data storage was accomplished through two primary methods: magnetic tape or magnetic drums.  Tape could hold a large amount of data for the time, but it could only be read serially, and it was a fragile medium.  Drums were more durable and had the added benefit of being random access — that is, any point of data on the drum could be read at any time — but they were low capacity and expensive.  As early as the 1940s, J. Presper Eckert had explored using magnetic disks rather than drums, which would be cheaper and feature a greater storage capacity due to a larger surface area, but there were numerous technical hurdles that needed to be ironed out.  Foremost among these was the technology to read the disks.  A drum memory array used rigid read-write heads that could be readily secured, though at high cost.  A disk system required a more delicate stylus to read the disks, and the constant spinning of the disk created a high risk that the stylus would make contact with and damage it.

The team that finally solved these problems at IBM worked not at the primary R&D labs in Endicott or Poughkeepsie, but rather a relatively new facility in San Jose, California, led by IBM veteran Reynold Johnson that had been established in 1952 as an advanced technologies research center free of the influence of the IBM sales department, which had often shut down projects with no immediate practical use.  One of the lab’s first projects was to improve storage for IBM’s existing tabulating equipment.  This task fell to a team led by Arthur Critchlow, who decided based on customer feedback to develop a new random access solution that would allow IBM’s tabulators and low-end computers to not only be useful for data processing, but also for more complicated jobs like inventory management.  After testing a wide variety of memory solutions, Critchlow’s team settled on the magnetic disk as the only viable solution, partially inspired by a similar project at the National Bureau of Standards on which an article had been published in August 1952.

To solve the stylus problem on the drive, Critchlow’s team attached a compressor to the unit that would pump a thin layer of air between the disk and the head.  Later models would take advantage of a phenomenon known as the “boundary layer” in which the fast motion of the disks generated the air cushion itself.  After experimenting with a variety of head types and positions throughout 1953 and 1954, the team was ready to complete a final design.  Announced in 1956 as the Model 305 Disk Storage Unit and later renamed RAMAC (for Random Access Method of Accounting and Control), IBM’s first disk drive consisted of fifty 24-inch diameter aluminum disks rotating at 1200 rpm with a storage capacity of five million characters.  Marketed as an add-on to the IBM 650, RAMAC revolutionized data processing by eliminating the time-consuming process of manually sorting information and provided the first compelling reason for small and mid-sized firms to embrace computers and eliminate electro-mechanical tabulating equipment entirely.


The IBM 7090, the company’s first transistorized computer

In August 1958, IBM introduced its latest scientific computer, the IBM 709, which improved on the functionality of the IBM 704.  The 709 continued to depend on vacuum tubes, however, even as competitors were starting to bring the first transistorized computers to market.  While Tom Watson, Jr. and his director of engineering, Wally McDowell, were both excited by the possibilities of transistors from the moment they first learned about them and as early as 1950 charged Ralph Palmer’s Poughkeepsie laboratory to begin working with the devices, individual project managers continued to have the final authority in choosing what parts to use in their machines, and many of them continued to fall back on the more familiar vacuum tube.  In the end, Tom Watson, Jr. had to issue a company-wide mandate in October 1957 that transistors were to be incorporated into all new projects.  Even before this mandate, Palmer had felt that IBM needed a massive project to push its solid-state designs forward, something akin to what Project SAGE had done for IBM’s efforts with vacuum tubes and core memory.  He therefore teamed with Steve Dunwell, who had spent part of 1953 and 1954 in Washington D.C. assessing government computing requirements, to propose a high-speed computer tailored to the ever-increasing computational needs of the military-industrial complex.  A contract was eventually secured with the National Security Agency, and IBM approved “Project Stretch” in August 1955; the project was formally established in January 1956 with Dunwell in charge.

Project Stretch experienced a long, difficult, and not completely successful development cycle, but it did achieve Palmer’s goals of greatly improving IBM’s solid-state capabilities, with particularly important innovations including a much faster core memory and a “drift transistor” that was faster than the surface-barrier transistor used in early solid-state computing projects like the TX-0.  As work on Stretch dragged on, however, these advances were first introduced commercially through another product.  In response to Sputnik, the United States Air Force quickly initiated a new Ballistic Missile Early Warning System (BMEWS) project that, like SAGE, would rely on a series of linked computers.  The Air Force mandated, however, that these computers incorporate transistors, so Palmer offered to build a transistorized version of the 709 to meet the project’s needs.  The resulting IBM 7090 Data Processing System, deployed in November 1959 as IBM’s first transistorized computer, provided a six-fold increase in performance over the 709 at only one-third additional cost.  In 1962,  an upgraded version dubbed the 7094 was released with a price of roughly $2 million.  Both computers were well-received, and IBM sold several hundred of them.

Despite the success of its mainframe computer business, IBM in 1960 still derived the majority of its sales from the traditional punched-card business.  While some larger organizations were drawn to the 702 and 705 business computers, their price kept them out of reach of the majority of IBM’s business customers.  Some of these organizations had embraced the low-cost 650 as a data processing solution, leading to over 800 installations of the computer by 1958, but it was actually more expensive and less reliable than IBM’s mainline 407 electric accounting machine.  The advent of the transistor, however, finally provided the opportunity for IBM to leave its tabulating business behind for good.

The impetus for a stored-program computer that could displace traditional tabulating machines initially came from Europe, where IBM did not sell its successful 407 due to import restrictions and high tooling costs.  In 1952, a competitor called the French Bull Company introduced a new calculating machine, the Bull Gamma 3, that used delay-line memory to provide greater storage capacity at a cheaper price than IBM’s electronic calculators and could be joined with a card reader to create a faster accounting machine than anything IBM offered in the European market.  Therefore, IBM’s French and German subsidiaries began lobbying for a new accounting machine to counter this threat.  This led to the launch of two projects in the mid-1950s: the modular accounting calculator (MAC) development project in Poughkeepsie that birthed the 608 electronic calculator and the expensive and relatively unsuccessful 7070 transistorized computer, and the Worldwide Accounting Machine (WWAM) project run out of France and Germany to create an improved traditional accounting machine for the European market.

While the WWAM project had been initiated in Europe, it was soon reassigned to Endicott when the European divisions proved unable to come up with an accounting machine that could meet IBM’s cost targets.  To solve this problem, Endicott engineer Francis Underwood proposed that a low-cost computer be developed instead.  Management approved this concept in early 1958 under the name SPACE — for Stored Program Accounting and Calculating Equipment — and formally announced the product in October 1959 as the IBM 1401 Data Processing System.  With a rental cost of only $2,500 a month (roughly equivalent to a purchase price of $150,000), the transistorized 1401 proved much faster and more reliable than an IBM 650 at a fraction of the cost and was only slightly more expensive than a mid-range 407 accounting machine setup.  More importantly, it shipped with a new chain printer that could output 600 lines per minute, far more than the 150 lines per minute produced by the 407, which relied on obsolete prewar technology.  When the 1401 first went on sale in 1960, IBM projected that it would sell roughly 1,000 units over the machine’s entire lifetime, but its combination of power and price proved irresistible, and by the end of 1961 over 2,000 machines had already been installed.  IBM would eventually deploy 12,000 1401 computers before it was officially withdrawn in 1971.  Powered by the success of the 1401, IBM’s computer sales finally equaled the sales of punch card products in 1962 and then quickly eclipsed them.  No computer model had ever approached the success of the 1401 before, and as IBM rode the machine to complete dominance of the mainframe industry in the early 1960s, the powder-blue casing of the machine soon inspired a new nickname for the company: Big Blue.

The Dwarfs


The Honeywell 200, which competed with IBM’s 1401 and threatened to destroy its low-end business

In the wake of Remington Rand’s success with the UNIVAC I, more than a dozen old-line firms flocked to the new market.  Companies like Monroe Calculating, Bendix, Royal, Underwood, and Philco rushed to provide computers to the business community, but one by one they fell by the wayside.  Of these firms, Philco probably stood the best chance of being successful due to its invention of the surface barrier transistor, but while its Transac S-1000 — which began life in 1955 as an NSA project called SOLO to build a transistorized version of the UNIVAC 1103 — and S-2000 computers were both capable machines, the company ultimately decided it could not keep up with the fast pace of technological development and abandoned the market like all the rest.  By 1960, only five established companies and one computer startup joined Sperry Rand in attempting to compete with IBM in the mainframe space.  While none of these firms ever succeeded in stealing much market share from Big Blue, most of them found their own product niches and deployed some capable machines that ultimately forced IBM to rethink some of its core computer strategies.

Of the firms that challenged IBM, electronics giants GE and RCA were the largest, with revenues far exceeding those of the computer industry’s market leader, but in a way their size worked against them.  Since neither computers nor office equipment were among either firm’s core competencies, nor integral to either firm’s future success, they never fully committed to the business and therefore never experienced real success.  Unsurprisingly, they were the first of the seven dwarfs to finally call it quits, with GE selling off its computer business in 1970 and RCA following suit in 1971.  Burroughs and NCR, the companies that had long dominated the adding machine and cash register businesses respectively, both entered the market in 1956 after buying out a small startup firm — ElectroData and Computer Research Corporation respectively — and managed to remain relevant by creating computers specifically tailored to their preexisting core customers, the banking sector for Burroughs and the retail sector for NCR.  Sperry Rand ended up serving niche markets as well after failing to compete effectively with IBM, experiencing success in fields such as airline reservation systems.  The biggest threat to IBM’s dominance in this period came from two Minnesota companies: Honeywell and Control Data Corporation (CDC).

Unlike the majority of the companies that persisted in the computer industry, Honeywell came not from the office machine business, but from the electronic control industry.  In 1883, a man named Albert Butz created a device called the “damper flapper” that would sense when a house was becoming cold and cause the flapper on a coal furnace to rise, thus fanning the flames and warming the house.  Butz established a company that did business under a variety of names over the next few years to market his innovation, but he had no particular acumen for business.  In 1891, William Sweatt took over the company and increased sales through door-to-door selling and direct marketing.  In 1909 the company introduced the first controlled thermostat, sold as the “Minnesota Regulator,” and in 1912 Sweatt changed the name of the company to the Minneapolis Heat Regulator Company.  In 1927, a rival firm, Mark C. Honeywell’s Honeywell Heating Specialty Company of Wabash, Indiana, bought out Minneapolis Heat Regulator to form the Minneapolis-Honeywell Regulator Company with Honeywell as president and Sweatt as chairman.  The company continued to expand through acquisitions over the next decade and weathered the Great Depression relatively unscathed.

In 1941, Harold Sweatt, who had succeeded Honeywell as president in 1934, parlayed his company’s expertise in precision measuring devices into several lucrative contracts with the United States military, emerging from World War II as a major defense contractor.  Therefore, the company was approached by fellow defense contractor Raytheon to establish a joint computer subsidiary in 1954.  Incorporated as Datamatic Corporation the next year, the computer company became a wholly-owned subsidiary of Honeywell in 1957 when Raytheon followed so many other companies in exiting the computer industry.  Honeywell delivered its first mainframe, the Datamatic 1000, that same year, but the computer relied on vacuum tubes and was therefore already obsolete by the time it hit the market.  Honeywell temporarily withdrew from the business and went back to the drawing board.  After IBM debuted the 1401, Honeywell triumphantly returned to the business with the H200, which not only took advantage of the latest technology to outperform the 1401 at a comparable price, but also sported full compatibility with IBM’s wildly successful machine, meaning companies could transfer their existing 1401 programs without needing to make any adjustments.  Announced in 1963, the H200 threatened IBM’s control of the low-end of the mainframe market.


William Norris (l) and Seymour Cray, the principal architects of the Control Data Corporation

While Honeywell chipped away at IBM from the bottom of the market, computer startup Control Data Corporation (CDC) — the brainchild of William Norris — threatened to do the same from the top.  Born in Red Cloud, Nebraska, and raised on a farm, Norris became an electronics enthusiast at an early age, building mail-order radio kits and becoming a ham radio operator.  After graduating from the University of Nebraska in 1932 with a degree in electrical engineering, Norris was forced to work on the family farm for two years due to a lack of jobs during the Depression before joining Westinghouse in 1934 to work in the sales department of the company’s x-ray division.  Norris began doing work for the Navy’s Bureau of Ordnance as a civilian in 1940 and enjoyed the work so much that he joined the Naval Reserve and was called to duty at the end of 1941 at the rank of lieutenant commander.  Norris served as part of the CSAW codebreaking operation and became one of the principal advocates for and co-founders of Engineering Research Associates after the war.  By 1957, Norris was feeling stifled by the corporate environment at ERA parent company Sperry Rand, so he left to establish CDC in St. Paul, Minnesota.

Norris provided the business acumen at CDC, but the company’s technical genius was a fellow engineer named Seymour Cray.  Born in Chippewa Falls, Wisconsin, Cray entered the Navy directly after graduating from high school in 1943, serving first as a radio operator in Europe before being transferred to the Pacific theater to participate in code-breaking activities.  After the war, Cray attended the University of Minnesota, graduated with an electrical engineering degree in 1949, and went to work for ERA in 1951.  Cray immediately made his mark by leading the design of the UNIVAC 1103, one of the first commercially successful scientific computers, and soon gained a reputation as an engineering genius able to create simple, yet fast computer designs.  In 1957, Cray and several other engineers followed Norris to CDC.

Unlike some of the more conservative engineers at IBM, Cray understood the significance of the transistor immediately and worked to quickly incorporate it into his computer designs.  The result was CDC’s first computer, the 1604, which was first sold in 1960 and significantly outperformed IBM’s scientific computers.  Armed with Cray’s expertise in computer design, Norris decided to concentrate on building the fastest computers possible and selling them to the scientific and military-industrial communities where IBM’s sales force exerted relatively little influence.  As IBM’s Project Stretch floundered — never meeting its performance targets after being released as the IBM 7030 in 1961 — Cray moved forward with his plans to build the fastest computer yet designed.  Released as the CDC 6600 in 1964, Cray’s machine could perform an astounding three million operations per second, three times as many as the 7030 and more than any other machine would be able to perform until 1969, when another CDC machine, the 7600, outpaced it.  Dubbed a supercomputer, the 6600 became the flagship product of a series of high-speed scientific computers that IBM proved unable to match.  While Big Blue was ultimately forced to cede the top of the market to CDC, however, by the time the 6600 launched IBM was in the final phases of developing a product line that would extend its dominance over the mainframe business and ensure competitors like CDC and Honeywell would be limited to only niche markets.

System/360


The System/360 family of computers, which extended IBM’s dominance of the mainframe market through the end of the 1960s.

When Tom Watson Jr. finally assumed full control of IBM from his father, he inherited a corporate structure designed to collect as much power and authority in the hands of the CEO as possible.  Unlike Watson Sr., Watson Jr. preferred decentralized management with a small circle of trusted subordinates granted the authority to oversee the day-to-day operation of IBM’s diverse business activities.  Therefore, Watson overhauled the company in November 1956, paring down the number of executives reporting directly to him from seventeen to just five, each of whom oversaw multiple divisions with the new title of “group executive.”  He also formed a Corporate Management Committee consisting of himself and the five group executives to make and execute high-level decisions.  While the responsibilities of individual group executives would change from time to time, this new management structure remained intact for decades.

Foremost among Watson’s new group executives was a vice president named Vin Learson.  A native of Boston, Massachusetts, T. Vincent Learson graduated from Harvard with a degree in mathematics in 1935 and joined IBM as a salesman, where he quickly distinguished himself.  In 1949, Learson was named sales manager of IBM’s Electric Accounting Machine (EAM) Division, and he rose to general sales manager in 1953.  In April 1954, Tom Watson, Jr. named Learson the director of Electronic Data Processing Machines with a mandate to solidify IBM’s new electronic computer business.  After guiding early sales of the 702 computer and establishing an advanced technology group to incorporate core memory and other improvements into the 704 and 705 computers, Learson received another promotion to vice president of sales for the entire company before the end of the year.  During Watson’s 1956 reorganization, he named Learson group executive of the Military Products, Time Equipment, and Special Engineering Products divisions.

During the reorganization, IBM’s entire computer business fell under the new Data Processing Division overseen by group executive L.H. LaMotte.  As IBM’s computer business continued to grow and diversify in the late 1950s, however, it grew too large and unwieldy to contain within a single division, so in 1959 Watson split the operation in two by creating the Data Systems Division in Poughkeepsie, responsible for large systems, and the General Products Division, which took charge of small systems like the 650 and 1401 and incorporated IBM’s other laboratories in Endicott, San Jose, Burlington, Vermont, and Rochester, Minnesota.  Watson then placed these two divisions, along with a new Advanced Systems Development Division, under Learson’s control, believing him to be the only executive capable of propelling IBM’s computer business forward.


Vin Learson, the IBM executive who spearheaded the development of the System/360

When Learson inherited the Data Systems and General Products Divisions, he was thrust into the middle of an all-out war for control of IBM’s computer business.  The Poughkeepsie Laboratory had been established specifically to exploit electronics after World War II and prided itself on being at the cutting edge of IBM’s technology.  The Endicott Laboratory, the oldest R&D division at the company, had often been looked down upon for clinging to older technology, yet by producing both the 650 and the 1401, Endicott was responsible for the majority of IBM’s success in the computer realm.  By 1960, both divisions were looking to update their product lines with more advanced machines.  That September, Endicott announced the 1410, an update to the 1401 that maintained backwards compatibility.  At the same time, Poughkeepsie was hard at work on a new series of four compatible machines designed to serve a variety of business and scientific customers under the 8000 series designation.  Learson, however, wanted to unify the product line from the very low end represented by the 1401 to the extreme high end represented by the 7030 and the forthcoming 8000 computers.  By achieving full compatibility in this manner, IBM could take advantage of economies of scale to drive down the cost of individual computer components and software development while also standardizing peripheral devices and streamlining the sales and service organizations that would no longer have to learn multiple systems.  While Learson’s plan was sound in theory, however, forcing two organizations that prided themselves on their independence and competed with each other fiercely to work together would not be easy.

Learson relied heavily on his power as a group executive to transfer employees across both divisions to achieve project unity.  First, he moved Bob Evans, who had been the engineering manager for the 1401 and 1410, from Endicott to Poughkeepsie as the group’s new systems development manager.  Already a big proponent of compatibility, Evans unsurprisingly recommended that the 8000 project be cancelled and a cohesive product line spanning both divisions be initiated in its place.  The lead designer of the 8000 series, Frederick Brooks, vigorously opposed this move, so Learson replaced Brooks’s boss with another ally, Jerrier Haddad, who had led the design of the 701 and recently served as the head of Advanced Systems Development.  Haddad sided with Evans and terminated the 8000 project in May 1961.  Strong resistance remained in some circles, however, most notably from General Products Division head John Haanstra, so in October 1961, Learson assembled a task group called SPREAD (Systems, Planning, Review, Engineering, and Development) consisting of thirteen senior engineering and marketing managers to determine a long-term strategy for IBM’s data processing line.

On December 28, the SPREAD group delivered its final proposal to the executive management committee.  In it, they outlined a series of five compatible processors representing a 200-fold range in performance.  Rather than incorporate the new integrated circuit, the group proposed a proprietary IBM design called Solid Logic Technology (SLT), in which the discrete components of the circuit were mounted on a single ceramic substrate, but were not fully integrated.  By combining the five processors with SLT circuits and core memories of varying speeds, nineteen computer configurations would be possible that would all be fully compatible and interchangeable and could be hooked up to 40 different peripheral devices.  Furthermore, after surveying the needs of business and scientific customers, the SPREAD group realized that other than floating-point capability for scientific calculations, the needs of both groups were nearly identical, so they chose to unify the scientific and business lines rather than market different models for each.  Codenamed the New Product Line (NPL), the SPREAD proposal would allow IBM customers to buy a computer that met their current needs and then easily upgrade or swap components as their needs changed over time at a fraction of the cost of a new system without having to rewrite all their software or replace their peripheral devices.  While not everyone was convinced by the presentation, Watson ultimately authorized the NPL project.

The NPL project was perhaps the largest civilian R&D operation ever undertaken to that point.  Development costs alone were $500 million, and when tooling, manufacturing, and other expenses were taken into account, the cost was far higher.  Design of the five processor models was spread over three facilities, with Poughkeepsie developing the three high-end systems, Endicott developing the lowest-end system, and a facility in Hursley, England, developing the other system.  At the time, IBM manufactured all its own components as well, so additional facilities were charged with churning out SLT circuits, core memories, and storage systems.  To assemble all the systems, IBM invested in six new factories.  In all, IBM spent nearly $5 billion to bring the NPL to market.

To facilitate the completion of the project, Watson elevated two executives to new high level positions: Vin Learson assumed the new role of senior vice president of sales, and Watson’s younger brother, Arthur, who for years had run IBM’s international arm, the World Trade Corporation, was named senior vice president of research, development, and manufacturing.  This new role was intended to groom the younger Watson to assume the presidency of IBM one day, but the magnitude of the NPL project coupled with Watson’s inexperience in R&D and manufacturing ultimately overwhelmed him.  As the project fell further and further behind schedule, Learson ultimately had to replace Arthur Watson in order to see the project through to completion.  Therefore, it was Learson who assumed the presidency of IBM in 1966 while Watson assumed the new and largely honorary role of vice chairman.  His failure to shepherd the NPL project ended any hope Arthur Watson had of continuing the Watson family legacy of running IBM, and he ultimately left the company in 1970 to serve as the United States ambassador to France.

In late 1963, IBM began planning the announcement of its new product line, which now went by the name System/360 — a name chosen because it represented all the points of a compass and emphasized that the product line would fill the needs of all computer users.  Even at this late date, however, acceptance of System/360 within IBM was not assured.  John Haanstra continued to push for an SLT upgrade to the existing 1401 line to satisfy low-end users, which other managers feared would serve to perpetuate the incompatibility problem plaguing IBM’s existing product line.  Furthermore, IBM executives struggled over whether to announce all the models at once and thus risk a significant drop in orders for older systems during the transition period, or phase in each model over the course of several years.  All debate ended when Honeywell announced the H200.  Faced with losing customers to more advanced computers fully compatible with IBM’s existing line, Watson decided in March 1964 to scrap the improved 1401 and launch the entire 360 product line at once.

On April 7, 1964, IBM held press conferences in sixty-three cities across fourteen countries to announce the System/360 to the world.  Demand soon far exceeded supply: within the first two years the System/360 was on the market, IBM was only able to fill roughly 4,500 of 9,000 orders.  Headcount at the company rose rapidly as IBM rushed to bring new factories online in response.  In 1965, when actual shipments of the System/360 were just beginning, IBM controlled 65 percent of the computer market and had revenues of $2.5 billion.  By 1967, as IBM ramped up to meet insatiable 360 demand, the company employed nearly a quarter of a million people and raked in $5 billion in revenues.  By 1970, IBM had an installed base of 35,000 computers and held an ironclad grip on the mainframe industry with a market share between seventy and eighty percent; the next year company earnings surpassed $1 billion for the first time.

As batch processing mainframes, the System/360 line and its competitors did not serve as computer game platforms or introduce technology that brought the world closer to a viable video game industry.  System/360 did, however, firmly establish the computer within corporate America and solidified IBM’s place as a computing superpower while facilitating the continuing spread of computing resources and the evolution of computer technology.  Ultimately, this process would culminate in a commercial video game industry in the early 1970s.

Historical Interlude: From the Mainframe to the Minicomputer Part 1, Transistors and Integrated Circuits

So now it’s time to pause again in our examination of video game history to catch up on the technological advances that would culminate in the emergence of an interactive entertainment industry.  As previously discussed, the release and subsequent spread of Spacewar! in 1962 represented the first widespread interest in computer gaming, yet no commercial products would appear before 1971.  In the meantime, computer games continued to be written throughout the 1960s (which will be discussed in a subsequent post), but none of them gained the same wide exposure or popularity as Spacewar!.  Numerous roadblocks prevented the spread of these early computer games, ranging from the difficulty of porting programs between systems to the lack of reliable wide area distribution networks, but the primary inhibitor remained cost, as even a relatively cheap $120,000 PDP-1 remained an investment out of the reach of most organizations — let alone the general public — and many computers still cost ten times that amount.

The key to transforming the video game into a commercial product therefore lay in significantly reducing the cost of the hardware involved.  The primary expense in building a computer remained the switching units that defined its internal logic, which in the late 1950s were still generally the bulky, power-hungry, temperamental vacuum tubes.  In 1947, John Bardeen and Walter Brattain at Bell Labs demonstrated the solution to the vacuum tube problem in the form of the semiconducting transistor, but as with any new technology there were numerous production and cost issues that had to be overcome before it could completely displace the vacuum tube.  By the early 1960s, the transistor was finally well established in the computer industry, but while it drove down the cost and size of computers like DEC’s PDP-1, a consumer product remained out of reach.  Finally, in late 1958 and early 1959, engineers working independently at two of the most important semiconductor manufacturers in the world discovered how to integrate all of the components of a circuit on one small plate, commonly called a “chip.”  This breakthrough paved the way for cost and size reductions that would allow the creation of the first minicomputers, which remained out of reach for the individual consumer but could at least be deployed in a public entertainment setting like an arcade.

Note:  Once again, this is a “historical interlude” post that will provide a summary of events drawn from a few secondary sources rather than the in-depth historiographic analysis of my purely game-related posts.  The majority of the information in this post is drawn from Forbes Greatest Technology Stories: Inspiring Tales of the Entrepreneurs and Inventors Who Revolutionized Modern Business by Jeffrey Young, The Man Behind the Microchip: Robert Noyce and the Invention of Silicon Valley by Leslie Berlin, The Intel Trinity: How Robert Noyce, Gordon Moore, and Andy Grove Built the World’s Most Important Company by Michael Malone, an article from the July 1982 issue of Texas Monthly called “The Texas Edison” by T.R. Reid, and The Silicon Engine, an online exhibit maintained by the Computer History Museum.

The Transistor Enters Mass Production


Gordon Teal (l), whose crystal-growing techniques were crucial to mass producing the transistor

As previously discussed, on December 23, 1947, William Shockley, John Bardeen, and Walter Brattain demonstrated the transistor for the first time in front of a group of managers at Bell Labs, which is widely considered the official birthday of the device.  This transistor consisted of a lump of germanium with three wires soldered to its surface in order to introduce the electrons.  While this point-contact transistor produced the desired results, however, it was difficult to manufacture, with yield rates of only fifty percent.  Determined to create a better device — in part due to anger that Bardeen and Brattain received all the credit for the invention — William Shockley explored alternative avenues to create a less fragile transistor.

In 1940, Bell Labs researchers Russell Ohl and Jack Scaff had discovered while working on semiconductor applications for radar that semiconducting crystals could have either a positive or a negative polarity, which were classified as p-type and n-type crystals respectively.  Shockley believed that by creating a “sandwich” with a small amount of p-type material placed between n-type material on either end, he could create what he termed a junction transistor that would amplify or block a current when a charge of the appropriate polarity was applied to the p-type material in the middle.  Placing the required impurities in just the right spots in the germanium proved challenging, but by 1949, Shockley was able to demonstrate a working p-n junction transistor.  While the junction transistor was theoretically well suited for mass production, however, in reality the stringent purity and uniformity requirements of the semiconducting crystals presented great challenges.  Gordon Teal, a chemist with a Ph.D. from Brown who joined Bell Labs in 1930 and worked on radar during World War II, believed that large crystals doped with impurities at precise points would be necessary to reliably produce a working junction transistor, but he apparently garnered little support for his theories from Shockley and other managers at Bell Labs.  He finally took it upon himself to develop a suitable process for growing crystals with the help of engineer John Little and technician Ernest Buehler, which they successfully demonstrated in 1951.  That same year, another Bell Labs researcher named William Pfann developed a technique called zone refining that allowed for the creation of ultra-pure crystals with minuscule amounts of impurities, which lowered the manufacturing cost of the junction transistor significantly.  Together, the advances by Teal and Pfann provided Bell Labs with a viable fabrication process for transistors.

Part of the reason Teal could not generate much excitement about his manufacturing techniques at Bell Labs was that AT&T remained unsure about entering the transistor business.  Despite recent advances, executives remained doubtful that the transistor would ultimately displace the large and well-established vacuum tube industry.  Worse, the company was currently under investigation by the U.S. Department of Justice for antitrust violations and was therefore hesitant to enter and attempt to dominate a new field of technology.  Therefore, in 1952 the company decided to offer a royalty-free license to any company willing to research integrating the transistor into hearing aids, one of the original passions of company founder Alexander Graham Bell, and held a series of technical seminars introducing interested parties to the device.  Several large electronics companies signed up, including Raytheon, Zenith, and RCA.  They were joined by a relatively small company named Texas Instruments (TI).


From left to right: John Erik Jonsson, Henry Bates Peacock, Eugene McDermott, and Cecil Green, the men who transformed Geophysical Service, Inc. into Texas Instruments

In 1924, two physicists named Clarence Karcher and Eugene McDermott established the Geophysical Research Corporation (GRC) in Tulsa, Oklahoma, as a subsidiary of Amerada Petroleum.  The duo had been developing a reflection-seismograph process to map faults and domes beneath the earth when they realized that the same process was ideal for discovering oil deposits.  By 1930, GRC had become the leading geophysical exploration company active along the Gulf Coast, but the founders disliked working for Amerada, so they established a new laboratory in Newark, New Jersey, and with investment from geologist Everette DeGolyer formed a new independent company called Geophysical Service, Inc. (GSI).  In 1934, the company moved the laboratory to Dallas to be closer to the heart of the oil trade.

The early 1930s were not a particularly auspicious time to start a new business, with the Great Depression in full swing, but GSI managed to grow by aggressively expanding its oil exploration business into international markets such as Mexico, South America, and the Middle East.  Success abroad did not fully compensate for difficulties in the U.S., however, so in December 1938, the company reorganized in order to exploit the untapped oil fields in the American Southwest.  A new Geophysical Service, Inc. — renamed the Coronado Corporation early the next year — was established with Karcher at the helm as an oil production business, while the original GSI, now headed solely by McDermott, became a subsidiary of Coronado and continued in the exploration business.  The company failed to flourish, however, so in 1941 Karcher negotiated a $5 million sale of Coronado to Stanolind Oil & Gas.  Not particularly interested in the exploration business, Stanolind offered the employees of GSI the opportunity to buy back the company for $300,000.  McDermott, R&D head J. Erik Jonsson, field exploration head Cecil Green, and crew chief H. Bates Peacock managed to scrape together the necessary funding and purchased GSI on December 6, 1941.  The very next day, the Japanese bombed Pearl Harbor, dragging the United States into World War II.

With so much of its business tied up in international oil exploration work that would have to be abandoned during the coming global conflict, GSI would be unable to survive by concentrating solely on its primary business and now needed to find additional sources of income.  The solution to this problem came from Jonsson, a former aluminum sales engineer in charge of R&D at GSI since the company’s inception in 1930, who realized that the same technology used for locating oil could also be used to locate ships and airplanes.  A fortuitous connection between McDermott and Dr. Dana Mitchell, who was part of a group working on electronic countermeasure technology, led to a contract to manufacture a device called the magnetic anomaly detection (MAD) system.  Building on this work, GSI emerged as a major supplier of military electronics by the end of the war.

During the war, Jonsson became impressed with Patrick Haggerty, an electrical engineer and Navy lieutenant from North Dakota working as a procurement officer for the Navy’s Bureau of Aeronautics.  In 1946, GSI hired Haggerty to run its new Laboratory and Manufacturing Division, which the company established to expand its wartime electronics work in both the military and private sectors.  Haggerty was determined to transform GSI into a major player in the field and convinced management to invest in a large new manufacturing plant that would require the company to tap nearly its entire $350,000 line of credit with the Republic National Bank.  By 1950, this investment had turned into annual sales of nearly $10 million.  With manufacturing now a far more important part of the business than oil exploration, company executives realized the name GSI no longer fit the company.  They decided to change the name to General Instruments, which conjured up visions of the great electronics concerns of the East like General Electric.  Unfortunately, there was already a defense contractor with that name, so the Pentagon asked them to pick something else.  They chose Texas Instruments.


Patrick J. Haggerty, the man who brought TI into the transistor business

When Patrick Haggerty learned AT&T was offering licenses for transistor technology, he knew immediately that TI had to be involved.  AT&T, however, disagreed.  In 1952, TI had realized a profit of $900,000 on sales of just $20 million and did not appear capable of making the necessary investment to harness the full potential of the transistor.  It took a year for TI management to finally convince AT&T to grant the firm the $25,000 license, after which Haggerty made another large financial gamble, investing over $4 million in manufacturing plants, development, new hires, and other startup costs.  Before the end of 1952, TI had its first order for 100 germanium transistors from the Gruen Watch Company, and production formally began.

Haggerty had muscled TI into an important new segment of the electronics industry, but in the end it was AT&T that was proven correct: TI really was too small to make much of an impact in the germanium transistor market.  Haggerty therefore turned to new technology to keep his company relevant in the field.  While germanium served as a perfectly fine semiconducting material at temperatures below 100 degrees Fahrenheit, the element’s semiconducting properties broke down at higher temperatures, rendering it unsuitable for defense projects like guided missiles.  Silicon offered both better semiconducting capability and a higher temperature tolerance, but despite the best efforts of scientists at Bell Labs and elsewhere, the element had so far proven impossible to dope reliably with the necessary impurities.  This did not dissuade Haggerty, who placed an ad in the New York Times for a new chief researcher who could bring TI into silicon transistors.  That ad was answered by none other than the brilliant Bell Labs chemist Gordon Teal.

Feeling unappreciated after facing such resistance to his research at Bell Labs, Teal was ready to move on, but despite answering the TI ad, he was not certain the Texas company was the right fit.  Solving the problems with silicon would require a great deal of time and money, and TI remained a relatively small concern.  Haggerty reassured him, however, by revealing that TI was preparing to merge with Intercontinental Rubber, a cash-rich firm listed on the New York Stock Exchange with a faltering tire and rubber business.  This merger, completed in October 1953, made TI a public company and guaranteed that Teal would have the funding he needed.  Haggerty promised Teal anything and anyone he needed with only one stipulation: after one year, Teal would need to have a product TI could bring to market.  Teal accepted the challenge.

1954 proved to be a trying year for TI.  While the transistor business failed to gain traction against larger competitors, the defense contracts the company depended upon as its primary source of revenue began to dry up with the end of the Korean War and a subsequent cut in military spending.  Revenues that had risen to $27 million in 1953 declined to $24 million, profits fell slightly from $1.27 million to $1.2 million, and the stock began trading in single digits.  That same year, however, Teal succeeded in developing a complicated high-temperature doping and zone refining process that yielded a viable silicon transistor.  At a conference on airborne electronics held in Dayton, Ohio, that spring, Teal not only proudly announced to the assembled attendees that TI had a working silicon transistor in production, he also provided a dramatic demonstration.  A record player was produced, specially modified so that a transistor could be snapped in and out to complete a circuit.  First, Teal snapped in a germanium transistor and then dropped it into a beaker of hot oil, which destroyed the transistor and stopped the player.  Then, he performed the same action with a silicon transistor.  The music played on.  TI quickly found itself swamped with orders.

New Players


The “Traitorous Eight,” who left Shockley Semiconductor to establish Fairchild Semiconductor.

From left: Gordon Moore, C. Sheldon Roberts, Eugene Kleiner, Robert Noyce, Victor Grinich, Julius Blank, Jean Hoerni, and Jay Last

In 1954, Bell Labs chemist Calvin Fuller developed a new technique called the diffusion process, in which silicon could be doped at high temperatures using gases containing the desired impurities.  By the next March, Bell Labs chemist Morris Tanenbaum had succeeded in harnessing the diffusion process to create semiconducting material so thin that a silicon wafer could be created in which each layer of the n-p-n sandwich was only a fraction of a millimeter thick.  The resulting diffusion-base transistor operated at much higher frequencies than previous junction transistors and therefore performed much faster.  With Gordon Teal’s crystal-growing expertise and Patrick Haggerty’s salesmanship, TI kept pace with these advancements and enjoyed a virtual monopoly on the emerging field of silicon transistors during the next few years, with company revenues soaring to $45.7 million in 1956.  The transistor business, however, remained a relatively small part of the overall electronics industry.  Between 1954 and 1956, 17 million germanium transistors and 11 million silicon transistors were sold in the United States.  During the same period, 1.3 billion vacuum tubes were sold.

Practically speaking, the vacuum tube companies appeared to hold a distinct advantage, as they could theoretically use the enormous resources at their disposal from their vacuum tube sales to support R&D in transistors and gradually transition to the new technology.  In reality, however, while most of the major tube companies established small transistor operations, they were so accustomed to the relatively static technologies and processes associated with the tube industry that they were unable to cope with the volatile pricing and ever-changing manufacturing techniques that defined the transistor industry.  The Philco Corporation was a poster child for these difficulties.  Established in Philadelphia in 1892 as the Helios Electric Company to produce lamps, Philco became a major player in the emerging field of consumer radios in the mid-1920s and by the end of World War II was one of the largest producers of vacuum tubes in the United States.  The company seriously pursued transistor technology, creating in 1953 the high-speed surface-barrier transistor discussed in a previous post that powered the TX-0.  In 1956, Philco improved the surface-barrier transistor by employing the diffusion process, but the company soon grew leery of attempting to keep up with new transistor technologies.  The original surface-barrier transistor had been fast, but expensive, and the diffusion-based model cost even more, retailing for around $100.  As technology continued to progress, however, the price fell to $50 within six months, and then to $19 a year after that.  By the next year, lots of 1,000 Philco transistors could be had for a mere $6.75.  Spooked, the company ultimately decided to remain focused on vacuum tubes.  By 1960, Philco was on the verge of bankruptcy, and Ford purchased the struggling firm in 1961.


The Shockley Semiconductor Laboratory in the heart of the region that would become Silicon Valley

While the old guard in the electronics industry ultimately exerted little influence on the transistor business, TI soon faced competition from more formidable opponents.  In 1950, William Shockley paid a visit to Georges Doriot, the pioneering venture capitalist who later funded the Digital Equipment Corporation.  Surprisingly, their discussion did not focus on the transistor, but rather on another invention Shockley patented in 1948, a “Radiant Energy Control System,” essentially a feedback system using a visual sensor.  Shockley had worked on improving bomb sights during World War II and saw this system as the next step, potentially allowing a self-guided bomb to compare photographs of targets with visual data from the sensor for increased accuracy.  The same technology could also be used for facial recognition, or for automated sorting of components in manufacturing.  Since the publication of mathematician Norbert Wiener’s groundbreaking book, Cybernetics, in 1948, the Cambridge academic community had been excited by the prospect of using artificial systems to replace human labor for more mundane tasks.  Indeed, in 1952 this concept would gain the name “automation,” a term first coined by Delmar Harder at Ford and popularized by Harvard Business School Professor John Diebold in his book Automation: The Advent of the Automatic Factory.  When Doriot learned of Shockley’s control system, he urged the eminent physicist to waste no time in starting his own company.

By 1951, Shockley had refined his “Radiant Energy Control System” into an optoelectronic eye he felt could form the core of an automated robot that could replace humans on the manufacturing line.  After negotiating an exemption with Bell Labs allowing him to maintain the rights to any patents he filed related to automation for the period of one year, Shockley filed a patent for an “Electrooptical Control System” and wrote a memo to Bell Labs president Mervin Kelly urging the organization to build an “automatic trainable robot.”  When Kelly refused to consider such a project, Shockley, already stripped of most of his responsibilities regarding transistor development due to incessant conflicts with his team, took a leave of absence from Bell Labs in late 1952.  After a year as a visiting professor at CalTech, Shockley became director of the Pentagon’s Weapons Systems Evaluation Group and spent the next year or so studying methods for the U.S. to fight a nuclear war while periodically turning down offers to teach at prestigious universities or establish his own semiconductor operation.

In February 1955, Shockley met renowned chemist Arnold Beckman at a gala in Los Angeles honoring Shockley and amplifier inventor Lee DeForest.  The two bonded over their shared interest in automation and kept in touch over the following months.  Finally, in June 1955, Shockley decided he needed to radically change his life, so he resigned from both Bell Labs and his Pentagon job, divorced his wife, and began to seriously consider offers to start his own company.  The next month, he contacted Beckman to propose forming a company together to bring the new diffusion transistor to market and develop methods to automate the production of transistors.  After a period of negotiation, the Shockley Semiconductor Laboratory was established in September 1955 as a subsidiary of Beckman Instruments.  Even though Beckman was headquartered in Southern California, Shockley convinced his new partner to locate Shockley Semiconductor further north in Palo Alto, California, so he could once again remain close to his mother.

Unable to recruit personnel from Bell Labs, where his reputation as a horrible boss preceded him, Shockley scoured technical conferences, college physics departments, and research laboratories for bright young scientists and engineers.  One of his first hires also proved to be his most important, a young physicist named Bob Noyce.  Born in 1927 in Burlington, Iowa, Robert Norton Noyce was the son of a Congregationalist minister who moved his family all over the state of Iowa as he migrated from one congregation to the next.  This itinerant life, made even more difficult by the Depression, finally ended in 1940 when Ralph Noyce took a job in the college town of Grinnell, Iowa.  Bob Noyce thrived in Grinnell, where his natural charisma and sense of adventure soon made him the leader among the neighborhood children.  A brilliant student despite a penchant for mischief and goofing off, Noyce took a college physics course at Grinnell College during his senior year of high school and graduated class valedictorian.  The Miami University Department of Physics offered to give him a job as a lab assistant if he attended the school — an honor usually reserved for graduate students — but worried he would just be another face in the crowd at such a large institution, Noyce chose to study at Grinnell College instead.

At Grinnell, Noyce nearly lost his way at the end of his junior year.  Eager to maintain his social standing among older students returning from World War II, Noyce agreed to “procure” a pig to roast at a Hawaiian luau dorm party.  Soon after, he learned his girlfriend was pregnant and would need an abortion.  Depressed, Noyce got drunk and with the help of a friend stole a pig from a local farmer’s field.  Feeling remorseful, they returned the next day to apologize to the farmer and pay for the pig, only to learn that he was the mayor of Grinnell and did not take the prank lightly.  Noyce was almost expelled as a result, but he was saved by his physics professor, Grant Gale, who saw Noyce as a once-in-a-generation talent who should not be squandered over an ill-advised prank.  The college relented and merely suspended him for a semester.

When Noyce returned to Grinnell after working for a life insurance company in New York during his forced exile, he was introduced to the technology that would change his life.  His mentor Gale was an old friend of transistor co-inventor John Bardeen, with whom he had attended the University of Wisconsin, while the head of research at Bell Labs, Oliver Buckley, was a Grinnell graduate.  Gale therefore learned of the transistor’s invention early and was able to secure a wide array of documentation on the new device from Bell.  When Noyce saw his professor enraptured by these documents, he dove right in himself and soon resolved to learn everything he could about transistors.  After graduating from Grinnell with degrees in mathematics and physics, Noyce matriculated to the physics department at MIT, where he planned to focus his studies on solid-state physics.  As transistors were so new, most of Noyce’s classwork revolved around vacuum tubes, but his dissertation, completed in mid 1953, dealt with matters related to transistor development.  Upon earning his doctorate in physics, Noyce took a job at Philco, where in 1950 R&D executive Bill Bradley had established the 25-man research group that developed the surface-barrier transistor.  Noyce rose through the ranks quickly at Philco, but he soon became disillusioned with the layers of bureaucracy and paperwork inherent in working for a large defense contractor, especially after the company was forced to significantly curtail R&D activities due to losses.  Just as Noyce was looking for a way out, Shockley called in January 1956 after reading a paper Noyce had presented on surface-barrier transistors several months earlier at a conference.  In March, Noyce headed west to join Shockley Semiconductor.

Before long, Shockley had succeeded in recruiting a team of about twenty with expertise in a variety of fields related to transistor creation. These individuals included a Ph.D. candidate in the solid state physics program at MIT named Jay Last, a chemist at the Johns Hopkins Applied Physics Lab named Gordon Moore, a mechanical engineer at Western Electric named Julius Blank, Viennese World War II refugee and expert tool builder Eugene Kleiner, metallurgist Sheldon Roberts, Swiss theoretical physicist Jean Hoerni, and Stanford Research Institute physicist Vic Grinich.  Shockley hoped these bright young scientists would secure his company’s dominance in the semiconductor industry.


Sherman Fairchild, the inventor and businessman who financed Fairchild Semiconductor

On November 1, 1956, William Shockley learned that he had been awarded the Nobel Prize for Physics — shared with Walter Brattain and John Bardeen — for the invention of the transistor.  Theoretically at the height of his fame and powers, Shockley soon found his entire operation falling apart.  Always a difficult man to work for, his autocratic tendencies grew even worse now that he was a Nobel laureate in charge of his own company.  He micromanaged employees, even in areas outside of his expertise, and viciously attacked them when their work was not up to his standards.  Feeling threatened by Jean Hoerni and his pair of doctorates, he once exiled the physicist to an apartment to work alone, though he later relented.  He discouraged his employees from pursuing their own projects and insisted on adding his name to any paper they presented, whether he had any involvement in the subject or not.  Once, when a secretary cut her hand on a piece of metal protruding from a door, he insisted it must have been an act of sabotage and threatened to hire a private investigator and subject the staff to lie detector tests.  He was finally dissuaded by Roberts, who convinced him with the aid of a microscope that the piece of metal was merely a tack that had lost its plastic head.

The final straw was Shockley’s insistence on pulling staff and resources from improving upon the diffusion-base silicon transistor to work on a new four-layer diode project he believed could act as both a transistor and a resistor and was theoretically faster and cheaper than a germanium transistor.  In reality, this device proved exceedingly difficult to manufacture reliably, and R&D costs began to spiral out of control with no salable product to show for it.  This caused Beckman to become more involved with company operations, which in turn led several of Shockley’s disgruntled employees to feel they could effect real change.  They nominated Robert Noyce as their spokesman, both because he maintained a cordial relationship with Shockley and because he was possessed of an impressive charisma that made him both a natural team leader and an easy person to talk to.  With Beckman’s blessing, Noyce, Moore, Kleiner, Last, Hoerni, Roberts, Blank, and Grinich confronted Shockley and attempted to force him out of day-to-day operations at the company.  The octet wanted Noyce to serve as their new manager, but Shockley refused, arguing that Noyce did not have what it took to be an aggressive and decisive leader, criticisms that later events would show were completely justified.  Beckman therefore appointed an interim management committee and began an external search for an experienced manager.  Less than a month later, he reversed course and declared Shockley to be in charge, most likely influenced by colleagues at either Bell Labs or Stanford who pointed out that undermining Shockley would unduly tarnish the reputation of the Nobel laureate.  As a compromise, Noyce was placed in charge of R&D, and a manager from another division of Beckman named Maurice Hanafin was installed as a buffer between Shockley and the rest of the staff.

Noyce was satisfied with this turn of events, but his seven compatriots were not, especially when it became clear that Shockley remained in complete control despite the appointment of Hanafin.  Led by Last, Hoerni, and Roberts, the seven scientists decided to leave the company.  Feeling they were more valuable as a group, however, they resolved to continue working together rather than going their separate ways, meaning they would need to convince an established company to hire them together and form a semiconductor research group around them.  To facilitate this process, Kleiner decided to write to Hayden, Stone, and Company, the New York investment firm where his father had an account, which had recently arranged financing for the first publicly held transistor firm, General Transistor.  Kleiner’s letter was addressed to the man in charge of his father’s account and asked for $750,000 in funding to start a new semiconductor group.  As it turned out, the account man was no longer there, so the letter ended up on the desk of a recent hire and Harvard MBA named Arthur Rock.  Rock liked what he saw and met with the seven along with his boss, Arthur “Bud” Coyle.  The two bankers strongly believed in the potential of the scientists and urged them to reach beyond their original plan and ask for a million dollars or more to fund an entire division.  In order to entice a company to form a semiconductor division, however, the seven scientists would need a leader, and none of them felt up to the task.  They realized they would have to recruit Bob Noyce, their former ringleader in the fight against Shockley.  It took some convincing, but Noyce ultimately came on board.  The seven were now eight.

Finding a company to shelter the eight co-conspirators proved harder than Rock and Coyle initially hoped.  The duo drew up a list of thirty companies they believed could handle the investment they were looking for, but were turned down by all of them.  Simply put, no one was interested in handing $1 million to a group of scientists between the ages of 28 and 32 who had never developed a salable product yet felt they could run a division better than a Nobel Prize winner, all to pursue new advances in a volatile field of technology.  Running out of options, Coyle mentioned the plan to an acquaintance possessed of both a large fortune and a reputation for risk-taking: Sherman Fairchild.  Sherman was the son of George Fairchild, a businessman and six-term Congressman who played a crucial role in the formation of the International Time Recording Company — one of the companies that merged to form C-T-R — and was the chairman and largest shareholder of C-T-R/IBM from its inception until his death in 1924.  A prolific inventor, Sherman developed a camera suitable for aerial photography for the United States Army during World War I and then established the Fairchild Aerial Camera Corporation in 1920.  Subsequently, Fairchild established several more companies based around his own inventions in fields ranging from aerial surveying to aircraft design.  In 1927, he consolidated seven of these organizations under the holding company Fairchild Aviation, which he renamed Fairchild Camera and Instrument (FCI) in 1944 after spinning off his aviation business.  By 1957, Fairchild was no longer involved in the day-to-day running of any of his companies, but he was intrigued by the opportunity represented by Noyce and his compatriots and encouraged FCI to take a closer look.

Based in Syosset, New York, Fairchild Camera and Instrument had recently been placed under the care of John Carter, a former vice president of Corning Glass who felt that FCI had become too reliant on defense work for its profits, which had become scarcer and scarcer since the end of the Korean War.  Carter believed acquisitions would be the best way to secure a new course for FCI, so he proved extremely amenable to Noyce and company’s request for funding.  After a period of negotiation, Fairchild Semiconductor Corporation was formally established on September 19, 1957.  Officially, FCI loaned Fairchild Semiconductor $1.3 million in startup funding and in return was granted control of the company through a voting trust.  Ownership of Fairchild Semiconductor remained with the eight founding members and Hayden, Stone, but FCI had the right to purchase all outstanding shares of the company on favorable terms any time before it achieved three successive years of earnings of $300,000 or more.  When the scientists finally broke the news of their imminent departure to Shockley, the Nobel laureate was devastated, and though he never actually dubbed them the “Traitorous Eight,” a label invented by a reporter some years later, the phrase came to be associated with his feelings on the matter.  Shockley continued to pursue his dream of a four-layer diode until Beckman finally sold Shockley Semiconductor, which had never turned a profit, in 1960.  Shockley himself ultimately left the industry to teach at Stanford.

The Process


A transistor built using the “planar process,” which revolutionized the nascent semiconductor industry

In October 1957, Fairchild Semiconductor moved into its new facilities on Charleston Road near the southern border of Palo Alto, not far from the building that housed Shockley Semiconductor.  The Fairchild executive responsible for negotiating the final deal between FCI and the Traitorous Eight, Richard Hodgson, took on the role of chairman of the semiconductor company to look after FCI’s interests and began a search for a general manager.  Hodgson’s first choice was the charismatic Noyce, but the physicist hated confrontation, felt unready to run a whole company, and contented himself with leading R&D instead.  Hodgson therefore brought in an old friend, Tom Bay, a former physics professor who had worked as a sales manager for FCI in the 1950s, to head up sales and marketing, and hired Ed Baldwin, a former paratrooper who managed the diode operation at Hughes Aircraft, as general manager.

Fairchild Semiconductor came into being at just the right time.  On October 4, 1957, the Soviet Union launched Sputnik into orbit, inaugurating a space race with the United States that greatly increased the Federal Government’s demand for transistors for use in rockets and satellites, technologies particularly unsuited to vacuum tubes due to the need for small, durable components.  At the same time, the rise of affordable silicon transistors had government agencies reevaluating the use of vacuum tubes across all their projects, particularly in computers.  This led directly to Fairchild’s first major contract.

In early 1958, Tom Bay learned that the IBM Federal Systems Division was having difficulty sourcing the parts it needed to create a navigational computer for the United States Air Force’s experimental B-70 long-range bomber.  The Air Force required particularly fast and durable silicon transistors for the project and TI, still the only major force in silicon, had been unable to provide a working model up to their specifications.  Through inheritance from his father, Sherman Fairchild was the largest shareholder at IBM and wielded some influence at the company, so Bay and Hodgson convinced him to secure a meeting with the project engineers.  IBM remained skeptical even after Noyce stated Fairchild’s engineers were up to the task, but Sherman Fairchild leaned hard on Tom Watson Jr., basically saying that if he trusted the engineers enough to invest over $1 million in their work, then Watson should trust them too.  With Sherman’s help, Fairchild Semiconductor secured a contract for 100 silicon transistors in February 1958.

Noyce knew that the project would require a type of transistor known as a mesa transistor that had been developed by Bell Labs and briefly worked on at Shockley Semiconductor, but had yet to be mass-produced by any company.  Unlike previous transistors, the mesa transistor could be diffused on only one side of the wafer by taking advantage of new techniques in doping and etching.  Basically, dopants were diffused beneath a layer of silicon, after which a drop of wax was placed over the wafer.  The entire surface would then be doused in a strong acid that etched away the entire top layer except at the point protected by the wax.  This created a distinctive bump that resembled the mesas of the American Southwest, hence its name.  Fairchild decided to develop the first commercial double-diffused silicon mesa transistor, but was unsure whether an n-p-n or p-n-p configuration would perform better.  The founders therefore split into two teams, led by Moore and Hoerni, to develop both, ultimately settling on the n-p-n configuration.  Putting the transistor into production was a complete team effort.  Roberts took charge of growing the silicon crystals, Moore and Hoerni oversaw the diffusion process, Noyce and Last handled the photolithographic process to define the individual transistors on the wafer, Grinich took charge of testing, and Blank and Kleiner designed the manufacturing facility.  By May, the team had completed the design of the transistor, which they delivered to IBM in the early summer.  In August, the team presented their transistor at Wescon, an important trade show established six years before by the West Coast Electronics Manufacturers Association, and learned that their double-diffusion transistor was the only one on the market.  They maintained a monopoly on the device for about a year.

Orders soon began pouring in for double-diffused mesa transistors, most notably from defense contractor Autonetics, which wanted to use them in the Minuteman guided missile program, then the largest and most important defense project under development.  Late in 1958, however, Fairchild realized there was a serious problem with the transistor: it was exceedingly fragile.  So fragile, in fact, that even a tap from a pencil could cause one to stop working.  After testing, the team determined that when the transistor was sealed, a piece of metal would often flake off the outer can and bounce around inside, ultimately causing a short.  Fairchild would need to solve this problem quickly or risk losing its lucrative defense contracts.

During the transistor creation process, an oxide layer naturally builds up on the surface of the silicon wafer.  While this oxide layer does not interfere with the operation of the transistor, it was nevertheless routinely removed to prevent impurities from becoming trapped under its surface.  As early as 1957, Jean Hoerni speculated that the impurity problem was entirely imaginary and that the oxide layer could, in fact, provide a service by protecting the otherwise exposed junctions of the transistor and thus prevent just the kind of short Fairchild was now grappling with.  Hoerni did not pursue the concept at the time because Fairchild was so focused on bringing its first products to market, but in January 1959, he attacked the problem in earnest and within weeks had figured out a way to introduce an oxide mask at proper points during the diffusion process while still leaving spaces for the necessary impurities to be introduced.  On March 12, 1959, Hoerni proudly demonstrated a working transistor protected by an oxide layer, spitting on it to demonstrate it would continue working even when subjected to abuse.  Unlike the mesa transistor, a transistor created using Hoerni’s new technique resembled a bullseye with an outer layer shaped like a teardrop and was flat and smooth.  He therefore named his new technique the “planar process.”

The planar process instantly rendered all previous methods of creating transistors obsolete.  Consequently, Fairchild would not only be able to corner the market in the short term by bringing the first planar transistor to market, but it would also be able to generate income in the long term by licensing the planar process to all the other companies in the transistor business.  Complete dominance of the semiconductor industry appeared to be within Fairchild’s grasp, but then in mid-March 1959, TI announced a new product that would change the entire course of the electronics industry and, indeed, the modern world.

The Texas Edison


Jack Kilby, the inventor of the first integrated circuit

As Fairchild was just starting its transistor business in 1958, Texas Instruments continued to extend its dominance as company revenues reached $90 million and profits soared, but the company was not content to rest on its laurels.  With the space race beginning, the military, to which TI still devoted a large portion of its electronic components business, demanded ever more sophisticated rockets and computers that would require millions of components to function properly.  Clearly, as long as an electronic circuit continued to require discrete transistors, resistors, capacitors, diodes, etc. all connected by wires, it would be impossible to build the next generation of electronic devices.  The solution to this problem was first proposed in 1952 by a British scientist named Geoffrey Dummer, who spoke of a solid block of material without any connecting wires that would integrate all the functionality of the discrete components of a circuit.  Dummer was never able to complete a working block circuit based on his theories, but others were soon following in his footsteps, including a physical chemist at Texas Instruments named Willis Adcock.  Working under an Army contract, Adcock assembled a small task force to build a simpler circuit, which included an electrical engineer named Jack Kilby.

Born in Jefferson City, Missouri, Jack St. Clair Kilby grew up in Great Bend, Kansas, where his father worked as an electrical engineer and ultimately rose to the presidency of the Kansas Power Company.  Kilby became hooked on electrical engineering during summers spent traveling across western Kansas with his father in the 1930s as the elder Kilby visited power plants and substations inspecting and fixing equipment.  A good student, Kilby planned to continue his education at MIT, but his high school did not offer all the required math courses.  Kilby was forced to travel to Cambridge to take a special entrance exam, but did not pass.  He attended the University of Illinois instead, but his education was interrupted by service during World War II.  Kilby finally graduated in 1947 with an unremarkable academic record and took a position at a Milwaukee firm called Centralab, the only company that offered him a job.

Centralab was not a particularly important company in the electronics industry, but it did experiment with an early form of integrated circuit in which company engineers attempted to place resistors, vacuum tubes, and wiring on a single ceramic base, exposing Kilby to the concept for the first time.  In May 1958, Kilby joined Adcock’s team at TI.  Adcock was attempting to create something called a “micromodule,” in which all the components of a circuit would be manufactured in one size with the wiring built into each part so they could simply be snapped together, thus obviating the need for individual wiring connections.  While a circuit built in this manner would still be composed of discrete components, it would theoretically be much smaller, more durable, and easier to manufacture.  Having already tried something similar at Centralab, however, Kilby was convinced this approach would not work.

In the 1950s, Texas Instruments followed a mass vacation policy in which all employees took time off during the same few weeks in the summer.  Too new to have accrued any vacation time, Kilby therefore found himself alone in the lab in July 1958 and decided to tinker with alternate solutions to the micromodule.  Examining the problem through a wide lens, Kilby reasoned that TI was strongest in silicon and should therefore focus on working with that element.  At the time, capacitors were created using metal and ceramics and resistors were made of carbon, but there was nothing stopping a company from creating both of those components in silicon.  While the performance of these parts would suffer significantly compared to their traditional counterparts, by crafting everything out of silicon, it would be possible to place the circuit on a single block of material and eliminate wires entirely.  Kilby jotted down some preliminary plans in a notebook on July 24, 1958, and then received approval from Adcock to explore the concept further when everyone returned from vacation.

On September 12, 1958, Kilby successfully demonstrated a working integrated circuit to a group of executives at TI.  While Kilby’s intent had been to craft the device out of silicon, TI did not have any blocks of the element suitable for Kilby’s project on hand, so he was forced to craft his first circuit out of germanium.  Furthermore, Kilby had not yet figured out how to eliminate wiring completely, so his original hand-crafted design could not be reliably mass-produced.  Therefore, while TI brought the first integrated circuit into the world, it would be Fairchild Semiconductor that actually made the integrated circuit practical.

In January 1959, as Hoerni was perfecting his planar process, Robert Noyce took inspiration from his colleague’s work and began theorizing how P-N junctions and oxide layers could be used to isolate and protect all the components of a circuit on a single piece of silicon, but just as Hoerni initially sat on his planar process while Fairchild focused on delivering finished products, so too did Noyce decide not to pursue his integrated circuit concept any further.  After Kilby debuted his circuit in March, however, Noyce returned to his initial notes.  While the TI announcement may have partially inspired his work, Fairchild’s patent attorney had previously asked every member of the Fairchild team to brainstorm as many applications for the new planar process as possible for the patent filing, which appears to have been Noyce’s primary motivator.  Regardless of the impetus, Noyce polished up his integrated circuit theories and tasked Jay Last with turning them into a working product.

By May 1960, Fairchild had succeeded in creating a practical and producible integrated circuit in which all of the components were etched on a single sliver of silicon with aluminum traces resting atop a protective oxide layer replacing the wiring.  Both the Minuteman missile and the Apollo moon landing projects quickly embraced the new device, and the days of the discrete transistor at the leading edge of electronics were numbered.  While discrete transistors would power several important computer projects in the 1960s — and even the first home video game system in the early 1970s — the integrated circuit ultimately ushered in a new era of small yet powerful electronic devices that could sit on a small desk or, eventually, be held in the palm of one’s hand yet perform calculations that had once required equipment filling an entire room.  In short, without the integrated circuit, the video game industry as it exists today would not be possible.

Historical Interlude: The Birth of the Computer Part 4, Real-Time Computing

By 1955, computers were well on their way to becoming fixtures at government agencies, defense contractors, academic institutions, and large corporations, but their function remained limited to a small number of activities revolving around data processing and scientific calculation.  Generally speaking, the former process involved taking a series of numbers and running them through a single operation, while the latter process involved taking a single number and running it through a series of operations.  In both cases, computing was done through batch processing — i.e. the user would enter a large data set from punched cards or magnetic tape and then leave the computer to process that information based on a pre-defined program housed in memory.  For companies like IBM and Remington Rand, which had both produced electromechanical tabulating equipment for decades, this was a logical extension of their preexisting business, and there was little impetus for them to discover novel applications for computers.

In some circles, however, there was a belief that computers could move beyond data processing and actually be used to control complex systems.  This would require a completely different paradigm in computer design, however, based around a user interacting with the computer in real-time — i.e. being able to give the computer a command and have it provide feedback nearly instantaneously.  The quest for real-time computing not only expanded the capabilities of the computer, but also led to important technological breakthroughs instrumental in lowering the cost of computing and opening computer access to a greater swath of the population.  Therefore, the development of real-time computers served as the crucial final step in transforming the computer into a device capable of delivering credible interactive entertainment.
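To make the distinction concrete, the sketch below contrasts the two modes of operation in a few lines of Python.  It is purely illustrative: the function names, the payroll-style example, and the 0.1-second deadline are assumptions chosen for clarity, not features of any historical machine.

```python
# Purely illustrative sketch of the two paradigms described above; the function
# names, the payroll example, and the 0.1-second deadline are assumptions.
import time

def batch_job(records):
    """Batch processing: run a pre-defined operation over an entire data set,
    then hand back the complete result.  The user submits the job and waits."""
    return [hours * rate for hours, rate in records]

def real_time_session(get_command, respond, deadline=0.1):
    """Real-time operation: accept one command at a time and answer within a
    fixed deadline, so the user can interact with the machine continuously."""
    while (command := get_command()) is not None:
        start = time.monotonic()
        respond(command)   # feedback must arrive almost at once
        assert time.monotonic() - start < deadline, "missed the real-time deadline"
```

The essential difference is where the waiting happens: in batch processing the user waits for the whole run to finish, while in real-time operation the machine must answer each individual command before the user notices any delay.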

Note: This is the fourth and final post in a series of “historical interludes” summarizing the evolution of computer technology between 1830 and 1960.   The information in this post is largely drawn from Computer: A History of the Information Machine by Martin Campbell-Kelly and William Aspray,  A History of Modern Computing by Paul Ceruzzi, Forbes Greatest Technology Stories: Inspiring Tales of Entrepreneurs and Inventors Who Revolutionized Modern Business by Jeffrey Young, IBM’s Early Computers by Charles Bashe, Lyle Johnson, John Palmer, and Emerson Pugh, and The Ultimate Entrepreneur: The Story of Ken Olsen and Digital Equipment Corporation by Glenn Rifkin and George Harrar.

Project Whirlwind


Jay Forrester (l), the leader of Project Whirlwind

The path to the first real-time computer began with a project that was never supposed to incorporate digital computing in the first place.  In 1943, the head of training at the United States Bureau of Aeronautics, a pilot and MIT graduate named Captain Luis de Florez, decided to explore the feasibility of creating a universal flight simulator for military training.  While flight simulators had been in widespread use since Edwin Link had introduced a system based around pneumatic bellows and valves called the Link Trainer in 1929 and subsequently secured an Army contract in 1934, these trainers could only simulate the act of flying generally and were not tailored to specific planes.  Captain de Florez envisioned using an analog computer to simulate the handling characteristics of any extant aircraft and turned to his alma mater to make this vision a reality.

At the time, MIT was already the foremost center in the United States for developing control systems thanks to the establishment of the Servomechanisms Laboratory in 1941, which worked closely with the military to develop electromechanical equipment for fire control, bomb sights, aircraft stabilizers, and similar projects.  The Bureau of Aeronautics therefore established Project Whirlwind within the Servomechanisms Laboratory in 1944 to create de Florez’s flight trainer.  Leadership of the Whirlwind project fell to an assistant director of the Servomechanisms Laboratory named Jay Forrester.  Born in Nebraska, Forrester had been building electrical systems since he was a teenager, when he constructed a 12-volt electrical system out of old car parts to provide his family’s ranch with electricity for the first time.  After graduating from the University of Nebraska, Forrester came to MIT as a graduate student in 1939 and joined the Servomechanisms Laboratory at its inception.  By 1944, Forrester was getting restless and considering establishing his own company, so he was given his choice of projects to oversee to prevent his defection.  Forrester chose Whirlwind.

In early 1945, Forrester drew up the specifications for a trainer consisting of a mock cockpit connected to an analog computer that would control a hydraulic transmission system to provide feedback to the cockpit.  Based on this preliminary work, MIT drafted a proposal in May 1945 for an eighteen-month project budgeted at $875,000, which was approved.  As work on Whirlwind began, the mechanical elements of the design came together quickly, but the computing element remained out of reach.  To create an accurate simulator, Forrester required a computer that updated dozens of variables constantly and reacted to user input instantaneously.  Bush’s Differential Analyzer, perhaps the most powerful analog computer of the time, was still far too slow to handle these tasks, and Forrester’s team could not figure out how to produce a more powerful machine solely through analog components.  In the summer of 1945, however, a fellow MIT graduate student named Perry Crawford, who had written a master’s thesis in 1942 on using a digital device as a control system, alerted Forrester to the breakthroughs being made in digital computing at the Moore School.  In October, Forrester and Crawford attended a Conference on Advanced Computational Techniques hosted by MIT and learned about the ENIAC and EDVAC in detail.  By early 1946, Forrester was convinced that the only way forward for Project Whirlwind was the construction of a digital computer that could operate in real time.

The shift from an analog computer to a digital computer for the Whirlwind project resulted in a threefold increase in cost to an estimated $1.9 million.  It also created an incredible technical challenge.  In a period when the most advanced computers under development were struggling to achieve 10,000 operations a second, Whirlwind would require the capability of performing closer to 100,000 operations per second for seamless real-time operation.  Furthermore, the first stored-program computers were still three years away, so Forrester’s team also faced the prospect of integrating cutting-edge memory technologies that were still under development.  To meet these challenges, the Whirlwind team had grown by 1946 to over a hundred staff members spread across ten groups, each focused on a particular part of the system.  All other aspects of the flight simulator were placed on hold as the entire team focused its attention on creating a working real-time computer.

The Whirlwind I, the first real-time computer

By 1949, Forrester’s team had succeeded in designing an architecture fast enough to support real-time operation, but the computer could not operate reliably for extended periods.  With costs escalating and no end to development in sight, continued funding for the project was placed in jeopardy.  After the war, responsibility for Project Whirlwind had transferred from the Bureau of Aeronautics to the Office of Naval Research (ONR), which felt the project was not providing much value relative to a cost that had by now far surpassed $1.9 million.  By 1948, Whirlwind was consuming twenty percent of ONR’s entire research budget with little to show for it, so ONR began slowly trimming the budget.  By 1950, ONR was ready to cut funding altogether, but just as the project appeared on the verge of death, it was revived to serve another function entirely.

On August 29, 1949, the Soviet Union detonated its first atomic bomb.  In the immediate aftermath of World War II, the United States had felt relatively secure from the threat of Soviet attack due to the distance between the two nations, but now the USSR had both a nuclear capability and a long-range bomber capable of delivering a payload on U.S. soil.  During World War II, the U.S. had developed a primitive radar early warning system to protect against conventional attack, but it was wholly insufficient to track and interdict modern aircraft.  The United States needed a new air defense system and needed it quickly.

In December 1949, the United States Air Force formed a new Air Defense System Engineering Committee (ADSEC) chaired by MIT professor George Valley to address the inadequacies in the country’s air-defense system.  In 1950, ADSEC recommended creating a series of computerized command-and-control centers that could analyze incoming radar signals, evaluate threats, and scramble interceptors as necessary to interdict Soviet aircraft.  Such a massive and complex undertaking would require a powerful real-time computer to coordinate.  Valley contacted several computer manufacturers with his needs, but they all replied that real-time computing was impossible.

Despite being a professor at MIT, Valley knew very little about the Whirlwind project, as he was not interested in analog computing and had no idea it had morphed into a digital computer.  Fortunately, a fellow professor at the university, Jerome Wiesner, pointed him towards the project.  By early 1950, the Whirlwind I computer’s basic architecture had been completed, and it was already running its first test programs, so Forrester was able to demonstrate its real-time capabilities to Valley.  Impressed by what he saw, Valley organized a field test of the Whirlwind as a radar control unit in September 1950 at Hanscom Field outside Bedford, Massachusetts, where a radar station connected to Whirlwind I via a phone line successfully delivered a radar signal from a passing aircraft.  Based on this positive result, the United States Air Force established Project Lincoln in conjunction with MIT in 1951 and moved Whirlwind to the new Lincoln Laboratory.

Project SAGE

A portion of an IBM AN/FSQ-7 Combat Direction Central, the heart of the SAGE system and the largest computer ever built

By April 1951, the Whirlwind I computer was operational, but still rarely worked properly due to faulty memory technology.  At Whirlwind’s inception, there were two primary forms of electronic memory in use: delay-line storage, pioneered for the EDVAC, and CRT memory like the Williams Tube developed for the Manchester Mark I.  From his exposure to the EDVAC, Forrester was already familiar with delay-line memory early in Whirlwind’s development, but that medium functioned too slowly for a real-time design.  Forrester therefore turned his attention to CRT memory, which could theoretically operate at a sufficient speed, but he rejected the Williams Tube due to its low refresh rate.  Instead, Forrester incorporated an experimental tube memory under development at MIT, but this temperamental technology never achieved its promised capabilities and proved unreliable besides.  Clearly, a new storage method would be required for Whirlwind.

In 1949, Forrester saw an advertisement for a new ceramic material called Deltamax from the Arnold Engineering Company that could be magnetized or demagnetized by passing a large enough electric current through it.  Forrester believed the properties of this material could be used to create a fast and reliable form of computer memory, but he soon discovered that Deltamax could not switch states quickly at high temperatures, so he assigned a graduate student named William Papian to find an alternative.  In August 1950, Papian completed a master’s thesis entitled “A Coincident-Current Magnetic Memory Unit” laying out a system in which individual cores — small doughnut-shaped objects — with magnetic properties similar to Deltamax are threaded into a three-dimensional matrix of wires.  Two wires pass through the center of each core, and the design takes advantage of a property called hysteresis, in which an electrical current changes the magnetization of the material only if it exceeds a certain threshold.  Because each wire carries only about half the current needed to switch a core, only the core at the intersection of two energized wires, with their currents flowing in the same direction, receives enough current to change its magnetization, which makes it possible to address any single core in the matrix.  A third sense wire is threaded through all of the cores in the matrix and detects when an addressed core flips, allowing the value stored at any location to be read back at any time.
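
The selection scheme is easier to see in a toy model.  The sketch below is a minimal Python simulation of a single core plane (all class and method names are invented for illustration, and the electrical behavior is reduced to a simple flip test): driving one row wire and one column wire at half the switching current changes only the core at their intersection, and a read is destructive and must be followed by a rewrite.

```python
# Toy model of a coincident-current core memory plane (illustrative only).
# A core flips its magnetization only if it receives the full switching
# current, i.e. half-currents on BOTH its X (row) and Y (column) wires.

class CorePlane:
    def __init__(self, size):
        self.size = size
        # One bit per core; real cores store a magnetization direction.
        self.cores = [[0] * size for _ in range(size)]

    def _pulse(self, x, y, value):
        """Drive row x and column y at half-current each, toward `value`.

        Only core (x, y) sees the sum of both half-currents, which exceeds
        the hysteresis threshold, so only that core can change state.  The
        sense wire threaded through every core picks up a pulse when a core
        actually flips; we return that pulse as True or False.
        """
        flipped = self.cores[x][y] != value
        self.cores[x][y] = value
        return flipped

    def write(self, x, y, bit):
        self._pulse(x, y, bit)

    def read(self, x, y):
        # Reads are destructive: drive the addressed core toward 0 and see
        # whether the sense wire reports a flip (it flips only if it held 1).
        bit = 1 if self._pulse(x, y, 0) else 0
        # Restore the bit, as real core memory controllers did after a read.
        self._pulse(x, y, bit)
        return bit


plane = CorePlane(16)          # a 16 x 16 plane, like Papian's 1951 array
plane.write(3, 7, 1)
assert plane.read(3, 7) == 1   # destructive read followed by a rewrite
assert plane.read(0, 0) == 0
```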

Papian built the first small core memory matrix in October 1950, and by the end of 1951 he was able to construct a 16 x 16 array of cores.  During this period, Papian tested a wide variety of materials for his cores and settled on a silicon-steel ribbon wrapped around a ceramic bobbin, but these cores still operated too slowly and also required an unacceptably high level of current.  At this point Forrester discovered that a German ceramicist in New Jersey named E. Albers-Schoenberg was attempting to create a transformer for televisions by mixing iron ore with certain oxides to create a compound called a ferrite that exhibited useful magnetic properties.  While ferrites generated a weaker output than the metallic cores Papian was experimenting with, they could switch up to ten times faster.  After experimenting with various chemical compositions, Papian finally constructed a ferrite-based core memory system in May 1952 that could switch between states in less than a microsecond and therefore serve the needs of a real-time computer.  First installed in the Whirlwind I in August 1953, ferrite core memory was smaller, cheaper, faster, and more reliable than delay-line, CRT, and magnetic drum memory and ultimately doubled the operating speed of the computer while reducing maintenance time from four hours a day to two hours a week.  Within five years, core memory had replaced all other forms of memory in mainframe computers, netting MIT a hefty profit in patent royalties.

With Whirlwind I finally fully functional, the Lincoln Laboratory turned its attention to transforming the computer into a commercial command-and-control system suitable for installation in the United States Air Force’s air defense system.  This undertaking was beyond the scope of the lab itself, as it would require fabrication of multiple components on a large scale.  Lincoln Labs evaluated three companies to take on this task: defense contractor Raytheon, which had recently established a computer division; Remington Rand, through both its EMCC and ERA subsidiaries; and IBM.  At the time, Remington Rand was still the powerhouse in the new commercial computer business, while IBM was only just preparing to bring its first products to market.  Nonetheless, Forrester and his team were impressed with IBM’s manufacturing facilities, service force, integration, and experience deploying electronic products in the field and therefore chose the new kid on the block over its more established competitor.  Originally designated Project High by IBM — due to its location on the third floor of a necktie factory on High Street in Poughkeepsie — and the Whirlwind II by Lincoln Laboratory, the project eventually went by the name Semi-Automatic Ground Environment, or SAGE.

The heart of the SAGE system was a new IBM computer derived from the Whirlwind design called the AN/FSQ-7 Combat Direction Central.  By far the largest computer system ever built, the AN/FSQ-7 weighed 250 tons, consumed three megawatts of electricity, and took up roughly half an acre of floor space.  Containing 49,000 vacuum tubes and a core memory capable of storing over 65,000 33-bit words, the computer was capable of performing roughly 75,000 operations per second.  In order to ensure uninterrupted operation, each SAGE installation actually consisted of two AN/FSQ-7 computers so that if one failed, the other could seamlessly assume control of the air defense center.  As the first deployed real-time computer system, it inaugurated a number of firsts in commercial computing such as the ability to generate text and vector graphics on a display screen, the ability to directly enter commands via a typewriter-style keyboard, and the ability to select or draw items directly on the display using a light pen, a technology developed specifically for Whirlwind in 1955.  In order to remain in constant contact with other segments of the air defense system, the computer was also the first outfitted with a new technology called a modem, developed by AT&T’s Bell Labs research division to allow data to be transmitted over a phone line.

The first SAGE system was deployed at McChord Air Force Base in November 1958, and the entire network of twenty-three Air Defense Direction Centers was online by 1963 at a total cost to the government of $8 billion.  While IBM agreed to do the entire project at cost as part of its traditional support for national defense, the project still brought the company $500 million in revenues in the late 1950s.  SAGE was perhaps the key project in IBM’s rise to dominance in the computer industry.  Through this massive undertaking, IBM became the most knowledgeable company in the world at designing, fabricating, and deploying both large-scale mainframe systems and their critical components such as core memory and computer software.  In 1954, IBM upgraded its 701 computer to replace Williams Tube memory with magnetic cores and released the system as the IBM 704.  The next year, a core-memory replacement for the 702 followed, designated the IBM 705.  These new computers were instrumental in vaulting IBM past Remington Rand in the late 1950s.  SAGE, meanwhile, remained operational until 1983.

The Transistor and the TX-0

Kenneth Olsen, co-designer of the TX-0 and co-founder of the Digital Equipment Corporation (DEC)

While building a real-time computer for the SAGE air-defense system was the primary purpose of Project Whirlwind, the scope of the project grew large enough by the middle of the 1950s that staff could occasionally indulge in other activities, such as a new computer design proposed by staff member Kenneth Olsen.  Born in Bridgeport, Connecticut, Olsen began experimenting with radios as a teenager and took an eleven-month electronics course after entering the Navy during World War II.  The war was over by the time his training was complete, so after a single deployment on an admiral’s staff in the Far East, Olsen left the Navy to attend MIT in 1947, where he majored in electrical engineering.  After graduating in 1950, Olsen decided to continue his studies at MIT as a graduate student and joined Project Whirlwind.  One of Olsen’s duties on the project was the design and construction of the Memory Test Computer (MTC), a smaller version of the Whirlwind I built to test various core memory solutions.  In creating the MTC, Olsen innovated with a modular design in which each group of circuits responsible for a particular function was placed on a single plug-in unit mounted in a rack, so that a malfunctioning unit could be easily swapped out.  This was a precursor of the plug-in circuit boards still used in computers today.

One of the engineers who helped Olsen debug the MTC was Wes Clark, a physicist who came to Lincoln Laboratory in 1952 after working at the Hanford nuclear production site in Washington State.  Clark and Olsen soon bonded over their shared views on the future of computing.  Both wanted to create a computer that would apply the lessons learned during the Whirlwind project and the construction of the MTC to the latest advances in electronics, demonstrating the potential of a fast and power-efficient machine to the defense industry.  Specifically, Olsen and Clark wanted to explore the potential of a relatively new electronic component called the transistor.

John Bardeen (l), William Shockley (seated), and Walter Brattain, the team that invented the transistor

For over forty years, the backbone of all electronic equipment was the vacuum tube pioneered by John Fleming in 1904.  While this device allowed for switching at electronic speeds, its limitations were numerous.  Vacuum tubes consumed power at a prodigious rate, generated a great deal of heat during operation, and were prone to burnout over extended periods of use.  Furthermore, they could not be miniaturized beyond a certain point and had to be spaced relatively far apart for heat management, guaranteeing that tube-based electronics would always be large and bulky.  Unless an alternative switching device could be found, the computer would never be able to shrink below a certain size.  The solution to the vacuum tube problem came not from one of the dozen or so computer projects being funded by the U.S. government, but from the telephone industry.

In the 1920s and 1930s, AT&T, which held a monopoly on telephone service in the United States, began constructing a series of large switching facilities in nearly every town in the country to allow telephone calls to be placed between any two phones in the United States.  These facilities relied on the same electromechanical relays that powered several of the early computers, which were bulky, slow, and wore out over time.  Vacuum tubes were sometimes used as well, but the problems articulated above made them particularly unsuited for the telephone network.  As AT&T continued to expand its network, the size and speed limitations of relays became increasingly unacceptable, so the company gave a mandate to its Bell Labs research arm, one of the finest corporate R&D organizations in the world, to discover a smaller, faster, and more reliable switching device.

In 1936, the new director of research at Bell Labs, Mervin Kelly, decided to form a group to explore the possibility of creating a solid-state switching device.  Both solid-state physics, which explores the properties of solids based on the arrangement of their sub-atomic particles, and the related field of quantum mechanics, in which physical phenomena are studied on a nanoscopic scale, were in their infancy and not widely understood, so Kelly scoured the universities for the smartest chemists, metallurgists, physicists, and mathematicians he could find.  His first hire was a brilliant but difficult physicist named William Shockley.  Born in London to a mining engineer and a geologist, William Bradford Shockley, Jr. grew up in Palo Alto, California, in the heart of the Santa Clara Valley, a region known as the “Valley of the Heart’s Delight” for its orchards and flowering plants.  Shockley’s father spent most of his time moving from mining camp to mining camp, so the younger Shockley grew especially close to his mother, May, who taught him the ins and outs of geology from a young age.  After attending Stanford to stay close to his mother, Shockley received a Ph.D. from MIT in 1936 and went to work for Bell.  Gruff and self-centered, Shockley never got along with his colleagues anywhere he worked, but there was no questioning his brilliance or his ability to push those around him towards making new discoveries.

Kelly’s group began educating itself on the field of quantum mechanics through informal sessions where they would each take a chapter of the only quantum mechanics textbook in existence and teach the material to the rest of the group.  As their knowledge of the underlying science grew in the late 1930s, the group decided the most promising path to a solid-state switching device lay with a group of materials called semiconductors.  Generally speaking, most materials are either conductors of electricity, allowing electrons to flow through them, or insulators, halting the flow of electrons.  As early as 1826, however, Michael Faraday, the brilliant scientist whose work paved the way for electric power generation and transmission, had observed that a small number of compounds would act as conductors under some conditions and as insulators under others.  Under the right conditions these properties even allowed a semiconductor to behave like a triode and amplify a signal, but for decades scientists remained unable to determine why changes in heat, light, or magnetic field would alter the conductivity of these materials and therefore could not harness this property.  It was not until the field of quantum mechanics became more developed in the 1930s that scientists gained a great enough understanding of electron behavior to attack the problem.  Kelly’s new solid-state group hoped to unlock the mystery of semiconductors once and for all, but their work was interrupted by World War II.

In 1945, Kelly revived the solid-state project under the joint supervision of William Shockley and chemist Stanley Morgan.  The key members of this new team were John Bardeen, a physicist from Wisconsin known as one of the best quantum mechanics theorists in the world, and Walter Brattain, a farm boy from Washington known for his prowess at crafting experiments.  During World War II, great progress had been made in creating crystals of the semiconducting element germanium for use in radar, so the group focused its activities on that element.  In late 1947, Bardeen and Brattain discovered that if they introduced impurities into just the right spot on a lump of germanium, the germanium could amplify a current in the same manner as a vacuum tube triode.  Shockley’s team gave an official demonstration of this phenomenon to other Bell Labs staff on December 23, 1947, which is often recognized as the official birthday of the transistor, so named because it effects the transfer of a current across a resistor — i.e. the semiconducting material.  Smaller, less power-hungry, and more durable than the vacuum tube, the transistor paved the way for the development of the entire consumer electronics and personal computer industries of the late twentieth century.

The TX-0, one of the earliest transistorized computers, designed by Wes Clark and Kenneth Olsen

Despite its revolutionary potential, the transistor was not incorporated into computer designs right away, as there were still several design and production issues that had to be overcome before it could be deployed in the field in large numbers (which will be covered in a later post).  By 1954, however, Bell Labs had deployed the first fully transistorized computer, the Transistor Digital Computer or TRADIC, while electronics giant Philco had introduced a new type of transistor called a surface-barrier transistor that was expensive, but much faster than previous designs and therefore the first practical transistor for use in a computer.  It was in this environment that Clark and Olsen proposed a massive transistorized computer called the TX-1 that would be roughly the same size as a SAGE system and deploy one of the largest core memory arrays ever built, but they were turned down because Forrester did not find their design practical.  Clark therefore went back to the drawing board to create as simple a design as he could that still demonstrated the merits of transistorized computing.  As this felt like a precursor to the larger TX-1, Olsen and Clark named this machine the TX-0.

Completed in 1955 and fully operational the next year, the TX-0 — often pronounced “Tixo” — incorporated 3,600 surface-barrier transistors and was capable of performing 83,000 operations per second.  Like the Whirlwind, the TX-0 operated in real time, and it incorporated a 512×512 display that could be manipulated with a light pen as well as a core memory that could store over 65,000 words, though Clark and Olsen settled on a relatively short 18-bit word length.  Unlike the Whirlwind I, which occupied 2,500 square feet, the TX-0 took up a paltry 200 square feet.  Both Clark and Olsen realized that the small, fast, interactive TX-0 represented something new: a (relatively) inexpensive computer that a single user could interact with in real time.  In short, it exhibited many of the hallmarks of what would become the personal computer.

With the TX-0 demonstrating the merits of high-speed transistors, Clark and Olsen returned to their goal of creating a more complex computer with a larger memory, which they dubbed the TX-2.  Completed in 1958, the TX-2 could perform a whopping 160,000 operations per second and contained a core memory of 260,000 36-bit words, far surpassing the capability of the earlier TX-0.  Olsen once again designed much of the circuitry for this follow-up computer, but before it was completed he decided to leave MIT behind.

The Digital Equipment Corporation

The PDP-1, Digital Equipment Corporation’s First Computer

Despite what Olsen saw as the nearly limitless potential of transistorized computers, the world outside MIT remained skeptical.  It was one thing to create an abstract concept in a college laboratory, people said, but another thing entirely to actually deploy an interactive transistorized system under real world conditions.  Olsen fervently desired to prove these naysayers wrong, so along with Harlan Anderson, a fellow student who had worked with him on the MTC, he decided to form his own computer company.  As a pair of academics with no practical real-world business experience, however, Olsen and Anderson faced difficulty securing financial backing.  They approached defense contractor General Dynamics first, but were flatly turned down.  Unsure how to proceed next, they visited the Small Business Administration office in Boston, which recommended they contact investor Georges Doriot.

Georges Doriot was a Frenchman who immigrated to the United States in the 1920s to earn an MBA from Harvard and then decided to stay on as a professor at the school.  In 1940, Doriot became an American citizen, and the next year he joined the United States Army as a lieutenant colonel and took on the role of director of the Military Planning Division for the Quartermaster General.  Promoted to brigadier general before the end of the war, Doriot returned to Harvard in 1946 and also established a private equity firm called the American Research and Development Corporation (ARD).  With a bankroll of $5 million raised largely from insurance companies and educational institutions, Doriot sought out startups in need of financial support in exchange for taking a large ownership stake in the company.  The goal was to work closely with the company founders to grow the business and then sell the stake at some point in the future for a high return on investment.  While many of the individual companies would fail, in theory the payoff from those that succeeded would more than make up the difference and return a profit to the individuals and institutions that provided the firm’s investment capital.  Before Doriot, the only outlets for a new business to raise capital were the banks, which generally required tangible assets to back a loan, or a wealthy patron like the Rockefeller or Whitney families.  After Doriot’s model proved successful, inexperienced entrepreneurs with big ideas had a new outlet to bring their products to the world.  This outlet soon gained the name venture capital.

In 1957, Olsen and Anderson wrote a letter to Doriot detailing their plans for a new computer company.  After some back and forth and refinement of the business plan, ARD agreed to provide $70,000 to fund Olsen and Anderson’s venture in return for a 70% ownership stake, but the money came with certain conditions.  Olsen wanted to build a computer like the TX-0 for use by scientists and engineers that could benefit from a more interactive programming environment in their work, but ARD did not feel it was a good idea to go toe-to-toe with an established competitor like IBM.  Instead, ARD convinced Olsen and Anderson to produce components like power supplies and test equipment for core memory.  Olsen and Anderson had originally planned to call their new company the Digital Computer Corporation, but with their new ARD-mandated direction, they instead settled on the name Digital Equipment Corporation (DEC).

In August 1957, DEC moved into its new office space on the second floor of Building 12 of a massive woolen mill complex in Maynard, Massachusetts, originally built in 1845 and expanded many times thereafter.  At the time, the company consisted of just three people: Ken Olsen, Harlan Anderson, and Ken’s younger brother Stan, who had worked as a technician at Lincoln Lab.  Ken served as the leader and technical mastermind of the group, Anderson looked after administrative matters, and Stan focused on manufacturing.  In early 1958, the company released its first products.

DEC arrived on the scene at the perfect moment.  Core memory was in high demand and transistor prices were finally dropping, so all the major computer companies were exploring new designs, creating an insatiable demand for testing equipment.  As a result, DEC proved profitable from the outset.  In fact, Olsen and Anderson actually overpriced their products due to their business inexperience, but with equipment in such high demand, firms bought from DEC anyway, giving the company extremely high margins and allowing it to exceed its revenue goals.  Bolstered by this success, Olsen chose to revisit the computer project with ARD, so in 1959 DEC began work on a fully transistorized interactive computer.

Designed by Ben Gurley, who had developed the display for the TX-0 at MIT, the Programmed Data Processor-1, more commonly referred to as the PDP-1, was unveiled in December 1959 at the Eastern Joint Computer Conference in Boston.  It was essentially a commercialized version of the TX-0, though it was not a direct copy.  The PDP-1 incorporated a better display than its predecessor with a resolution of 1024×1024, and it was also faster, capable of 100,000 operations per second.  The base setup contained only 4,096 18-bit words of core memory, but this could be upgraded to 65,536.  The primary method of inputting programs was a punched tape reader, and it was hooked up to a typewriter as well.  While not nearly as powerful as the latest computers from IBM and its competitors in the mainframe space, the PDP-1 cost only $120,000, a stunningly low price in an era when buying a computer would typically set an organization back a million dollars or more.  Lacking developed sales, manufacturing, or service organizations, DEC sold only a handful of PDP-1 computers over its first two years on the market to organizations like Bolt, Beranek, and Newman and the Lawrence Livermore Labs.  A breakthrough occurred in late 1962 when the International Telegraph and Telephone Company (ITT) decided to order fifteen PDP-1 computers to form the heart of a new telegraph message switching system designated the ADX-7300.  ITT would continue to be DEC’s most important PDP-1 customer throughout the life of the system, ultimately purchasing roughly half of the fifty-three computers sold.

While DEC only sold around fifty PDP-1s over the system’s lifetime, the revolutionary machine introduced interactive computing commercially and initiated the process of opening computer use to ever greater portions of the public, which culminated in the birth of the personal computer two decades later.  With its monitor and real-time operation, it also provided a perfect platform for creating engaging interactive games.  Even with these advances, the serious academics and corporate data handlers of the 1950s were unlikely to ever embrace the computer as an entertainment medium, but unlike the expensive and bulky mainframes reserved for official business, the PDP-1 and its successors soon found their way into the hands of students at college campuses around the country, beginning with the birthplace of the PDP-1 technology: MIT.

Historical Interlude: The Birth of the Computer Part 3, the Commercialization of the Computer

In the 1940s, the electronic digital computer was a new, largely unproven machine developed in response to specific needs like the code-breaking requirements of Bletchley Park or the ballistics calculations of the Aberdeen Proving Grounds.  Once these early computers proved their worth, projects like the Manchester Mark 1, EDVAC, and EDSAC implemented a stored program concept that allowed digital computers to become useful for a wide variety of scientific and business tasks.  In the early 1950s, several for-profit corporations built on this work to introduce mass-produced computers and offered them to businesses, universities, and government organizations around the world.  As previously discussed, Ferranti in the United Kingdom introduced the first such computer by taking the Manchester Mark 1 design, increasing the speed and storage capacity of the machine, and releasing it as the Ferranti Mark 1 in February 1952.  This would be one of the few times that the United Kingdom led the way in computing over the next several decades, however, as demand remained muted among the country’s conservative businesses, allowing companies in the larger U.S. market to grow rapidly and achieve world dominance in computing.

Note: This is the third of four posts in a series of “historical interludes” summarizing the evolution of computer technology between 1830 and 1960.   The information in this post is largely drawn from Computer: A History of the Information Machine by Martin Campbell-Kelly and William Aspray, The Maverick and His Machine: Thomas Watson, Sr. and the Making of IBM by Kevin Maney, A History of Modern Computing by Paul Ceruzzi, Computers and Commerce: A Study of Technology and Management at Eckert-Mauchly Computer Company, Engineering Research Associates, and Remington Rand, 1946-1957 by Arthur Norberg, and IBM’s Early Computers by Charles Bashe, Lyle Johnson, John Palmer, and Emerson Pugh.

UNIVAC

The UNIVAC I, the first commercially available computer in the United States

For a brief period from 1943 to 1946, the Moore School in Philadelphia was the center of the computer world as John Mauchly and J. Presper Eckert developed ENIAC and initiated the EDVAC project.  Unlike the more accommodating MIT and Stanford, however, which nurtured the Route 128 tech corridor and Silicon Valley respectively by encouraging professors and students to apply technologies developed in academia to the private sector, the Moore School believed commercial interests had no place in an academic institution and decided to quash them entirely.  In early 1946 the entire staff of the school was ordered to sign release forms giving up the rights to all patent royalties from inventions pioneered at the school.  This was intolerable to both Eckert and Mauchly, who formally resigned on March 31, 1946 to pursue commercial opportunities.

While still at the Moore School, Mauchly met with several organizations that might be interested in the new EDVAC computer.  One of these was the Census Bureau, which once again needed to migrate to new technologies as tabulating machines were no longer sufficient to count the U.S. population in a timely manner.  After leaving the school, Eckert and Mauchly attended a series of meetings with the Census Bureau and the National Bureau of Standards (NBS) between March and May devoted to the possibility of replacing tabulating machines with computers.  After further study, the NBS entered into an agreement with Eckert and Mauchly on September 25, 1946, for them to develop a computer for the Census Bureau in return for $300,000, which Eckert and Mauchly naively believed would cover a large portion of their R&D cost.

Census contract aside, Eckert and Mauchly experienced great difficulty attempting to fund the world’s first for-profit electronic computer company.  Efforts to raise capital commenced in the summer of 1946, but Philadelphia-area investors were focused on the older industries of steel and electric power that had driven the region for decades.  In New York, there was funding available for established electronics concerns, but the concept of venture capital did not yet exist and no investment houses were willing to take a chance on a startup.  The duo were finally forced to turn to friends and family, who provided enough capital in combination with the Census contract for Eckert and Mauchly to establish a partnership called the Electric Control Company in October 1946, which later incorporated as the Eckert-Mauchly Computer Corporation (EMCC) in December 1948.

As work began on the EDVAC II computer at the new Philadelphia offices of the Electric Control Company, the founders continued to seek new contracts to alleviate chronic undercapitalization.  In early 1947 Prudential, a forward-thinking company that had a reputation as an early adopter of new technology, agreed to pay the duo $20,000 to serve as consultants, but refused to commit to ordering a computer until it was completed.  Market research firm A.C. Nielsen placed an order in spring 1948 and Prudential changed its mind and followed suit late in the year, but both deals were for $150,000 as Eckert and Mauchly continued to underestimate the cost of building their computers.  To keep the company solvent, the duo completed a $100,000 deal with Northrop Aircraft in October 1947 for a smaller scientific computer called the Binary Automatic Computer (BINAC) for use in developing a new unmanned bomber.  Meanwhile, with contracts coming in, Eckert and Mauchly realized that they needed a new name for their computer to avoid confusion with the EDVAC project at the Moore School and settled on UNIVAC, which stood for Universal Automatic Computer.

EMCC appeared to finally turn a corner in August 1948 when it received a $500,000 investment from the American Totalisator Company.  The automatic totalisator was a specialized counting machine originally invented by New Zealander George Julius in the early twentieth century to tally election votes and divide them properly among the candidates.  When the government rejected the device, he adapted it for use at the race track, where it could run a pari-mutuel betting system by totaling all bets and assigning odds to each horse.  American Totalisator came to dominate this market after one of its founders, Henry Strauss, invented and patented an electro-mechanical totalisator first used in 1933.  Strauss realized that electronic computing was the logical next step in the totalisator field, so he convinced the company board to invest $500,000 in EMCC in return for a 40% stake in the company.  With the funding from American Totalisator, EMCC completed BINAC and delivered it to Northrop in September 1949.  Although it never worked properly, BINAC was the first commercially sold computer in the world.  Work continued on UNIVAC as well, but disaster struck on October 25, 1949, when Henry Strauss died in a plane crash.  With EMCC’s chief backer at American Totalisator gone, the company withdrew its support and demanded that its loans be repaid.  Eckert and Mauchly therefore began looking for a buyer for their company.

On February 15, 1950, office equipment giant Remington Rand purchased EMCC for $100,000 while also paying off the $438,000 owed to American Totalisator.  James Rand, Jr., the president of the company, had become enamored with the scientific advances achieved during World War II and was in the midst of a post-war expansion plan centered on high technology and electronic products.  In 1946, Rand constructed a new high-tech R&D lab in Norwalk, Connecticut, to explore products as varied as microfilm readers, xerographic copiers, and industrial television systems.  In late 1947, he hired Leslie Groves, the general who oversaw the Manhattan Project, to run the operation.  EMCC therefore fit perfectly into Rand’s plans.  Though Eckert and Mauchly were required to give up their ownership stakes and take salaries as regular employees of Remington Rand, Groves allowed them to remain in Philadelphia and generally let them run their own affairs without interference.

With Remington Rand sorting out EMCC’s financial problems, the company was finally able to complete its computer.  First accepted by the U.S. Census Bureau on March 31, 1951, the UNIVAC I contained 5,200 vacuum tubes and could perform 1,905 operations a second at a clock speed of 2.25 MHz.  Like the EDVAC and EDSAC, the UNIVAC I used delay-line memory as its primary method of storing information, but it also pioneered the use of magnetic tape storage as a secondary memory, which was capable of storing up to a million characters.  The Census Bureau resisted attempts by Remington Rand to renegotiate the purchase price of the computer and spent only the $300,000 previously agreed upon, while both A.C. Nielsen and Prudential ultimately cancelled their orders when Remington Rand threatened to tie up delivery through a lawsuit to avoid selling the computers for $150,000; future customers were forced to pay a million dollars or more for a complete UNIVAC I.

By 1954, nineteen UNIVAC computers had been purchased and installed at such diverse organizations as the Pentagon, U.S. Steel, and General Electric.  Most of these organizations took advantage of the computer’s large tape storage capacity to employ the computer for data processing rather than calculations, where it competed with the tabulating machines that had brought IBM to prominence.

The UNIVAC 1101, Remington Rand’s first scientific computer

To serve the scientific community, Remington Rand turned to another early computer startup, Engineering Research Associates (ERA).  ERA grew out of the code-breaking activities of the United States Navy during World War II, which were carried out primarily through an organization called the Communications Supplementary Activity – Washington (CSAW).  Like Bletchley Park in the United Kingdom, CSAW constructed a number of sophisticated electronic devices to aid in codebreaking, and the Navy wanted to maintain this technological capability after the war.  Military budget cuts made this impractical, however, so to avoid losing the assembly of talent at CSAW, the Navy helped establish ERA in St. Paul, Minnesota, in January 1946 as a private corporation.  The company was led by John Parker, a former Navy lieutenant who had become intimately involved in the airline industry in the late 1930s and 1940s while working for the D.C. investment firm Auchincloss, Parker, and Redpath, and it drew most of its important technical personnel from CSAW.

Unlike EMCC, which focused on building a machine for corporate data processing, ERA devoted its activities to intelligence analysis work for the United States Navy.  Like Eckert and Mauchly, the founders of ERA realized the greatest impediment to building a useful electronic computing device was the lack of suitable storage technology, so in its first two years of existence, the company concentrated on solving this problem, ultimately settling on magnetic drum memory, a technology invented by Austrian Gustav Tauschek in 1932 in which a large metal cylinder is coated with a ferromagnetic material.  As the drum is rotated, stationary write heads can generate an electrical pulse to change the magnetic orientation on any part of the surface of the drum, while a read head can detect the orientation and recognize it in binary as either a “1” or a “0,” therefore making it suitable for computer memory.  A series of specialized cryptanalytic machines followed with names like Goldberg and Demon, but these machines tended to become obsolete quickly since they were targeted at specific codes and were not programmable to take on new tasks.  Meanwhile, as both ERA and the Navy learned more about developments at the Moore School, they decided a general purpose computer would be a better method of addressing the Navy’s needs than specialized equipment and therefore initiated Task 13 in 1947 to build a stored program computer called Atlas.  Completed in December 1950, the Atlas contained 2,700 vacuum tubes and a drum memory that could hold just over 16,000 24-bit words.  The computer was delivered to the National Security Agency (NSA) for code-breaking operations, and the agency was so pleased with the computer that it accepted a second unit in 1953.  In December 1951, a modified version was made available as the ERA 1101 — a play on the original project name, as “1101” is “13” in binary — but ERA did not furnish any manuals, so no businesses purchased the machine.
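
The tradeoff behind drum storage, which was cheap and reliable but gated by the drum’s rotation, can be sketched in a few lines of code.  The toy Python model below is purely illustrative (the track count, word count, and rotation speed are invented rather than ERA’s specifications): a stationary head serves each track, and any read or write must wait for the addressed word to rotate underneath it, so average access time is on the order of half a revolution no matter how fast the surrounding electronics are.

```python
# Toy model of magnetic drum storage (illustrative only; parameters invented).
# A stationary head per track reads or writes words as the drum rotates, so
# access time is dominated by rotational delay rather than electronics speed.

class Drum:
    def __init__(self, tracks=32, words_per_track=64, rpm=3500):
        self.words_per_track = words_per_track
        self.storage = [[0] * words_per_track for _ in range(tracks)]
        self.seconds_per_rev = 60.0 / rpm
        self.position = 0  # index of the word currently under the heads

    def _rotate_to(self, word):
        # Wait for the drum to bring the addressed word under the head.
        distance = (word - self.position) % self.words_per_track
        self.position = word
        return distance / self.words_per_track * self.seconds_per_rev

    def write(self, track, word, value):
        latency = self._rotate_to(word)
        self.storage[track][word] = value
        return latency

    def read(self, track, word):
        latency = self._rotate_to(word)
        return self.storage[track][word], latency


drum = Drum()
drum.write(0, 40, 0b101101)
value, wait = drum.read(0, 40)   # same word again: no rotation needed
value2, wait2 = drum.read(0, 8)  # must wait roughly half a revolution
```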

The same month ERA announced the 1101, it was purchased by Remington Rand.  ERA president John Parker realized that fully entering the commercial world would require a significant influx of capital that the company would be unlikely to raise.  Furthermore, the close relationship between ERA and the Navy had piqued the interest of government auditors and threatened the company’s ability to secure future government contracts.  Therefore, Parker saw the Remington Rand purchase as essential to ERA’s continued survival.  Remington Rand, meanwhile, gained a foothold in a new segment of the computer market.  The company began marketing an improved version of ERA’s first computer as the UNIVAC 1103 in October 1953 and ultimately installed roughly twenty of them, mostly within the military-industrial complex.

In 1952, the American public was introduced to the UNIVAC in dramatic fashion when Mauchly developed a program to predict the results of the general election between Dwight Eisenhower and Adlai Stevenson based on the returns from the previous two elections.  The results were to be aired publicly on CBS, but UNIVAC predicted a massive landslide for Eisenhower in opposition to Gallup polls that indicated a close race.  CBS refused to deliver the results, opting instead to state that the computer predicted a close victory for Eisenhower.  When it became clear that Eisenhower would actually win in a landslide, the network owned up to its deception and aired the true results, which were within just a few electoral votes of the actual total.  Before long, the term “UNIVAC” became a generic word for all computers in the same way “Kleenex” has become synonymous with facial tissues and “Xerox” with photocopying.  For a time, it appeared that Remington Rand would be the clear winner in the new field of electronic computers, but only until IBM finally hit its stride.

IBM Enters the Computer Industry

Tom Watson, Sr. sits at the console of an IBM 701, the company’s first commercial computer

There is a story, oft-repeated, about Tom Watson, Sr. that claims he saw no value in computers.  According to this story, the aging president of IBM scoffed that there would never be a market for more than five computers and neglected to bring IBM into the new field.  Only after the debut of the UNIVAC I did IBM realize its mistake and hastily enter the computer market.  While there are elements of truth to this version of events, there is no truth to the claim that IBM was completely ignoring the computer market in the late 1940s.  Indeed, the company developed several electronic calculators and had no fewer than three computer projects underway when the UNIVAC I hit the market.

As previously discussed, IBM’s involvement with computers began when the company joined with Howard Aiken to develop the Automatic Sequence Controlled Calculator (ASCC).  That machine was first unveiled publicly on August 6, 1944, and Tom Watson traveled to Cambridge, Massachusetts, to speak at the dedication.  At the Boston train station, Watson was irked that no one from Harvard was there to welcome him.  Irritation turned to rage when he perused the Boston Post and saw that Harvard had not only issued a press release about the ASCC without consulting him, but also gave sole credit to Howard Aiken for inventing the machine.  When an angry and humiliated Watson returned to IBM, he ordered James Bryce and Clair Lake to develop a new machine that would make Aiken’s ASCC look like a toy.  Watson wanted to show the world that IBM could build computers without help from anyone else and to get revenge on the men he felt had wronged him.

With IBM seriously engaged in war work, Bryce and Lake felt they would be unable to achieve the breakthroughs in the lab necessary to best Aiken in a reasonable time frame, so they instead argued for the simpler goal of creating the world’s first electronic calculator.  To that end, an electronics enthusiast in the company named Haley Dickinson was ordered to convert the company’s electro-mechanical Model 601 Multiplying Punch into a tube-based machine.  Unveiled in September 1946 as the IBM 603 Electronic Multiplier, the machine contained only 300 vacuum tubes and no storage, but it could multiply ten times faster than existing tabulating machines and soon became a sensation.  Embarrassed by the limitations of the machine, however, Watson halted production at 100 units and ordered his engineers to develop an improved model.  Ralph Palmer, an electronics expert who had joined IBM in 1932 and recently returned from a stint in the Navy, was asked to form a new laboratory in Poughkeepsie, New York, dedicated solely to electronics.  Palmer’s group delivered the IBM 604 Electronic Calculating Punch in 1948, which contained 1,400 tubes and could be programmed to solve simple equations.  Over the next ten years, the company leased 5,600 604s to customers, and Watson came to realize that the future of IBM’s business lay in electronics.

Meanwhile, as World War II neared its conclusion, Watson’s mandate to best Aiken’s ASCC gained momentum.  The man responsible for this project was Wallace Eckert (no relation to the ENIAC co-inventor), who as an astronomy professor at Columbia in the 1920s and 1930s had been one of the main beneficiaries of Watson’s relationship with the university in those years.  After directing the Nautical Almanac Office of the United States Naval Observatory during much of World War II, Eckert accepted an invitation from Watson in March 1945 to head a new division within IBM specifically concerned with the computational needs of the scientific community called the Pure Science Department.

Eckert remained at headquarters in New York while Frank Hamilton, who had been a project leader on the ASCC, took charge of defining the Aiken-beating machine’s capabilities in Endicott.  In summer 1945, Eckert made new hire Rex Seeber his personal representative to the project.  A Harvard graduate, Seeber had worked with Aiken, but fell out with him when he refused to implement the stored program concept in his forthcoming update of the ASCC.  Seeber’s knowledge of computer theory and electronics perfectly complemented Hamilton’s electrical engineering skills and resulted in the completion of the Selective Sequence Electronic Calculator (SSEC) in 1947.  The SSEC was the first machine in the world to successfully implement the stored program concept, although it is often classified as a calculator rather than a stored program computer due to its limited memory and reliance on paper tape for program control.  The majority of the calculator remained electromechanical, but the arithmetic unit, adapted from the 603, operated at electronic speeds.  Built with 21,400 relays and 12,500 vacuum tubes and assembled at a cost of $950,000, the SSEC was a strange hybrid that exerted no influence over the future of computing, but it did accomplish IBM’s objectives by operating 250 times faster than the Harvard ASCC while also gaining significant publicity for IBM’s computing endeavors by operating while on display to the public on the ground floor of the company’s corporate headquarters from 1948 to 1952.

Tom Watson, Jr., son and successor of Tom Watson, Sr.

The success of the IBM 603 and 604 showed Watson that IBM needed to embrace electronics, but he remained cautious regarding electronic computing.  Indeed, when given the chance to bring Eckert and Mauchly into the IBM fold in mid-1946 after they left the Moore School, Watson ultimately turned them down not because he saw no value in their work but because he did not want to meet the price they demanded to buy out their business.  When he learned that the duo’s computer company was garnering interest from the National Bureau of Standards and Prudential in 1947, he told his engineers they should explore a competing design, but he was thinking in terms of a machine tailored to the needs of specific clients rather than a general-purpose computing device.  By now Watson was in his seventies and set in his ways, and while there is no evidence that he ever uttered the famous line about world demand reaching only five computers, he could simply not envision a world in which electronic computers replaced tabulating machines entirely.  As a result, the push for computing within the company came instead from his son and heir apparent, Tom Watson, Jr.

Thomas J. Watson, Jr. was born in Dayton, Ohio, in 1914, the same year his father accepted the general manager position at C-T-R.  His relationship with his father was strained for most of his life, as the elder Watson was prone to both controlling behavior and ferocious bursts of temper.  While incredibly bright, Watson suffered from anxiety and crippling depression as a child and felt incapable of living up to his father’s standards or of succeeding him at IBM one day, which he sensed was his father’s wish.  As a result, he rebelled and performed poorly in school, only gaining admittance to Brown University as a favor to his father.  After graduating with a degree in business in 1937, he became a salesman at IBM, but grew to hate working there due to the special treatment he received as the CEO’s son and the cult of personality that had grown up around his father.  Desperate for a way out, he joined the Air National Guard shortly before the United States entered World War II and became aide-de-camp to First Air Force Commander Major General Follett Bradley in 1942.  He had no intention of ever returning to IBM.

Working for General Bradley, Watson finally realized his own potential.  He became the general’s most trusted subordinate and gained experience managing teams undertaking difficult tasks.  With the encouragement of Bradley, his inner charisma surfaced for the first time, as did a remarkable ability to focus on and explain complex problems.  Near the end of the war, Bradley asked Watson about his plans for the future and was shocked when Watson said he might become a commercial pilot and would certainly never rejoin IBM.  Bradley stated that he always assumed Watson would return to run the company.  In that moment, Watson realized he was avoiding the company because he feared he would fail, but that his war experiences had prepared him to succeed his father.  On the first business day of 1946, he returned to the fold.

Tom Jr. was not promoted to a leadership position right away.  Instead, Tom Sr. appointed him personal assistant to Charley Kirk, the executive vice president of the company and Tom Sr.’s most trusted subordinate.  Kirk generously took Tom Jr. under his wing, but he also appeared to be first in line to take over the company upon Tom Sr.’s retirement, which Tom Jr. resented.  A potential power struggle was avoided when Kirk suffered a massive heart attack and died in 1947.  Tom Sr. did not feel his son was quite ready to assume the executive vice president position, but Tom Jr. did assume many of Kirk’s responsibilities while an older loyal Watson supporter named George Phillips took on the executive VP role on a short-term basis.  In 1952, Tom Sr. finally named Tom Jr. president of IBM.

The IBM 650, IBM’s most successful early computer

Tom Jr. first learned of the advances being made in computing in 1946 when he and Kirk traveled to the Moore School to see the ENIAC.  He became a staunch supporter of electronics and computing from that day forward.  While there was no formal division of responsibilities drawn up between father and son, it was understood from the late forties until Tom Jr. succeeded his father as IBM CEO in 1956 that Tom Jr. would be given free rein to develop IBM’s electronics and computing businesses, while Tom Sr. concentrated on the traditional tabulating machine business.  In this capacity, Tom Jr. played a significant role in overcoming resistance to new technologies within IBM’s engineering, sales, and future demands divisions and brought IBM fully into the computer age.

By 1950, IBM had two computer projects in progress.  The first had been started in 1948 when Tom Watson, Sr. ordered his engineers to adapt the SSEC into something cheaper that could be mass produced and sold to IBM’s business customers.  With James Bryce incapacitated — he would die the next year — the responsibility of shaping the new machine fell to Wallace Eckert, Frank Hamilton, and John McPherson, an IBM vice president that had been instrumental in constructing two powerful relay calculators for the Aberdeen Proving Grounds during World War II.  The trio decided to create a machine focused on scientific and engineering applications, both because this was their primary area of expertise and because with the dawn of the Cold War the United States government was funding over a dozen scientific computing projects to maintain the technological edge it had built during World War II.  There was a real fear that if IBM did not stay relevant in this area, one of these projects could birth a company capable of challenging IBM’s dominant position in business machines.

Hamilton acted as the chief engineer on the project and chose to increase the system’s memory capacity by incorporating magnetic drum storage, thus leading to the machine’s designation as the Magnetic Drum Calculator (MDC).  While the MDC began life as a calculator essentially pairing an IBM 603 with a magnetic drum, the team soon realized that drum memory was expansive enough to discard the paper tape reader entirely and read and modify instructions directly from the drum itself, and the project morphed into a full-fledged computer.  By early 1950, engineering work had commenced on the MDC, but development soon stalled as it became the focus of fights between multiple engineering teams as well as the sales and future demands departments over its specifications, target audience, and potential commercial performance.

While work continued on the MDC in Endicott, several IBM engineers in the electronics laboratory in Poughkeepsie initiated their own experiments related to computer technology.  In 1948, an engineer named Philip Fox began studying alternatives to vacuum tube memory that would allow for a stored-program computer, and after learning of the Williams Tube he decided to focus his attention on CRT memory.  Fox created a machine called the Test Assembly on which he worked to improve the reliability of existing CRT memory solutions.  Meanwhile, in early 1949, a new employee named Nathaniel Rochester who was dismayed that IBM did not already have a stored-program computer in production began researching the capabilities of magnetic tape as a storage medium.  These disparate threads came together in October 1949 when a decision was made to focus on the development of a tape machine to challenge the UNIVAC, which appeared poised to grab a share of IBM’s data processing business.  By March 1950, Rochester and Werner Buchholz had completed a technical outline of the Tape Processing Machine (TPM), which would incorporate both CRT and tape memory.  As with the MDC, however, the inability of sales and future demands to clearly define a market for the computer hindered its development.

A breakthrough in the stalemate between sales and engineering finally occurred with the outbreak of the Korean War.  As he had when the United States entered World War II, Tom Watson, Sr. placed the full capabilities of the company at the disposal of the United States government.  The United States Air Force quickly responded that it wanted help developing a new electro-mechanical bombsight for the B-47 Bomber, but Tom Watson, Jr., who already believed IBM was not embracing electronics fast enough, felt working on electro-mechanical projects to be a giant step backwards for the company.  Instead, he proposed developing an electronic computer suitable for scientific computation by government organizations and contractors.

Initially, IBM considered adapting the TPM for its new scientific computer project, but quickly abandoned the idea.  To save on cost, the engineering team of the TPM had decided to design the computer to process numbers serially rather than in parallel, which was sufficient for data processing, but made the machine too slow to meet the computational needs of the government.  Therefore, in September 1950 Ralph Palmer’s engineers drew up preliminary plans for a floating-point decimal computer hooked up to an array of tape readers and other auxiliary devices that would be capable of well over 10,000 operations a second and of storing 2000 thirteen-digit words in Williams Tube memory.  Watson Jr. approved this project in January 1951 under the moniker “Defense Calculator.”  With a tight deadline of Spring 1952 in place for the Defense Calculator so it would be operational in time to contribute to the war effort, Palmer realized the engineering team, led by Nathaniel Rochester and Jerrier Haddad, could not afford to start from scratch on the design of the new computer, so they decided to base the architecture on von Neumann’s IAS Machine.


The IBM 702, IBM’s first computer targeted at businesses

On April 29, 1952, Tom Watson, Sr. announced the existence of the Defense Calculator to IBM’s shareholders at the company’s annual meeting.  In December, the first completed model was installed at IBM headquarters in the berth occupied until then by the SSEC.  On April 7, 1953, the company staged a public unveiling of the Defense Calculator under the name IBM 701 Electronic Data Processing Machine four days after the first production model had been delivered to the Los Alamos National Laboratory in New Mexico.  By April 1955, when production ceased, IBM had completed nineteen installations of the 701 — mostly at government organizations and defense contractors like Boeing and Lockheed — at a rental cost of $15,000 a month.

The success of the 701 finally broke the computing logjam at IBM.  The TPM, which had been on the back burner as the Defense Calculator project gained steam, was redesigned for faster operation and announced in September 1953 as the IBM 702, although the first model was not installed until July 1955.  Unlike the 701, which borrowed the binary numeral system from the IAS Machine, the 702 used the decimal system as befit its descent from the 603 and 604 electronic calculators.  It also shipped with a newly developed high speed printer capable of outputting 1,000 lines per minute.  IBM positioned the 702 as a business machine to compete with the UNIVAC I and ultimately installed fourteen of them.  Meanwhile, IBM also reinstated the MDC project — which had stalled almost completely — in November 1952, and it saw release in 1954 as the IBM 650.  While the drum memory used in the 650 was slower than the Williams Tube memory of the 701 and 702, it was also more reliable and cheaper, allowing IBM to lease the 650 at the relatively low cost of $3,250 a month.  As a result, it became IBM’s first breakout success in the computer field, with nearly 2,000 installed by the time the last one rolled off the assembly line in 1962.

IBM’s 700 series computers enjoyed several distinct advantages over the UNIVAC I and UNIVAC 1103 computers marketed by Remington Rand.  Technologically, Williams Tube memory was both more reliable and significantly faster than the mercury delay line memory and drum memory used in the UNIVAC machines, while the magnetic tape system developed by IBM was also superior to the one used by Remington Rand.  Furthermore, IBM designed its computers to be modular, making them far easier to ship and install than the monolithic UNIVAC system.  Finally, IBM had built one of the finest sales and product servicing organizations in the world, making it difficult for Remington Rand to compete for customers.  While UNIVAC models held a small 30 to 24 install base edge over the 700 series computers as late as August 1955, IBM continued to improve the 700 line through newly emerging technologies and just a year later moved into the lead with 66 700 series installations versus 46 UNIVAC installations.  Meanwhile, installations of the 650 far eclipsed any comparable model, giving IBM control of the low end of the computer market as well.  The company would remain the number one computer maker in the world throughout the mainframe era.

Historical Interlude: The Birth of the Computer Part 2, The Creation of the Electronic Digital Computer

In the mid-nineteenth century, Charles Babbage attempted to create a program-controlled universal calculating machine, but failed for lack of funding and the difficulty of creating the required mechanical components.  This failure spelled the end of digital computer research for several decades.  By the early twentieth century, however, fashioning small mechanical components no longer presented the same challenge, while the spread of electricity generating technologies provided a far more practical power source than the steam engines of Babbage’s day.  These advances culminated in just over a decade of sustained innovation between 1937 and 1949 out of which the electronic digital computer was born.  While both individual computer components and the manner in which the user interacts with the machine have continued to evolve, the desktops, laptops, tablets, smartphones, and video game consoles of today still function according to the same basic principles as the Manchester Mark 1, EDSAC, and EDVAC computers that first operated in 1949.  This blog post will chart the path to these three computers.

Note: This is the second of four “historical interlude” posts that will summarize the evolution of computer technology between 1830 and 1960.  The information in this post is largely drawn from Computer: A History of the Information Machine by Martin Campbell-Kelly and William Aspray, The Maverick and His Machine: Thomas Watson, Sr. and the Making of IBM by Kevin Maney, Reckoners: The Prehistory of the Digital Computer, From Relays to the Stored Program Concept, 1935-1945 by Paul Ceruzzi, The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution by Walter Isaacson, Forbes Greatest Technology Stories: Inspiring Tales of Entrepreneurs and Inventors Who Revolutionized Modern Business by Jeffrey Young, and the articles “Alan Turing: Father of the Modern Computer” by B. Jack Copeland and Diane Proudfoot, “Colossus: The First Large Scale Electronic Computer” by Jack Copeland, and “A Brief History of Computing,” also by Copeland.

Analog Computing


Vannevar Bush with his differential analyzer, an analog computer

While a digital computer after the example of Babbage would not appear until the early 1940s, specialized computing devices that modeled specific systems mechanically continued to be developed in the late nineteenth and early twentieth centuries.  These machines were labelled analog computers, a term derived from the word “analogy” because each machine relied on a physical model of the phenomenon being studied to perform calculations, unlike a digital computer that relied purely on numbers.  The key component of these machines was the wheel-and-disc integrator, first described by James Thomson, that allowed integral calculus to be performed mechanically.  Perhaps the most important analog computer of the nineteenth century was completed by James’s brother William, better known to history as Lord Kelvin, in 1876.  Called the tide predictor, Kelvin’s device relied on a series of mechanical parts such as pulleys and gears to simulate the gravitational forces that produce the tides and predicted the water depth of a harbor at any given time of day, printing the results on a roll of paper.  Before Lord Kelvin’s machine, creating tide tables was so time-consuming that only the most important ports were ever charted.  After Kelvin’s device entered general use, it was finally possible to complete tables for thousands of ports around the world.  Improved versions of Kelvin’s computer continued to be used until the 1950s.

In the United States, interest in analog computing began to take off in the 1920s as General Electric and Westinghouse raced to build regional electric power networks by supplying alternating-current generators to power plants.  At the time, the mathematical equations required to construct the power grids were both poorly understood and difficult to solve by hand, causing electrical engineers to turn to analog computing as a solution.  Using resistors, capacitors, and inductors, these computers could simulate how the network would behave in the real world.  One of the most elaborate of these computers, the AC Network Analyzer, was built at MIT in 1930 and took up an entire room.  With one of the finest electrical engineering schools in the country, MIT quickly became a center for analog computer research, which soon moved from highly specific models like the tide predictor and power grid machines to devices capable of solving a wider array of mathematical problems through the work of MIT professor Vannevar Bush.

One of the most important American scientists of the mid-twentieth century, Bush possessed a brilliant mind coupled with a folksy demeanor and strong administrative skills.  These traits served him well in co-founding the American Appliance Company in 1922 — which later changed its name to Raytheon and became one of the largest defense contractors in the world — and led to his appointment in 1941 to head the new Office of Scientific Research and Development, which oversaw and coordinated all wartime scientific research by the United States government during World War II and was instrumental to the Allied victory.

Bush built his first analog computer in 1912 while a graduate student at Tufts College.  Called the “profile tracer,” it consisted of a box hung between two bicycle wheels and would trace the contours of the ground as it was rolled.  Moving on to MIT in 1919, Bush worked on problems involving electric power transmission and in 1924 developed a device with one of his students called the “product integraph” to simplify the solving and plotting of the first-order differential equations required for that work.  Another student, Harold Hazen, suggested this machine be extended to solve second-order differential equations as well, which would make the device useful for solving a wide array of physics problems.  Bush immediately recognized the potential of this machine and worked with Hazen to build it between 1928 and 1931.  Bush called the resulting machine the “differential analyzer.”

The differential analyzer improved the operation of Thomson’s wheel-and-disc integrator through a device called a torque amplifier, allowing it to mechanically model, solve, and plot a wider array of differential equations than any analog computer that came before, but it still fell short of the Babbage ideal of a general-purpose digital device.  Nevertheless, the machine was installed at several universities, corporations, and government laboratories and demonstrated the value of using a computing device to perform advanced scientific calculations.  It was therefore an important stepping stone on the path to the digital computer.

Electro-Mechanical Digital Computers


The Automatic Sequence Controlled Calculator (ASCC), also known as the Harvard Mark I, the first proposed electro-mechanical digital computer, though not the first completed

With problems like power network construction requiring ever more complex equations and the looming threat of World War II requiring world governments to compile large numbers of ballistics tables and engage in complex code-breaking operations, the demand for computing skyrocketed in the late 1930s and early 1940s.  This led to a massive expansion of human computing and the establishment of the first for-profit calculating companies, beginning with L.J. Comrie’s Scientific Computing Services Limited in 1937.  Even as computing services were expanding, however, the armies of human computers required for wartime tasks were woefully inadequate for completing necessary computations in a timely manner, while even more advanced analog computers like the differential analyzer were still too limited to carry out many important tasks.  It was in this environment that researchers in the United States, Great Britain, and Germany began attempting to address this computing shortfall by designing digital calculating machines that worked similarly to Babbage’s Analytical Engine but made use of more advanced components not available to the British mathematician.

The earliest digital calculating machines were based on electromechanical relay technology.  First developed in the mid nineteenth century for use in the electric telegraph, a relay consists in its simplest form of a coil of wire, an armature, and a set of contacts.  When a current is passed through the coil, a magnetic field is generated that attracts the armature and therefore draws the contacts together, completing a circuit.  When the current is removed, a spring causes the armature to return to the open position.  Electromechanical relays played a crucial role in the telephone network in the United States, routing calls between different parts of the network.  Therefore, Bell Labs, the research arm of the telephone monopoly AT&T, served as a major hub for relay research and was one of the first places where the potential of relays and similar switching units for computer construction was contemplated.

The concept of the binary digital circuit, which continues to power computers to this day, was independently articulated and applied by several scientists and mathematicians in the late 1930s.  Perhaps the most influential of these thinkers — due to his work being published and widely disseminated — was Claude Shannon.  A graduate of the University of Michigan with degrees in electrical engineering and math, Shannon matriculated to MIT, where he secured a job helping Bush run his Differential Analyzer.  In 1937, Shannon took a summer job at Bell Labs, where he gained hands on experience with the relays used in the phone network and connected their function with another interest of his — the symbolic logic system created by mathematician George Boole in the 1840s.

Basically, Boole had discovered a way to represent formal logical statements mathematically by giving a true proposition a value of 1 and a false proposition a value of 0 and then constructing mathematical equations that could represent the basic logical operations such as “and,” “or” and “not.”  Shannon realized that since a relay either existed in an “on” or an “off” state, a series of relays could be used to construct logic gates that emulated Boolean logic and therefore carry out complex instructions, which in their most basic form are a series of “yes” or “no,” “on” or “off,” “1” or “0” propositions.  When Shannon returned to MIT that fall, Bush urged him to include these findings in his master’s thesis, which was published later that year under the name “A Symbolic Analysis of Relay and Switching Circuits.”  In November 1937, a Bell Labs researcher named George Stibitz, who was aware of Shannon’s theories, applied the concept of binary circuits to a calculating device for the first time when he constructed a small relay calculator he dubbed the “Model K” because he built it at his kitchen table.  Based on this prototype, Stibitz received permission to build a full-sized model at Bell Labs, which was named the Complex Number Calculator and completed in 1940.  While not a full-fledged programmable computer, Stibitz’s machine was the first to use relays to perform basic mathematical operations and demonstrated the potential of relays and binary circuits for computing devices.
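
To make Shannon’s insight concrete, here is a minimal sketch in Python (my own illustration; the gate functions and the half adder example are not drawn from Shannon’s thesis) showing how switches that know only “on” and “off” can be combined into Boolean logic gates, and how those gates in turn can perform binary arithmetic:

# Shannon's point in miniature: devices with only two states ("on"/"off",
# modeled here as True/False) can be wired together into logic gates.

def AND(a, b):
    # Two relay contacts in series: current flows only if both are closed.
    return a and b

def OR(a, b):
    # Two relay contacts in parallel: current flows if either is closed.
    return a or b

def NOT(a):
    # A normally closed contact: energizing the coil opens the circuit.
    return not a

def XOR(a, b):
    # More complex propositions are just combinations of the basic gates.
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def half_adder(a, b):
    # Binary arithmetic reduces to logic: returns (sum bit, carry bit).
    return XOR(a, b), AND(a, b)

print(half_adder(True, True))   # (False, True), i.e. 1 + 1 = 10 in binary

Stibitz’s Model K was, in essence, a relay realization of just this sort of one-digit binary adder.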

One of the earliest digital computers to use electromechanical relays was proposed by Howard Aiken in 1936.  A doctoral candidate in mathematics at Harvard University, Aiken needed to solve a series of non-linear differential equations as part of his dissertation, which was beyond the capabilities of Bush’s differential analyzer at neighboring MIT.  Unenthused by the prospect of solving these equations by hand, Aiken, who was already a skilled electrical engineer, proposed that MIT build a large-scale digital calculator to do the work.  The university turned him down, so Aiken approached the Monroe Calculating Machine Company, which also failed to see any value in the project.  Monroe’s chief engineer felt the idea had merit, however, and urged Aiken to approach IBM.

When last we left IBM in 1928, the company was growing and profitable, but lagged behind several other companies in overall size and importance.  That all changed with the onset of the Great Depression.  Like nearly every other business in the country, IBM was devastated by the market crash of 1929, but Tom Watson decided to boldly soldier on without laying off workers or cutting production, keeping his faith that the economy could not continue in a tailspin for long.  He also increased the company’s emphasis on R&D, building one of the world’s first corporate research laboratories to house all his engineers in Endicott, New York in 1932-33 at a cost of $1 million.  As the Depression dragged on, machines began piling up in the factories and IBM’s growth flattened, threatening the solvency of the company.  Watson’s gambles increasingly appeared to be a mistake, but then President Franklin Roosevelt began enacting his New Deal legislation.

In 1935, the United States Congress passed the Social Security Act.  Overnight, every company in the country was required to keep detailed payroll records, while the Social Security Administration had to keep a file on every worker in the nation.  The data processing burden of the act was enormous, and IBM, with its large stock of tabulating machines and fully operational factories, was the only company able to begin filling the demand immediately.  Between 1935 and 1937, IBM’s revenues rose from $19 million to $31 million and then continued to grow for the next 45 years.  The company was never seriously challenged in tabulating equipment again.

Traditionally, data processing revolved around counting tangible objects, but by the time Aiken approached IBM, Watson had begun to realize that scientific computing was a natural extension of his company’s business activities.  The man who turned Watson on to this fact was Ben Wood, a Columbia professor who pioneered standardized testing and was looking to automate the scoring of his tests using tabulating equipment.  In 1928, Wood wrote ten companies to win support for his ideas, but only Watson responded, agreeing to grant him an hour to make his pitch.  The meeting began poorly as the nervous Wood failed to hold Watson’s interest with talk of test scoring, so the professor expanded his presentation to describe how nearly anything could be represented mathematically and therefore quantified by IBM’s machines.  One hour soon stretched to over five as Watson grilled Wood and came to see the value of creating machines for the scientific community.  Watson agreed to give Wood all the equipment he needed, dropped in frequently to monitor Wood’s progress, and made the professor an IBM consultant.  As a result of this meeting, IBM began supplying equipment to scientific labs around the world.


Howard Aiken, designer of the Automatic Sequence Controlled Calculator

In 1937, Watson began courting Harvard, hoping to create the same kind of relationship he had long enjoyed with Columbia.  He dispatched an executive named John Phillips to meet with deans and faculty, and Aiken used the opportunity to introduce IBM to his calculating device.  He also wrote a letter to James Bryce, IBM’s chief engineer, who sold Watson on the concept.  Bryce assigned Clair Lake to oversee the project, which would be funded and built by IBM in Endicott according to Aiken’s design and then installed at Harvard.

Aiken’s initial concept basically stitched together a card reader, a multiplying punch, and a printer, removing human intervention in the process by connecting the components through electrical wiring and incorporating relays as switching units to control the passage of information through the parts of the machine.  Aiken drew inspiration from Babbage’s Analytical Engine, which Aiken first learned about soon after proposing his device when a technician informed him that the university actually owned a fragment of one of Babbage’s calculating machines that had been donated by the inventor’s son in 1886.  Unlike Babbage, however, Aiken did not employ separate memory and computing elements, as all calculations were performed across a series of 72 accumulators that both stored and modified the data transmitted to them by the relays.  Without something akin to a CPU, the machine was actually less advanced than the Analytical Engine in that it did not support conditional branching — the ability to modify a program on the fly to incorporate the results of previous calculations — and therefore required all calculations to be done in a set sequence, forcing complex programs to rely on large instruction sets and long lengths of paper tape.

Work began on the Automatic Sequence Controlled Calculator (ASCC) Mark I in 1939, but the onset of World War II resulted in the project being placed on the back burner as IBM shifted its focus to more important war work and Aiken entered the Navy.  It was finally completed in January 1943 at a cost of $500,000 and subsequently installed at Harvard in early 1944 after undergoing a year of testing in Endicott.  Measuring 8 feet tall and 51 feet long, the machine was housed in a gleaming metal case designed by Norman Bel Geddes, the industrial designer known for his streamlined art deco work.  By the time of its completion, the ASCC already lagged behind several other machines technologically and therefore did not play a significant role in the further evolution of the computer.  It is notable, however, both as the earliest proposed digital computer to actually be built and as IBM’s introduction to the world of computing.


Konrad Zuse, designer of the Z1, the first completed digital computer

While Howard Aiken was still securing support for his digital computer, a German named Konrad Zuse was busy completing one of his own.  Born in Berlin, Zuse spent most of his childhood in Braunsberg, East Prussia (modern Braniewo, Poland).  Deciding on a career as an engineer, he enrolled at the Technical College of Berlin-Charlottenburg in 1927.  While not particularly interested in mathematics, Zuse did have to work with complex equations to calculate the load-bearing capability of structures, and like Aiken across the Atlantic he was not enthused at having to perform these calculations by hand.  Therefore, in 1935 Zuse began designing a universal automatic calculator consisting of a computing element, a storage unit, and a punched tape reader, independently arriving at the same basic design that Babbage had developed a century before.

While Zuse’s basic concept did not stray far from Babbage, however, he did incorporate one crucial improvement in his design that neither Babbage nor Aiken had considered, storing the numbers in memory according to a binary rather than a decimal system.  Zuse’s reason for doing so was practical — as an accomplished mechanical engineer he preferred keeping his components as simple as possible to make the computer easier to design and build — but the implications of this decision went far beyond streamlined memory construction.  Like Shannon, Zuse realized that by recognizing data in only two states, on and off, a computing device could represent not just numbers, but also instructions.  As a result, Zuse was able to use the same basic building blocks for both his memory and computing elements, simplifying the design further.

By 1938, Zuse had completed his first computer, a mechanical binary digital machine called the Z1. (Note: Originally, Zuse called this computer the V1 and continued to use the “V” designation on his subsequent computers.  After World War II, he began referring to these machines using the “Z” designation instead to avoid confusion with Germany’s V1 and V2 rockets.)  This first prototype was fairly basic, but it proved two things for Zuse: that he could create a working automatic calculating device and that the computing element could not be mechanical, as the components were just too unreliable.  The solution to this problem came from college friend Helmut Schreyer, an electrical engineer who convinced Zuse that the electrical relays used in telephone networks would provide superior performance.  Schreyer also worked as a film projectionist and convinced Zuse to switch from paper tape to punched film stock for program control.  These improvements were incorporated into the Z2 computer, completed in 1939, which never worked reliably, but was essential for securing funding for Zuse’s next endeavor.


A reconstruction of Konrad Zuse’s Z3, the world’s first programmable fully automatic digital computer

In 1941, Konrad Zuse completed the Z3 for the German government, the first fully operational digital computer in the world.  The computer consisted of two cabinets containing roughly 2,600 relays — 1,800 for memory, 600 for computing, and 200 for the tape reader — and a small display/keyboard unit for inputting programs.  With a memory of only 64 characters, the computer was too limited to carry out useful work, but it served as an important proof of concept and illustrated the potential of a programmable binary computer.

Unfortunately for Zuse, the German government proved uninterested in further computer research.  Busy fighting a war it was convinced would be over in just a year or two, the Third Reich limited its research activities to projects that could directly impact the war effort in the short-term and ignored the potential of computing entirely.  While Zuse continued to work on the next evolution of his computer design, the Z4, between 1942 and 1945, he did so on his own without the support of the Reich, which also turned down a computer project by his friend Schreyer that would have replaced relays with electronics.  Isolated from the rest of the developed world by the war, Zuse’s theories would have little impact on subsequent developments in computing, while the Z3 itself was destroyed in an Allied bombing raid on Berlin in 1943 before it could be studied by other engineers.  That same year, Great Britain’s more enthusiastic support of computer research resulted in the next major breakthrough in computing technology.

The Birth of the Electronic Computer


Colossus, the world’s first programmable electronic computer

Despite the best efforts of Aiken and Zuse, relays were never going to play a large role in computing, as they were both unreliable and slow due to a reliance on moving parts.  In order for complex calculations to be completed quickly, computers would need to transition from electro-mechanical components to electronic ones, which function instead by manipulating a beam of electrons.

The development of the first electronic components grew naturally out of Thomas Edison’s work with the incandescent light bulb.  In 1880, Edison was conducting experiments to determine why the filament in his new incandescent lamps would sometimes break and noticed that a current would flow through the vacuum from the hot filament to a metal plate inside the bulb when the plate was positively charged, but not when it was negatively charged.  Although this effect had been observed by other scientists as early as 1873, Edison was the first to patent a voltage-regulating device based on this principle in 1883, which resulted in the phenomenon being named the “Edison effect.”

Edison, who did not have a solid grasp of the underlying science, did not follow up on his discovery.  In 1904, however, John Fleming, a consultant with the Marconi Company engaged in research relating to wireless telegraphy, realized that the Edison effect could be harnessed to create a device that would only allow the flow of electric current in one direction and thus serve as a rectifier that converted a weak alternating current into direct current.  This would in turn allow a receiver to be more sensitive to radio waves, thus making reliable trans-Atlantic wireless communication possible.  Based on his research, Fleming created the first diode, the Fleming Valve, in which an electric current was passed in one direction from a negatively-charged cathode to a positively-charged anode through a vacuum-sealed glass container.  The vacuum tube concept invented by Fleming remained the primary building block of electronic devices for the next fifty years.

In 1906, an American electrical engineer named Lee DeForest working independently of Fleming began creating his own series of electron tubes, which he called Audions.  DeForest’s major breakthrough was the development of the triode, which added a third electrode called a grid whose voltage could control the flow of current through the tube, allowing it to serve as an amplifier that boosted the power of a signal.  DeForest’s tube contained gas at low pressure, which inhibited reliable operation, but by 1913 the first vacuum tube triodes had been developed.  In 1918, British physicists William Eccles and F.W. Jordan used two triodes to create the Eccles-Jordan circuit, which could flip between two states like an electrical relay and therefore serve as a switching device.
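
The significance of a circuit that can flip between two states is that it can remember a single binary digit.  The sketch below is my own illustration of that idea using cross-coupled NOR gates rather than the two triodes of the actual Eccles-Jordan circuit; the point is simply that the element holds whatever state it was last put into, even after the set or reset signal goes away:

def nor(a, b):
    # A NOR gate: output is on only when both inputs are off.
    return not (a or b)

def latch(set_, reset, q, q_bar):
    # Two cross-coupled NOR gates; iterate a few times until the outputs settle.
    for _ in range(4):
        q, q_bar = nor(reset, q_bar), nor(set_, q)
    return q, q_bar

q, q_bar = False, True                      # start out storing a 0
q, q_bar = latch(True, False, q, q_bar)     # pulse "set"
print(q)                                    # True: the bit is now 1
q, q_bar = latch(False, False, q, q_bar)    # both inputs released
print(q)                                    # still True: the circuit remembers
q, q_bar = latch(False, True, q, q_bar)     # pulse "reset"
print(q)                                    # False: back to 0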

Even after the invention of the Eccles-Jordan circuit, few computer pioneers considered using vacuum tubes in their devices.  Conventional wisdom held they were unsuited for large-scale projects because a triode contains a filament that generates a great deal of heat and is prone to burnout.  Consequently, the failure rate would be unacceptable in a device requiring thousands of tubes.  One of the first people to challenge this view was a British electrical engineer named Thomas Flowers.


Tommy Flowers, the designer of Colossus

Born in London’s East End, Flowers, the son of a bricklayer, simultaneously took an apprenticeship in mechanical engineering at the Royal Arsenal, Woolwich, while attending evening classes at the University of London.  After graduating with a degree in electrical engineering, Flowers took a job with the telecommunications branch of the General Post Office (GPO) in 1926.  In 1930, he was posted to the GPO Research Branch at Dollis Hill, where he established a reputation as a brilliant engineer and achieved rapid promotion.

In the early 1930s, Flowers began conducting research into the use of electronics to replace relays in telephone switchboards.  Counter to conventional wisdom, Flowers realized that vacuum tube burnout usually occurred when a device was switched on and off frequently.  In a switchboard or computer, the vacuum tubes could remain in continuous operation for extended periods once switched on, thus greatly increasing their longevity.  Before long, Flowers began experimenting with equipment containing as many as 3,000 vacuum tubes.  Flowers would make the move from switchboards to computing devices with the onset of World War II.

With the threat of Nazi Germany rising in the late 1930s, the United Kingdom began devoting more resources to cracking German military codes.  Previously, this work had been carried out in London at His Majesty’s Government Code and Cypher School, which was staffed with literary scholars rather than cryptographic experts.  In 1938, however, MI6, the British Intelligence Service, purchased a country manor called Bletchley Park, near the intersection of the rail lines connecting Oxford with Cambridge and London with Birmingham, to serve as a cryptographic and code-breaking facility.  The next year, the government began hiring mathematicians to seriously engage in code-breaking activities.  The work conducted at the manor has been credited with shortening the war in Europe and saving countless lives.  It also resulted in the development of the first electronic computer.

Today, the Enigma Code, broken by a team led by Alan Turing, is the most celebrated of the German ciphers decrypted at Bletchley, but this was actually just one of several systems used by the Reich and was not even the most complicated.  In mid-1942, Germany initiated general use of the Lorenz Cipher, which was reserved for messages between the German High Command and high-level army commands, as the encryption machine — which the British code-named “Tunny” — was not easily portable like the Enigma Machine.  In 1942, Bletchley established a section dedicated to breaking the cipher, and by November a system called the “statistical method” had been developed by William Tutte to crack the code, which built on earlier work by Turing.  When Tutte presented his method, mathematician Max Newman decided to establish a new section — soon labelled the Newmanry — to apply the statistical method with electronic machines.  Newman’s first electronic codebreaking machine, the Heath Robinson, was both slow and unreliable, but it worked well enough to prove that Newman was on the right track.

Meanwhile, Flowers joined the code-breaking effort in 1941 when Alan Turing enlisted Dollis Hill to create some equipment for use in conjunction with the Bombe, his Enigma-cracking machine.  Turing was greatly impressed by Flowers, so when Dollis Hill encountered difficulty crafting a combining unit for the Heath Robinson, Turing suggested that Flowers be called in to help.  Flowers, however, doubted that the Heath Robinson would ever work properly, so in February 1943 he proposed the construction of an electronic computer to do the work instead.  Bletchley Park rejected the proposal based on existing prejudices over the unreliability of tubes, so Flowers began building the machine himself at Dollis Hill.  Once the computer was operational, Bletchley saw the value in it and accepted the machine.

Installed at Bletchley Park in January 1944, Flowers’s computer, dubbed Colossus, contained 1,600 vacuum tubes and processed 5,000 characters per second, a limit imposed not by the speed of the computer itself, but rather by the speed at which the reader could safely operate without risk of destroying the paper tape.  In June 1944, Flowers completed the first Colossus II computer, which contained 2,400 tubes and used an early form of shift register to perform five simultaneous operations and therefore operated at a speed of 25,000 characters per second.  The Colossi were not general purpose computers, as they were dedicated solely to a single code-breaking operation, but they were program-controlled.  Unlike electro-mechanical computers, however, electronic computers process information too quickly to accept instructions from punched cards or paper tape, so the Colossus actually had to be rewired using plugs and switches to run a different program, a time-consuming process.

As the first programmable electronic computer, Colossus was an incredibly significant advance, but it ultimately exerted virtually no influence on future computer design.  By the end of the war, Bletchley Park was operating nine Colossus II computers alongside the original Colossus to break Tunny codes, but after Germany surrendered, Prime Minister Winston Churchill ordered the majority of the machines dismantled and kept the entire project classified.  It was not until the 1970s that most people knew that Colossus had even existed, and the full function of the machine remained unknown until 1996.  Therefore, instead of Flowers being recognized as the inventor of the electronic computer, that distinction was held for decades by a group of Americans working at the Moore School of the University of Pennsylvania.

ENIAC


The Electronic Numerical Integrator and Computer (ENIAC), the first widely known electronic computer

In 1935, the United States Army established a new Ballistic Research Laboratory (BRL) at the Aberdeen Proving Grounds in Maryland dedicated to calculating ballistics tables for artillery.  With modern guns capable of lofting projectiles at targets many miles away, properly aiming them required the application of complex differential equations, so the BRL assembled a staff of thirty to create trajectory tables for various ranges, which would be compiled into books for artillery officers.  Aberdeen soon installed one of Bush’s differential analyzers to help compute the tables, but the onset of World War II overwhelmed the lab’s capabilities.  Therefore, it began contracting some of its table-making work with the Moore School, the closest institution with its own differential analyzer.

The Moore School of Electrical Engineering of the University of Pennsylvania enjoyed a fine reputation, but it carried nowhere near the prestige of MIT and therefore did not receive the same level of funding support from the War Department for military projects.  It did, however, place itself on a war footing by accelerating degree programs through the elimination of vacations and instituting a series of war-related training and research programs.  One of these was the Engineering, Science, Management, War Training (ESMWT) program, an intensive ten-week course designed to familiarize physicists and mathematicians with electronics to address a manpower shortfall in technical fields.  One of the graduates of this course was John Mauchly, a physics instructor at a nearby college.

Born in Cincinnati, Ohio, John William Mauchly grew up in Chevy Chase, Maryland, after his physicist father became the research chief for the Department of Terrestrial Magnetism of the Carnegie Institution, a foundation established in Washington, D.C. to support scientific research around the country.  Sebastian Mauchly specialized in recording atmospheric electrical conditions to further weather research, so John became particularly interested in meteorology.  After completing a Ph.D. at Johns Hopkins University in 1932, Mauchly took a position at Ursinus College, a small Philadelphia-area institution, where he studied the effects of solar flares and sunspots on long-range weather patterns.  Like Aiken and Zuse before him, Mauchly grew tired of solving the complex equations required for his research and began to dream of building a machine to automate this process.  After viewing an IBM electric calculating machine and a vacuum tube encryption machine at the 1939 World’s Fair, Mauchly felt electronics would provide the solution, so he began taking a night course in electronics and crafting his own experimental circuits and components.  In December 1940, Mauchly gave a lecture articulating his hopes of building a weather prediction computer to the American Association for the Advancement of Science.  After the lecture, he met an Iowa State College professor named John Atanasoff, who would play an important role in opening Mauchly to the potential of electronics by inviting him out to Iowa State to study a computer project he had been working on for several years.


The Atanasoff-Berry Computer (ABC), the first electronic computer project, which was never completed

A graduate of Iowa State College who earned a Ph.D. in theoretical physics from the University of Wisconsin-Madison in 1930, John Atanasoff, like Howard Aiken, was drawn to computing due to the frustration of solving equations for his dissertation.  In the early 1930s, Atanasoff experimented with tabulating machines and analog computing to make solving complex equations easier, culminating in a decision in December 1937 to create a fully automatic electronic digital computer.  Like Shannon and Zuse, Atanasoff independently arrived at binary digital circuits as the most efficient way to do calculations, remembering childhood lessons by his mother, a former school teacher, on calculating in base 2.  While he planned to use vacuum tubes for his calculating circuits, however, he rejected them for storage due to cost.  Instead, he developed a system in which paper capacitors would be attached to a drum that could be rotated by a bicycle chain.  By keeping the drums rotating so that the capacitors would sweep past electrically charged brushes once per second, Atanasoff believed he would be able to keep the capacitors charged and therefore create a low-cost form of electronic storage.  Input and output would be accomplished through punch cards or paper tape.  Unlike most of the other computer pioneers profiled so far, Atanasoff was only interested in solving a specific set of equations and therefore hardwired the instructions into the machine, meaning it would not be programmable.
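
This regenerative scheme is worth a second look, because the same principle underlies the dynamic RAM in modern machines: a capacitor’s charge leaks away, so a stored value survives only if it is periodically sensed and rewritten.  Here is a rough sketch of the idea in Python, with invented numbers rather than anything taken from Atanasoff’s actual design:

class CapacitorDrum:
    # Toy model: each "capacitor" holds a charge level that leaks over time.
    def __init__(self, size, leak=0.05):
        self.charge = [0.0] * size
        self.leak = leak

    def write(self, address, bit):
        self.charge[address] = 1.0 if bit else 0.0

    def decay(self):
        # Charge bleeds away between refreshes.
        self.charge = [c * (1.0 - self.leak) for c in self.charge]

    def refresh(self):
        # Once per revolution the brushes sense each capacitor and restore
        # it to a full 1 or 0 before the value fades below readability.
        self.charge = [1.0 if c > 0.5 else 0.0 for c in self.charge]

    def read(self, address):
        return 1 if self.charge[address] > 0.5 else 0

drum = CapacitorDrum(30)
drum.write(7, 1)
for _ in range(10):      # time passes and the charge decays...
    drum.decay()
drum.refresh()           # ...but the periodic refresh keeps the bit alive
print(drum.read(7))      # 1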

By May 1939, Atanasoff was ready to put his ideas into practice, but he lacked electrical engineering skills himself and therefore needed an assistant to actually build his computer.  After securing a $650 grant from the Iowa State College Research Council, Atanasoff hired a graduate student named Clifford Berry, who had been recommended by one of his colleagues.  A genius who graduated high school at sixteen, Berry had been an avid ham radio operator in his youth and worked his way through college at Iowa State as a technician for a local company called Gulliver Electric.  He graduated in 1939 at the top of his engineering school class.  The duo completed a small-scale prototype of Atanasoff’s concept in late 1939 and then secured $5,330 from a private foundation to begin construction of what they named the Atanasoff-Berry Computer (ABC), the first electronic computer to employ separate memory and computing elements and a binary system for processing instructions and storing data, predating Colossus by just a few years.  By 1942, the ABC was nearly complete, but it remained unreliable and was ultimately abandoned when Atanasoff left Iowa State for a wartime posting with the Naval Ordnance Laboratory.  With no other champion at the university, the ABC was cannibalized for parts for more important wartime projects, after which the remains were placed in a boiler room and forgotten.  Until a patent lawsuit brought renewed attention to the computer in the 1960s, few were aware the ABC had ever existed, but in June 1941 Mauchly visited Atanasoff and spent five days learning everything he could about the machine.  While there is still some dispute regarding how influential the ABC was on Mauchly’s own work, there is little doubt that at the very least the computer helped guide his own thoughts on the potential of electronics for computing.

Upon completing the ESMWT at the Moore School, Mauchly was offered a position on the school’s faculty, where he soon teamed with a young graduate student he met during the course to realize his computer ambitions.  John Presper Eckert was the only son of a wealthy real estate developer from Philadelphia and an electrical engineering genius who won a city-wide science fair at twelve years old by building a guidance system for model boats and made money in high school by building and selling radios, amplifiers, and sound systems.  Like Tommy Flowers in England, Eckert was a firm believer in the use of vacuum tubes in computing projects and worked with Mauchly to upgrade the differential analyzer by using electronic amplifiers to replace some of its components.  Meanwhile, Mauchly’s wife was running a training program for human computers, which the university was employing to work on ballistics tables for the BRL.  Even with the differential analyzer working non-stop and over two hundred human computers doing calculations by hand, a complete table of roughly 3,000 trajectories took the BRL thirty days to complete.  Mauchly was uniquely positioned in the organization to understand both the demands being placed on the Moore School’s computing resources and the technology that could greatly increase the efficiency of their work.  He therefore drafted a memorandum in August 1942 entitled “The Use of High Speed Vacuum Tube Devices for Calculating” in an attempt to interest the BRL in greatly speeding up artillery table creation through use of an electronic computer.

Mauchly submitted his memorandum to both the Moore School and the Army Ordnance Department and was ignored by both, most likely due to the continued skepticism over the use of vacuum tubes in large-scale computing projects.  The paper did catch the attention of one important person, however, Lieutenant Herman Goldstine, a mathematics professor from the University of Chicago then serving as the liaison between the BRL and the Moore School human computer training program.  While not one of the initial recipients of the memo, Goldstine became friendly with Mauchly in late 1942 and learned of the professor’s ideas.  Aware of the acute manpower crisis faced by the BRL for creating its ballistic tables, Goldstine urged Mauchly to resubmit his memo and promised he would use all his influence to aid its acceptance.  Therefore, in April 1943, Mauchly submitted a formal proposal for an electronic calculating machine that was quickly approved and given the codename “Project PX.”


John Mauchly (right) and J. Presper Eckert, the men behind ENIAC

Eckert and Mauchly began building the Electronic Numerical Integrator and Computer (ENIAC) in autumn 1943 with a team of roughly a dozen engineers.  Mauchly remained the visionary of the project and was largely responsible for defining its capabilities, while the brilliant engineer Eckert turned that vision into reality.  ENIAC was a unique construction that had more in common with tabulating machines than later electronic computers, as the team decided to store numbers in decimal rather than binary and stored and modified numbers in twenty accumulators, therefore failing to separate the memory and computing elements.  The machine was programmable, though like Colossus this could only be accomplished through rewiring, as the delay of waiting for instructions to be read from a tape reader was unacceptable in a machine operating at electronic speed.  The computer was powerful for its time, driven by 18,000 vacuum tubes, 70,000 resistors, 10,000 capacitors, 6,000 switches, and 1,500 relays, and could output a complete artillery table in just fifteen minutes.  The entire computer took up 1,800 square feet of floor space, consumed 150 kilowatts of power, and generated an enormous amount of heat.  Costing roughly $500,000, ENIAC was completed in November 1945 and successfully ran its first program the following month.

Unlike the previously discussed Z3, Colossus, and ABC computers, the ENIAC was announced to the general public with much fanfare in February 1946, was examined by many other scientists and engineers, and became the subject of a series of lectures held at the Moore School over eight weeks in the summer of 1946 in which other aspiring computer engineers could learn about the machine in detail.  While it was completed too late to have much impact on the war effort and exerted virtually no influence on future computers from a design perspective, the ENIAC stands as the most important of the early computers because it proved to the world at large that vacuum tube electronic computers were possible and served as the impetus for later computer projects.  Indeed, even before the ENIAC had been completed, Eckert and Mauchly were moving on to their next computer concept, which would finally introduce the last important piece of the computer puzzle: the stored program.

The First Stored Program Computers


The Manchester Small-Scale Experimental Machine (SSEM), the first stored-program computer to successfully run a program

As previously discussed, electronic computers like the Colossus and ENIAC were limited in their general utility because they could only be configured to run a different program by actually rewiring the machine, as there were no input devices capable of running at electronic speeds.  This bottleneck could be eliminated, however, if the programs themselves were also stored in memory alongside the numbers they were manipulating.  In theory, the binary numeral system made this feasible since the instructions could be represented through symbolic logic as a series of “yes or no,” “on or off,” “1 or 0” propositions, but in reality the amount of storage needed would overwhelm the technology of the day.  The mighty ENIAC with its 18,000 vacuum tubes could only store 200 characters in memory.  This was fine if all you needed to store were a few five- or ten-digit numbers at a time, but instruction sets would require thousands of characters.  By the end of World War II, the early computer pioneers of both Great Britain and the United States began tackling this problem independently.

The brilliant British mathematician Alan Turing, who has already been mentioned several times in this blog for both his code breaking and early chess programming feats, first articulated the stored program concept.  In April 1936, Turing completed a paper entitled “On Computable Numbers, with an Application to the Entscheidungsproblem” as a response to a lecture by Max Newman he attended at Cambridge in 1935.  In a time when the central computing paradigm revolved around analog computers tailored to specific problems, Turing envisioned a device called the Universal Turing Machine consisting of a scanner reading an endless roll of paper tape. The tape would be divided into individual squares that could either be blank or contain a symbol.  By reading these symbols based on a simple set of hardwired instructions and following any coded instructions conveyed by the symbols themselves, the machine would be able to carry out any calculation possible by a human computer, output the results, and even incorporate those results into a new set of calculations.  This concept of a machine reacting to data in memory that could consist of both instructions and numbers to be manipulated encapsulates the basic operation of a stored program computer.
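
The whole scheme is simple enough to capture in a few lines of code.  The sketch below is a bare-bones interpreter for such a machine; the rule format and the sample program, which just appends a 1 to a block of 1s, are my own illustration rather than anything taken from Turing’s paper:

# A minimal Turing machine: a scanner, a tape of squares, and a fixed rule
# table saying what to write, which way to move, and which state comes next.

def run_turing_machine(rules, tape, state="start", pos=0, max_steps=1000):
    tape = dict(enumerate(tape))               # sparse model of an "endless" tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, " ")            # blank squares read as " "
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip()

# Rules: scan right over a block of 1s, append one more 1, then halt.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", " "): ("1", "R", "halt"),
}

print(run_turing_machine(rules, "111"))   # -> "1111"

Everything the machine “knows” lives in its rule table and on its tape, which is precisely the property that makes the concept universal: a single machine whose rules interpret descriptions of other machines can simulate any of them.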

Turing was unable to act on his theoretical machine with the technology available to him at the time, but when he first saw the Colossus computer in operation at Bletchley Park, he realized that electronics would make such a device possible.  In 1945, Turing moved from Bletchley Park to the National Physical Laboratory (NPL), where late in the year he outlined the first relatively complete design for a stored-program computer.  Called the Automatic Computing Engine (ACE), the computer defined by Turing was ambitious for its time, leading others at the NPL to fear it could not actually be built.  The organization therefore commissioned a smaller test model instead called the Pilot ACE.  Ultimately, Turing left the NPL in frustration over the slow progress of building the Pilot ACE, which was not completed until 1950 and was therefore preceded by several other stored program computers.  As a result, Turing, despite being the first to articulate the stored program concept, exerted little influence over how the stored program concept was implemented.

One of the first people to whom Turing gave a copy of his landmark 1936 paper was its principal inspiration, Max Newman.  Upon reading it, Newman became interested in building a Universal Turing Machine himself.  Indeed, he actually tried to interest Tommy Flowers in the paper while he was building his Colossi for the Newmanry at Bletchley Park, but Flowers was an engineer, not a mathematician or logician, and by his own admission did not really understand Turing’s theories.  As early as 1944, however, Newman himself was expressing his enthusiasm about taking what had been learned about electronics during the war and establishing a project to build a Universal Turing Machine at the war’s conclusion.

In September 1945, Newman took the Fielden Chair of Mathematics at Manchester University and soon after applied for a grant from the Royal Society to establish the Computing Machine Laboratory at the university.  After the grant was approved in May 1946, Newman had portions of the dismantled Colossi shipped to Manchester for reference and began assembling a team to tackle a stored-program computer project.  Perhaps the most important members of the team were electrical engineers Freddie Williams and Tom Kilburn.  While working on radar during the war, the duo developed a storage method in which a cathode ray tube can “remember” a piece of information by virtue of firing an electron “dot” onto the surface of the tube, thus creating a persistent charge well.  By placing a metal plate against the surface of the tube, this data can be “read” in the form of a voltage pulse transferred to the plate whenever a charge well is created or eliminated by drawing or erasing a dot.  Originally developed to eliminate stationary background objects from a radar display, a Williams tube could also serve as computer memory and store 1,024 characters.  As any particular dot on the tube could be read at any given time, the Williams tube was an early form of random access memory (RAM).

In June 1948, Williams and Kilburn completed the Manchester Small Scale Experimental Machine (SSEM), which was specifically built to test the viability of the Williams Tube as a computer memory device.  While this computer contained only 550 tubes and was therefore not practical for actual computing projects, the SSEM was the first device in the world with all the characteristics of a stored program computer and proved the viability of Williams Tube memory.  Building on this work, the team completed the Manchester Mark 1 computer in October 1949, which contained 4,050 tubes and used custom-built CRTs from the British industrial conglomerate General Electric Company (GEC) to increase the reliability of the memory.


John von Neumann stands next to the IAS Machine, which he developed based on his consulting work on the Electronic Discrete Variable Automatic Computer (EDVAC), the first stored-program computer in the United States

Meanwhile, at the Moore School Eckert and Mauchly were already beginning to ponder building a computer superior to the ENIAC by the middle of 1944.  The duo felt the most serious limitation of the computer was its paltry storage, and like Newman in England, they turned to radar technology for a solution.  Before joining the ENIAC project, Eckert had used mercury delay lines to devise the first practical method of eliminating stationary objects from a radar display.  Basically, rather than displaying the result of a single pulse on the screen, the radar would compare two pulses, one of which was delayed by passing it through a column of mercury, allowing both pulses to arrive at the same time, with the radar screen displaying only those objects that were in different locations between the two pulses.  Eckert realized that using additional electronic components to keep the delayed pulse trapped in the mercury would allow it to function as a form of computer memory.
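
In software terms the scheme behaves like a recirculating queue: bits march through the mercury in sequence, are re-amplified at the far end, and are fed back into the start of the column, so any given bit can only be read or changed at the moment it emerges.  A toy model of the principle (my own illustration, not Eckert’s engineering):

from collections import deque

class DelayLine:
    def __init__(self, bits):
        self.line = deque(bits)   # pulses currently travelling through the mercury

    def tick(self):
        # The pulse reaching the far end is detected, re-shaped, and
        # re-injected at the front, keeping the data in circulation.
        bit = self.line.popleft()
        self.line.append(bit)
        return bit

    def read_all(self):
        # Reading the whole line takes one full circulation: access is
        # serial, unlike the random access of the Williams tube above.
        return [self.tick() for _ in range(len(self.line))]

line = DelayLine([1, 0, 1, 1, 0, 0, 1, 0])
print(line.read_all())   # [1, 0, 1, 1, 0, 0, 1, 0]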

The effort to create a better computer received a boost when Herman Goldstine had a chance encounter with physicist John von Neumann at the Aberdeen railroad station.  A brilliant Hungarian emigre teaching at Princeton, von Neumann was consulting on several government war programs, including the Manhattan Project, but had not been aware of the ENIAC.  When Goldstine started discussing the computer on the station platform, von Neumann took an immediate interest and asked for access to the project.  Impressed by what he saw, von Neumann not only used his influence to help gain the BRL’s approval for Project PY to create the improved machine, he also held several meetings with Eckert and Mauchly in which he helped define the basic design of the computer.

The extent of von Neumann’s contribution to the Electronic Discrete Variable Automatic Computer (EDVAC) remains controversial.  Because the eminent scientist penned the first published general overview of the computer in May 1945, entitled “First Draft of a Report on the EDVAC,” the stored program concept articulated therein came to be called the “von Neumann architecture.”  In truth, the realization that the increased memory provided by mercury delay lines would allow both instructions and numbers to be stored in memory occurred during meetings between Eckert, Mauchly, and von Neumann, and his contributions were probably not definitive.  Von Neumann did, however, play a critical role in defining the five basic elements of the computer — the input, the output, the control unit, the arithmetic unit, and the memory — which remain the basic building blocks of the modern computer.  It is also through von Neumann, who was keenly interested in the human brain, that the term “memory” entered common use in a computing context.  Previously, everyone from Babbage forward had used the term “storage” instead.
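
Those five elements still describe the fetch-and-execute cycle at the heart of every modern processor.  The sketch below is a deliberately tiny illustration of the idea (the invented “machine language” is my own example, not taken from the EDVAC report): instructions and data sit side by side in the same memory, the control unit fetches and decodes each instruction in turn, and the arithmetic unit works on an accumulator.

# Memory holds the program and its data side by side, as von Neumann described.
memory = [
    ("LOAD",  5),   # 0: copy the value at address 5 into the accumulator
    ("ADD",   6),   # 1: add the value at address 6
    ("STORE", 7),   # 2: write the accumulator back to address 7
    ("PRINT", 7),   # 3: output the value at address 7
    ("HALT",  0),   # 4: stop
    2, 3, 0,        # 5-7: the data
]

acc = 0             # arithmetic unit's accumulator
pc = 0              # control unit's program counter
while True:
    op, addr = memory[pc]       # fetch and decode the next instruction
    pc += 1
    if op == "LOAD":
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "PRINT":
        print(memory[addr])     # output unit: prints 5
    elif op == "HALT":
        break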

The EDVAC project commenced in April 1946, but the departure of Eckert and Mauchly with most of their senior engineers soon after disrupted the project, so the computer was not completed until August 1949 and only became fully operational in 1951 after several problems with the initial design were solved.  It contained 6,000 vacuum tubes, 12,000 diodes, and two sets of 64 mercury delay lines capable of storing eight words per line, for a total storage capacity of 1,024 words.  Like the ENIAC, EDVAC cost roughly $500,000 to build.

The Electronic Delay Storage Automatic Calculator (EDSAC)

Because of the disruptions caused by Eckert and Mauchly’s departures, the EDVAC was not actually the first completed stored-program computer conforming to von Neumann’s report.  In May 1946, computing entrepreneur L.J. Comrie visited the Moore School to view the ENIAC and came away with a copy of the von Neumann EDVAC report.  Upon his return to England, he brought the report to physicist Maurice Wilkes, who had established a computing laboratory at Cambridge in 1937 but had made little progress in computing before World War II.  Wilkes devoured the report in an evening and then paid his own way to the United States so he could attend the Moore School lectures.  Although he arrived late and only managed to attend the final two weeks of the course, Wilkes was inspired to initiate his own stored-program computer project at Cambridge, the Electronic Delay Storage Automatic Calculator (EDSAC).  Unlike the leaders of the competing computer projects at the NPL and Manchester University, Wilkes felt that completing a computer was more important than advancing computer technology, so he designed a machine of only modest capability and used delay line memory rather than the newer Williams Tubes developed at Manchester.  While this resulted in a less powerful computer than some of its contemporaries, it did allow the EDSAC to become the first practical stored-program computer when it was completed in May 1949.

Meanwhile, after concluding his consulting work at the Moore School, John von Neumann established his own stored-program computer project in late 1945 at the Institute for Advanced Study (IAS) in Princeton, New Jersey.  Primarily designed by Julian Bigelow, the IAS Machine employed 3,000 vacuum tubes and could hold 1,024 40-bit words in its Williams Tube memory.  Although the machine was not completed until June 1952, its functional plan was published in the late 1940s and widely disseminated.  As a result, the IAS Machine became the template for many of the scientific computers built in the 1950s, including the MANIAC, JOHNNIAC, MIDAC, and MIDSAC machines that hosted some of the earliest computer games.

With the Moore School lectures about the ENIAC and the publication of the IAS specifications helping to spread interest in electronic computers across the developed world, and with the EDSAC demonstrating that a reliable stored-program computer could actually be built, the stage was now set for the computer to spread beyond a few research laboratories at prestigious universities and become a viable commercial product.

Historical Interlude: The Birth of the Computer Part 1, the Mechanical Age

Before continuing the history of video gaming with the activities of the Tech Model Railroad Club and the creation of the first truly landmark computer game, Spacewar!, it is time to pause and present the first of what I referred to in my introductory post as “historical interludes.”  In order to understand why the video game finally began to spread in the 1960s, it is important to understand the evolution of computer technology and the spread of computing resources.  As we shall see, the giant mainframes of the 1940s and 1950s were neither particularly interactive nor particularly accessible outside of a small elite, which generally prevented the creation of programs that provided feedback quickly and seamlessly enough to create an engaging play experience and discouraged projects not intended to aid serious research or corporate data processing.  By the time work on Spacewar! began in 1961, however, it was possible to occasionally divert computers away from more scholarly pursuits and design a program interesting enough to hold the attention of players for hours at a time.  The next four posts will describe how computing technology reached that point.

Note: Unlike my regular posts, historical interlude posts will focus more on summarizing events and less on critiquing sources or stating exactly where every last fact came from.  They are meant to provide context for developments in video game history, and the information within them will usually be drawn from a small number of secondary sources and not be researched as thoroughly as the video game history posts.  Much of the material in this post is drawn from Computer: A History of the Information Machine by Martin Campbell-Kelly and William Aspray, The Maverick and His Machine: Thomas Watson, Sr. and the Making of IBM by Kevin Maney, and The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution by Walter Isaacson.

Defining the Computer

Human computers working at the NACA High Speed Flight Station in 1949

Before electronics, before calculating machines, even before the Industrial Revolution there were computers, but the term did not mean the same thing it does today.  Before World War II and the emergence of the first electronic digital computers, a computer was a person who performed calculations, generally for a specialized purpose.  As we shall see, most of the early computing machines were created specifically to perform calculations, so as they came to function with less need for human intervention, they naturally took the name “computer” from the profession they quickly replaced.

The computer profession originated with the development of the first mathematical tables in the 16th and 17th centuries, such as the logarithmic tables that reduced complex operations to addition and subtraction (since log xy = log x + log y, a multiplication becomes two table lookups and an addition) and the trigonometric tables that simplified the calculation of angles for fields like surveying and astronomy.  Computers were the people who performed the calculations necessary to produce these tables.  The first permanent table-making project was established in 1766 by Nevil Maskelyne to produce navigational tables that were updated and published annually in the Nautical Almanac, which is still issued today.

Maskelyne relied on freelance computers to perform his calculations, but with the dawning of the Industrial Revolution, a French mathematician named Gaspard de Prony established what was essentially a computing factory in 1791, modeled on the division-of-labor principles espoused by Adam Smith in the Wealth of Nations.  Its purpose was to compile accurate logarithmic and trigonometric tables to aid a new survey of the entirety of France, part of a project to reform the property tax system.  De Prony relied on a small number of skilled mathematicians to define the mathematical formulas and a group of middle managers to organize the tables, so his computers needed only a knowledge of basic addition and subtraction to do their work, reducing the computer to an unskilled laborer.  As the Industrial Revolution progressed, unskilled workers in most fields moved from simple tools to mechanical factory machinery, so it comes as no surprise that one enterprising individual would attempt to bring a mechanical tool to computing as well.

Charles Babbage and the Analytical Engine

Charles Babbage, creator of the first computer design

Charles Babbage was born in 1791 in London.  The son of a banker, Babbage was a generally indifferent student who bounced between several academies and private tutors, but did gain a love of mathematics at an early age and attained sufficient marks to enter Trinity College, Cambridge, in 1810.  While Cambridge was the leading mathematics institution in England, the country as a whole had fallen behind the Continent in sophistication, and Babbage soon came to realize he knew more about math than his instructors.  In an attempt to rectify this situation, Babbage and a group of friends established the Analytical Society to reform the study of mathematics at the university.

After leaving Cambridge in 1814 with a degree in mathematics from Peterhouse, Babbage settled in London, where he quickly gained a reputation as an eminent mathematical philosopher but had difficulty finding steady employment.  He also made several trips to France beginning in 1819, which is where he learned of De Prony’s computer factory.  In 1820, he joined with John Herschel to establish the Astronomical Society and took work supervising the creation of star tables.  Frustrated by the tedious nature of fact-checking the calculations of the computers and preparing the tables for printing, Babbage decided to create a machine that would automate the task.

The Difference Engine would consist of columns of numbered wheels and gears, with each wheel representing a single decimal place and each column storing one number.  Once the initial values were set for each column (the value of the polynomial being tabulated at the starting point and its successive differences), the machine would use the method of finite differences (hence its name) to generate each new table entry automatically through repeated addition, complete the tables, and then send them to a printing device.  Babbage presented his proposed machine to the Royal Society in 1822 and won government funding the next year by arguing that a maritime industrial nation required the most accurate navigational tables possible and that the Difference Engine would be both cheaper to operate and more accurate than an army of human computers.
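
(To make the principle concrete, here is a modern sketch in Python rather than Babbage’s own procedure or notation: after the columns are seeded with the starting value and its successive differences, every further table entry falls out of repeated addition alone.  The function below and its arguments are my own illustrative construction, not a description of the Engine’s mechanics.)

    def difference_table(coeffs, start, step, count):
        """Tabulate a polynomial (coefficients listed lowest power first) by the
        method of finite differences: once the columns are seeded, only addition is used."""
        degree = len(coeffs) - 1

        def f(x):
            return sum(c * x ** k for k, c in enumerate(coeffs))

        # Seed the columns: the starting value and its successive differences.
        seed = [f(start + i * step) for i in range(degree + 1)]
        columns = []
        while seed:
            columns.append(seed[0])
            seed = [b - a for a, b in zip(seed, seed[1:])]

        table = []
        for _ in range(count):
            table.append(columns[0])
            # Each column adds in the one below it, much as the Engine's wheels carried sums upward.
            for i in range(len(columns) - 1):
                columns[i] += columns[i + 1]
        return table

    # Tabulate x^2 + x + 41, a quadratic of the sort Babbage liked to demonstrate with.
    print(difference_table([41, 1, 1], start=0, step=1, count=6))  # [41, 43, 47, 53, 61, 71]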

The initial grant of £1,500 quickly proved insufficient for the task of creating the machine, which was at the very cutting edge of machine tool technology and therefore extremely difficult to fashion components for.  The government nevertheless continued to fund the project for over a decade, ultimately providing £17,000.  By 1833, Babbage was able to construct a miniature version of the Difference Engine that lacked sufficient capacity to actually create tables but did prove the feasibility of the project.  The next year, however, he unwittingly sabotaged himself by proposing an even grander device to the government, the Analytical Engine, undermining the government’s faith in his ability to complete the original project and causing it to withdraw funding and support.  A fully working Difference Engine built to Babbage’s specifications would not be completed until 1991, by which time it was a historical curiosity rather than a useful machine.  In the meantime, Babbage turned his attention to the Analytical Engine, the first theorized device with the capabilities of a modern computer.

A portion of Charles Babbage’s Analytical Engine, which remained unfinished at his death

The Difference Engine was merely a calculating machine that performed addition and subtraction, but the proposed Analytical Engine was a different beast.  Equipped with an arithmetical unit called the “mill” that exhibited many of the features of a modern central processing unit (CPU), the machine would be capable of performing all four basic arithmetic operations.  It would also possess a memory, able to store 1,000 numbers of up to 40 digits each.  Most importantly, it would be program controlled, able to perform a wide variety of tasks based on instructions fed into the machine.  These programs would be entered using punched cards, a recording medium first developed in the 1720s by Basile Bouchon and Jean-Baptiste Falcon to automate textile looms and greatly improved and popularized by Joseph Marie Jacquard in 1801 for the loom that bears his name.  Results could be output to a printer or a curve plotter.  By employing separate memory and computing elements and establishing a method of program control, Babbage outlined the first machine to include all the basic hallmarks of the modern computer.

Babbage sketched out the design of his Analytical Engine between 1834 and 1846.  He then halted work on the project for a decade before returning to the concept in 1856 and continuing to tinker with it right up until his death in 1871.  Unlike with the Difference Engine, however, he was never successful in securing funding from a British Government that remained unconvinced of the device’s utility — as well as unimpressed by Babbage’s inability to complete the first project it had commissioned from him — and thus failed to build a complete working unit.  His project did attract attention in certain circles, however.  Luigi Menabrea, a personal friend and mathematician who later became Prime Minister of Italy, invited Babbage to give a presentation on his Analytical Engine at the University of Turin in 1840 and subsequently published a transcription of the lecture in French in 1842.  This account was translated into English over a nine month period in 1842-43 by another friend of Babbage, Ada Lovelace, the daughter of the celebrated poet Lord Byron.

Ada Lovelace has been a controversial figure in computer history circles.  Born in 1815, she never knew her celebrated father, whom her mother fled shortly after Ada’s birth.  She possessed what appears to have been a decent mathematical mind, but suffered from mental instability and delusions of grandeur that caused her to perceive greater abilities than she actually possessed.  She became a friend and student of noted mathematician Mary Somerville, who was also a friend of Babbage.  It was through this connection that she began attending Babbage’s regular Saturday evening salons in 1834 and came to know the man.  She tried unsuccessfully to convince him to tutor her, but they remained friends and he was happy to show off his machines to her.  Lovelace became a fervent champion of the Analytical Engine and attempted to convince Babbage to make her his partner and publicist for the machine.  It was in this context that she not only took on the translation of the Turin lecture in 1842, but at Babbage’s suggestion also decided to append her own description of how the Analytical Engine differed from the earlier Difference Engine alongside some sample calculations using the machine.

In a section entitled “Notes by the Translator,” which ended up being longer than the translation itself, Lovelace articulated several important general principles of computing, including the recognition that a computer could be programmed and reprogrammed to take on a variety of different tasks and that it could be set to tasks beyond basic math through the use of symbolic logic.  She also outlined a basic structure for programming on the Analytical Engine, becoming the first person to articulate common program elements such as recursive loops and subroutines.  Finally, she included a sample program to calculate a set of Bernoulli numbers using the Analytical Engine.  This last feat has led some people to label Lovelace the first computer programmer, though in truth it appears Babbage created most of this program himself.  Conversely, some people dismiss her contributions entirely, arguing that she was being fed all of her ideas directly by Babbage and had little personal understanding of how his machine worked.  The truth is probably somewhere in the middle.  While calling her the first programmer is probably too much of a stretch, as Babbage had already devised several potential programs himself by that point and contributed significantly to Lovelace’s as well, she still deserves recognition for being the first person to articulate several important elements of computer program structure.  Sadly, she had no chance to make any further mark on computer history, succumbing to uterine cancer in 1852 at the age of thirty-six.
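
(For readers who want a feel for what such a program involves, the sketch below computes a run of Bernoulli numbers in Python using the standard modern recurrence.  It is only meant to illustrate the loop-and-accumulate structure Lovelace described, not to reproduce her operation-card notation or her particular numbering of the terms; the function name and conventions are mine.)

    from fractions import Fraction
    from math import comb

    def bernoulli_numbers(n):
        """Return B_0 through B_n (convention B_1 = -1/2) from the recurrence
        sum over k of C(m+1, k) * B_k = 0, solved for B_m at each step."""
        B = [Fraction(1)]
        for m in range(1, n + 1):
            acc = sum(comb(m + 1, k) * B[k] for k in range(m))
            B.append(-acc / (m + 1))  # each new number depends on all the earlier ones
        return B

    print([str(b) for b in bernoulli_numbers(8)])
    # ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']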

Towards the Modern Office

An office equipped with a Burroughs adding machine in 1907, showcasing some of the mechanical equipment revolutionizing clerical work in the period

Ultimately, the Analytical Engine proved too ambitious, and the ideas articulated by Babbage would have to wait for the dawn of the electronics era to become practical.  In the meantime, however, the Industrial Revolution resulted in great advances in office automation that would birth some of the most important companies of the early computer age.  Unlike the human computer industry and the innovative ideas of Babbage, however, the majority of these advances came not from Europe, but from the United States.

Several explanations have been advanced to explain why the US became the leader in office automation.  Certainly, the country industrialized later than the European powers, meaning businessmen were not burdened with outmoded theories and traditions that hindered innovations in the Old World.  Furthermore, the country had a long history of interest in manufacturing efficiency, dating back as far as Eli Whitney and his concept of using interchangeable parts in firearms in 1801 (Whitney’s role in the creation of interchangeable parts is usually exaggerated, as he was not the first person to propose the method and was never actually able to implement it himself, but he was responsible for introducing the concept to the US Congress and therefore still deserves some credit for its subsequent adoption in the United States).  By the 1880s, this fascination with efficiency had evolved into the “scientific management” principles of Frederick Taylor that aimed to identify best practices through rational, empirical study and employ standardization and training to eliminate waste and inefficiency on the production line.  Before long, these ideals had penetrated the domain of the white-collar worker through the concept of “office rationalization,” in which managers introduced new technologies and systems to maximize productivity in that setting as well.

The first major advance in the drive for office automation was the invention of a practical typewriter.  While several inventors created typing machines in the early nineteenth century, none of these designs gained any traction in the marketplace because using them was slower than writing out a document by hand.  In 1867, however, a retired newspaper editor named Christopher Latham Sholes was inspired by an article in Scientific American describing a mechanical typing device to create one of his own.  By the next year Sholes, with the help of amateur mechanic Carlos Glidden and printer Samuel Soule, had created a prototype for a typing machine using a keyboard and type-basket design that finally allowed typing at a decent speed.  After Soule left the project, Sholes sent typewritten notes to several financiers in an attempt to raise capital to refine the device and prepare for mass production.  A Pennsylvania businessman named James Densmore answered the call and provided the funding necessary to make important improvements such as replacing a frame to hold the paper with a rotating drum and changing the layout of the keyboard to the familiar QWERTY orientation — still used on computer keyboards to this day — to cut down on jamming by spacing out commonly used letters in the typing basket.

After several failed attempts to mass produce the typewriter through smaller companies in the early 1870s, Densmore was able to attract the interest of Philo Remington of the small-arms manufacturer E. Remington & Sons, which had been branching out into other fields such as sewing machines and fire engines in the aftermath of the U.S. Civil War.  First introduced by Remington in 1874, the typewriter sold slowly at first, but as office rationalization took hold in the 1880s, businesses started flocking to the machine.  By 1890 Remington had a virtual monopoly on the new industry and was producing 20,000 machines a year.  In addition to establishing the typewriter in the office, Remington also pioneered the idea of providing after-market service for office products, opening branch offices in major cities where people could not only buy typewriters, but also bring them in for repairs.

With typed loose-leaf pages replacing the traditional “letter book” for office correspondence, companies soon found it necessary to adopt new methods for storing and retrieving documents.  This led to the development of vertical filing using hanging folders stored in upright cabinets, which was first publicly demonstrated by Melville Dewey at the Chicago World’s Fair in 1893.  While vertical filing was superior to the boxes and drawers previously employed in the workplace, it proved woefully inefficient once companies evolved from tracking hundreds of records to tens of thousands.  This time the solution came from James Rand, Sr., a clerk from Tonawanda, New York, who patented a visible index system in which colored signal strips and tabs allowed specific file folders to be found quickly and easily.  Based on this invention, Rand established the Rand Ledger Company in 1898.  His son, James Rand, Jr., joined the business in 1908 and then split off from his father in 1915 after a dispute over advertising spending to market his own record retrieval system based around index cards called the Kardex System.  As the elder Rand neared retirement a decade later, his wife orchestrated a reconciliation between father and son, and their companies merged to form the Rand Kardex Company in 1925.  Two years later, Rand Kardex merged with the Remington Typewriter Company to form Remington Rand, which became the largest business machine company in the world.

A Burroughs “adder-lister,” one of the first commercially successful mechanical calculators

A second important invention of the late nineteenth century was the first practical calculator.  Mechanical adding machines had existed as far back as the 17th century, when Blaise Pascal completed his Pascaline in 1645 and Gottfried Leibniz invented the first calculator capable of performing all four basic functions, the Stepped Reckoner, in 1692, but the underlying technology remained fragile and unreliable and therefore unsuited to regular use despite continued refinements over the next century.  In 1820, the calculator was commercialized for the first time by Thomas de Colmar, but production of his Arithmometer lasted only until 1822.  After making several changes, Thomas began offering his machine to the public again in 1851, but while the Arithmometer gained a reputation for both sturdiness and accuracy, production never exceeded a few dozen a year over the next three decades as the calculator remained too slow and impractical for use in a business setting.

The main speed bottleneck of the early adding machines was that they all required the setting of dials and levers to use, making them far more cumbersome for bookkeepers than just doing the sums by hand.  The man who first solved this problem was Dorr Felt, a Chicago machinist who replaced the dials with keys similar to those found on a typewriter.  Felt’s Comptometer, completed in 1885, arranged keys labelled 0 to 9 across ten columns that each corresponded to a single digit of a number, allowing figures to be entered rapidly with just one hand.  In 1887, Felt formed the Felt & Tarrant Manufacturing Company with a local manufacturer named Robert Tarrant to mass produce the Comptometer, and by 1900 they were selling over a thousand a year.

While Felt remained important in the calculator business throughout the early twentieth century, he was ultimately eclipsed by another inventor.  William S. Burroughs, the son of a St. Louis mechanic, was employed as a clerk at a bank but suffered from health problems brought on by spending hours hunched over columns adding figures.  Like Felt, he decided to create a mechanical adding machine using keys to improve this process, but he also added another key advance to his “adder-lister,” the ability to print the numbers as they were entered so there would be a permanent record of every financial transaction.  In 1886, Burroughs established the American Arithmometer Company to market his adding machine, which was specifically targeted at banks and clearing houses and was selling at a rate of several hundred a year by 1895.  Burroughs died in 1898, but the company lived on and relocated to Detroit in 1904 after it outgrew its premises in St. Louis, changing its name to the Burroughs Adding Machine Company in honor of its founder.  At the time of the move, Burroughs was selling 4,500 machines a year.  Just four years later, that number had risen to 13,000.

John H. Patterson, founder of the National Cash Register Company (NCR)

The adding machine was one of two important money management devices invented in this period, with the other being the mechanical cash register.  This device was invented in 1879 by James Ritty, a Dayton saloon owner who feared his staff was stealing from him, and constructed by his brother, John.  Inspired by a tool that counted the revolutions of the propeller on a steamship, “Ritty’s Incorruptible Cashier” required the operator to enter each transaction using a keypad, displayed each total entered for all to see, and printed the results on a roll of paper, allowing the owner to compare the cash taken in to the recorded amounts.  Ritty attempted to interest other business owners in his machine, but proved unsuccessful and ultimately sold the business to Jacob Eckert of Cincinnati in 1881.  Eckert added a cash drawer to the machine and established the National Manufacturing Company, but he was barely more successful than the Rittys.  Therefore, in 1884 he sold out to John Patterson, who established the National Cash Register Company (NCR).

John Henry Patterson was born on a farm outside Dayton, Ohio, and entered the coal trade after graduating from Dartmouth College.  While serving as the general manager of the Southern Coal and Iron Company, Patterson was tasked with running the company store and became one of Ritty’s earliest cash register customers.  After being outmaneuvered in the coal trade, Patterson sold his business interests and used the proceeds to buy NCR.  A natural salesman, Patterson created and/or popularized nearly every important modern sales practice while running NCR.  He established sales territories and quotas for his salesmen, paid them a generous commission, and rewarded those who met their quotas with an annual sales convention.  He also instituted formal sales training and produced sales literature that included sample scripts, creating the first known canned sales pitch.  Like Remington, he established a network of dealerships that provided after-market services to build customer loyalty, but he also advertised through direct mailings, another unusual practice.  Understanding that NCR could only stay on top of the business by continuing to innovate, Patterson also established an “innovations department” in 1888, one of the earliest permanent corporate research & development organizations in the world.  In an era when factory work was mostly still done in crowded “sweatshops,” Patterson constructed a glass-walled factory, set amid beautifully landscaped grounds, that let in ample light.

While Patterson seemed to genuinely care for the welfare of his workers, however, he also had a strong desire to control every aspect of their lives.  He manipulated subordinates constantly, hired and fired individuals for unfathomable reasons, instituted a strict physical fitness regimen that all employees were expected to follow, and established rules of conduct for everything from tipping waiters to buying neckties.  For all his faults, however, his innovative sales techniques created a juggernaut.  By 1900, the company was selling 25,000 cash registers a year, and by 1910 annual sales had risen to 100,000.  By 1928, six years after Patterson’s death, NCR was the second largest office-machine supplier in the world with annual sales of $50 million, just behind Remington Rand at $60 million and comfortably ahead of number three Burroughs at $32 million.  All three companies were well ahead of the number four company, a small firm called International Business Machines, or IBM.

Computing, Tabulating, and Recording

IBM, which eventually rose to dominance in the office machine and data processing industries, cannot be traced back to a single origin, for it began as a holding company that brought together several firms specializing in measuring and processing information.  There were three key people responsible for shaping the company in its early years: Herman Hollerith, Charles Flint, and Tom Watson, Sr.

Herman Hollerith, whose tabulating machine laid the groundwork for the company that became IBM

Born in Buffalo, New York, in 1860, Herman Hollerith pursued an education as a mining engineer, culminating in a Ph.D from Columbia University in 1890.  One of Hollerith’s professors at Columbia also served as an adviser to the Bureau of the Census in Washington, introducing Hollerith to the largest data processing organization in the United States.  At the time, the Census Bureau was in crisis as traditional methods of processing census forms failed to keep pace with a growing population.  The 1880 census, processed entirely by hand using tally sheets, took the bureau seven years to complete.  With the population of the country continuing to expand rapidly, the 1890 census appeared poised to take even longer.  To attack this problem, the new superintendent of the census, Robert Porter, held a competition to find a faster and more efficient way to count the U.S. population.

Three finalists demonstrated solutions for Porter in 1889.  Two of them created systems using colored ink or cards to allow data to be sorted more efficiently, but these were still manual systems.  Hollerith, on the other hand, inspired by the ticket punches used by train conductors, developed a system in which the statistical information was recorded on punched cards that were quickly tallied by a tabulating machine of his own design.  Cards were placed in this machine one at a time and pressed with an apparatus containing 288 retractable pins.  Any pin that encountered a hole in the card would complete an electrical circuit and advance one of forty tallies.  Using Hollerith’s machines, the Census Bureau was able to complete its work in just two and a half years.
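
(In modern terms, the press-and-tally cycle amounts to the short routine below.  This is an illustrative sketch only; the mapping of card positions to census categories is invented for the example and does not reflect Hollerith’s actual card layout.)

    from collections import Counter

    # Hypothetical assignment of card hole positions to census categories.
    DIALS = {"male": 0, "female": 1, "under_21": 2, "age_21_plus": 3}

    def tabulate(cards):
        """Press each card once; every pin that finds a hole closes its circuit
        and advances the corresponding dial by one."""
        dials = Counter()
        for holes in cards:  # one card at a time, as on the machine
            for category, position in DIALS.items():
                if position in holes:
                    dials[category] += 1
        return dials

    cards = [{0, 3}, {1, 2}, {0, 2}]  # three punched cards, each a set of hole positions
    print(tabulate(cards))  # Counter({'male': 2, 'under_21': 2, ...})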

As the 1890 census began to wind down, Hollerith re-purposed his tabulating system for use by businesses and incorporated the Tabulating Machine Company in December 1896.  He remained focused on the census, however, until President McKinley’s assassination in 1901 resulted in the appointment of a new superintendent who chose to go with a different company for 1910.  In the meantime, Hollerith refined his system by implementing a three-machine setup consisting of a keypunch to put the holes in the cards, a tabulator to tally figures, and a sorting machine to place the cards in sequence.  By 1911, Hollerith had roughly one hundred customers and the business was continuing to expand, but his health was failing, leading him to entertain an offer to sell from an influential financier named Charles Flint.

Charles Ranlett Flint, the man who forged IBM

Charles Ranlett Flint, a self-made man born into a family of shipbuilders, started his first business at 18 on the docks of his hometown of Thomaston, Maine.  From there, he secured a job with a trader named William Grace by offering to work for free.  In 1872, Grace made Flint a partner in his new W.R. Grace & Co. shipping and trading firm, which still exists today as a chemical and construction materials conglomerate.  During this period, Flint acted as a commission agent in South America dealing in both arms and raw materials.  He also became keenly interested in new technologies such as the automobile, light bulb, and airplane.

In 1892, Flint leveraged his international trading contacts to pull together a number of rubber exporters into a trust called U.S. Rubber.  This began a period of intense monopoly building by Flint across a number of industries.  By 1901, Flint’s growing roster of trusts included the International Time Recording Company (ITR) of Endicott, New York, based around the recently invented time clock that allowed employers to easily track the hours worked by their employees, and the Computing Scale Company of America of Dayton, Ohio, based around scales that would both weigh items by the pound and compute their total cost.  While ITR proved modestly successful, the Computing Scale Company ended up an abject failure.  In an attempt to salvage his poorly performing concern, Flint decided to define a new, larger market of information recording machines for businesses and merge ITR and Computing Scale under the umbrella of a single holding company.  Feeling Hollerith’s company fit well into this scheme, Flint purchased it as well in 1911 and folded the three companies into the new Computing-Tabulating-Recording Company (C-T-R).  The holding company approach did not work, however, as C-T-R was an unwieldy organization consisting of three subsidiaries spread across five cities with managers that ignored each other at best and actively plotted against each other at worst.  Furthermore, the company was saddled with a large debt, and its component parts could not leverage their positions in a trust to create superior integration or economies of scale because their products and customers were too different.  By 1914, C-T-R was worth only $3 million and carried a debt of $6.5 million.  Flint’s experiment had clearly failed, so he brought in a new general manager to turn the company around.  That man was Thomas Watson, Sr.

Thomas Watson, Sr., the man who built IBM into a corporate giant

By the time Flint hired him for C-T-R, Watson already had a reputation as a stellar salesman, but he was also tainted by a court case brought over monopolistic practices.  Born on a farm in south central New York State, Watson tried his hand as both a bookkeeper and a salesman with various outfits, but had trouble holding down steady employment.  After his latest venture, a butcher’s shop in Buffalo, failed in 1896, Watson trudged down to the local NCR office to transfer the installment payments on the store’s cash register to the new owner.  While there, he struck up a conversation with a salesman named John Range and kept pestering him periodically until Range finally offered him a job.  Within nine months, Watson went from sales apprentice to full sales agent as he finally seemed to find his calling.  Four years later, he was transferred to the struggling NCR branch in Rochester, New York, which he managed to turn around.  This brought him to the attention of John Patterson in Dayton, who tapped Watson for a special assignment.

By 1903, when Patterson summoned Watson, NCR was experiencing fierce competition from a growing second-hand cash register market.  NCR cash registers were both durable and long-lasting, so enterprising businessmen had begun buying up used cash registers from stores that were upgrading or going out of business and then undercutting NCR’s prices on new machines.  For the controlling monopolist Patterson, this was unacceptable.  His solution was to create his own used cash register business that would buy old machines for higher prices than other outlets and sell them cheaper, making up the lost profits through funding directly from NCR.  Once the competition had been driven out of business, prices could be raised and the business would start turning a profit.  Patterson tapped Watson to control this business.  For legal reasons, Patterson kept the connection between NCR and the new Watson business a secret.

Between 1903 and 1908, Watson slowly expanded his used cash register business across the country, creating an excellent new profit-center for NCR.  His reward was a posting back at headquarters in Dayton as an assistant sales manager, where he soon became Patterson’s protégé and absorbed his innovative sales techniques.  By 1910, Watson had been promoted to sales manager, where his personable and less-controlling management style created a welcome contrast to Patterson and encouraged flexibility and creativity among the 900-strong NCR sales force, helping to double the company’s 1909 sales within two years.

As quickly as Watson rose at NCR, however, he fell even faster.  In 1912 the Taft administration, amid a general crusade against corporate trusts, brought criminal charges against Patterson, Watson, and other high-ranking NCR executives for violations of the Sherman Anti-Trust Act.  At the end of a three-month trial, Watson was found guilty along with Patterson and all but one of their co-defendants on February 13, 1913 and now faced the prospect of jail time.  Worse, the ordeal appears to have soured the ever-changeable Patterson on the executives indicted with him, as they were all chased out of the company within a year.  Watson himself departed NCR in November 1913 after 17 years of service.  Some accounts state that Watson was fired, but it appears that the separation was more by mutual agreement.  Either way, it was a humbled and disgraced Watson that Charles Flint tapped to save C-T-R in early 1914.  Things began looking up the next year, however, when an appeal resulted in an order for a new trial.  All the defendants save Watson settled with the government, which decided pursuing Watson alone was not worth the effort.  Thus cleared of all wrongdoing, Watson was elevated to the presidency of C-T-R.

Watson saved and reinvented C-T-R through a combination of Patterson’s techniques and his own charisma and personality.  He reinvigorated the sales force through quotas, generous commissions, and conventions much like Patterson.  A lover of the finer things in life, he insisted that C-T-R staff always be impeccably dressed and polite, shaping the popular image of the blue-suited IBM sales person that would last for decades.  He changed the company culture by emphasizing the importance of every individual in the corporation and building a sense of company pride and loyalty.  Finally, he was fortunate to take over at a time when the outbreak of World War I and a booming U.S. economy led to increased demand for tabulating machines both from businesses and the U.S. government.  Between 1914 and 1917, revenues doubled from $4.2 million to $8.3 million, and by 1920 they had reached $14 million.

What really set IBM apart, however, was the R&D operation Watson established based on the model of NCR’s innovations department.  At the time Watson arrived, C-T-R remained the leading seller of tabulating machines, but the competition was rapidly gaining market share on the back of superior products.  Hollerith, who remained as a consultant to C-T-R after Flint bought his company, showed little interest in developing new products, causing the company’s technology to fall further and further behind.  The company’s only other senior technical employee, Eugene Ford, occasionally came up with improvements, but he could not actually put them into practice without the approval of Hollerith, which was rarely forthcoming.  Watson moved Ford into a New York loft and ordered him to begin hiring additional engineers to develop new products.

Ford’s first hire, Clair Lake, developed the company’s first printing tabulator in the early 1920s, which gave the company a machine that could rival the competition in both technology and user friendliness.  Another early hire, Fred Carroll from NCR, developed the Carroll Press, which allowed C-T-R to cheaply mass produce the punched cards used in the tabulating machines and therefore enjoy a huge profit margin on the product.  In the late 1920s, Lake created a new patentable punched-card design that would only work in IBM machines, which locked in customers, who were unlikely to switch to a competing company and have to redo millions of cards.  Perhaps the most important hire was James Bryce, who joined the company in 1917, rose to chief engineer in 1922, and ended up with over four hundred patents to his name.

After a brief hiccup in 1921-22, when the U.S. endured a postwar recession, C-T-R, which Watson renamed International Business Machines (IBM) in 1924, experienced rapid growth for the rest of the decade, reaching $20 million in revenue by 1928.  While this placed IBM behind Remington Rand, NCR, and Burroughs, the talented R&D group and highly effective sales force built by Watson left the company perfectly poised to rise to a dominant position in the 1930s and subsequently conquer the new computer market of the 1950s.