
Historical Interlude: From the Mainframe to the Minicomputer Part 2, IBM and the Seven Dwarfs

The computer began life in the 1940s as a scientific device designed to perform complex calculations and solve difficult equations.  In the 1950s, the United States continued to fund scientific computing projects at government organizations, defense contractors, and universities, many of them based around the IAS architecture derived from the EDVAC and created by John von Neumann’s team at Princeton.  Some of the earliest for-profit computer companies emerged out of this scientific work, such as the previously discussed Engineering Research Associates, the Hawthorne, California-based Computer Research Corporation, which spun out of a Northrop Aircraft project to build a computer for the Air Force in 1952, and the Pasadena-based ElectroData Corporation, which spun out of the Consolidated Engineering Corporation that same year.  All of these companies remained fairly small and did not sell many computers.

Instead, it was Remington Rand that identified the future path of computing when it launched the UNIVAC I, which was adopted by businesses to perform data processing.  Once corporate America understood the computer to be a capable business machine and not just an expensive calculator, a wide array of office equipment and electronics companies entered the computer industry in the mid-1950s, often buying out the pioneering computer startups to gain a foothold.  Remington Rand dominated this market at first, but as discussed previously, IBM soon vaulted ahead as it acquired computer design and manufacturing expertise by participating in the SAGE project and unleashed its world-class sales and service organizations.  Remington Rand attempted to compensate by merging with Sperry Gyroscope, which had both a strong relationship with the military and a more robust sales force, to form Sperry Rand in 1955, but the company never seriously challenged IBM again.

While IBM maintained its lead in the computer industry, however, by the beginning of the 1960s the company faced threats to its dominance at both the low end and the high end of the market from innovative machines based around new technologies like the transistor.  Fearing these new challengers could significantly damage IBM, Tom Watson Jr. decided to bet the company on an expensive and technically complex project to offer a complete line of compatible computers that could not only be tailored to a customer’s individual needs, but could also be easily modified or upgraded as those needs changed over time.  This gamble paid off handsomely, and by 1970 IBM controlled well over seventy percent of the market, with most of the remainder split among a group of competitors dubbed the “seven dwarfs” due to their minuscule individual market shares.  In the process, IBM succeeded in transforming the computer from a luxury item only operated by the largest firms into a necessary business appliance as computers became an integral part of society.

Note: Yet again we have a historical interlude post that summarizes key events outside of the video game industry that nevertheless had a significant impact upon it.  The information in this post is largely drawn from Computer: A History of the Information Machine by Martin Campbell-Kelly and William Aspray, A History of Modern Computing by Paul Ceruzzi, Forbes Greatest Technology Stories: Inspiring Tales of the Entrepreneurs and Inventors Who Revolutionized Modern Business by Jeffrey Young, IBM’s Early Computers by Charles Bashe, Lyle Johnson, John Palmer, and Emerson Pugh, and Building IBM: Shaping an Industry and Its Technology by Emerson Pugh.

IBM Embraces the Transistor


The IBM 1401, the first mainframe to sell over 10,000 units

Throughout most of its history in computers, IBM has been known more for evolution than revolution.  Rarely first with a new concept, IBM excelled at building designs based around proven technology and then turning its sales force loose to overwhelm the competition.  Occasionally, however, IBM engineers have produced important breakthroughs in computer design.  Perhaps none of these were more significant than the company’s invention of the disk drive.

On the earliest computers, mass data storage was accomplished through two primary methods: magnetic tape or magnetic drums.  Tape could hold a large amount of data for the time, but it could only be read serially, and it was a fragile medium.  Drums were more durable and had the added benefit of being random access — that is, any point of data on the drum could be read at any time — but they were low capacity and expensive.  As early as the 1940s, J. Presper Eckert had explored using magnetic disks rather than drums, which would be cheaper and feature a greater storage capacity due to a larger surface area, but there were numerous technical hurdles that needed to be ironed out.  Foremost among these was the technology to read the disks.  A drum memory array used rigid read-write heads that could be readily secured, though at high cost.  A disk system required a more delicate stylus to read the disks, and the constant spinning of the disk created a high risk that the stylus would make contact with and damage it.

The team that finally solved these problems at IBM worked not at the primary R&D labs in Endicott or Poughkeepsie, but rather at a relatively new facility in San Jose, California, led by IBM veteran Reynold Johnson.  The San Jose lab had been established in 1952 as an advanced technologies research center free of the influence of the IBM sales department, which had often shut down projects with no immediate practical use.  One of the lab’s first projects was to improve storage for IBM’s existing tabulating equipment.  This task fell to a team led by Arthur Critchlow, who decided based on customer feedback to develop a new random access solution that would allow IBM’s tabulators and low-end computers to not only be useful for data processing, but also for more complicated jobs like inventory management.  After testing a wide variety of memory solutions, Critchlow’s team settled on the magnetic disk as the only viable solution, partially inspired by a similar project at the National Bureau of Standards on which an article had been published in August 1952.

To solve the stylus problem on the drive, Critchlow’s team attached a compressor to the unit that would pump a thin layer of air between the disk and the head.  Later models would take advantage of a phenomenon known as the “boundary layer,” in which the fast motion of the disks themselves generated the air cushion.  After experimenting with a variety of head types and positions throughout 1953 and 1954, the team was ready to complete a final design.  Announced in 1956 as the Model 305 Disk Storage Unit and later renamed RAMAC (for Random Access Memory Accounting Machine), IBM’s first disk drive consisted of fifty 24-inch diameter aluminum disks rotating at 1200 rpm with a storage capacity of five million characters.  Marketed as an add-on to the IBM 650, RAMAC revolutionized data processing by eliminating the time-consuming process of manually sorting information and provided the first compelling reason for small and mid-sized firms to embrace computers and eliminate electro-mechanical tabulating equipment entirely.


The IBM 7090, the company’s first transistorized computer

In August 1958, IBM introduced its latest scientific computer, the IBM 709, which improved on the functionality of the IBM 704.  The 709 continued to depend on vacuum tubes, however, even as competitors were starting to bring the first transistorized computers to market.  While Tom Watson, Jr. and his director of engineering, Wally McDowell, were both excited by the possibilities of transistors from the moment they first learned about them and as early as 1950 charged Ralph Palmer’s Poughkeepsie laboratory to begin working with the devices, individual project managers continued to have the final authority in choosing what parts to use in their machines, and many of them continued to fall back on the more familiar vacuum tube.  In the end, Tom Watson, Jr. had to issue a company-wide mandate in October 1957 that transistors were to be incorporated into all new projects.  Even before that mandate, Palmer had felt that IBM needed a massive project to push its solid-state designs forward, something akin to what Project SAGE had done for IBM’s efforts with vacuum tubes and core memory.  He therefore teamed with Steve Dunwell, who had spent part of 1953 and 1954 in Washington D.C. assessing government computing requirements, to propose a high-speed computer tailored to the ever-increasing computational needs of the military-industrial complex.  A contract was eventually secured with the National Security Agency, and IBM approved “Project Stretch” in August 1955, which was formally established in January 1956 with Dunwell in charge.

Project Stretch experienced a long, difficult, and not completely successful development cycle, but it did achieve Palmer’s goals of greatly improving IBM’s solid-state capabilities, with particularly important innovations including a much faster core memory and a “drift transistor” that was faster than the surface-barrier transistor used in early solid-state computing projects like the TX-0.  As work on Stretch dragged on, however, these advances were first introduced commercially through another product.  In response to Sputnik, the United States Air Force quickly initiated a new Ballistic Missile Early Warning System (BMEWS) project that, like SAGE, would rely on a series of linked computers.  The Air Force mandated, however, that these computers incorporate transistors, so Palmer offered to build a transistorized version of the 709 to meet the project’s needs.  The resulting IBM 7090 Data Processing System, deployed in November 1959 as IBM’s first transistorized computer, provided a six-fold increase in performance over the 709 at only one-third additional cost.  In 1962,  an upgraded version dubbed the 7094 was released with a price of roughly $2 million.  Both computers were well-received, and IBM sold several hundred of them.

Despite the success of its mainframe computer business, IBM in 1960 still derived the majority of its sales from the traditional punched-card business.  While some larger organizations were drawn to the 702 and 705 business computers, their price kept them out of reach of the majority of IBM’s business customers.  Some of these organizations had embraced the low-cost 650 as a data processing solution, leading to over 800 installations of the computer by 1958, but it was actually more expensive and less reliable than IBM’s mainline 407 electric accounting machine.  The advent of the transistor, however, finally provided the opportunity for IBM to leave its tabulating business behind for good.

The impetus for a stored-program computer that could displace traditional tabulating machines initially came from Europe, where IBM did not sell its successful 407 due to import restrictions and high tooling costs.  In 1952, a competitor called the French Bull Company introduced a new calculating machine, the Bull Gamma 3, that used delay-line memory to provide greater storage capacity at a cheaper price than IBM’s electronic calculators and could be joined with a card reader to create a faster accounting machine than anything IBM offered in the European market.  Therefore, IBM’s French and German subsidiaries began lobbying for a new accounting machine to counter this threat.  This led to the launch of two projects in the mid-1950s: the modular accounting calculator (MAC) development project in Poughkeepsie that birthed the 608 electronic calculator and the expensive and relatively unsuccessful 7070 transistorized computer, and the Worldwide Accounting Machine (WWAM) project run out of France and Germany to create an improved traditional accounting machine for the European market.

While the WWAM project had been initiated in Europe, it was soon reassigned to Endicott when the European divisions proved unable to come up with an accounting machine that could meet IBM’s cost targets.  To solve this problem, Endicott engineer Francis Underwood proposed that a low-cost computer be developed instead.  Management approved this concept in early 1958 under the name SPACE — for Stored Program Accounting and Calculating Equipment — and formally announced the product in October 1959 as the IBM 1401 Data Processing System.  With a rental cost of only $2,500 a month (roughly equivalent to a purchase price of $150,000), the transistorized 1401 proved much faster and more reliable than an IBM 650 at a fraction of the cost and was only slightly more expensive than a mid-range 407 accounting machine setup.  More importantly, it shipped with a new chain printer that could output 600 lines per minute, far more than the 150 lines per minute produced by the 407, which relied on obsolete prewar technology.  When the 1401 first went on sale in 1960, IBM projected that it would sell roughly 1,000 units over the machine’s entire lifetime, but its combination of power and price proved irresistible, and by the end of 1961 over 2,000 machines had already been installed.  IBM would eventually deploy 12,000 1401 computers before the model was officially withdrawn in 1971.  Powered by the success of the 1401, IBM’s computer sales finally equaled the sales of punch card products in 1962 and then quickly eclipsed them.  No computer model had ever approached the success of the 1401 before, and as IBM rode the machine to complete dominance of the mainframe industry in the early 1960s, the powder-blue casing of the machine soon inspired a new nickname for the company: Big Blue.

The Dwarfs


The Honeywell 200, which competed with IBM’s 1401 and threatened to destroy its low-end business

In the wake of Remington Rand’s success with the UNIVAC I, more than a dozen old-line firms flocked to the new market.  Companies like Monroe Calculating, Bendix, Royal, Underwood, and Philco rushed to provide computers to the business community, but one by one they fell by the wayside.  Of these firms, Philco probably stood the best chance of being successful due to its invention of the surface barrier transistor, but while its Transac S-1000 — which began life in 1955 as an NSA project called SOLO to build a transistorized version of the UNIVAC 1103 — and S-2000 computers were both capable machines, the company ultimately decided it could not keep up with the fast pace of technological development and abandoned the market like all the rest.  By 1960, only five established companies and one computer startup joined Sperry Rand in attempting to compete with IBM in the mainframe space.  While none of these firms ever succeeded in stealing much market share from Big Blue, most of them found their own product niches and deployed some capable machines that ultimately forced IBM to rethink some of its core computer strategies.

Of the firms that challenged IBM, electronics giants GE and RCA were the largest, with revenues far exceeding those of the computer industry’s market leader, but in a way their size worked against them.  Since neither computers nor office equipment were among either firm’s core competencies, nor integral to either firm’s future success, they never fully committed to the business and therefore never experienced real success.  Unsurprisingly, they were the first of the seven dwarfs to finally call it quits, with GE selling off its computer business in 1970 and RCA following suit in 1971.  Burroughs and NCR, the companies that had long dominated the adding machine and cash register businesses respectively, both entered the market in 1956 after buying out a small startup firm — ElectroData and Computer Research Corporation respectively — and managed to remain relevant by creating computers specifically tailored to their preexisting core customers, the banking sector for Burroughs and the retail sector for NCR.  Sperry Rand ended up serving niche markets as well after failing to compete effectively with IBM, experiencing success in fields such as airline reservation systems.  The biggest threat to IBM’s dominance in this period came from two Minnesota companies: Honeywell and Control Data Corporation (CDC).

Unlike the majority of the companies that persisted in the computer industry, Honeywell came not from the office machine business, but from the electronic control industry.  In 1883, a man named Albert Butz created a device called the “damper flapper” that would sense when a house was becoming cold and cause the flapper on a coal furnace to rise, thus fanning the flames and warming the house.  Butz established a company that did business under a variety of names over the next few years to market his innovation, but he had no particular acumen for business.  In 1891, William Sweatt took over the company and increased sales through door-to-door selling and direct marketing.  In 1909 the company introduced the first controlled thermostat, sold as the “Minnesota Regulator,” and in 1912 Sweatt changed the name of the company to the Minneapolis Heat Regulator Company.  In 1927, a rival firm, Mark C. Honeywell’s Honeywell Heating Specialty Company of Wabash, Indiana, bought out Minneapolis Heat Regulator to form the Minneapolis-Honeywell Regulator Company with Honeywell as president and Sweatt as chairman.  The company continued to expand through acquisitions over the next decade and weathered the Great Depression relatively unscathed.

In 1941, Harold Sweatt, who had succeeded Honeywell as president in 1934, parlayed his company’s expertise in precision measuring devices into several lucrative contracts with the United States military, emerging from World War II as a major defense contractor.  Therefore, the company was approached by fellow defense contractor Raytheon to establish a joint computer subsidiary in 1954.  Incorporated as Datamatic Corporation the next year, the computer company became a wholly-owned subsidiary of Honeywell in 1957 when Raytheon followed so many other companies in exiting the computer industry.  Honeywell delivered its first mainframe, the Datamatic 1000, that same year, but the computer relied on vacuum tubes and was therefore already obsolete by the time it hit the market.  Honeywell temporarily withdrew from the business and went back to the drawing board.  After IBM debuted the 1401, Honeywell triumphantly returned to the business with the H200, which not only took advantage of the latest technology to outperform the 1401 at a comparable price, but also sported full compatibility with IBM’s wildly successful machine, meaning companies could transfer their existing 1401 programs without needing to make any adjustments.  Announced in 1963, the H200 threatened IBM’s control of the low-end of the mainframe market.


William Norris (l) and Seymour Cray, the principal architects of the Control Data Corporation

While Honeywell chipped away at IBM from the bottom of the market, computer startup Control Data Corporation (CDC) — the brainchild of William Norris — threatened to do the same from the top.  Born in Red Cloud, Nebraska, and raised on a farm, Norris became an electronics enthusiast at an early age, building mail-order radio kits and becoming a ham radio operator.  After graduating from the University of Nebraska in 1932 with a degree in electrical engineering, Norris was forced to work on the family farm for two years due to a lack of jobs during the Depression before joining Westinghouse in 1934 to work in the sales department of the company’s x-ray division.  Norris began doing work for the Navy’s Bureau of Ordnance as a civilian in 1940 and enjoyed the work so much that he joined the Naval Reserve and was called to duty at the end of 1941 at the rank of lieutenant commander.  Norris served as part of the CSAW codebreaking operation and became one of the principal advocates for and co-founders of Engineering Research Associates after the war.  By 1957, Norris was feeling stifled by the corporate environment at ERA parent company Sperry Rand, so he left to establish CDC in St. Paul, Minnesota.

Norris provided the business acumen at CDC, but the company’s technical genius was a fellow engineer named Seymour Cray.  Born in Chippewa Falls, Wisconsin, Cray entered the Navy directly after graduating from high school in 1943, serving first as a radio operator in Europe before being transferred to the Pacific theater to participate in code-breaking activities.  After the war, Cray attended the University of Minnesota, graduated with an electrical engineering degree in 1949, and went to work for ERA in 1951.  Cray immediately made his mark by leading the design of the UNIVAC 1103, one of the first commercially successful scientific computers, and soon gained a reputation as an engineering genius able to create simple, yet fast computer designs.  In 1957, Cray and several other engineers followed Norris to CDC.

Unlike some of the more conservative engineers at IBM, Cray understood the significance of the transistor immediately and worked to quickly incorporate it into his computer designs.  The result was CDC’s first computer, the 1604, which was first sold in 1960 and significantly outperformed IBM’s scientific computers.  Armed with Cray’s expertise in computer design, Norris decided to concentrate on building the fastest computers possible and selling them to the scientific and military-industrial communities where IBM’s sales force exerted relatively little influence.  As IBM’s Project Stretch floundered — never meeting its performance targets after being released as the IBM 7030 in 1961 — Cray moved forward with his plans to build the fastest computer yet designed.  Released as the CDC 6600 in 1964, Cray’s machine could perform an astounding three million operations per second, three times as many as the 7030 and more than any other machine would be able to perform until 1969, when another CDC machine, the 7600, outpaced it.  Dubbed a supercomputer, the 6600 became the flagship product of a series of high-speed scientific computers that IBM proved unable to match.  While Big Blue was ultimately forced to cede the top of the market to CDC, however, by the time the 6600 launched, IBM was in the final phases of developing a product line that would extend its dominance over the mainframe business and ensure competitors like CDC and Honeywell would be limited to only niche markets.

System/360


The System/360 family of computers, which extended IBM’s dominance of the mainframe market through the end of the 1960s.

When Tom Watson Jr. finally assumed full control of IBM from his father, he inherited a corporate structure designed to collect as much power and authority in the hands of the CEO as possible.  Unlike Watson Sr., Watson Jr. preferred decentralized management with a small circle of trusted subordinates granted the authority to oversee the day-to-day operation of IBM’s diverse business activities.  Therefore, Watson overhauled the company in November 1956, paring down the number of executives reporting directly to him from seventeen to just five, each of whom oversaw multiple divisions with the new title of “group executive.”  He also formed a Corporate Management Committee consisting of himself and the five group executives to make and execute high-level decisions.  While the responsibilities of individual group executives would change from time to time, this new management structure remained intact for decades.

Foremost among Watson’s new group executives was a vice president named Vin Learson.  A native of Boston, Massachusetts, T. Vincent Learson graduated from Harvard with a degree in mathematics in 1935 and joined IBM as a salesman, where he quickly distinguished himself.  In 1949, Learson was named sales manager of IBM’s Electric Accounting Machine (EAM) Division, and he rose to general sales manager in 1953.  In April 1954, Tom Watson, Jr. named Learson the director of Electronic Data Processing Machines with a mandate to solidify IBM’s new electronic computer business.  After guiding early sales of the 702 computer and establishing an advanced technology group to incorporate core memory and other improvements into the 704 and 705 computers, Learson received another promotion to vice president of sales for the entire company before the end of the year.  During Watson’s 1956 reorganization, he named Learson group executive of the Military Products, Time Equipment, and Special Engineering Products divisions.

During the reorganization, IBM’s entire computer business fell under the new Data Processing Division overseen by group executive L.H. LaMotte.  As IBM’s computer business continued to grow and diversify in the late 1950s, however, it grew too large and unwieldy to contain within a single division, so in 1959 Watson split the operation in two by creating the Data Systems Division in Poughkeepsie, responsible for large systems, and the General Products Division, which took charge of small systems like the 650 and 1401 and incorporated IBM’s other laboratories in Endicott, San Jose, Burlington, Vermont, and Rochester, Minnesota.  Watson then placed these two divisions, along with a new Advanced Systems Development Division, under Learson’s control, believing him to be the only executive capable of propelling IBM’s computer business forward.


Vin Learson, the IBM executive who spearheaded the development of the System/360

When Learson inherited the Data Systems and General Products Divisions, he was thrust into the middle of an all-out war for control of IBM’s computer business.  The Poughkeepsie Laboratory had been established specifically to exploit electronics after World War II and prided itself on being at the cutting edge of IBM’s technology.  The Endicott Laboratory, the oldest R&D division at the company, had often been looked down upon for clinging to older technology, yet by producing both the 650 and the 1401, Endicott was responsible for the majority of IBM’s success in the computer realm.  By 1960, both divisions were looking to update their product lines with more advanced machines.  That September, Endicott announced the 1410, an update to the 1401 that maintained backwards compatibility.  At the same time, Poughkeepsie was hard at work on a new series of four compatible machines designed to serve a variety of business and scientific customers under the 8000 series designation.  Learson, however, wanted to unify the product line from the very low end represented by the 1401 to the extreme high end represented by the 7030 and the forthcoming 8000 computers.  By achieving full compatibility in this manner, IBM could take advantage of economies of scale to drive down the price of individual computer components and software development while also standardizing peripheral devices and streamlining the sales and service organizations that would no longer have to learn multiple systems.  While Learson’s plan was sound in theory, however, forcing two organizations that prided themselves on their independence and competed with each other fiercely to work together would not be easy.

Learson relied heavily on his power as a group executive to transfer employees across both divisions to achieve project unity.  First, he moved Bob Evans, who had been the engineering manager for the 1401 and 1410, from Endicott to Poughkeepsie as the group’s new systems development manager.  Already a big proponent of compatibility, Evans unsurprisingly recommended that the 8000 project be cancelled and a cohesive product line spanning both divisions be initiated in its place.  The lead designer of the 8000 series, Frederick Brooks, vigorously opposed this move, so Learson replaced Brooks’s boss with another ally, Jerrier Haddad, who had led the design of the 701 and recently served as the head of Advanced Systems Development.  Haddad sided with Evans and terminated the 8000 project in May 1961.  Strong resistance remained in some circles, however, most notably from General Products Division head John Haanstra, so in October 1961, Learson assembled a task group called SPREAD (Systems, Planning, Review, Engineering, and Development) consisting of thirteen senior engineering and marketing managers to determine a long-term strategy for IBM’s data processing line.

On December 28, the SPREAD group delivered its final proposal to the executive management committee.  In it, they outlined a series of five compatible processors representing a 200-fold range in performance.  Rather than incorporate the new integrated circuit, the group proposed a proprietary IBM design called Solid Logic Technology (SLT), in which the discrete components of the circuit were mounted on a single ceramic substrate, but were not fully integrated.  By combining the five processors with SLT circuits and core memories of varying speeds, nineteen computer configurations would be possible that would all be fully compatible and interchangeable and could be hooked up to 40 different peripheral devices.  Furthermore, after surveying the needs of business and scientific customers, the SPREAD group realized that other than floating-point capability for scientific calculations, the needs of both groups were nearly identical, so they chose to unify the scientific and business lines rather than market different models for each.  Codenamed the New Product Line (NPL), the SPREAD proposal would allow IBM customers to buy a computer that met their current needs and then easily upgrade or swap components as their needs changed over time at a fraction of the cost of a new system without having to rewrite all their software or replace their peripheral devices.  While not everyone was convinced by the presentation, Watson ultimately authorized the NPL project.

The NPL project was perhaps the largest civilian R&D operation ever undertaken to that point.  Development costs alone were $500 million, and when tooling, manufacturing, and other expenses were taken into account, the cost was far higher.  Design of the five processor models was spread over three facilities, with Poughkeepsie developing the three high-end systems, Endicott developing the lowest-end system, and a facility in Hursley, England, developing the other system.  At the time, IBM manufactured all its own components as well, so additional facilities were charged with churning out SLT circuits, core memories, and storage systems.  To assemble all the systems, IBM invested in six new factories.  In all, IBM spent nearly $5 billion to bring the NPL to market.

To facilitate the completion of the project, Watson elevated two executives to new high-level positions: Vin Learson assumed the new role of senior vice president of sales, and Watson’s younger brother, Arthur, who for years had run IBM’s international arm, the World Trade Corporation, was named senior vice president of research, development, and manufacturing.  This new role was intended to groom the younger Watson to assume the presidency of IBM one day, but the magnitude of the NPL project coupled with Arthur’s inexperience in R&D and manufacturing ultimately overwhelmed him.  As the project fell further and further behind schedule, Learson ultimately had to replace Arthur Watson in order to see the project through to completion.  Therefore, it was Learson who assumed the presidency of IBM in 1966 while Arthur Watson assumed the new and largely honorary role of vice chairman.  His failure to shepherd the NPL project ended any hope Arthur had of continuing the Watson family legacy of running IBM, and he ultimately left the company in 1970 to serve as the United States ambassador to France.

In late 1963, IBM began planning the announcement of its new product line, which now went by the name System/360 — a name chosen because it represented all the points of a compass and emphasized that the product line would fill the needs of all computer users.  Even at this late date, however, acceptance of System/360 within IBM was not assured.  John Haanstra continued to push for an SLT upgrade to the existing 1401 line to satisfy low-end users, which other managers feared would serve to perpetuate the incompatibility problem plaguing IBM’s existing product line.  Furthermore, IBM executives struggled over whether to announce all the models at once and thus risk a significant drop in orders for older systems during the transition period, or phase in each model over the course of several years.  All debate ended when Honeywell announced the H200.  Faced with losing customers to more advanced computers fully compatible with IBM’s existing line, Watson decided in March 1964 to scrap the improved 1401 and launch the entire 360 product line at once.

On April 7, 1964, IBM held press conferences in sixty-three cities across fourteen countries to announce the System/360 to the world.  Demand soon far exceeded supply as within the first two years that System/360 was on the market IBM was only able to fill roughly 4,500 of 9,000 orders.  Headcount at the company rose rapidly as IBM rushed to bring new factories online in response.  In 1965, when actual shipments of the System/360 were just beginning, IBM controlled 65 percent of the computer market and had revenues of $2.5 billion.  By 1967, as IBM ramped up to meet insatiable 360 demand, the company employed nearly a quarter of a million people and raked in $5 billion in revenues.  By 1970, IBM had an installed base of 35,000 computers and held an ironclad grip on the mainframe industry with a market share between seventy and eighty percent; the next year company earnings surpassed $1 billion for the first time.

As batch processing mainframes, the System/360 line and its competitors did not serve as computer game platforms or introduce technology that brought the world closer to a viable video game industry.  System/360 did, however, firmly establish the computer within corporate America and solidified IBM’s place as a computing superpower while facilitating the continuing spread of computing resources and the evolution of computer technology.  Ultimately, this process would culminate in a commercial video game industry in the early 1970s.


Historical Interlude: From the Mainframe to the Minicomputer Part 1, Transistors and Integrated Circuits

So now it’s time to pause again in our examination of video game history to catch up on the technological advances that would culminate in the emergence of an interactive entertainment industry.  As previously discussed, the release and subsequent spread of Spacewar! in 1962 represented the first widespread interest in computer gaming, yet no commercial products would appear before 1971.  In the meantime, computer games continued to be written throughout the 1960s (which will be discussed in a subsequent post), but none of them gained the same wide exposure or popularity as Spacewar!.  Numerous roadblocks prevented the spread of these early computer games ranging from the difficulty of porting programs between systems to the lack of reliable wide area distribution networks, but the primary inhibitor remained cost, as even a relatively cheap $120,000 PDP-1 remained an investment out of the reach of most organizations — let alone the general public — and many computers still cost ten times that amount.

The key to transforming the video game into a commercial product therefore lay in significantly reducing the cost of the hardware involved.  The primary expense in building a computer remained the switching units that defined its internal logic, which in the late 1950s were still generally bulky, power-hungry, temperamental vacuum tubes.  In 1947, John Bardeen and Walter Brattain at Bell Labs demonstrated the solution to the vacuum tube problem in the form of the semiconducting transistor, but as with any new technology there were numerous production and cost issues that had to be overcome before it could completely displace the vacuum tube.  By the early 1960s, the transistor was finally well established in the computer industry, but while it drove down the cost and size of computers like DEC’s PDP-1, a consumer product remained out of reach.  Finally, in late 1958 and early 1959, engineers working independently at two of the most important semiconductor manufacturers in the world discovered how to integrate all of the components of a circuit on one small plate, commonly called a “chip,” paving the way for cost and size reductions that would allow the creation of the first minicomputers.  These machines remained out of reach for the individual consumer, but they could at least be deployed in a public entertainment setting like an arcade.

Note:  Once again, this is a “historical interlude” post that will provide a summary of events drawn from a few secondary sources rather than the in-depth historiographic analysis of my purely game-related posts.  The majority of the information in this post is drawn from Forbes Greatest Technology Stories: Inspiring Tales of the Entrepreneurs and Inventors Who Revolutionized Modern Business by Jeffrey Young, The Man Behind the Microchip: Robert Noyce and the Invention of Silicon Valley by Leslie Berlin, The Intel Trinity: How Robert Noyce, Gordon Moore, and Andy Grove Built the World’s Most Important Company by Michael Malone, an article from the July 1982 issue of Texas Monthly called “The Texas Edison” by T.R. Reid, and The Silicon Engine, an online exhibit maintained by the Computer History Museum.

The Transistor Enters Mass Production


Gordon Teal (l), whose crystal-growing techniques were crucial to mass producing the transistor

As previously discussed, on December 23, 1947, William Shockley, John Bardeen, and Walter Brattain demonstrated the transistor for the first time in front of a group of managers at Bell Labs, which is widely considered the official birthday of the device.  This transistor consisted of a lump of germanium with three wires soldered to its surface in order to introduce the electrons.  While this point-contact transistor produced the desired results, however, it was difficult to manufacture, with yield rates of only fifty percent.  Determined to create a better device — in part due to anger that Bardeen and Brattain received all the credit for the invention — William Shockley explored alternative avenues to create a less fragile transistor.

In 1940, Bell Labs researchers Russell Ohl and Jack Scaff had discovered while working on semiconductor applications for radar that semiconducting crystals could have either a positive or a negative polarity, which were classified as p-type and n-type crystals respectively.  Shockley believed that by creating a “sandwich” with a small amount of p-type material placed between n-type material on either end, he could create what he termed a junction transistor that would amplify or block a current when a charge of the appropriate polarity was applied to the p-type material in the middle.  Placing the required impurities in just the right spots in the germanium proved challenging, but by 1949, Shockley was able to demonstrate a working p-n junction transistor.  While the junction transistor was theoretically well suited for mass production, however, in reality the stringent purity and uniformity requirements of the semiconducting crystals presented great challenges.  Gordon Teal, a chemist with a Ph.D. from Brown who joined Bell Labs in 1930 and worked on radar during World War II, believed that large crystals doped with impurities at precise points would be necessary to reliably produce a working junction transistor, but he apparently garnered little support for his theories from Shockley and other managers at Bell Labs.  He finally took it upon himself to develop a suitable process for growing crystals with the help of engineer John Little and technician Ernest Buehler, which they successfully demonstrated in 1951.  That same year, another Bell Labs researcher named William Pfann developed a technique called zone refining that allowed for the creation of ultra-pure crystals with minuscule amounts of impurities, which lowered the manufacturing cost of the junction transistor significantly.  Together, the advances by Teal and Pfann provided Bell Labs with a viable fabrication process for transistors.

Part of the reason Teal could not generate much excitement about his manufacturing techniques at Bell Labs was that AT&T remained unsure about entering the transistor business.  Despite recent advances, executives remained doubtful that the transistor would ultimately replace the large and well-established vacuum tube industry.  Worse, the company was currently under investigation by the U.S. Department of Justice for anti-trust violations and was therefore hesitant to enter and attempt to dominate a new field of technology.  Therefore, in 1952 the company decided to offer a royalty-free license to any company willing to research integrating the transistor into hearing aids, one of the original passions of company founder Alexander Graham Bell, and held a series of technical seminars introducing interested parties to the device.  Several large electronics companies signed up, including Raytheon, Zenith, and RCA.  They were joined by a relatively small company named Texas Instruments (TI).


From left to right: John Erik Jonsson, Henry Bates Peacock, Eugene McDermott, and Cecil Green, the men who transformed Geophysical Service, Inc. into Texas Instruments

In 1924, two physicists named Clarence Karcher and Eugene McDermott established the Geophysical Research Corporation (GRC) in Tulsa, Oklahoma, as a subsidiary of Amerada Petroleum.  The duo had been developing a reflection-seismograph process to map faults and domes beneath the earth when they realized that the same process was ideal for discovering oil deposits.  By 1930, GRC had become the leading geophysical exploration company active along the Gulf Coast, but the founders disliked working for Amerada, so they established a new laboratory in Newark, New Jersey, and with investment from geologist Everette DeGolyer formed a new independent company called Geophysical Service, Inc. (GSI).  In 1934, the company moved the laboratory to Dallas to be closer to the heart of the oil trade.

The early 1930s were not a particularly auspicious time to start a new business with the Great Depression in full swing, but GSI managed to grow by aggressively expanding its oil exploration business into international markets such as Mexico, South America, and the Middle East.  Success abroad did not fully compensate for difficulties in the US, however, so in December 1938, the company reorganized in order to exploit the untapped oil fields in the American Southwest.  A new Geophysical Service, Inc. — renamed the Coronado Corporation early the next year — was established with Karcher at the helm as an oil production business, while the original GSI, now headed solely by McDermott, became a subsidiary of Coronado and continued in the exploration business.  The company failed to flourish, however, so in 1941 Karcher negotiated a $5 million sale of Coronado to Stanolind Oil & Gas.  Not particularly interested in the exploration business, Stanolind offered the employees of GSI the opportunity to buy back the company for $300,000.  McDermott, R&D head J. Erik Jonsson, field exploration head Cecil Green, and crew chief H. Bates Peacock managed to scrape together the necessary funding and purchased GSI on December 6, 1941.  The very next day, the Japanese bombed Pearl Harbor, dragging the United States into World War II.

With so much of its business tied up in international oil exploration work that would have to be abandoned during the coming global conflict, GSI would be unable to survive by concentrating solely on its primary business and now needed to find additional sources of income.  The solution to this problem came from Jonsson, a former aluminum sales engineer who had been in charge of R&D at GSI since the company’s inception in 1930, who realized that the same technology used for locating oil could also be used to locate ships and airplanes.  A fortuitous connection between McDermott and Dr. Dana Mitchell, who was part of a group working on electronic countermeasure technology, led to a contract to manufacture a device called the magnetic anomaly detection (MAD) system.  Building on this work, GSI emerged as a major supplier of military electronics by the end of the war.

During the war, Jonsson became impressed with an electrical engineer and Navy lieutenant from North Dakota working as a procurement officer for the Navy’s Bureau of Aeronautics named Patrick Haggerty.  In 1946, GSI hired Haggerty to run its new Laboratory and Manufacturing Division, which the company established to expand its wartime electronics work in both the military and private sectors.  Haggerty was determined to transform GSI into a major player in the field and convinced management to invest in a large new manufacturing plant that would require the company to tap nearly its entire $350,000 line of credit with the Republic National Bank.  By 1950, this investment had turned into annual sales of nearly $10 million a year.  With manufacturing now a far more important part of the business than oil exploration, company executives realized the name GSI no longer fit the company.  They decided to change the name to General Instruments, which conjured up visions of the great electronics concerns of the East like General Electric.  Unfortunately, there was already a defense contractor with that name, so the Pentagon asked them to pick something else.  They chose Texas Instruments.


Patrick J. Haggerty, the man who brought TI into the transistor business

When Patrick Haggerty learned AT&T was offering licenses for transistor technology, he knew immediately that TI had to be involved.  AT&T, however, disagreed.  In 1952, TI had realized a profit of $900,000 on sales of just $20 million and did not appear capable of making the necessary investment to harness the full potential of the transistor.  It took a year for TI management to finally convince AT&T to grant the firm the $25,000 license, after which Haggerty made another large financial gamble, investing over $4 million in manufacturing plants, development, new hires, and other startup costs.  Before the end of 1952, TI had its first order for 100 germanium transistors from the Gruen Watch Company, and production formally began.

Haggerty had muscled TI into an important new segment of the electronics industry, but in the end it was AT&T that was proven correct:  TI really was too small to make much of an impact in the germanium transistor market.  Haggerty therefore turned to new technology to keep his company relevant in the field.  While germanium served as a perfectly fine semiconducting material at temperatures below 100 degrees Fahrenheit, the element lost its useful semiconducting properties at high temperatures, rendering it unsuitable for defense projects like guided missiles.  Silicon offered both better semiconducting capability and a higher temperature tolerance, but despite the best efforts of scientists at Bell Labs and elsewhere, the element had proven impossible to dope with the necessary impurities.  This did not dissuade Haggerty, who placed an ad in the New York Times for a new chief researcher who could bring TI into silicon transistors.  That ad was answered by none other than brilliant Bell Labs chemist Gordon Teal.

Feeling unappreciated after facing such resistance to his research at Bell Labs, Teal was ready to move on, but despite answering the TI ad, he was not certain the Texas company was the right fit.  Solving the problems with silicon would require a great deal of time and money, and TI remained a relatively small concern.  Haggerty reassured him, however, by revealing that TI was preparing to merge with Intercontinental Rubber, a cash-rich firm listed on the New York Stock Exchange with a faltering tire and rubber business.  This merger, completed in October 1953, made TI a public company and guaranteed that Teal would have the funding he needed.  Haggerty promised Teal anything and anyone he needed with only one stipulation: after one year, Teal would need to have a product TI could bring to market.  Teal accepted the challenge.

1954 proved to be a trying year for TI.  While the transistor business failed to gain traction against larger competitors, the defense contracts the company depended upon as its primary source of revenue began to dry up with the end of the Korean War and a subsequent cut in military spending.  Revenues that had risen to $27 million in 1953 declined to $24 million, profits fell slightly from $1.27 million to $1.2 million, and the stock began trading in single digits.  That same year, however, Teal succeeded in developing a complicated high-temperature doping and zone refining process that yielded a viable silicon transistor.  At a conference on airborne electronics held in Dayton, Ohio, that spring, Teal not only proudly announced to those assembled that TI had a working silicon transistor in production, he also provided a dramatic demonstration.  A record player was produced, specially modified so that a transistor could be snapped in and out to complete a circuit.  First, Teal snapped in a germanium transistor and then dropped it into a beaker of hot oil, which destroyed the transistor and stopped the player.  Then, he performed the same action with a silicon transistor.  The music played on.  TI quickly found itself swamped with orders.

New Players


The “Traitorous Eight,” who left Shockley Semiconductor to establish Fairchild Semiconductor.

From left: Gordon Moore, C. Sheldon Roberts, Eugene Kleiner, Robert Noyce, Victor Grinich, Julius Blank, Jean Hoerni, and Jay Last

In 1954 Bell Labs chemist Calvin Fuller developed a new technique called the diffusion process in which silicon could be doped at high temperatures using gases containing the desired impurities.  By the next March, Bell Labs chemist Morris Tanenbaum had succeeded in harnessing the diffusion process to create semiconducting material so thin that a silicon wafer could be created in which each layer of the n-p-n sandwich was only a millimeter thick.  The resulting diffusion-base transistor operated at much higher frequencies than previous junction transistors and therefore performed much faster.  With Gordon Teal’s crystal-growing expertise and Patrick Haggerty’s salesmanship, TI kept pace with these advancements and enjoyed a virtual monopoly on the emerging field of silicon transistors during the next few years, with company revenues soaring to $45.7 million in 1956.  The transistor business, however, remained a relatively small part of the overall electronics industry.  Between 1954 and 1956, 17 million germanium transistors and 11 million silicon transistors were sold in the United States.  During the same period, 1.3 billion vacuum tubes were sold.

Practically speaking, the vacuum tube companies appeared to hold a distinct advantage, as they could theoretically use the enormous resources at their disposal from their vacuum tube sales to support R&D in transistors and gradually transition to the new technology.  In reality, however, while most of the major tube companies established small transistor operations, they were so accustomed to the relatively static technologies and processes associated with the tube industry that they were unable to cope with the volatile pricing and ever-changing manufacturing techniques that defined the transistor industry.  The Philco Corporation is a poster child for these difficulties.  Established in Philadelphia in 1892 as the Helios Electric Company to produce lamps, Philco became a major player in the emerging field of consumer radios in the mid-1920s and by the end of World War II was one of the largest producers of vacuum tubes in the United States.  The company seriously pursued transistor technology, creating in 1953 the high-speed surface-barrier transistor discussed in a previous post that powered the TX-0.  In 1956, Philco improved the surface-barrier transistor by employing the diffusion process, but the company soon grew leery of attempting to keep up with new transistor technologies.  The original surface-barrier transistor had been fast, but expensive, and the diffusion-based model cost even more, retailing for around $100.  As technology continued to progress, however, the price fell to $50 within six months, and then to $19 a year after that.  By the next year, lots of 1,000 Philco transistors could be had for a mere $6.75.  Spooked, the company ultimately decided to remain focused on vacuum tubes.  By 1960, Philco had entered bankruptcy, and Ford subsequently purchased the firm in 1961.


The Shockley Semiconductor Laboratory in the heart of the region that would become Silicon Valley

While the old guard in the electronics industry ultimately exerted little influence on the transistor business, TI soon faced competition from more formidable opponents.  In 1950, William Shockley paid a visit to Georges Doriot, the pioneering venture capitalist who later funded the Digital Equipment Corporation.  Surprisingly, their discussion did not focus on the transistor, but rather on another invention Shockley patented in 1948, a “Radiant Energy Control System,” essentially a feedback system using a visual sensor.  Shockley had worked on improving bomb sights during World War II and saw this system as the next step, potentially allowing a self-guided bomb to compare photographs of targets with visual data from the sensor for increased accuracy.  The same technology could also be used for facial recognition, or for automated sorting of components in manufacturing.  Since the publication of mathematician Norbert Wiener’s groundbreaking book, Cybernetics, in 1948, the Cambridge academic community had been excited by the prospect of using artificial systems to replace human labor for more mundane tasks.  Indeed, in 1952 this concept would gain the name “automation,” a term first coined by Delmar Harder at Ford and popularized by Harvard Business School Professor John Diebold in his book Automation: The Advent of the Automatic Factory.  When Doriot learned of Shockley’s control system, he urged the eminent physicist to waste no time in starting his own company.

By 1951, Shockley had refined his “Radiant Energy Control System” into an optoelectronic eye he felt could form the core of an automated robot that could replace humans on the manufacturing line.  After negotiating an exemption with Bell Labs allowing him to maintain the rights to any patents he filed related to automation for the period of one year, Shockley filed a patent for an “Electrooptical Control System” and wrote a memo to Bell Labs president Mervin Kelly urging the organization to build an “automatic trainable robot.”  When Kelly refused to consider such a project, Shockley, already stripped of most of his responsibilities regarding transistor development due to incessant conflicts with his team, took a leave of absence from Bell Labs in late 1952.  After a year as a visiting professor at CalTech, Shockley became director of the Pentagon’s Weapons Systems Evaluation Group and spent the next year or so studying methods for the U.S. to fight a nuclear war while periodically turning down offers to teach at prestigious universities or establish his own semiconductor operation.

In February 1955, Shockley met renowned chemist Arnold Beckman at a gala in Los Angeles honoring Shockley and amplifier inventor Lee DeForest.  The two bonded over their shared interest in automation and kept in touch over the following months.  Finally, in June 1955, Shockley decided he needed to radically change his life, so he resigned from both Bell Labs and his Pentagon job, divorced his wife, and began to seriously consider offers to start his own company.  The next month, he contacted Beckman to propose forming a company together to bring the new diffusion transistor to market and develop methods to automate the production of transistors.  After a period of negotiation, the Shockley Semiconductor Laboratory was established in September 1955 as a subsidiary of Beckman Instruments.  Even though Beckman was headquartered in Southern California, Shockley convinced his new partner to locate Shockley Semiconductor further north in Palo Alto, California, so he could once again remain close to his mother.

Unable to recruit personnel from Bell Labs, where his reputation as a horrible boss preceded him, Shockley scoured technical conferences, college physics departments, and research laboratories for bright young scientists and engineers.  One of his first hires also proved to be his most important, a young physicist named Bob Noyce.  Born in 1927 in Burlington, Iowa, Robert Norton Noyce was the son of a Congregationalist minister who moved his family all over the state of Iowa as he migrated from one congregation to the next.  This itinerant life, made even more difficult by the Depression, finally ended in 1940 when Ralph Noyce took a job in the college town of Grinnell, Iowa.  Bob Noyce thrived in Grinnell, where his natural charisma and sense of adventure soon made him the leader among the neighborhood children.  A brilliant student despite a penchant for mischief and goofing off, Noyce took a college physics course at Grinnell College during his senior year of high school and graduated class valedictorian.  The Miami University Department of Physics offered to give him a job as a lab assistant if he attended the school — an honor usually reserved for graduate students — but worrying he could just be another face in the crowd at such a large institution, Noyce chose to study at Grinnell College instead.

At Grinnell, Noyce nearly lost his way at the end of his junior year.  Eager to maintain his social standing among older students returning from World War II, Noyce agreed to “procure” a pig to roast at a Hawaiian Luau dorm party.  Soon after, he learned his girlfriend was pregnant and would need an abortion.  Depressed, Noyce got drunk and with the help of a friend stole a pig from a local farmer’s field.  Feeling remorseful, they returned the next day to apologize to the farmer and pay for the pig only to learn that he was the mayor of Grinnell and did not take the prank lightly.  Noyce was almost expelled as a result, but he was saved by his physics professor, Grant Gale, who saw Noyce as a once-in-a-generation talent who should not be squandered over an ill-advised prank.  The college relented and merely suspended him for a semester.

When Noyce returned to Grinnell after working for a life insurance company in New York during his forced exile, he was introduced to the technology that would change his life.  His mentor Gale was an old friend of transistor co-inventor John Bardeen, with whom he had attended the University of Wisconsin, while the head of research at Bell Labs, Oliver Buckley, was a Grinnell graduate.  Gale therefore learned of the transistor’s invention early and was able to secure a wide array of documentation on the new device from Bell.  When Noyce saw his professor enraptured by these documents, he dove right in himself and soon resolved to learn everything he could about transistors.  After graduating from Grinnell with degrees in mathematics and physics, Noyce matriculated to the physics department at MIT, where he planned to focus his studies on solid-state physics.  As transistors were so new, most of Noyce’s classwork revolved around vacuum tubes, but his dissertation, completed in mid 1953, dealt with matters related to transistor development.  Upon earning his doctorate in physics, Noyce took a job at Philco, where in 1950 R&D executive Bill Bradley had established the 25-man research group that developed the surface-barrier transistor.  Noyce rose through the ranks quickly at Philco, but he soon became disillusioned with the layers of bureaucracy and paperwork inherent in working for a large defense contractor, especially after the company was forced to significantly curtail R&D activities due to losses.  Just as Noyce was looking for a way out, Shockley called in January 1956 after reading a paper Noyce had presented on surface-barrier transistors several months earlier at a conference.  In March, Noyce headed west to join Shockley Semiconductor.

Before long, Shockley had succeeded in recruiting a team of about twenty with expertise in a variety of fields related to transistor creation. These individuals included a Ph.D. candidate in the solid state physics program at MIT named Jay Last, a chemist at the Johns Hopkins Applied Physics Lab named Gordon Moore, a mechanical engineer at Western Electric named Julius Blank, Viennese World War II refugee and expert tool builder Eugene Kleiner, metallurgist Sheldon Roberts, Swiss theoretical physicist Jean Hoerni, and Stanford Research Institute physicist Vic Grinich.  Shockley hoped these bright young scientists would secure his company’s dominance in the semiconductor industry.


Sherman Fairchild, the inventor and businessman who financed Fairchild Semiconductor

On November 1, 1956, William Shockley learned that he had been awarded the Nobel Prize for Physics — shared with Walter Brattain and John Bardeen — for the invention of the transistor.  Theoretically at the height of his fame and powers, Shockley soon found his entire operation falling apart.  Always a difficult man to work for, his autocratic tendencies grew even worse now that he was a Nobel laureate in charge of his own company.  He micromanaged employees, even in areas outside of his expertise, and viciously attacked them when their work was not up to his standards.  Feeling threatened by Jean Hoerni and his pair of doctorates, he once exiled the physicist to an apartment to work alone, though he later relented.  He discouraged his employees from pursuing their own projects and insisted on adding his name to any paper they presented, whether he had any involvement in the subject or not.  Once, when a secretary cut her hand on a piece of metal protruding from a door, he insisted it must have been an act of sabotage and threatened to hire a private investigator and subject the staff to lie detector tests.  He was finally dissuaded by Roberts, who convinced him with the aid of a microscope that the piece of metal was merely a tack that had lost its plastic head.

The final straw was Shockley’s insistence on pulling staff and resources from improving upon the diffusion-base silicon transistor to work on a new four-layer diode project he believed could act as both a transistor and a resistor and was theoretically faster and cheaper than a germanium transistor.   In reality, this device proved impossible to create, and R&D costs began to spiral out of control with no sellable product to show for it.  This caused Beckman to become more involved with company operations, which in turn led several of Shockley’s disgruntled employees to feel they could effect real change.  They nominated Robert Noyce as their spokesman, both because he maintained a cordial relationship with Shockley and because he was possessed of an impressive charisma that made him both a natural team leader and an easy person to talk to.  With Beckman’s blessing, Noyce, Moore, Kleiner, Last, Hoerni, Roberts, Blank, and Grinich confronted Shockley and attempted to force him out of day-to-day operations at the company.  The octet wanted Noyce to serve as their new manager, but Shockley refused, arguing that Noyce did not have what it took to be an aggressive and decisive leader, criticisms that later events would show were completely justified.  Beckman therefore appointed an interim management committee and began an external search for an experienced manager.  Less than a month later, he reversed course and declared Shockley to be in charge, most likely influenced by colleagues at either Bell Labs or Stanford who pointed out that undermining Shockley would unduly tarnish the reputation of the Nobel laureate.  As a compromise, Noyce was placed in charge of R&D and a manager from another division of Beckman named Maurice Hanafin was installed as a buffer between Shockley and the rest of the staff.

Noyce was satisfied with this turn of events, but his seven compatriots were not, especially when it became clear that Shockley remained in complete control despite the appointment of Hanafin.  Led by Last, Hoerni, and Roberts, the seven scientists decided to leave the company.  Feeling they were more valuable as a group, however, they resolved to continue working together rather than going their separate ways, meaning they would need to convince an established company to hire them together and form a semiconductor research group around them.  To facilitate this process, Kleiner decided to write to Hayden, Stone, and Company, a New York investment firm where his father had an account and which had recently arranged financing for the first publicly held transistor firm, General Transistor.  Kleiner’s letter was addressed to the man in charge of his father’s account and asked for $750,000 in funding to start a new semiconductor group.  As it turned out, the account man was no longer there, so the letter ended up on the desk of a recent hire and Harvard MBA named Arthur Rock.  Rock liked what he saw and met with the seven along with his boss, Arthur “Bud” Coyle.  The two bankers strongly believed in the potential of the scientists and urged them to reach beyond their original plan and ask for a million dollars or more to fund an entire division.  In order to entice a company to form a semiconductor division, however, the seven scientists would need a leader, and none of them felt up to the task.  They realized they would have to recruit their former ringleader in their fight against Shockley, Bob Noyce.  It took some convincing, but Noyce ultimately came on board.  The seven were now eight.

Finding a company to shelter the eight co-conspirators proved harder than Rock and Coyle initially hoped.  The duo drew up a list of thirty companies they believed could handle the investment they were looking for, but were turned down by all of them.  Simply put, no one was interested in handing $1 million to a group of scientists between the ages of 28 and 32 who had never developed a salable product yet felt they could run a division better than a Nobel Prize winner, all to pursue new advances in a volatile field of technology.  Running out of options, Coyle mentioned the plan to an acquaintance possessed of both a large fortune and a reputation for risk-taking:  Sherman Fairchild.  Sherman was the son of George Fairchild, a businessman and six-term Congressman who played a crucial role in the formation of the International Time Recording Company — one of the companies that merged to form C-T-R — and was the chairman and largest shareholder of C-T-R/IBM from its inception until his death in 1924.  A prolific inventor, Sherman developed a camera suitable for aerial photography for the United States Army during World War I and then established the Fairchild Aerial Camera Corporation in 1920.  Subsequently, Fairchild established several more companies based around his own inventions in fields ranging from aerial surveying to aircraft design.  In 1927, he consolidated seven of these organizations under the holding company Fairchild Aviation, which he renamed Fairchild Camera and Instrument (FCI) in 1944 after spinning his aviation business back out.  By 1957, Fairchild was no longer involved in the day-to-day running of any of his companies, but he was intrigued by the opportunity represented by Noyce and his compatriots and encouraged FCI to take a closer look.

Based in Syosset, New York, Fairchild Camera and Instrument had recently been placed under the care of John Carter, a former vice president of Corning Glass who felt that FCI had become too reliant on defense work for its profits, which had become scarcer and scarcer since the end of the Korean War.  Carter believed acquisitions would be the best way to secure a new course for FCI, so he proved extremely amenable to Noyce and company’s request for funding.  After a period of negotiation, Fairchild Semiconductor Corporation was formally established on September 19, 1957.  Officially, FCI loaned Fairchild Semiconductor $1.3 million in startup funding and in return was granted control of the company through a voting trust.  Ownership of Fairchild Semiconductor remained with the eight founding members and Hayden, Stone, but FCI had the right to purchase all outstanding shares of the company on favorable terms any time before it achieved three successive years of earnings of $300,000 or more.  When the scientists finally broke the news of their imminent departure to Shockley, the Nobel laureate was devastated, and though he never actually dubbed them the “Traitorous Eight,” a phrase invented by a reporter some years later, the label came to be associated with his feelings on the matter.  Shockley continued to pursue his dream of a four-layer diode until Beckman finally sold Shockley Semiconductor, which had never turned a profit, in 1960.  Shockley himself ultimately left the industry to teach at Stanford.

The Process


A transistor built using the “planar process,” which revolutionized the nascent semiconductor industry

In October 1957, Fairchild Semiconductor moved into its new facilities on Charleston Road near the southern border of Palo Alto, not far from the building that housed Shockley Semiconductor.  The Fairchild executive responsible for negotiating the final deal between FCI and the Traitorous Eight, Richard Hodgson, took on the role of chairman of the semiconductor company to look after FCI’s interests and began a search for a general manager.  Hodgson’s first choice was the charismatic Noyce, but the physicist hated confrontation, felt unready to run a whole company besides, and contented himself with leading R&D.  Hodgson therefore brought in an old friend, Tom Bay, a former physics professor who had worked as a sales manager for FCI in the 1950s, to head up sales and marketing, and hired Ed Baldwin, a former paratrooper who managed the diode operation at Hughes Aircraft, as general manager.

Fairchild Semiconductor came into being at just the right time.  On October 4, 1957, the Soviet Union launched Sputnik into orbit, inaugurating a space race with the United States that greatly increased the Federal Government’s demand for transistors for use in rockets and satellites, technologies particularly unsuited to vacuum tubes due to the need for small, durable components.  At the same time, the rise of affordable silicon transistors had government agencies reevaluating the use of vacuum tubes across all their projects, particularly in computers.  This led directly to Fairchild’s first major contract.

In early 1958, Tom Bay learned that the IBM Federal Systems Division was having difficulty sourcing the parts it needed to create a navigational computer for the United States Air Force’s experimental B-70 long-range bomber.  The Air Force required particularly fast and durable silicon transistors for the project and TI, still the only major force in silicon, had been unable to provide a working model up to their specifications.  Through inheritance from his father, Sherman Fairchild was the largest shareholder at IBM and wielded some influence at the company, so Bay and Hodgson convinced him to secure a meeting with the project engineers.  IBM remained skeptical even after Noyce stated Fairchild’s engineers were up to the task, but Sherman Fairchild leaned hard on Tom Watson Jr., basically saying that if he trusted the engineers enough to invest over $1 million in their work, then Watson should trust them too.  With Sherman’s help, Fairchild Semiconductor secured a contract for 100 silicon transistors in February 1958.

Noyce knew that the project would require a type of transistor known as a mesa transistor that had been developed by Bell Labs and briefly worked on at Shockley Semiconductor, but had yet to be mass produced by any company.  Unlike previous transistors, the mesa transistor could be diffused on only one side of the wafer by taking advantage of new techniques in doping and etching.  Basically, dopants were diffused beneath a layer of silicon, after which a drop of wax was placed over the wafer.  The entire surface would then be doused in a strong acid that etched away the entire top layer except at the point protected by the wax.  This created a distinctive bump that resembled the mesas of the American Southwest, hence its name.  Fairchild decided to develop the first commercial double-diffused silicon mesa transistor, but was unsure whether an n-p-n or p-n-p configuration would perform better.  The staff therefore split into two teams led by Moore and Hoerni to develop both, ultimately settling on the n-p-n configuration.  Putting the transistor into production was a complete team effort.  Roberts took charge of growing the silicon crystals, Moore and Hoerni oversaw the diffusion process, Noyce and Last handled the photolithographic process to define the individual transistors on the wafer, Grinich took charge of testing, and Blank and Kleiner designed the manufacturing facility.  By May, the team had completed the design of the transistor, which they delivered to IBM in the early summer.  In August, the team presented their transistor at Wescon, an important trade show established six years before by the West Coast Electronics Manufacturers Association, and learned that their double-diffusion transistor was the only one on the market.  They maintained a monopoly on the device for about a year.

Orders soon began pouring in for double-diffused mesa transistors, most notably from defense contractor Autonetics, which wanted to use them in the Minuteman guided missile program, then the largest and most important defense project under development.  Late in 1958, however, Fairchild realized there was a serious problem with the transistor: it was exceedingly fragile.  So fragile, in fact, that even a tap from a pencil could cause one to stop working.  After testing, the team determined that when the transistor was sealed, a piece of metal would often flake off the outer can and bounce around inside, ultimately causing a short.  Fairchild would need to solve this problem quickly or risk losing its lucrative defense contracts.

During the transistor creation process, an oxide layer naturally builds up on the surface of the silicon wafer.  While this oxide layer does not interfere with the operation of the transistor, it was nevertheless routinely removed to prevent impurities from becoming trapped under its surface.  As early as 1957, Jean Hoerni speculated that the impurity problem was entirely imaginary and that the oxide layer could, in fact, provide a service by protecting the otherwise exposed junctions of the transistor and thus prevent just the kind of short Fairchild was now grappling with.  Hoerni did not pursue the concept at the time because Fairchild was so focused on bringing its first products to market, but in January 1959, he attacked the problem in earnest and within weeks had figured out a way to introduce an oxide mask at proper points during the diffusion process while still leaving spaces for the necessary impurities to be introduced.  On March 12, 1959, Hoerni proudly demonstrated a working transistor protected by an oxide layer, spitting on it to demonstrate it would continue working even when subjected to abuse.  Unlike the mesa transistor, a transistor created using Hoerni’s new technique resembled a bullseye with an outer layer shaped like a teardrop and was flat and smooth.  He therefore named his new technique the “planar process.”

The planar process instantly rendered all previous methods of creating transistors obsolete.  Consequently, Fairchild would not only be able to corner the market in the short term by bringing the first planar transistor to market, but it would also be able to generate income in the long term by licensing the planar process to all the other companies in the transistor business.  Complete dominance of the semiconductor industry appeared to be within Fairchild’s grasp, but then in mid-March 1959, TI announced a new product that would change the entire course of the electronics industry and, indeed, the modern world.

The Texas Edison


Jack Kilby, the inventor of the first integrated circuit

As Fairchild was just starting its transistor business in 1958, Texas Instruments continued to extend its dominance as company revenues reached $90 million and profits soared, but the company was not content to rest on its laurels.  With the space race beginning, the military, to which TI still devoted a large portion of its electronic components business, required ever more sophisticated rockets and computers that would require millions of components to function properly.  Clearly, as long as an electronic circuit continued to require discrete transistors, resistors, capacitors, diodes, etc. all connected by wires, it would be impossible to build the next generation of electronic devices.  The solution to this problem was first proposed in 1952 by a British scientist named Geoffrey Dummer, who spoke of a solid block of material without any connecting wires that would integrate all the functionality of the discrete components of a circuit.  Dummer was never able to complete a working block circuit based on his theories, but others soon followed in his footsteps, including a physical chemist at Texas Instruments named Willis Adcock.  Working under an Army contract, Adcock assembled a small task force, which came to include an electrical engineer named Jack Kilby, to build a simpler circuit.

Born in Jefferson City, Missouri, Jack St. Clair Kilby grew up in Great Bend, Kansas, where his father worked as an electrical engineer and ultimately rose to the presidency of the Kansas Power Company.  Kilby became hooked on electrical engineering during summers spent travelling across western Kansas with his father in the 1930s as the elder Kilby visited power plants and substations inspecting and fixing equipment.  A good student, Kilby planned to continue his education at MIT, but his high school did not offer all the required math courses.  Kilby was forced to travel to Cambridge to take a special entrance exam, but did not pass.  He attended the University of Illinois instead, but his education was interrupted by service during World War II.  Kilby finally graduated in 1947 with an unremarkable academic record and took a job at a Milwaukee firm called Centralab, the only company that offered him a job.

Centralab was not a particularly important company in the electronics industry, but it did experiment with an early form of integrated circuit in which company engineers attempted to place resistors, vacuum tubes, and wiring on a single ceramic base, exposing Kilby to the concept for the first time.  In May 1958, Kilby joined Adcock’s team at TI.  Adcock was attempting to create something called a “micromodule,” in which all the components of a circuit are manufactured in one size with the wiring built into each part so they could simply be snapped together, thus obviating the need for individual wiring connections.  While a circuit built in this manner would still be composed of discrete components, it would theoretically be much smaller, more durable, and easier to manufacture.  Having already tried something similar at Centralab, however, Kilby was convinced this approach would not work.

In the 1950s, Texas Instruments followed a mass vacation policy in which all employees took time off during the same few weeks in the summer.  Too new to have accrued any vacation time, Kilby therefore found himself alone in the lab in July 1958 and decided to tinker with alternate solutions to the micromodule.  Examining the problem through a wide lens, Kilby reasoned that TI was strongest in silicon and should therefore focus on working with that element.  At the time, capacitors were created using metal and ceramics and resistors were made of carbon, but there was nothing stopping a company from creating both of those components in silicon.  While the performance of these parts would suffer significantly over their traditional counterparts, by crafting everything out of silicon, it would be possible to place the circuit on a single block of material and eliminate wires entirely.  Kilby jotted down some preliminary plans in a notebook on July 24, 1958, and then received approval from Adcock to explore the concept further when everyone returned from vacation.

On September 12, 1958, Kilby successfully demonstrated a working integrated circuit to a group of executives at TI.  While Kilby’s intent had been to craft the device out of silicon, TI did not have any blocks of the element suitable for Kilby’s project on hand, so he was forced to craft his first circuit out of germanium.  Furthermore, Kilby had not yet figured out how to eliminate wiring completely, so his original hand-crafted design could not be reliably mass produced.  Therefore, while TI brought the first integrated circuit into the world, it would be Fairchild Semiconductor that actually made them practical.

In January 1959, as Hoerni was perfecting his planar process, Robert Noyce took inspiration from his colleague’s work and began theorizing how P-N junctions and oxide layers could be used to isolate and protect all the components of a circuit on a single piece of silicon, but just as Hoerni initially sat on his planar process while Fairchild focused on delivering finished products, so too did Noyce decide not to pursue his integrated circuit concept any further.  After Kilby debuted his circuit in March, however, Noyce returned to his initial notes.  While the TI announcement may have partially inspired his work, Fairchild’s patent attorney had previously asked every member of the Fairchild team to brainstorm as many applications for the new planar process as possible for the patent filing, which appears to have been Noyce’s primary motivator.  Regardless of the impetus, Noyce polished up his integrated circuit theories and tasked Jay Last with turning them into a working product.

By May 1960, Fairchild had succeeded in creating a practical and producible integrated circuit in which all of the components were etched on a single sliver of silicon with aluminum traces resting atop a protective oxide layer replacing the wiring.  Both the Minuteman missile and the Apollo moon landing projects quickly embraced the new device, and the writing was suddenly on the wall for circuits built entirely from discrete components.  While discrete transistors would power several important computer projects in the 1960s — and even the first home video game system in the early 1970s — the integrated circuit ultimately ushered in a new era of small yet powerful electronic devices that could sit on a small desk or, eventually, be held in the palm of one’s hand yet perform calculations that had once required equipment filling an entire room.  In short, without the integrated circuit, the video game industry as it exists today would not be possible.

Historical Interlude: The Birth of the Computer Part 4, Real-Time Computing

By 1955, computers were well on their way to becoming fixtures at government agencies, defense contractors, academic institutions, and large corporations, but their function remained limited to a small number of activities revolving around data processing and scientific calculation.  Generally speaking, the former process involved taking a series of numbers and running them through a single operation, while the latter process involved taking a single number and running it through a series of operations.  In both cases, computing was done through batch processing — i.e. the user would enter a large data set from punched cards or magnetic tape and then leave the computer to process that information based on a pre-defined program housed in memory.  For companies like IBM and Remington Rand, which had both produced electromechanical tabulating equipment for decades, this was a logical extension of their preexisting business, and there was little impetus for them to discover novel applications for computers.
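For readers who find the distinction easier to see in code, the short sketch below is a purely hypothetical illustration in modern Python (nothing of the sort ran on 1950s hardware): data processing pushes many records through one operation, while scientific calculation pushes one value through many operations.

```python
# Hypothetical illustration of the two dominant batch workloads of the mid-1950s.

# Data processing: one simple operation (here, a flat payroll deduction)
# applied to a long series of records read in from cards or tape.
payroll_records = [1200.00, 950.50, 1875.25, 640.00]         # gross pay per employee
net_pay = [gross * (1 - 0.07) for gross in payroll_records]  # one operation, many inputs

# Scientific calculation: a single starting value pushed through a long
# series of operations (here, a crude iterative square-root refinement).
def newton_sqrt(x: float, iterations: int = 10) -> float:
    guess = x / 2
    for _ in range(iterations):   # many operations, one input
        guess = (guess + x / guess) / 2
    return guess

print(net_pay)
print(newton_sqrt(2.0))
```

In both cases the job runs from start to finish without human intervention, which is precisely what made batch processing such a poor fit for controlling systems in real time.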

In some circles, however, there was a belief that computers could move beyond data processing and actually be used to control complex systems.  This would require a completely different paradigm in computer design, however, based around a user interacting with the computer in real-time — i.e. being able to give the computer a command and have it provide feedback nearly instantaneously.  The quest for real-time computing not only expanded the capabilities of the computer, but also led to important technological breakthroughs instrumental in lowering the cost of computing and opening computer access to a greater swath of the population.  Therefore, the development of real-time computers served as the crucial final step in transforming the computer into a device capable of delivering credible interactive entertainment.

Note: This is the fourth and final post in a series of “historical interludes” summarizing the evolution of computer technology between 1830 and 1960.   The information in this post is largely drawn from Computer: A History of the Information Machine by Martin Campbell-Kelly and William Aspray,  A History of Modern Computing by Paul Ceruzzi, Forbes Greatest Technology Stories: Inspiring Tales of Entrepreneurs and Inventors Who Revolutionized Modern Business by Jeffrey Young, IBM’s Early Computers by Charles Bashe, Lyle Johnson, John Palmer, and Emerson Pugh, and The Ultimate Entrepreneur: The Story of Ken Olsen and Digital Equipment Corporation by Glenn Rifkin and George Harrar.

Project Whirlwind


Jay Forrester (l), the leader of Project Whirlwind

The path to the first real-time computer began with a project that was never supposed to incorporate digital computing in the first place.  In 1943, the head of training at the United States Bureau of Aeronautics, a pilot and MIT graduate named Captain Luis de Florez, decided to explore the feasibility of creating a universal flight simulator for military training.  While flight simulators had been in widespread use since Edwin Link had introduced a system based around pneumatic bellows and valves called the Link Trainer in 1929 and subsequently secured an Army contract in 1934, these trainers could only simulate the act of flying generally and were not tailored to specific planes.  Captain de Florez envisioned using an analog computer to simulate the handling characteristics of any extant aircraft and turned to his alma mater to make this vision a reality.

At the time, MIT was already the foremost center in the United States for developing control systems thanks to the establishment of the Servomechanisms Laboratory in 1941, which worked closely with the military to develop electromechanical equipment for fire control, bomb sights, aircraft stabilizers, and similar projects.  The Bureau of Aeronautics therefore established Project Whirlwind within the Servomechanisms Laboratory in 1944 to create de Florez’s flight trainer.  Leadership of the Whirlwind project fell to an assistant director of the Servomechanisms Laboratory named Jay Forrester.  Born in Nebraska, Forrester had been building electrical systems since he was a teenager, when he constructed a 12-volt electrical system out of old car parts to provide his family’s ranch with electricity for the first time.  After graduating from the University of Nebraska, Forrester came to MIT as a graduate student in 1939 and joined the Servomechanisms Laboratory at its inception.  By 1944, Forrester was getting restless and considering establishing his own company, so he was given his choice of projects to oversee to prevent his defection.  Forrester chose Whirlwind.

In early 1945, Forrester drew up the specifications for a trainer consisting of a mock cockpit connected to an analog computer that would control a hydraulic transmission system to provide feedback to the cockpit.  Based on this preliminary work, MIT drafted a proposal in May 1945 for an eighteen-month project budgeted at $875,000, which was approved.  As work on Whirlwind began, the mechanical elements of the design came together quickly, but the computing element remained out of reach.  To create an accurate simulator, Forrester required a computer that updated dozens of variables constantly and reacted to user input instantaneously.  Bush’s Differential Analyzer, perhaps the most powerful analog computer of the time, was still far too slow to handle these tasks, and Forrester’s team could not figure out how to produce a more powerful machine solely through analog components.  In the summer of 1945, however, a fellow MIT graduate student named Perry Crawford that had written a master’s thesis in 1942 on using a digital device as a control system alerted Forrester to the breakthroughs being made in digital computing at the Moore School.  In October, Forrester and Crawford attended a Conference on Advanced Computational Techniques hosted by MIT and learned about the ENIAC and EDVAC in detail.  By early 1946, Forrester was convinced that the only way forward for Project Whirlwind was the construction of a digital computer that could operate in real time.

The shift from an analog computer to a digital computer for the Whirlwind project more than doubled the estimated cost to $1.9 million.  It also created an incredible technical challenge.  In a period when the most advanced computers under development were struggling to achieve 10,000 operations a second, Whirlwind would require the capability of performing closer to 100,000 operations per second for seamless real-time operation.  Furthermore, the first stored-program computers were still three years away, so Forrester’s team also faced the prospect of integrating cutting edge memory technologies that were still under development.  By 1946, the size of the Whirlwind team had grown to over a hundred staff members spread across ten groups, each focused on a particular part of the system, in an attempt to meet these challenges.  All other aspects of the flight simulator were placed on hold as the entire team focused its attention on creating a working real-time computer.


The Whirlwind I, the first real-time computer

By 1949, Forrester’s team had succeeded in designing an architecture fast enough to support real-time operation, but the computer could not operate reliably for extended periods.  With costs escalating and no end to development in sight, continued funding for the project was placed in jeopardy.  After the war, responsibility for Project Whirlwind had transferred from the Bureau of Aeronautics to the Office of Naval Research (ONR), which felt the project was not providing much value relative to a cost that had by now far surpassed $1.9 million.  By 1948, Whirlwind was consuming twenty percent of ONR’s entire research budget with little to show for it, so ONR began slowly trimming the budget.  By 1950, ONR was ready to cut funding altogether, but just as the project appeared on the verge of death, it was revived to serve another function entirely.

On August 29, 1949, the Soviet Union detonated its first atomic bomb.  In the immediate aftermath of World War II, the United States had felt relatively secure from the threat of Soviet attack due to the distance between the two nations, but now the USSR had both a nuclear capability and a long range bomber capable of delivering a payload on U.S. soil.  During World War II, the U.S. had developed a primitive radar early warning system to protect against conventional attack, but it was wholly insufficient to track and interdict modern aircraft.  The United States needed a new air defense system and needed it quickly.

In December 1949, the United States Air Force formed a new Air Defense System Engineering Committee (ADSEC) chaired by MIT professor George Valley to address the inadequacies in the country’s air-defense system.  In 1950, ADSEC recommended creating a series of computerized command-and-control centers that could analyze incoming radar signals, evaluate threats, and scramble interceptors as necessary to interdict Soviet aircraft.  Such a massive and complex undertaking would require a powerful real-time computer to coordinate.  Valley contacted several computer manufacturers with his needs, but they all replied that real-time computing was impossible.

Despite being a professor at MIT, Valley knew very little about the Whirlwind project, as he was not interested in analog computing and had no idea it had morphed into a digital computer.  Fortunately, a fellow professor at the university, Jerome Wiesner, pointed him towards the project.  By early 1950, the Whirlwind I computer’s basic architecture had been completed, and it was already running its first test programs, so Forrester was able to demonstrate its real-time capabilities to Valley.  Impressed by what he saw, Valley organized a field test of the Whirlwind as a radar control unit in September 1950 at Hanscom Field outside Bedford, Massachusetts, where a radar station connected to Whirlwind I via a phone line successfully delivered a radar signal from a passing aircraft.  Based on this positive result, the United States Air Force established Project Lincoln in conjunction with MIT in 1951 and moved Whirlwind to the new Lincoln Laboratory.

Project SAGE


A portion of an IBM AN/FSQ-7 Combat Direction Central, the heart of the SAGE system and the largest computer ever built

By April 1951, the Whirlwind I computer was operational, but still rarely worked properly due to faulty memory technology.  At Whirlwind’s inception, there were two primary forms of electronic memory in use, the delay-line storage pioneered for the EDVAC and CRT memory like the Williams Tube developed for the Manchester Mark I.  From his exposure to the EDVAC, Forrester was already familiar with delay-line memory early in Whirlwind’s development, but that medium functioned too slowly for a real-time design.  Forrester therefore turned his attention to CRT memory, which could theoretically operate at a sufficient speed, but he rejected the Williams Tube due to its low refresh rate.  Instead, Forrester incorporated an experimental tube memory under development at MIT, but this temperamental technology never achieved its promised capabilities and proved unreliable besides.  Clearly, a new storage method would be required for Whirlwind.

In 1949, Forrester saw an advertisement for a new magnetic alloy called Deltamax from the Arnold Engineering Company that could be magnetized or demagnetized by passing a large enough electric current through it.  Forrester believed the properties of this material could be used to create a fast and reliable form of computer memory, but he soon discovered that Deltamax could not switch states quickly at high temperatures, so he assigned a graduate student named William Papian to find an alternative.  In August 1950, Papian completed a master’s thesis entitled “A Coincident-Current Magnetic Memory Unit” laying out a system in which individual cores — small doughnut-shaped objects — with magnetic properties similar to Deltamax are threaded into a three-dimensional matrix of wires.  Two wires are passed through the center of the core to magnetize or demagnetize it by taking advantage of a property called hysteresis in which an electrical current only changes the magnetization of the material if it is above a certain threshold.  Only when currents are run through both wires and passed in the same direction will the magnetization change, making the cores a suitable form of computer memory.  A third wire is threaded through all of the cores in the matrix, allowing any portion of the memory to be read at any time.
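The coincident-current scheme is easier to grasp with a toy model.  The sketch below is written in modern Python purely for illustration and does not correspond to any Whirlwind-era hardware or software: each select wire carries only half of the switching current, so only the core sitting at the intersection of the driven row and column crosses the hysteresis threshold and changes state.

```python
# Toy model of coincident-current core selection (illustrative only).
# A core flips its magnetization only if the total drive current reaches
# the switching threshold -- a crude stand-in for magnetic hysteresis.

THRESHOLD = 1.0          # full switching current
HALF = THRESHOLD / 2     # current applied to each select wire

class CorePlane:
    def __init__(self, rows: int, cols: int):
        self.state = [[0] * cols for _ in range(rows)]   # 0/1 magnetization per core

    def write(self, row: int, col: int, bit: int) -> None:
        # Drive one row wire and one column wire with half the switching
        # current each; only the core at their intersection sees enough
        # combined current to change state.
        for r in range(len(self.state)):
            for c in range(len(self.state[0])):
                drive = (HALF if r == row else 0) + (HALF if c == col else 0)
                if drive >= THRESHOLD:
                    self.state[r][c] = bit

    def read(self, row: int, col: int) -> int:
        # Real core memory read destructively, by forcing a core to 0 and
        # sensing whether it flipped; this toy model simply inspects state.
        return self.state[row][col]

plane = CorePlane(16, 16)                  # comparable in size to Papian's 1951 array
plane.write(3, 7, 1)
print(plane.read(3, 7), plane.read(3, 8))  # 1 0 -- only the selected core changed
```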

Papian built the first small core memory matrix in October 1950, and by the end of 1951 he was able to construct a 16 x 16 array of cores.  During this period, Papian tested a wide variety of materials for his cores and settled on a silicon-steel ribbon wrapped around a ceramic bobbin, but these cores still operated too slowly and also required an unacceptably high level of current.  At this point Forrester discovered that a German ceramicist in New Jersey named E. Albers-Schoenberg was attempting to create a transformer for televisions by mixing iron oxide with other metal oxides to create a compound called a ferrite that exhibited certain magnetic properties.  While ferrites generated a weaker output than the metallic cores Papian was experimenting with, they could switch up to ten times faster.  After experimenting with various chemical compositions, Papian finally constructed a ferrite-based core memory system in May 1952 that could switch between states in less than a microsecond and therefore serve the needs of a real-time computer.  First installed in the Whirlwind I in August 1953, ferrite core memory was smaller, cheaper, faster, and more reliable than delay-line, CRT, and magnetic drum memory and ultimately doubled the operating speed of the computer while reducing maintenance time from four hours a day to two hours a week.  Within five years, core memory had replaced all other forms of memory in mainframe computers, netting MIT a hefty profit in patent royalties.

With Whirlwind I finally fully functional, the Lincoln Laboratory turned its attention to transforming the computer into a commercial command-and-control system suitable for installation in the United States Air Force’s air defense system.  This undertaking was beyond the scope of the lab itself, as it would require fabrication of multiple components on a large scale.  Lincoln Labs evaluated three companies to take on this task: defense contractor Raytheon, which had recently established a computer division; Remington Rand, through both its EMCC and ERA subsidiaries; and IBM.  At the time, Remington Rand was still the powerhouse in the new commercial computer business, while IBM was only just preparing to bring its first products to market.  Nonetheless, Forrester and his team were impressed with IBM’s manufacturing facilities, service force, integration, and experience deploying electronic products in the field and therefore chose the new kid on the block over its more established competitor.  Originally designated Project High by IBM — due to its location on the third floor of a necktie factory on High Street in Poughkeepsie — and the Whirlwind II by Lincoln Laboratory, the project eventually went by the name Semi-Automatic Ground Environment, or SAGE.

The heart of the SAGE system was a new IBM computer derived from the Whirlwind design called the AN/FSQ-7 Combat Direction Central.  By far the largest computer system ever built, the AN/FSQ-7 weighed 250 tons, consumed three megawatts of electricity, and took up roughly half an acre of floor space.  Containing 49,000 vacuum tubes and a core memory capable of storing over 65,000 33-bit words, the computer was capable of performing roughly 75,000 operations per second.  In order to ensure uninterrupted operation, each SAGE installation actually consisted of two AN/FSQ-7 computers so that if one failed, the other could seamlessly assume control of the air defense center.  As the first deployed real-time computer system, it inaugurated a number of firsts in commercial computing such as the ability to generate text and vector graphics on a display screen, the ability to directly enter commands via a typewriter-style keyboard, and the ability to select or draw items directly on the display using a light pen, a technology developed specifically for Whirlwind in 1955.  In order to remain in constant contact with other segments of the air defense system, the computer was also the first outfitted with a new technology called a modem developed by AT&T’s Bell Labs research division to allow data to be transmitted over a phone line.

The first SAGE system was deployed at McChord Air Force Base in November 1958, and the entire network of twenty-three Air Defense Direction Centers was online by 1963 at a total cost to the government of $8 billion.  While IBM agreed to do the entire project at cost as part of its traditional support for national defense, the project still brought the company $500 million in revenues in the late 1950s.  SAGE was perhaps the key project in IBM’s rise to dominance in the computer industry.  Through this massive undertaking, IBM became the most knowledgeable company in the world at designing, fabricating, and deploying both large-scale mainframe systems and their critical components such as core memory and computer software.  In 1954, IBM upgraded its 701 computer to replace Williams Tube memory with magnetic cores and released the system as the IBM 704.  The next year, a core-memory replacement for the 702 followed, designated the IBM 705.  These new computers were instrumental in vaulting IBM past Remington Rand in the late 1950s.  SAGE, meanwhile, remained operational until 1983.

The Transistor and the TX-0


Kenneth Olsen, co-designer of the TX-0 and co-founder of the Digital Equipment Corporation (DEC)

While building a real-time computer for the SAGE air-defense system was the primary purpose of Project Whirlwind, the scope of the project grew large enough by the middle of the 1950s that staff could occasionally indulge in other activities, such as a new computer design proposed by staff member Kenneth Olsen.  Born in Bridgeport, Connecticut, Olsen began experimenting with radios as a teenager and took an eleven-month electronics course after entering the Navy during World War II.  The war was over by the time his training was complete, so after a single deployment on an admiral’s staff in the Far East, Olsen left the Navy to attend MIT in 1947, where he majored in electrical engineering.  After graduating in 1950, Olsen decided to continue his studies at MIT as a graduate student and joined Project Whirlwind.  One of Olsen’s duties on the project was the design and construction of the Memory Test Computer (MTC), a smaller version of the Whirlwind I built to test various core memory solutions.  In creating the MTC, Olsen innovated with a modular design in which each group of circuits responsible for a particular function was mounted on a single plug-in unit, set into a rack, that could be easily swapped out if it malfunctioned.  This was a precursor of the plug-in circuit boards still used in computers today.

One of the engineers who helped Olsen debug the MTC was Wes Clark, a physicist that came to Lincoln Laboratory in 1952 after working at the Hanford nuclear production site in Washington State.  Clark and Olsen soon bonded over their shared views on the future of computing and their desire to build a machine that would apply the lessons of the Whirlwind project and the construction of the MTC to the latest advances in electronics, demonstrating to the defense industry the potential of a fast, power-efficient computer.  Specifically, Olsen and Clark wanted to explore the potential of a relatively new electronic component called the transistor.


John Bardeen (l), William Shockley (seated), and Walter Brattain, the team that invented the transistor

For over forty years, the backbone of all electronic equipment was the vacuum tube pioneered by John Fleming in 1904.  While this device allowed for switching at electronic speeds, its limitations were numerous.  Vacuum tubes generated a great deal of heat during operation, which meant that they consumed power at a prodigious rate and were prone to burnout over extended periods of use.  Furthermore, they could not be miniaturized beyond a certain point and had to be spaced relatively far apart for heat management, guaranteeing that tube-based electronics would always be large and bulky.  Unless an alternative switching device could be found, the computer would never be able to shrink below a certain size.  The solution to the vacuum tube problem came not from one of the dozen or so computer projects being funded by the U.S. government, but from the telephone industry.

In the 1920s and 1930s, AT&T, which held a monopoly on telephone service in the United States, began constructing a series of large switching facilities in nearly every town in the country to allow telephone calls to be placed between any two phones in the United States.  These facilities relied on the same electromechanical relays that powered several of the early computers, which were bulky, slow, and wore out over time.  Vacuum tubes were sometimes used as well, but the problems articulated above made them particularly unsuited for the telephone network.  As AT&T continued to expand its network, the size and speed limitations of relays became increasingly unacceptable, so the company gave a mandate to its Bell Labs research arm, one of the finest corporate R&D organizations in the world, to discover a smaller, faster, and more reliable switching device.

In 1936, the new director of research at Bell Labs, Mervin Kelly, decided to form a group to explore the possibility of creating a solid-state switching device.  Both solid-state physics, which explores the properties of solids based on the arrangement of their sub-atomic particles, and the related field of quantum mechanics, in which physical phenomena are studied at the atomic scale, were in their infancy and not widely understood, so Kelly scoured the universities for the smartest chemists, metallurgists, physicists, and mathematicians he could find.  His first hire was a brilliant but difficult physicist named William Shockley.  Born in London to a mining engineer and a geologist, William Bradford Shockley, Jr. grew up in Palo Alto, California, in the heart of the Santa Clara Valley, a region known as the “Valley of the Heart’s Delight” for its orchards and flowering plants.  Shockley’s father spent most of his time moving from mining camp to mining camp, so he grew especially close to his mother, May, who taught him the ins and outs of geology from a young age.  After earning an undergraduate degree at Caltech, Shockley received a Ph.D. from MIT in 1936 and went to work for Bell.  Gruff and self-centered, Shockley never got along with his colleagues anywhere he worked, but there was no questioning his brilliance or his ability to push colleagues towards making new discoveries.

Kelly’s group began educating itself on the field of quantum mechanics through informal sessions where they would each take a chapter of the only quantum mechanics textbook in existence and teach the material to the rest of the group.  As their knowledge of the underlying science grew in the late 1930s, the group decided the most promising path to a solid-state switching device lay with a group of materials called semiconductors.  Generally speaking, most materials are either a conductor of electricity, allowing electrons to flow through them, or an insulator, halting the flow of electrons.  As early as 1826, however, Michael Faraday, the brilliant scientist whose work paved the way for electric power generation and transmission, had observed that a small number of compounds would not only act as conductors under some conditions and insulators under others, but could also serve as amplifiers.  These properties allowed a semiconductor to behave like a triode under the right conditions, but for decades scientists remained unable to determine why changes in heat, light, or magnetic field would alter the conductivity of these materials and therefore could not harness this property.  It was not until the field of quantum mechanics became more developed in the 1930s that scientists gained a great enough understanding of electron behavior to attack the problem.  Kelly’s new solid-state group hoped to unlock the mystery of semiconductors once and for all, but their work was interrupted by World War II.

In 1945, Kelly revived the solid-state project under the joint supervision of William Shockley and chemist Stanley Morgan.  The key members of this new team were John Bardeen, a physicist from Wisconsin known as one of the best quantum mechanics theorists in the world, and Walter Brattain, a farm boy from Washington known for his prowess at crafting experiments.  During World War II, great progress had been made in creating crystals of the semiconducting element germanium for use in radar, so the group focused its activities on that element.  In late 1947, Bardeen and Brattain discovered that if they introduced impurities into just the right spot on a lump of germanium, the germanium could amplify a current in the same manner as a vacuum tube triode.  Shockley’s team gave an official demonstration of this phenomenon to other Bell Labs staff on December 23, 1947, which is often recognized as the official birthday of the transistor, so named because it effects the transfer of a current across a resistor — i.e. the semiconducting material.  Smaller, less power-hungry, and more durable than the vacuum tube, the transistor paved the way for the development of the entire consumer electronics and personal computer industries of the late twentieth century.


The TX-0, one of the earliest transistorized computers, designed by Wes Clark and Kenneth Olsen

Despite its revolutionary potential, the transistor was not incorporated into computer designs right away, as there were still several design and production issues that had to be overcome before it could be deployed in the field in large numbers (which will be covered in a later post).  By 1954, however, Bell Labs had deployed the first fully transistorized computer, the Transistor Digital Computer or TRADIC, while electronics giant Philco had introduced a new type of transistor called a surface-barrier transistor that was expensive, but much faster than previous designs and therefore the first practical transistor for use in a computer.  It was in this environment that Clark and Olsen proposed a massive transistorized computer called the TX-1 that would be roughly the same size as a SAGE system and deploy one of the largest core memory arrays ever built, but they were turned down because Forrester did not find their design practical.  Clark therefore went back to the drawing board to create as simple a design as he could that still demonstrated the merits of transistorized computing.  As this felt like a precursor to the larger TX-1, Olsen and Clark named this machine the TX-0.

Completed in 1955 and fully operational the next year, the TX-0 — often pronounced “Tixo” — incorporated 3,600 surface-barrier transistors and was capable of performing 83,000 operations per second.  Like the Whirlwind, the TX-0 operated in real time, and it also incorporated a display with a 512×512 resolution that could be manipulated by a light pen, and a core memory that could store over 65,000 words, though Clark and Olsen settled on a relatively short 18-bit word length.  Unlike the Whirlwind I, which occupied 2,500 square feet, the TX-0 took up a paltry 200 square feet.  Both Clark and Olsen realized that the small, fast, interactive TX-0 represented something new: a (relatively) inexpensive computer that a single user could interact with in real time.  In short, it exhibited many of the hallmarks of what would become the personal computer.

With the TX-0 demonstrating the merits of high-speed transistors, Clark and Olsen returned to their goal of creating a more complex computer with a larger memory, which they dubbed the TX-2.  Completed in 1958, the TX-2 could perform a whopping 160,000 operations per second and contained a core memory of 260,000 36-bit words, far surpassing the capability of the earlier TX-0.  Olsen once again designed much of the circuitry for this follow-up computer, but before it was completed he decided to leave MIT behind.

The Digital Equipment Corporation


The PDP-1, Digital Equipment Corporation’s First Computer

Despite what Olsen saw as the nearly limitless potential of transistorized computers, the world outside MIT remained skeptical.  It was one thing to create an abstract concept in a college laboratory, people said, but another thing entirely to actually deploy an interactive transistorized system under real world conditions.  Olsen fervently desired to prove these naysayers wrong, so he decided to form his own computer company along with Harlan Anderson, a fellow student who had worked with him on the MTC.  As a pair of academics with no practical real-world business experience, however, Olsen and Anderson faced difficulty securing financial backing.  They approached defense contractor General Dynamics first, but were flatly turned down.  Unsure how to proceed next, they visited the Small Business Administration office in Boston, which recommended they contact investor Georges Doriot.

Georges Doriot was a Frenchman who immigrated to the United States in the 1920s to earn an MBA from Harvard and then decided to stay on as a professor at the school.  In 1940, Doriot became an American citizen, and the next year he joined the United States Army as a lieutenant colonel and took on the role of director of the Military Planning Division for the Quartermaster General.  Promoted to brigadier general before the end of the war, Doriot returned to Harvard in 1946 and also established a private equity firm called the American Research and Development Corporation (ARD).  With a bankroll of $5 million raised largely from insurance companies and educational institutions, Doriot sought out startups in need of financial support, offering capital in exchange for a large ownership stake in the company.  The goal was to work closely with the company founders to grow the business and then sell the stake at some point in the future for a high return on investment.  While many of the individual companies would fail, in theory the payoff from those that did succeed would more than make up the difference and return a profit to the individuals and groups that provided his firm with investment capital.  Before Doriot, the only outlets for a new business to raise capital were banks, which generally required tangible assets to back a loan, or a wealthy patron like the Rockefeller or Whitney families.  After Doriot’s model proved successful, inexperienced entrepreneurs with big ideas had a new outlet to bring their products to the world.  This approach soon gained the name venture capital.

In 1957, Olsen and Anderson wrote a letter to Doriot detailing their plans for a new computer company.  After some back and forth and refinement of the business plan, ARD agreed to provide $70,000 to fund Olsen and Anderson’s venture in return for a 70% ownership stake, but the money came with certain conditions.  Olsen wanted to build a computer like the TX-0 for use by scientists and engineers who could benefit from a more interactive programming environment in their work, but ARD did not feel it was a good idea to go toe-to-toe with an established competitor like IBM.  Instead, ARD convinced Olsen and Anderson to produce components like power supplies and test equipment for core memory.  Olsen and Anderson had originally planned to call their new company the Digital Computer Corporation, but with their new ARD-mandated direction, they instead settled on the name Digital Equipment Corporation (DEC).

In August 1957, DEC moved into its new office space on the second floor of Building 12 of a massive woolen mill complex in Maynard, Massachusetts, originally built in 1845 and expanded many times thereafter.  At the time, the company consisted of just three people: Ken Olsen, Harlan Anderson, and Ken’s younger brother Stan, who had worked as a technician at Lincoln Lab.  Ken served as the leader and technical mastermind of the group, Anderson looked after administrative matters, and Stan focused on manufacturing.  In early 1958, the company released its first products.

DEC arrived on the scene at the perfect moment.  Core memory was in high demand and transistor prices were finally dropping, so all the major computer companies were exploring new designs, creating an insatiable demand for testing equipment.  As a result, DEC proved profitable from the outset.  In fact, Olsen and Anderson overpriced their products due to their business inexperience, but with equipment in such high demand, firms bought from DEC anyway, giving the company extremely high margins and allowing it to exceed its revenue goals.  Bolstered by this success, Olsen chose to revisit the computer project with ARD, so in 1959 DEC began work on a fully transistorized interactive computer.

Designed by Ben Gurley, who had developed the display for the TX-0 at MIT, the Programmed Data Processor-1, more commonly referred to as the PDP-1, was unveiled in December 1959 at the Eastern Joint Computer Conference in Boston.  It was essentially a commercialized version of the TX-0, though it was not a direct copy.  The PDP-1 incorporated a better display than its predecessor with a resolution of 1024×1024, and it was also faster, capable of 100,000 operations per second.  The base setup contained only 4,096 18-bit words of core memory, but this could be upgraded to 65,536.  Programs were primarily input through a punched tape reader, and the machine was also hooked up to a typewriter.  While not nearly as powerful as the latest computers from IBM and its competitors in the mainframe space, the PDP-1 cost only $120,000, a stunningly low price in an era when buying a computer would typically set an organization back a million dollars or more.  Lacking developed sales, manufacturing, or service organizations, DEC sold only a handful of PDP-1 computers over its first two years on the market to organizations like Bolt, Beranek, and Newman and the Lawrence Livermore Labs.  A breakthrough occurred in late 1962 when the International Telephone and Telegraph Corporation (ITT) decided to order fifteen PDP-1 computers to form the heart of a new telegraph message switching system designated the ADX-7300.  ITT would continue to be DEC’s most important PDP-1 customer throughout the life of the system, ultimately purchasing roughly half of the fifty-three computers sold.

While DEC sold only around fifty PDP-1s over the machine’s lifetime, the revolutionary computer introduced interactive computing commercially and initiated the process of opening computer use to ever greater portions of the public, a process that culminated in the birth of the personal computer two decades later.  With its monitor and real-time operation, it also provided a perfect platform for creating engaging interactive games.  Even with these advances, the serious academics and corporate data handlers of the 1950s were unlikely to ever embrace the computer as an entertainment medium, but unlike the expensive and bulky mainframes reserved for official business, the PDP-1 and its successors soon found their way into the hands of students at college campuses around the country, beginning with the birthplace of the PDP-1 technology: MIT.