
Historical Interlude: From the Mainframe to the Minicomputer Part 3, DEC and Data General

While IBM was crushing its competition in the mainframe space, another computer market began opening up that IBM virtually ignored.  Following the success of the PDP-1, Ken Olsen and his Digital Equipment Corporation (DEC) continued their work in real-time computing and cultivated a new market for computerized control systems for scientific and engineering projects.  After stumbling in its attempts to build larger systems in the IBM mold, the company decided to create machines even smaller and cheaper than low-end mainframes like the 1401 and H200.  These so-called “minicomputers” could not hope to compete with mainframe systems on power and were often more difficult to program due to their comparatively limited memory, but DEC’s new line of computers was also far cheaper and more interactive than any system on the market and opened up computer use to a larger swath of the population than ever before.  Building on these advances, by the end of the 1960s a DEC competitor established by a disgruntled former employee was able to introduce a minicomputer that in its most basic configuration cost just under $4,000, bringing computers tantalizingly close to a mass-market product.  The combination of lower prices and real-time operation offered by the minicomputer provided the final key element necessary to introduce computer entertainment programs like Spacewar! to the general public.

Note: Once again we have a historical interlude post discussing the technological breakthroughs in computing in the 1960s that culminated in the birth of the electronic entertainment industry.  The material in this section is largely drawn from Computer: A History of the Information Machine by Martin Campbell-Kelly and William Aspray, A History of Modern Computing by Paul Ceruzzi, The Ultimate Entrepreneur: The Story of Ken Olsen and Digital Equipment Corporation by Glenn Rifkin and George Harrar, and oral histories conducted by the Computer History Museum with Gordon Bell, Ed de Castro, Alan Kotok, and Harlan Anderson.

The Matrix


Ken Olsen poses outside The Mill, DEC corporate headquarters

When last we left DEC, the company had just introduced its first computer, the PDP-1, to a favorable response.  Buoyed by continuing demand for system modules and test equipment and the success of the PDP-1, DEC’s profits rose to $807,000 on sales of $6.5 million for the 1962 fiscal year.  Growing financial success, however, could not compensate for serious underlying structural problems at the company.  From his time serving as a liaison between Project Whirlwind and IBM, Ken Olsen had developed an extreme loathing for bureaucracy and the trappings of corporate culture and preferred to encourage individual initiative and experimentation more in line with practices in the academic sector.  This atmosphere suited most of DEC’s employees, many of them transplants from MIT and Lincoln Labs eager — like Olsen — to continue their academic work in a private setting.  DEC headquarters, affectionately called “The Mill,” practically became an extension of the MIT campus as students traveled back and forth between Cambridge and Maynard to work part time or just hang out with DEC engineers and learn how the company’s computers operated.  There were no set engineering teams, so employees would organically form groups around specific projects.  While this freedom and lack of oversight spurred creative thinking, however, it left DEC without a coherent product strategy or well-developed sales, manufacturing, and servicing organizations.

In 1963, DEC revenues soared to $10 million, while profits jumped to $1.2 million.  The next year, however, revenues flattened and earnings declined, coming in at $11 million and $900,000 respectively.  With little management guidance, DEC engineering teams tended to overcommit and underdeliver on products, while lack of communication between sales, order processing, and manufacturing resulted in difficulties delivering the company’s existing product line to customers in sufficient quantities.  Clearly, DEC needed to implement a more rigorous corporate structure to remain viable.  The struggle to reform DEC ultimately pitted the company’s two founders against each other as Olsen steadfastly refused to implement a rigid hierarchy, while Harlan Anderson backed Jay Forrester, the Whirlwind project leader turned MIT Sloan School of Management professor who served as a director of DEC, in his efforts to implement some of his own management theories at the company.  Georges Doriot, the most important director of the company due to ARD’s large stake in DEC, remained a staunch supporter of and adviser to Olsen, but preferred to stay out of the conflict, feeling directors should not tell management what to do unless a company is in dire straits.

While struggling to operate efficiently, DEC also experienced difficulty creating a successor to the PDP-1.  Initial plans to create 24- and 36-bit versions of the computer, designated the PDP-2 and PDP-3 respectively, floundered due to technical hurdles and a lack of customer interest and never entered production.  Worse, PDP-1 designer Ben Gurley announced his resignation in December 1962 to join a new startup before being tragically murdered less than a year later by a former co-worker.  With Gurley’s departure, DEC’s primary computer designer became a young engineer named Gordon Bell.


Gordon Bell, DEC’s principal computer designer after the departure of Ben Gurley

Born in Kirksville, Missouri, Gordon Bell exhibited an aptitude for electrical engineering at an early age and was earning $6/hour as an electrician by the time he was about twelve years old.  Matriculating at MIT in 1952, Bell earned his B.S. in electrical engineering from the school in 1956 and his M.S. in the same field the next year.  Originally interested in being a power engineer, Bell worked for American Electric Power and GE through a co-op program while attending MIT, but he ultimately decided not to pursue that path further.  Unsure what to do after graduation, he accepted an offer to travel to Australia to set up a new computer lab in the electrical engineering department of the University of New South Wales.  After a brief stint in the Speech Computation Laboratory at MIT, Bell joined DEC in 1960 and did some work on the I/O subsystem of the PDP-1.  After helping with the aborted PDP-3, which had been an attempt to enter the scientific market served by the 36-bit IBM 7090, Bell initiated a project to create a cheaper, but more limited version of the PDP-1 intended for process control.  Dubbed the PDP-4, the computer sold for just $65,000 and included some updated features such as auto-index registers, but a lack of compatibility with the PDP-1 coupled with reduced capabilities compared to DEC’s original computer ultimately killed interest in the product.  While DEC managed to sell fifty-four PDP-4s, one more unit than the PDP-1, it was considered a commercial disappointment.

In early 1963, Olsen and Anderson decided to return to the PDP-3 concept of a large scientific computer that could challenge IBM in the mainframe space and tapped Bell for the project, assisted by Alan Kotok, the noted MIT hacker who had joined DEC upon graduating in 1962.  Dubbed the PDP-6, Bell’s computer was capable of performing 250,000 operations per second and came equipped with a core memory with a capacity of 32,768 36-bit words.  While not quite on par with the industry-leading IBM 7094, the computer was capable of real-time operation and incorporated native support for time sharing unlike the IBM model, and it was also far cheaper, retailing for just $300,000.  Unfortunately, the computer was poorly engineered and not thoroughly tested, leading to serious technical defects only discovered once the first computers began shipping to customers in 1964.  As a result, the computer turned out to be a disaster, with only twenty-three units sold.  Harlan Anderson, who had championed the computer heavily, bore the brunt of the blame for its failure from his co-founder Olsen.  Combined with their on-going fight over the future direction of the company, the stigma of the PDP-6 fiasco ultimately drove Anderson from the company in 1966.  The failure of the PDP-6 was the clearest indicator yet that DEC needed to reform its corporate structure to survive.

In 1965, Olsen finally hit upon a solution to the company’s organizational woes.  Rather than a divisional structure, Olsen reorganized DEC along product lines.  Each computer sold by the company, along with the company’s module and memory test equipment lines, would become its own business unit run by a single senior executive with full profit and loss responsibility and complete independence to define, develop, and market his product as he saw fit.  To actually execute their visions, each of these senior executives would have to present his plans to a central Operations Committee composed of Olsen and his most trusted managers, where they would bid for resources from the company’s functional units such as sales, manufacturing, and marketing.  In effect, each project manager became an entrepreneur and the functional managers became investors, allocating their resources based on which projects the Operations Committee felt deserved the most backing.  While DEC was not the first company to try this interconnected corporate structure — which soon gained the moniker “matrix management” — the ensuing financial success of DEC caused the matrix to become closely associated with Ken Olsen in subsequent decades.

The Minicomputer


The PDP-8, the first widely sold minicomputer

One of DEC’s oldest computer customers was Atomic Energy of Canada, which had purchased one of the first PDP-1 computers for its Chalk River facility.  The company proceeded to buy a PDP-4 to control the reactor at Chalk River, but the computer was not quite able to handle all the duties it had been assigned.  To solve this problem, Gordon Bell proposed in early 1963 that rather than create custom circuitry to meet Atomic Energy’s needs, DEC should build a smaller computer that could serve as a front end to interface with the PDP-4 and provide the needed functionality.  Rather than just create a system limited to Atomic Energy’s needs, however, Bell decided to design the machine so it could also function as an independent general-purpose computer.  DEC named this new computer the PDP-5.

Bell was not the first person to create a small front-end computer: in 1960 Control Data released the Seymour Cray-designed CDC 160 to serve as an I/O device to interface with its 1604 mainframe.  Soon after, CDC repurposed the machine as a stand-alone device and marketed it as the CDC 160A.  The brilliant Cray employed bank switching and other techniques to allow the relatively limited 12-bit computer to address almost as much memory as a large mainframe, though not as easily or efficiently.  While not as powerful as a full-scale mainframe, the 160A provided most of the same functionality — albeit scaled down at a speed of only 67,000 operations per second — at a price of only $60,000 and a footprint the size of a metal desk.  CDC experienced some success with the 160A, but as the company was primarily focused on supercomputers, it paid little attention to the low-end market.
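To make the idea of bank switching a little more concrete, here is a minimal Python sketch of the general technique.  It is an illustration of the concept rather than a description of the CDC 160A’s actual hardware: the bank count, register name, and helper functions are assumptions chosen for clarity.  The point is simply that a 12-bit address can only name 4,096 words on its own, so a separate bank register decides which block of 4,096 words those addresses currently refer to, extending reach at the cost of convenience and speed.

    # Illustrative sketch of bank switching (not the CDC 160A's real design).
    WORDS_PER_BANK = 4096          # the reach of a 12-bit address (2**12)
    NUM_BANKS = 8                  # hypothetical figure: 8 banks = 32,768 words

    physical_memory = [0] * (WORDS_PER_BANK * NUM_BANKS)
    bank_register = 0              # selects which 4K bank 12-bit addresses refer to

    def select_bank(bank):
        """Point subsequent 12-bit addresses at a different 4,096-word bank."""
        global bank_register
        bank_register = bank % NUM_BANKS

    def read(addr):
        """Translate a 12-bit address into a physical memory location."""
        return physical_memory[bank_register * WORDS_PER_BANK + (addr % WORDS_PER_BANK)]

    def write(addr, value):
        physical_memory[bank_register * WORDS_PER_BANK + (addr % WORDS_PER_BANK)] = value

    # The same 12-bit address reaches different words depending on the bank:
    select_bank(0); write(100, 7)
    select_bank(5); write(100, 42)
    select_bank(0); assert read(100) == 7

Because reaching a different bank requires explicitly changing the bank register first, programs paid for the extra memory in bookkeeping, which is why the 160A could not use its memory as easily or efficiently as a true large-word machine.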

While Bell planned for the 12-bit PDP-5 to be a general purpose computer, DEC essentially treated the computer as a custom solution for Atomic Energy and not as a key part of its future product line, which was then focused around the large-scale PDP-6.  As a result, DEC planned to only sell roughly ten computers, just enough to recoup its development costs.  Just as IBM had underestimated demand for the relatively cheap 1401, however, DEC did not realize how interested the market would be in a fully functional computer that sold for just $27,000, by far the cheapest core-memory computer on the market.  Orders soon began pouring in, and the company ultimately sold roughly 1,000 PDP-5s, making it the company’s best-selling computer by a factor of twenty.  With the PDP-6 floundering, Ken Olsen decided to champion smaller computers, and the company began considering a more advanced followup to the PDP-5.


Edson de Castro, the engineer who designed the PDP-8 and later established Data General

Just as Harlan Anderson was forced out of DEC due to the failure of the PDP-6, so too did Gordon Bell decide it was time to move on.  While he did not officially leave the company, he took a sabbatical in 1966 that lasted six years, during which he did some work in academia and continued to serve as a DEC consultant.  In his place, the task of developing a followup to the PDP-5 fell to another engineer named Edson de Castro.

Born in Plainfield, New Jersey, Ed de Castro spent the majority of his childhood in Newton, Massachusetts.  The son of a chemical engineer, de Castro had a fascination with mechanical devices from a young age and always knew he wanted to be an engineer.  Accepted into MIT, de Castro opted instead to attend the much smaller and less prestigious Lowell Technological Institute, where he felt he would receive more attention from the school faculty.  Interested in business, de Castro applied to Harvard Business School after graduation, but the school said it would only accept him after the next academic year.  He therefore needed a job in the short term and was recruited by Stan Olsen as a systems engineer for DEC in late 1960, where he worked with customers to develop applications for DEC’s systems modules.  After just under a year at DEC, de Castro left to attend Harvard, but his grades were insufficient to qualify for the second year of the program, so he returned to DEC to work in the custom products division, which focused on memory test equipment.

After Gordon Bell and Alan Kotok outlined the PDP-5, de Castro became the primary engineer responsible for building it.  The original design called for the machine to be a 10-bit computer, but de Castro upped this to 12 bits — multiples of 6 being the standard in the industry at the time — so it could address more memory and be more useful.  When the PDP-5 became successful, de Castro went back to working as a systems engineer and helped install the computers in the field.  Soon after, he turned his attention to the computer’s successor, the PDP-8.

The PDP-8 had several advantages over the small computers that preceded it.  First of all, it used a transistor from Philco, the germanium micro-alloy diffused transistor, that operated particularly quickly and allowed the computer to perform 500,000 operations per second.  Furthermore, DEC harnessed its expertise in core memory to lower the memory cycle time to 1.6 microseconds, slightly faster than an IBM 7090 and much faster than the CDC 160A.  While an instruction on the 12-bit computer carried only a 7-bit address field and could therefore directly reference just a 128-word page of memory, DEC employed indirect addressing and other techniques to let programs reach the full 4,096-word address space and perform virtually any operation a larger computer could, albeit sometimes much more slowly.  While complex calculations might take a long time, however, many simpler operations could be performed just as quickly on a PDP-8 as on a much larger and more expensive computer.  The PDP-8 was also incredibly small, as de Castro employed an especially efficient board design that allowed the entire computer to fit into a case that occupied only eight cubic feet of volume, meaning it was small enough to place on top of a standard workbench.
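The addressing compromise described above is easier to see with a short worked example.  The Python sketch below is a simplified illustration, not a faithful PDP-8 emulation; the function name and argument layout are assumptions, but the arithmetic follows the general scheme: a 7-bit offset can only name a word within a single 128-word page, while setting an indirect bit tells the machine to treat that word as a full 12-bit pointer, reaching anywhere in the 4,096-word memory at the cost of an extra memory access.

    # Simplified sketch of PDP-8-style addressing (illustrative, not an emulator).
    MEMORY = [0] * 4096            # a 12-bit machine: 2**12 words
    PAGE_SIZE = 128                # 2**7 words reachable by the 7-bit offset

    def effective_address(pc, offset7, page_bit, indirect_bit):
        """Resolve the address a memory-reference instruction operates on."""
        page = (pc // PAGE_SIZE) if page_bit else 0      # current page or page zero
        addr = page * PAGE_SIZE + (offset7 % PAGE_SIZE)  # direct: within one 128-word page
        if indirect_bit:
            addr = MEMORY[addr] % 4096                   # indirect: follow a 12-bit pointer
        return addr

    # A pointer stored on page zero lets an instruction reach word 3000,
    # which no 7-bit offset could name directly:
    MEMORY[10] = 3000
    assert effective_address(pc=200, offset7=10, page_bit=0, indirect_bit=1) == 3000

The extra memory reference required by every indirect access is one reason some operations ran much more slowly on the PDP-8 than on machines with wider instruction words, even though the end result was the same.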

In 1965, DEC introduced the PDP-8 with 4,096 words of memory and a teletype for user input for just $18,000.  Within just a few years, the price fell to under $10,000 as DEC continued to reduce the cost of the computer through new technologies like integrated circuits, which were first used in the PDP-8 in 1969.  Thanks to de Castro, organizations could now purchase a computer that fit on top of a desk yet provided nearly all the same functionality at nearly the same speed (for most operations, at least) as a million-dollar computer taking up half a room.  The limitations of the PDP-8 guaranteed it would not displace mainframes entirely, but the low price helped it become a massive success with over 50,000 units sold over a fifteen-year period.  Many of these machines were sold under a new business model in which DEC would act as an original equipment manufacturer (OEM) by selling a PDP-8 to another company that would add its own software and peripheral hardware.  This company would then sell the package under its own name and take responsibility for service and maintenance.  Before long, OEM arrangements grew to represent fifty percent of DEC’s computer sales while allowing DEC to keep its costs down by farming out labor-intensive tasks like software creation.  As DEC rode the success of the PDP-8, revenues climbed from $15 million in 1965 to almost $23 million in 1966 to $39 million in 1967, while profits increased sixfold between 1965 and 1967 to $4.5 million.

The Nova


The Data General Nova, a minicomputer that combined an incredibly small size with an incredibly cheap price

The success of the PDP-8 opened up a whole new market for small, cheap machines that soon gained the designation “minicomputers.”  With IBM and most of its competitors remaining focused on full-sized mainframes, however, this market was largely populated by newcomers to the computer industry.  Hewlett-Packard, the large West Coast electronics firm, first offered to buy DEC and then went into competition with its own minicomputer line.  Another West Coast electronics firm, Varian Associates, also entered the fray, as did an array of start-ups like Wang Laboratories and Computer Control Company, which was quickly purchased by Honeywell.  By 1970, over seventy companies were manufacturing minicomputers, and a thriving high-technology sector had emerged along Route 128 in the suburbs of Boston.  DEC continued to be the leader in the field, but soon faced some of its most serious competition from within the company itself.

Ed de Castro had brought great success to DEC by designing the PDP-8, but he was not particularly happy at the company.  The Silicon Valley concept of rewarding engineering talent with generous stock options did not yet exist, so while DEC had gone public in 1966, only senior executives reaped the benefits while de Castro, for all the value he added to the company, had to make do with an engineer’s salary of around $12,000 a year.  Furthermore, de Castro had hoped to be placed in charge of the PDP-8 product line, but Ken Olsen refused him.  Sensing de Castro was unhappy and not wanting to lose such a talent, DEC executive Nick Mazzarese hoped to placate de Castro by giving him charge of a new project to define the company’s next-generation successor to the PDP-8.

Although the PDP-8 was only two years old by the time de Castro turned to designing a followup in 1967, the computer market had changed drastically.  The integrated circuit was by now well established and promised significant increases in performance alongside simultaneous reductions in size and cost.  Furthermore, the dominance of the System/360 had caused a shift from a computer architecture based on multiples of six bits to one based on multiples of the 8-bit byte, which remains the standard in the computer industry to this day.  DEC’s competitors in the minicomputer space were therefore focusing on creating 16-bit machines, and the 12-bit PDP-8 looked increasingly obsolete in comparison.

In late 1967, de Castro and fellow engineers Henry Burkhardt and Dick Sogge unveiled an ambitious computer architecture designed to keep DEC on top of the minicomputer market well into the 1970s.  Dubbed the PDP-X, de Castro’s system was built around medium-scale integration circuits and — like the System/360 — would offer a range of power and price options all enjoying software and peripheral compatibility.  Furthermore, while the base architecture would be 16-bit, the PDP-X was designed to be easily configurable for 32-bit technology, allowing customers to upgrade as their needs grew over time without having to redo all their software or buy all new hardware.  Rather than being just a replacement for the PDP-8, the PDP-X was positioned as a product that could supplant DEC’s entire existing computer line.

But the PDP-X was too ambitious for DEC.  Olsen still remembered the failure of the PDP-6 project, and he was horrified when de Castro told him that the PDP-X would be an even bigger undertaking than that computer.  Worse, de Castro was known for bucking DEC management practices and doing things his own way, so he had butted heads with nearly everyone on the company’s Operations Committee while simultaneously alienating nearly every product line manager by proposing to replace all of their products.  Unlike Tom Watson Jr., who bet his company on an integrated product line and came to dominate the mainframe industry as a result, Olsen could not bring himself to pledge so many resources to a single project.  DEC turned the PDP-X down.

This was the last straw for de Castro.  He had long been interested in business — witness his brief stint at Harvard — and he had long chafed under DEC management.  He had also toyed with the idea of establishing his own company in the past, and with the Route 128 tech corridor taking off, there was plenty of venture money to be had for a computer startup.  Therefore, de Castro brought in his former boss in custom products, Pat Greene, to run the prospective company and recruited Herb Richman, a Fairchild salesman from whom he had purchased circuits, to handle marketing.  Meanwhile, he began designing a new 8-bit computer with Burkhardt and Sogge before actually leaving DEC.  After the group initially garnered little interest from venture capitalists, Richman placed de Castro in touch with George Cogar, co-founder of a company called Mohawk Data Sciences, who agreed to become the lead investor in what turned out to be $800,000 in financing.

In early 1968, the group was finally ready to leave DEC, but Pat Greene got cold feet and appeared ready to back out, uncomfortable with the work the group was doing behind Ken Olsen’s back.  Therefore, de Castro, Burkhardt, and Sogge waited until April 15, when Greene was out of the country on a business trip to Japan, to resign and officially establish Data General.  When Greene returned from Japan, he turned over all materials he had related to the new company to Olsen, including the plans for the 8-bit computer the three engineers had been secretly building at DEC.  Olsen felt betrayed and carried an enmity for Data General for decades, convinced de Castro had stolen DEC technology when he departed.  Despite this belief, however, DEC never sued.

In 1969, de Castro, Burkhardt, and Sogge released their first computer, the Data General Nova.  Having quickly abandoned their 8-bit plans after leaving DEC, the trio designed the Nova using medium-scale integration circuits so that the entire computer fit on just two printed circuit boards: one containing the 16-bit CPU and the other containing various support systems.  By fitting all the circuitry on only two boards with minimal wiring, Data General was able to significantly undercut the PDP-8 on cost while simultaneously making the system easier to manufacture and therefore more reliable.  With these savings, Data General was able to offer the Nova at the extremely low price of $3,995, though practically speaking, the computer was essentially useless without also buying a 4K core memory expansion, which pushed the price up to around $7,995.  Still, this was an unheard-of price for a fully functional computer and spurred brisk sales.  It also piqued the interest of a young engineer recently graduated from the University of Utah who thought it just might be possible to use the Nova to introduce the Spacewar! game so popular in certain university computer labs to the wider world.


Historical Interlude: From the Mainframe to the Minicomputer Part 2, IBM and the Seven Dwarfs

The computer began life in the 1940s as a scientific device designed to perform complex calculations and solve difficult equations.  In the 1950s, the United States continued to fund scientific computing projects at government organizations, defense contractors, and universities, many of them based around the IAS architecture derived from the EDVAC and created by John von Neumann’s team at Princeton.  Some of the earliest for-profit computer companies emerged out of this scientific work, such as the previously discussed Engineering Research Associates; the Hawthorne, California-based Computer Research Corporation, which spun out of a Northrop Aircraft project to build a computer for the Air Force in 1952; and the Pasadena-based ElectroData Corporation, which spun out of the Consolidated Engineering Corporation that same year.  All of these companies remained fairly small and did not sell many computers.

Instead, it was Remington Rand that identified the future path of computing when it launched the UNIVAC I, which was adopted by businesses to perform data processing.  Once corporate America understood the computer to be a capable business machine and not just an expensive calculator, a wide array of office equipment and electronics companies entered the computer industry in the mid 1950s, often buying out the pioneering computer startups to gain a foothold.  Remington Rand dominated this market at first, but as discussed previously, IBM soon vaulted ahead as it acquired computer design and manufacturing expertise participating in the SAGE project and unleashed its world-class sales and service organizations.  Remington Rand attempted to compensate by merging with the Sperry Corporation, which had both a strong relationship with the military and a more robust sales force, to form Sperry Rand in 1955, but the company never seriously challenged IBM again.

While IBM maintained its lead in the computer industry, however, by the beginning of the 1960s the company faced threats to its dominance at both the low end and the high end of the market from innovative machines based around new technologies like the transistor.  Fearing these new challengers could significantly damage IBM, Tom Watson Jr. decided to bet the company on an expensive and technically complex project to offer a complete line of compatible computers that could not only be tailored to a customer’s individual needs, but could also be easily modified or upgraded as those needs changed over time.  This gamble paid off handsomely, and by 1970 IBM controlled well over seventy percent of the market, with most of the remainder split among a group of competitors dubbed the “seven dwarfs” due to their minuscule individual market shares.  In the process, IBM succeeded in transforming the computer from a luxury item only operated by the largest firms into a necessary business appliance as computers became an integral part of society.

Note: Yet again we have a historical interlude post that summarizes key events outside of the video game industry that nevertheless had a significant impact upon it.  The information in this post is largely drawn from Computer: A History of the Information Machine by Martin Campbell-Kelly and William Aspray, A History of Modern Computing by Paul Ceruzzi, Forbes Greatest Technology Stories: Inspiring Tales of the Entrepreneurs and Inventors Who Revolutionized Modern Business by Jeffrey Young, IBM’s Early Computers by Charles Bashe, Lyle Johnson, John Palmer, and Emerson Pugh, and Building IBM: Shaping an Industry and Its Technology by Emerson Pugh.

IBM Embraces the Transistor


The IBM 1401, the first mainframe to sell over 10,000 units

Throughout most of its history in computers, IBM has been known more for evolution than revolution.  Rarely first with a new concept, IBM excelled at building designs based around proven technology and then turning its sales force loose to overwhelm the competition.  Occasionally, however, IBM engineers have produced important breakthroughs in computer design.  Perhaps none of these were more significant than the company’s invention of the disk drive.

On the earliest computers, mass data storage was accomplished through two primary methods: magnetic tape or magnetic drums.  Tape could hold a large amount of data for the time, but it could only be read serially, and it was a fragile medium.  Drums were more durable and had the added benefit of being random access — that is, any point of data on the drum could be read at any time — but they were low capacity and expensive.  As early as the 1940s, J. Presper Eckert had explored using magnetic disks rather than drums, which would be cheaper and feature a greater storage capacity due to a larger surface area, but there were numerous technical hurdles that needed to be ironed out.  Foremost among these was the technology to read the disks.  A drum memory array used rigid read-write heads that could be readily secured, though at high cost.  A disk system required a far more delicate head positioned extremely close to the disk surface, and the constant spinning of the disk created a high risk that the head would make contact with and damage it.

The team that finally solved these problems at IBM worked not at the primary R&D labs in Endicott or Poughkeepsie, but rather at a relatively new facility in San Jose, California, led by IBM veteran Reynold Johnson.  The San Jose lab had been established in 1952 as an advanced technologies research center free of the influence of the IBM sales department, which had often shut down projects with no immediate practical use.  One of the lab’s first projects was to improve storage for IBM’s existing tabulating equipment.  This task fell to a team led by Arthur Critchlow, who decided based on customer feedback to develop a new random access solution that would allow IBM’s tabulators and low-end computers to be useful not only for data processing, but also for more complicated jobs like inventory management.  After testing a wide variety of memory solutions, Critchlow’s team settled on the magnetic disk as the only viable option, partially inspired by an article published in August 1952 describing a similar project at the National Bureau of Standards.

To solve the head problem on the drive, Critchlow’s team attached a compressor to the unit that would pump a thin layer of air between the disk and the head.  Later models would take advantage of a phenomenon known as the “boundary layer,” in which the fast motion of the disks generated the air cushion on its own.  After experimenting with a variety of head types and positions throughout 1953 and 1954, the team was ready to complete a final design.  Announced in 1956 as the Model 305 Disk Storage Unit and later renamed RAMAC (for Random Access Memory Accounting Machine), IBM’s first disk drive consisted of fifty 24-inch diameter aluminum disks rotating at 1200 rpm with a storage capacity of five million characters.  Marketed as an add-on to the IBM 650, RAMAC revolutionized data processing by eliminating the time-consuming process of manually sorting information and provided the first compelling reason for small and mid-sized firms to embrace computers and eliminate electro-mechanical tabulating equipment entirely.


The IBM 7090, the company’s first transistorized computer

In August 1958, IBM introduced its latest scientific computer, the IBM 709, which improved on the functionality of the IBM 704.  The 709 continued to depend on vacuum tubes, however, even as competitors were starting to bring the first transistorized computers to market.  While Tom Watson, Jr. and his director of engineering, Wally McDowell, were both excited by the possibilities of transistors from the moment they first learned about them and as early as 1950 charged Ralph Palmer’s Poughkeepsie laboratory with working on the devices, individual project managers continued to have the final authority in choosing what parts to use in their machines, and many of them continued to fall back on the more familiar vacuum tube.  In the end, Tom Watson, Jr. had to issue a company-wide mandate in October 1957 that transistors were to be incorporated into all new projects.  Even before that mandate, Palmer had felt that IBM needed a massive project to push its solid-state designs forward, something akin to what Project SAGE had done for IBM’s efforts with vacuum tubes and core memory.  He therefore teamed with Steve Dunwell, who had spent part of 1953 and 1954 in Washington, D.C. assessing government computing requirements, to propose a high-speed computer tailored to the ever-increasing computational needs of the military-industrial complex.  A contract was eventually secured with the National Security Agency, and IBM approved “Project Stretch” in August 1955, which was formally established in January 1956 with Dunwell in charge.

Project Stretch experienced a long, difficult, and not completely successful development cycle, but it did achieve Palmer’s goal of greatly improving IBM’s solid-state capabilities, with particularly important innovations including a much faster core memory and a “drift transistor” that was faster than the surface-barrier transistor used in early solid-state computing projects like the TX-0.  As work on Stretch dragged on, however, these advances were first introduced commercially through another product.  In response to Sputnik, the United States Air Force quickly initiated a new Ballistic Missile Early Warning System (BMEWS) project that, like SAGE, would rely on a series of linked computers.  The Air Force mandated, however, that these computers incorporate transistors, so Palmer offered to build a transistorized version of the 709 to meet the project’s needs.  The resulting IBM 7090 Data Processing System, deployed in November 1959 as IBM’s first transistorized computer, provided a six-fold increase in performance over the 709 at only one-third additional cost.  In 1962, an upgraded version dubbed the 7094 was released with a price of roughly $2 million.  Both computers were well-received, and IBM sold several hundred of them.

Despite the success of its mainframe computer business, IBM in 1960 still derived the majority of its sales from the traditional punched-card business.  While some larger organizations were drawn to the 702 and 705 business computers, their price kept them out of reach of the majority of IBM’s business customers.  Some of these organizations had embraced the low-cost 650 as a data processing solution, leading to over 800 installations of the computer by 1958, but it was actually more expensive and less reliable than IBM’s mainline 407 electric accounting machine.  The advent of the transistor, however, finally provided the opportunity for IBM to leave its tabulating business behind for good.

The impetus for a stored-program computer that could displace traditional tabulating machines initially came from Europe, where IBM did not sell its successful 407 due to import restrictions and high tooling costs.  In 1952, the French competitor Compagnie des Machines Bull introduced a new calculating machine, the Bull Gamma 3, that used delay-line memory to provide greater storage capacity at a cheaper price than IBM’s electronic calculators and could be joined with a card reader to create a faster accounting machine than anything IBM offered in the European market.  Therefore, IBM’s French and German subsidiaries began lobbying for a new accounting machine to counter this threat.  This led to the launch of two projects in the mid-1950s: the Modular Accounting Calculator (MAC) development project in Poughkeepsie that birthed the 608 electronic calculator and the expensive and relatively unsuccessful 7070 transistorized computer, and the Worldwide Accounting Machine (WWAM) project run out of France and Germany to create an improved traditional accounting machine for the European market.

While the WWAM project had been initiated in Europe, it was soon reassigned to Endicott when the European divisions proved unable to come up with an accounting machine that could meet IBM’s cost targets.  To solve this problem, Endicott engineer Francis Underwood proposed that a low-cost computer be developed instead.  Management approved this concept in early 1958 under the name SPACE — for Stored Program Accounting and Calculating Equipment — and formally announced the product in October 1959 as the IBM 1401 Data Processing System.  With a rental cost of only $2,500 a month (roughly equivalent to a purchase price of $150,000), the transistorized 1401 proved much faster and more reliable than an IBM 650 at a fraction of the cost and was only slightly more expensive than a mid-range 407 accounting machine setup.  More importantly, it shipped with a new chain printer that could output 600 lines per minute, far more than the 150 lines per minute produced by the 407, which relied on obsolete prewar technology.  When the 1401 first went on sale in 1960, IBM projected that it would sell roughly 1,000 units over the machine’s entire lifetime, but its combination of power and price proved irresistible, and by the end of 1961 over 2,000 machines had already been installed.  IBM would eventually deploy 12,000 1401 computers before the model was officially withdrawn in 1971.  Powered by the success of the 1401, IBM’s computer sales finally equaled the sales of punch card products in 1962 and then quickly eclipsed them.  No computer model had ever approached the success of the 1401 before, and as IBM rode the machine to complete dominance of the mainframe industry in the early 1960s, the powder-blue casing of the machine soon inspired a new nickname for the company: Big Blue.

The Dwarfs


The Honeywell 200, which competed with IBM’s 1401 and threatened to destroy its low-end business

In the wake of Remington Rand’s success with the UNIVAC I, more than a dozen old-line firms flocked to the new market.  Companies like Monroe Calculating, Bendix, Royal, Underwood, and Philco rushed to provide computers to the business community, but one by one they fell by the wayside.  Of these firms, Philco probably stood the best chance of being successful due to its invention of the surface barrier transistor, but while its Transac S-1000 — which began life in 1955 as an NSA project called SOLO to build a transistorized version of the UNIVAC 1103 — and S-2000 computers were both capable machines, the company ultimately decided it could not keep up with the fast pace of technological development and abandoned the market like all the rest.  By 1960, only five established companies and one computer startup joined Sperry Rand in attempting to compete with IBM in the mainframe space.  While none of these firms ever succeeded in stealing much market share from Big Blue, most of them found their own product niches and deployed some capable machines that ultimately forced IBM to rethink some of its core computer strategies.

Of the firms that challenged IBM, electronics giants GE and RCA were the largest, with revenues far exceeding those of the computer industry’s market leader, but in a way their size worked against them.  Since neither computers nor office equipment were among either firm’s core competencies, nor integral to either firm’s future success, they never fully committed to the business and therefore never experienced real success.  Unsurprisingly, they were the first of the seven dwarfs to finally call it quits, with GE selling off its computer business in 1970 and RCA following suit in 1971.  Burroughs and NCR, the companies that had long dominated the adding machine and cash register businesses respectively, both entered the market in 1956 after buying out a small startup firm — ElectroData and Computer Research Corporation respectively — and managed to remain relevant by creating computers specifically tailored to their preexisting core customers, the banking sector for Burroughs and the retail sector for NCR.  Sperry Rand ended up serving niche markets as well after failing to compete effectively with IBM, experiencing success in fields such as airline reservation systems.  The biggest threat to IBM’s dominance in this period came from two Minnesota companies: Honeywell and Control Data Corporation (CDC).

Unlike the majority of the companies that persisted in the computer industry, Honeywell came not from the office machine business, but from the electronic control industry.  In 1883, a man named Albert Butz created a device called the “damper flapper” that would sense when a house was becoming cold and cause the flapper on a coal furnace to rise, thus fanning the flames and warming the house.  Butz established a company that did business under a variety of names over the next few years to market his innovation, but he had no particular acumen for business.  In 1891, William Sweatt took over the company and increased sales through door-to-door selling and direct marketing.  In 1909 the company introduced the first controlled thermostat, sold as the “Minnesota Regulator,” and in 1912 Sweatt changed the name of the company to the Minneapolis Heat Regulator Company.  In 1927, a rival firm, Mark C. Honeywell’s Honeywell Heating Specialty Company of Wabash, Indiana, bought out Minneapolis Heat Regulator to form the Minneapolis-Honeywell Regulator Company with Honeywell as president and Sweatt as chairman.  The company continued to expand through acquisitions over the next decade and weathered the Great Depression relatively unscathed.

In 1941, Harold Sweatt, who had succeeded Honeywell as president in 1934, parlayed his company’s expertise in precision measuring devices into several lucrative contracts with the United States military, emerging from World War II as a major defense contractor.  Therefore, the company was approached by fellow defense contractor Raytheon to establish a joint computer subsidiary in 1954.  Incorporated as Datamatic Corporation the next year, the computer company became a wholly-owned subsidiary of Honeywell in 1957 when Raytheon followed so many other companies in exiting the computer industry.  Honeywell delivered its first mainframe, the Datamatic 1000, that same year, but the computer relied on vacuum tubes and was therefore already obsolete by the time it hit the market.  Honeywell temporarily withdrew from the business and went back to the drawing board.  After IBM debuted the 1401, Honeywell triumphantly returned to the business with the H200, which not only took advantage of the latest technology to outperform the 1401 at a comparable price, but also sported full compatibility with IBM’s wildly successful machine, meaning companies could transfer their existing 1401 programs without needing to make any adjustments.  Announced in 1963, the H200 threatened IBM’s control of the low-end of the mainframe market.


William Norris (l) and Seymour Cray, the principal architects of the Control Data Corporation

While Honeywell chipped away at IBM from the bottom of the market, computer startup Control Data Corporation (CDC) — the brainchild of William Norris — threatened to do the same from the top.  Born in Red Cloud, Nebraska, and raised on a farm, Norris became an electronics enthusiast at an early age, building mail-order radio kits and becoming a ham radio operator.  After graduating from the University of Nebraska in 1932 with a degree in electrical engineering, Norris was forced to work on the family farm for two years due to a lack of jobs during the Depression before joining Westinghouse in 1934 to work in the sales department of the company’s x-ray division.  Norris began doing work for the Navy’s Bureau of Ordnance as a civilian in 1940 and enjoyed the work so much that he joined the Naval Reserve and was called to duty at the end of 1941 at the rank of lieutenant commander.  Norris served as part of the CSAW codebreaking operation and became one of the principal advocates for and co-founders of Engineering Research Associates after the war.  By 1957, Norris was feeling stifled by the corporate environment at ERA parent company Sperry Rand, so he left to establish CDC in St. Paul, Minnesota.

Norris provided the business acumen at CDC, but the company’s technical genius was a fellow engineer named Seymour Cray.  Born in Chippewa Falls, Wisconsin, Cray entered the Navy directly after graduating from high school in 1943, serving first as a radio operator in Europe before being transferred to the Pacific theater to participate in code-breaking activities.  After the war, Cray attended the University of Minnesota, graduated with an electrical engineering degree in 1949, and went to work for ERA in 1951.  Cray immediately made his mark by leading the design of the UNIVAC 1103, one of the first commercially successful scientific computers, and soon gained a reputation as an engineering genius able to create simple, yet fast computer designs.  In 1957, Cray and several other engineers followed Norris to CDC.

Unlike some of the more conservative engineers at IBM, Cray understood the significance of the transistor immediately and worked to quickly incorporate it into his computer designs.  The result was CDC’s first computer, the 1604, which was first sold in 1960 and significantly outperformed IBM’s scientific computers.  Armed with Cray’s expertise in computer design, Norris decided to concentrate on building the fastest computers possible and selling them to the scientific and military-industrial communities where IBM’s sales force exerted relatively little influence.  As IBM’s Project Stretch floundered — never meeting its performance targets after being released as the IBM 7030 in 1961 — Cray moved forward with his plans to build the fastest computer yet designed.  Released as the CDC 6600 in 1964, Cray’s machine could perform an astounding three million operations per second, three times as many as the 7030 and more than any other machine would be able to perform until 1969, when another CDC machine, the 7600, outpaced it.  Dubbed a supercomputer, the 6600 became the flagship product of a series of high-speed scientific computers that IBM proved unable to match.  While Big Blue was ultimately forced to cede the top of the market to CDC, however, by the time the 6600 launched the company was in the final stages of developing a product line that would extend its dominance over the mainframe business and ensure competitors like CDC and Honeywell would be limited to niche markets.

System/360


The System/360 family of computers, which extended IBM’s dominance of the mainframe market through the end of the 1960s.

When Tom Watson Jr. finally assumed full control of IBM from his father, he inherited a corporate structure designed to collect as much power and authority in the hands of the CEO as possible.  Unlike Watson Sr., Watson Jr. preferred decentralized management with a small circle of trusted subordinates granted the authority to oversee the day-to-day operation of IBM’s diverse business activities.  Therefore, Watson overhauled the company in November 1956, paring down the number of executives reporting directly to him from seventeen to just five, each of whom oversaw multiple divisions with the new title of “group executive.”  He also formed a Corporate Management Committee consisting of himself and the five group executives to make and execute high-level decisions.  While the responsibilities of individual group executives would change from time to time, this new management structure remained intact for decades.

Foremost among Watson’s new group executives was a vice president named Vin Learson.  A native of Boston, Massachusetts, T. Vincent Learson graduated from Harvard with a degree in mathematics in 1935 and joined IBM as a salesman, where he quickly distinguished himself. In 1949, Learson was named sales manager of IBM’s Electric Accounting Machine (EAM) Division, and he rose to general sales manager in 1953.  In April 1954, Tom Watson, Jr. named Learson the director of Electronic Data Processing Machines with a mandate to solidify IBM’s new electronic computer business.  After guiding early sales of the 702 computer and establishing an advanced technology group to incorporate core memory and other improvements into the 704 and 705 computers, Learson received another promotion to vice president of sales for the entire company before the end of the year.  During Watson’s 1956 reorganization, he named Learson group executive of the Military Products, Time Equipment, and Special Engineering Products divisions.

During the reorganization, IBM’s entire computer business fell under the new Data Processing Division overseen by group executive L.H. LaMotte.  As IBM’s computer business continued to grow and diversify in the late 1950s, however, it grew too large and unwieldy to contain within a single division, so in 1959 Watson split the operation in two by creating the Data Systems Division in Poughkeepsie, responsible for large systems, and the General Products Division, which took charge of small systems like the 650 and 1401 and incorporated IBM’s other laboratories in Endicott, San Jose, Burlington, Vermont, and Rochester, Minnesota.  Watson then placed these two divisions, along with a new Advanced Systems Development Division, under Learson’s control, believing him to be the only executive capable of propelling IBM’s computer business forward.


Vin Learson, the IBM executive who spearheaded the development of the System/360

When Learson inherited the Data Systems and General Products Divisions, he was thrust into the middle of an all-out war for control of IBM’s computer business.  The Poughkeepsie Laboratory had been established specifically to exploit electronics after World War II and prided itself on being at the cutting edge of IBM’s technology.  The Endicott Laboratory, the oldest R&D division at the company, had often been looked down upon for clinging to older technology, yet by producing both the 650 and the 1401, Endicott was responsible for the majority of IBM’s success in the computer realm.  By 1960, both divisions were looking to update their product lines with more advanced machines.  That September, Endicott announced the 1410, an update to the 1401 that maintained backwards compatibility.  At the same time, Poughkeepsie was hard at work on a new series of four compatible machines designed to serve a variety of business and scientific customers under the 8000 series designation.  Learson, however, wanted to unify the product line from the very low end represented by the 1401 to the extreme high end represented by the 7030 and the forthcoming 8000 computers.  By achieving full compatibility in this manner, IBM could take advantage of economies of scale to drive down the price of individual computer components and software development while also standardizing peripheral devices and streamlining the sales and service organizations that would no longer have to learn multiple systems.  While Learson’s plan was sound in theory, however, forcing two organizations that prided themselves on their independence and competed with each other fiercely to work together would not be easy.

Learson relied heavily on his power as a group executive to transfer employees across both divisions to achieve project unity.  First, he moved Bob Evans, who had been the engineering manager for the 1401 and 1410, from Endicott to Poughkeepsie as the group’s new systems development manager.  Already a big proponent of compatibility, Evans unsurprisingly recommended that the 8000 project be cancelled and a cohesive product line spanning both divisions be initiated in its place.  The lead designer of the 8000 series, Frederick Brooks, vigorously opposed this move, so Learson replaced Brooks’s boss with another ally, Jerrier Haddad, who had led the design of the 701 and recently served as the head of Advanced Systems Development.  Haddad sided with Evans and terminated the 8000 project in May 1961.  Strong resistance remained in some circles, however, most notably from General Products Division head John Haanstra, so in October 1961, Learson assembled a task group called SPREAD (Systems Programming, Research, Engineering, and Development) consisting of thirteen senior engineering and marketing managers to determine a long-term strategy for IBM’s data processing line.

On December 28, the SPREAD group delivered its final proposal to the executive management committee.  In it, they outlined a series of five compatible processors representing a 200-fold range in performance.  Rather than incorporate the new integrated circuit, the group proposed a proprietary IBM design called Solid Logic Technology (SLT), in which the discrete components of the circuit were mounted on a single ceramic substrate, but were not fully integrated.  By combining the five processors with SLT circuits and core memories of varying speeds, nineteen computer configurations would be possible that would all be fully compatible and interchangeable and could be hooked up to 40 different peripheral devices.  Furthermore, after surveying the needs of business and scientific customers, the SPREAD group realized that other than floating-point capability for scientific calculations, the needs of the two groups were nearly identical, so they chose to unify the scientific and business lines rather than market different models for each.  Codenamed the New Product Line (NPL), the SPREAD proposal would allow IBM customers to buy a computer that met their current needs and then easily upgrade or swap components as their needs changed over time at a fraction of the cost of a new system without having to rewrite all their software or replace their peripheral devices.  While not everyone was convinced by the presentation, Watson ultimately authorized the NPL project.

The NPL project was perhaps the largest civilian R&D operation ever undertaken to that point.  Development costs alone were $500 million, and when tooling, manufacturing, and other expenses were taken into account, the cost was far higher.  Design of the five processor models was spread over three facilities, with Poughkeepsie developing the three high-end systems, Endicott developing the lowest-end system, and a facility in Hursley, England, developing the other system.  At the time, IBM manufactured all its own components as well, so additional facilities were charged with churning out SLT circuits, core memories, and storage systems.  To assemble all the systems, IBM invested in six new factories.  In all, IBM spent nearly $5 billion to bring the NPL to market.

To facilitate the completion of the project, Watson elevated two executives to new high level positions: Vin Learson assumed the new role of senior vice president of sales, and Watson’s younger brother, Arthur, who for years had run IBM’s international arm, the World Trade Corporation, was named senior vice president of research, development, and manufacturing.  This new role was intended to groom the younger Watson to assume the presidency of IBM one day, but the magnitude of the NPL project coupled with Watson’s inexperience in R&D and manufacturing ultimately overwhelmed him.  As the project fell further and further behind schedule, Learson ultimately had to replace Arthur Watson in order to see the project through to completion.  Therefore, it was Learson who assumed the presidency of IBM in 1966 while Watson assumed the new and largely honorary role of vice chairman.  His failure to shepherd the NPL project ended any hope Arthur Watson had of continuing the Watson family legacy of running IBM, and he ultimately left the company in 1970 to serve as the United States ambassador to France.

In late 1963, IBM began planning the announcement of its new product line, which now went by the name System/360 — a name chosen because it represented all the points of a compass and emphasized that the product line would fill the needs of all computer users.  Even at this late date, however, acceptance of System/360 within IBM was not assured.  John Haanstra continued to push for an SLT upgrade to the existing 1401 line to satisfy low-end users, which other managers feared would serve to perpetuate the incompatibility problem plaguing IBM’s existing product line.  Furthermore, IBM executives struggled over whether to announce all the models at once and thus risk a significant drop in orders for older systems during the transition period, or phase in each model over the course of several years.  All debate ended when Honeywell announced the H200.  Faced with losing customers to more advanced computers fully compatible with IBM’s existing line, Watson decided in March 1964 to scrap the improved 1401 and launch the entire 360 product line at once.

On April 7, 1964, IBM held press conferences in sixty-three cities across fourteen countries to announce the System/360 to the world.  Demand soon far exceeded supply as within the first two years that System/360 was on the market IBM was only able to fill roughly 4,500 of 9,000 orders.  Headcount at the company rose rapidly as IBM rushed to bring new factories online in response.  In 1965, when actual shipments of the System/360 were just beginning, IBM controlled 65 percent of the computer market and had revenues of $2.5 billion.  By 1967, as IBM ramped up to meet insatiable 360 demand, the company employed nearly a quarter of a million people and raked in $5 billion in revenues.  By 1970, IBM had an installed base of 35,000 computers and held an ironclad grip on the mainframe industry with a market share between seventy and eighty percent; the next year company earnings surpassed $1 billion for the first time.

As batch processing mainframes, the System/360 line and its competitors did not serve as computer game platforms or introduce technology that brought the world closer to a viable video game industry.  System/360 did, however, firmly establish the computer within corporate America and solidified IBM’s place as a computing superpower while facilitating the continuing spread of computing resources and the evolution of computer technology.  Ultimately, this process would culminate in a commercial video game industry in the early 1970s.