Computer graphics changed our lives and the course of history; can it save the PC?

The history of computer graphics is part of America’s legacy of invention and innovation. Yet, the enthusiasm for lightweight computing platforms like tablets is reducing the demand for high-end processors across the computer industry. What does this mean for the future of computer graphics?

The threat from above

Computer graphics got its start in the late 1950s due to the Soviets setting off their first atomic bomb and scaring the hell out of the US military, something they still haven't gotten over. In order to detect as early as possible when nuclear-armed Russian bombers would come over the North Pole to bomb US cities (and, if they had an extra bomb or two, pulverize Canada on the way), the US developed the Distant Early Warning (DEW) line. The DEW line was a string of radar installations along the Arctic Circle that looked across the edge of the earth and could spot Soviet planes almost as they were taking off.

Soviet Tupolev TU 85 long-range bomber (circa 1950) unwittingly helped propel the development of computer graphics. (Photo courtesy of the Virtual Aircraft Museum)

The radar stations fed their data to a facility at MIT (which was spun off into the MITRE Corporation in 1958 to manage federally funded R&D). In that facility was Whirlwind, one of the first US digital computers, originally designed to be a flight simulator for the Navy. It evolved into the Semi-Automatic Ground Environment (SAGE) system, which had a dozen 24-inch circular vector scopes that tracked anything in the northern sky. Out of that environment, Nebraska-born Ivan Sutherland developed his famous Sketchpad software, and Ken Olsen went on to found Digital Equipment Corporation and build the first PDP computer. Molecular modeling was born there, and so was a branch of the CAD industry, and so were light pens. Almost inevitably, because Whirlwind was capable of displaying real-time text and graphics, it inspired the development of the first graphics game when Charles Adams and Jack Gilmore created a simple bouncing ball game. All this, just because the US was worried about a Russian bomber. The irony of the story is that by the time SAGE became fully functional (the US even commissioned a backup system), the Soviets had moved to ICBMs, against which SAGE and the DEW line would be useless. However, that useless system ran until the 1980s and became the heart of today's air-traffic control system.

The folks at SAGE, MIT, and MITRE (which also participated in the development of the Internet) started many of the new minicomputer companies, like Prime, DEC, Data General, Wang, and others. They also supplied founders and early staff for first-generation CG firms including Computervision, Apollo, PTC, and others.

When the PC came to life in the early 1980s, it was destined to be the next-generation CG platform, and hundreds of CG programmers, circuit designers, and scientists began adapting minicomputer- and mainframe-based graphics to the PC. It didn't happen fast, and often it wasn't too impressive, but it was unstoppable. The PC probably hit its peak as the consumer CG consumption machine in 2006. It is still, and will be for a long time, the number one CG development platform, but developers are a much smaller population than consumers.

Today we are bathed in CG day and night. It's on our phones, our TVs, and of course our PCs and tablets; it's in the theaters, throughout hospitals, in cockpits, and even in ATMs and point-of-sale (POS) systems. All of those machines and more display CG that was created on PCs or workstations. But it was the large consumer market, with its appetite for games and for video and photo editing, that provided the economy of scale to support the gigantic R&D budgets and the push for more efficient manufacturing. The special effects you see in movies and on TV wouldn't be possible without the massive market for GPUs driven by PCs.

The threat from below

Fast-forward to today. A big chunk of the consumer base has abandoned the PC. They have found they can get the same, if not better, level of visual enjoyment from a tablet as they used to enjoy on a PC. And the tablet is lighter, more portable, and less expensive. No, a tablet can't totally replace a PC, but it is replacing a lot of PCs, maybe as many as 25-30%. Tablets use smaller GPUs, and those are not just scaled-down versions of PC GPUs; they have different architectures and different development pipelines. That means a company that designs a high-end GPU has to design a separate, additional GPU if it wants to participate in the tablet market. So now you have a double whammy: the customer base for expensive-to-develop high-end GPUs is shrinking, which will drive up GPU costs and/or push out the introduction cycles, and the customer base for tablet GPUs is not big enough to drive down the costs for those parts. Sure, there are crossover benefits: volume buying of silicon (from in-house or merchant fabs) and some cross-fertilization of architectural concepts.

The next two years will set the stage for the future of CG. If GPU development is hitting an asymptote due to fragmentation of the customer base, then the payback period for a new GPU will stretch from 18 months to 36 or more. Some people would welcome that, thinking it would take some of the pressure off keeping up. But they will find that's not the case; it will just shift the pressure point.

If GPU suppliers change the cadence of development, the development of CG effects will slow down and the costs will rise. Labor rates, like it or not, do go up over time. If there are no new engines to create new effects, or to make the current ones run faster, then it will cost more tomorrow to do the same things we are doing today; that's just the opposite of Moore's law.

Tablet GPU designers justify their development costs with the hope that they will also be able to tap into the explosive smartphone market. There are a lot of GPU suppliers, offering either systems on a chip (SoCs) or IP, chasing after that market. Depending upon whose sizing you choose to believe, the smartphone market is 650 to 700 million units, shared among SoC builders from China (four or five), the US (four), Korea (one), Japan (one or two), and Europe (one), plus two IP providers (with two more in the wings) and another US SoC builder trying to get into the smartphone market. That same set of suppliers, and maybe a couple more, is also after the 170 million or so tablets expected to ship in 2013.

Contrast that with the two CPU suppliers and the two GPU suppliers chasing the 366 million PCs.

We can’t go on like this

The PC does a lot of great things, and one of the great things it does is CG. It does CG better, faster, and cheaper, and uses less power, than any other device on a texel/watt/dollar basis. Period.

The Rollin’ Safari trailer for the FMX conference coming up in April in Germany.

But is that enough? Do enough people really care about CG to make the conscious decision to buy a PC? A lot of people do, that's for sure, but not as many as buy smartphones and tablets. So CG can't save the PC, and CG on all platforms will suffer. It's almost a case of be careful what you wish for (a portable note-taking, email-reading, movie-watching, music-listening device that fits in my pocket and costs $500).

The movies and other CG-based images you watch on your tablet won’t get any worse, and maybe what you have is good enough.

The advancements in CG, due largely to the amazingly powerful new GPUs that came out every year, may be looked back on (in my next history book) as the glory years of CG. This may be as good as it gets for a while. You can thank your tablet-love for that.