Come Fusion and come Sandy Bridge … the discrete GPU will continue to live on

New hybrid chipsets from Intel and AMD are aimed squarely at taking market share away from Nvidia and its powerful GPUs. Alex Herrera says pundits predicting the death of the GPU are missing the big picture.

By Alex Herrera
Senior Analyst, Jon Peddie Research

Nvidia CEO Jen-Hsun Huang shows off a pre-release copy of Fermi, the company's latest GPU.

Not so fast with the post-mortems for discrete graphics processing units (GPUs). The discrete GPU isn’t going to die, at least not any time soon, and surely not solely at the hands of new competition from the soon-to-be-launched hybrid CPU+GPU devices from Intel and AMD.

As we enter this new era of silicon graphics integration, Wall Street and technology pundits alike have been engaging in what’s become a popular pastime: questioning the viability of both the discrete GPU and its leading proponent, the x86-less Nvidia. Intel unveiled Sandy Bridge at September’s IDF, promising the biggest improvements in the CPU core since the first Core (with a capital C) generation back in 2006. At the same time, it’s setting some high expectations (or at least as high as one can without quoting many actual numbers) for what we might experience with Sandy Bridge’s integrated processor graphics (PG). JPR is estimating a 2-to-2.5X jump in performance over Westmere’s graphics, though of course mileage will vary.

And AMD’s been gradually taking the wraps off its first generation of Fusion parts, merging CPU and GPU into a combination it calls an APU (Accelerated Processing Unit). Llano, Zacate and Ontario (in order of higher to lower power) have all been fabbed, and we’ve seen Zacate run. Both Sandy Bridge and some subset of the first Fusion generation should be out in production this year, with OEMs ramping products as we enter 2011.

These events have many questioning the viability of the discrete GPU business, with the consensus being that either a) the discrete GPU is on its way to the grave, or b) the days of the low end of the discrete add-in card business are numbered, as CPU-integrated graphics will begin whittling down its numbers. Well, I’m convinced the former won’t happen, at least not in the foreseeable future and not solely based on the emergence of Fusion, Sandy Bridge and their descendants. And as far as triggering the erosion of the add-in card low end, well, it’s too late for that.

Fusion and Sandy Bridge can’t start killing off the low-end graphics card market, for the simple reason that the market’s already been decimated by its integrated graphics predecessors. Remember, it’s not the integration of the GPU that’s new, it’s the location: on the CPU rather than in the chipset. Chipset-integrated graphics have been around for years, and over that time have been steadily eating away at the discrete GPU’s hold on that market.

Actually, eating is an understatement; gorging is more like it. Delivering capable performance, though relatively lackluster compared to its discrete rivals, Intel’s chipset-integrated graphics processors (IGPs) have captured almost 55% of all hardware graphics units sold. Without IGPs as options, those 55% of buyers would have instead chosen lower-end add-in cards. It’s the same reason the mid-range of the consumer add-in card market has outsold the entry level for some time now. It just makes sense; if you’re going to buy a discrete card, you’re going to want one head and shoulders better than the free, integrated alternative.

Decline not certain

So Fusion and Sandy Bridge won’t kick off the decline of the low end, as that train left the station long ago. But while the erosion itself is nothing new, this new generation of CPU/GPU hybrids certainly holds the potential to accelerate its pace. Fusion and Sandy Bridge (and their successors) will both cut build costs even further and push performance up substantially, taking advantage of on-die synergies between CPU and GPU resources.

So then, will Fusion and Sandy Bridge be the first nails in the discrete GPU’s coffin? Will today’s minority share position turn into a more significant decline in absolute numbers and lead to its eventual demise? Well, despite the added pressure, the device won’t ever disappear completely, for one undeniable reason: the discrete GPU can always be engineered for superior performance, and there will always be users and applications that demand the most performance available.

Yes, the discrete GPU is at a distinct disadvantage to its integrated rival in several respects: it costs more, takes up more system real estate and draws more watts. But ironically, it’s those same traits that make it unbeatable in performance, should engineers choose to leverage them. GPU architects without (or at least with less) concern for size, power, frequency and fans are, all else equal, going to produce higher-performance products. They have in the past, with discrete parts from ATI and Nvidia exceeding Intel’s chipset-integrated solutions by a large margin, both in performance and features.
On top of that, consider the compromises CPU designers may need to make to accommodate the added GPU. In Westmere, for example, integrating the second die meant cutting the quad-core down to a dual-core. The state of efficient multi-threading in software may not be where it should be, but gamers and workstation users would just as soon have more cores than fewer.

And with the laws of physics not expected to change any time soon, the discrete solution will continue to outperform its integrated counterpart in the future. It will, provided there are customers prepared to pay the extra bucks for the bigger bang. And who among us will dish out the extra dollars to get the extra performance a discrete can offer? Well, the same customers that have in the past, and still do today: gamers and workstation users. The demand for a more realistic, immersive gaming experience doesn’t look to be waning. Gamers today choose discrete GPUs over integrated 9:1, as a recent Steam survey can attest.

And even as integrated solutions improve on the gaming experience, so will discretes. Got a system that can deliver full frame rates for all the visual effects an ISV’s been working on to date? Well, then there’s always more accurate physics to simulate, or how about rendering with full-blown global illumination? Or if that’s not enough, move on to tackle real-time ray tracing. There remains so much headroom in the effects ISVs want to create that there will always be games that highlight the superior capabilities of the discrete GPU.

More power! More power!

And workstation users? Well, their relentless quest for more throughput won’t fade either, as the relatively modest premium coughed up for a discrete solution is quickly paid back through higher productivity. For workstation buyers, the attach rate for discrete GPUs is even higher than for gamers. Scant few past models even allowed integrated graphics to ship in a workstation, and for those that did, the vast majority of buyers upgraded to a professional-brand add-in card. And don’t think it’s any different for area- and watt-challenged mobile workstations, as the discrete GPU attach rate for those has been a consistent 100%.

The CPU-integrated GPU isn’t going to do away with the discrete GPU, certainly not any time soon. I’m convinced, and so is the company with the most vested interest in seeing discrete GPUs fall by the wayside. Because even if Intel were to think or hope that its PGs will eventually kill off discrete graphics, it certainly doesn’t think Sandy Bridge will do the job. While it’s proud of the performance, it knows it can’t compete with a high-performance DX 11 GPU, especially in gaming and workstation applications. If it did, there’d be no need for the new SoC’s integrated PCI Express x16 interface, capable of driving not one but two discrete GPUs.

With a secure chunk of the computing base demanding the highest graphics performance money can buy, the discrete GPU will continue to have a role to play. But an important question remains: will that chunk be big enough, representing enough volume to sustain a viable business based on discrete GPUs alone? More specifically, will it be enough for the company most observers are placing between the biggest rock and the hardest place? Nvidia has gained the most from the rise of the modern GPU, and would logically have the most to lose were it to decline. Some investors, journalists and analysts have already written Nvidia’s epitaph, or at least are scaling back expectations for the company moving forward in this new era of silicon integration.

There’s no doubt, Nvidia is in a challenging situation. But it’s faced daunting challenges before (for example, pushing forward in the aftermath of NV1). Furthermore, this time it’s in a situation the company’s seen coming, and it has (actually, has had for some time) several irons in the fire in preparation for this very change in the graphics landscape. What the future holds for Nvidia specifically in the post-Fusion, post-Sandy Bridge world is worth a deeper look … for another day, another blog. Specifically, see my post “x86 isn’t do or die for Nvidia” at JonPeddie.com.