Nvidia’s Project Monterey may open the door to double duty for graphics workstations.
By Alex Herrera
Over the years, the balance of power between client and server has ebbed and flowed like the tide. In the 1980s, compute power resided overwhelmingly on minicomputers and mainframes, as IBM and DEC ruled the IT world. In the late ’80s and into the ’90s, the emergence of the workstation and the PC pushed computation onto desktops, spurred by the demand for rich graphics. Dumb terminals could display text, but as bit-mapped graphics came to the forefront with Windows 3.1, the demand for client-side horsepower shifted the balance of power to the desktop.
In the late ’90s and early ’00s came the “thin client” revolution, which argued once again that IT build and maintenance costs could be dramatically curtailed by centralizing compute capabilities in the datacenter. The desktop would get a minimal client with some graphics and networking capabilities and not much else. In the end, the thin-client trend manifested itself far more widely on paper than in enterprises’ hardware topologies. There were, and still are, corners of the IT space where thin clients make sense, but most clients today are still stand-alone capable. They increasingly rely on network-accessible content and storage (both LAN and WAN), but from a compute standpoint they remain largely self-sufficient, able to run applications without relying on any real-time compute assistance. This is most true among the hard-core workstation contingent.
Now we’re into the ’10s, and server-based computing is back, capturing mindshare in a big way. Haven’t seen that particular phrase, “server-based computing,” in IT vendors’ marketing collateral much? Well then, how about “cloud computing”? Thinking big picture, it’s not a whole lot different. Instead of pushing a request for computation out to the enterprise’s datacenter, it’s being pushed over the Internet to some compute service provider who does the heavy lifting, leaving local clients to perform I/O.
So will the trend of pushing computation back off the desk (and, increasingly, out of the enterprise entirely) be a lasting one? And specifically, what does it mean for the markets for workstations and professional graphics? It’s the visualization aspect of these two related spaces that can mean a different answer for professional computing than for other markets. Visual I/O is different from any other type in that it’s both compute-intensive and latency-sensitive. Performing the rendering remotely and then displaying it on the client with good quality, high frame rates, and low latency is hard to do, especially on a shared network that may not have quality of service (QoS) support.
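Some back-of-envelope arithmetic shows why. The short Python sketch below uses illustrative assumptions of our own choosing (a 1920×1200 display, 24-bit color, 30 frames per second, and a 100 Mbit/s link share) rather than any vendor’s figures:

```python
# Back-of-envelope arithmetic on remote display streams. The display size,
# frame rate, and link speed are illustrative assumptions, not measurements.

width, height = 1920, 1200   # assumed workstation-class display
bytes_per_pixel = 3          # 24-bit color
fps = 30                     # assumed target frame rate

raw_bps = width * height * bytes_per_pixel * 8 * fps
print(f"Uncompressed stream: {raw_bps / 1e9:.2f} Gbit/s")       # ~1.66 Gbit/s

link_bps = 100e6             # assumed share of a 100 Mbit/s office LAN
print(f"Compression ratio needed: {raw_bps / link_bps:.0f}:1")  # ~17:1

# Bandwidth is only half the problem: for a remote desktop to feel local,
# a commonly cited click-to-photon budget is on the order of 100 ms, which
# encode, transit, and decode must all fit inside.
```

Even before compression, the stream outruns the link by more than an order of magnitude, and every millisecond spent encoding and decoding eats into the interactivity budget.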
But remote rendering is a trend that’s undoubtedly happening. HP’s Remote Graphics, Teradici’s PC-over-IP (PCoIP), blade (a.k.a. rackmount) workstations, and Microsoft’s recent contribution, RemoteFX … it’s no coincidence that the industry’s interest and activity are rising on several fronts simultaneously. Remote rendering is getting another opportunity for two big reasons. First, the network infrastructure has reached a point where available bandwidth is high enough and latencies are more manageable. Second, silicon advancements have given both servers and simple clients the ability to perform very complex encoding and decoding of image data in real time. The algorithms Remote Graphics uses (and, we assume, similar schemes like RemoteFX) to encode heterogeneous visual media (e.g. synthetic graphics, 2D UIs, natural video, text) with high compression ratios and no-compromise quality could not have been implemented on anything very “thin” until recently.
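None of the vendors publish the details of those algorithms, but the general approach is well understood: classify each screen region by content type and encode it accordingly. The Python sketch below is a conceptual illustration of that idea only; the region types and codec labels are ours, not HP’s, Teradici’s, or Microsoft’s.

```python
# Conceptual sketch of content-aware screen encoding, the general approach
# behind protocols like Remote Graphics and RemoteFX. The region types and
# codec choices here are illustrative assumptions, not any vendor's design.

from dataclasses import dataclass

@dataclass
class Region:
    x: int
    y: int
    w: int
    h: int
    kind: str  # "text", "ui", "video", or "3d"

def choose_codec(region: Region) -> str:
    # Text and 2D UI elements compress well losslessly and are where
    # artifacts are most visible, so spend bits on fidelity there.
    if region.kind in ("text", "ui"):
        return "lossless"
    # Natural video tolerates lossy compression; synthetic 3D graphics sit
    # in between, so a lossy codec tuned for quality is a plausible choice.
    if region.kind == "video":
        return "lossy-video"
    return "lossy-high-quality"

def encode_frame(regions: list[Region]) -> list[tuple[Region, str]]:
    # A real encoder would also detect unchanged regions and skip them
    # entirely; that temporal culling is a large part of the bandwidth win.
    return [(r, choose_codec(r)) for r in regions]

frame = [Region(0, 0, 1920, 40, "ui"),
         Region(0, 40, 1280, 800, "3d"),
         Region(1280, 40, 640, 480, "video")]
for region, codec in encode_frame(frame):
    print(region.kind, "->", codec)
```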
Nvidia has been closely monitoring the still relatively small pockets of remote visualization, providing solutions like Quadro MXM graphics modules for blade workstations and the Tesla M2070Q graphics module for servers. And with its recently disclosed Quadro Virtual Graphics Technology (VGT), the company at the very least appears to be covering all its bases when it comes to remote rendering, whether that rendering is performed by a server in the local datacenter or by some cloud service halfway around the world.
Quadro VGT (also referred to by the code name Project Monterey) is still a technology under development in the labs rather than at work in the enterprise. Embedded in the driver, it puts another layer of abstraction between the client and the renderer, such that the renderer need not be in the same machine, or even on the same continent. For remote server or “cloud-based” rendering, Monterey delivers what Nvidia claims is “remote graphics with local look and feel.” Based on that description (and the limited details we’ve seen so far), Monterey from a functional perspective looks an awful lot like HP’s Remote Graphics, Teradici’s PCoIP, and Microsoft’s RemoteFX. It’s possible Monterey reaches production-level status before the end of 2011, and at that point we’ll have a better idea of what differentiation Nvidia can offer.
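Nvidia hasn’t published an API, but the architectural idea of a driver-level indirection is straightforward to sketch. In the hypothetical Python below, the interface, class, and method names are our illustration, not Nvidia’s:

```python
# A minimal sketch of the kind of indirection Quadro VGT is described as
# adding: the application talks to one rendering interface, and whether
# frames come from the local GPU or a remote machine is hidden behind it.
# All names here are hypothetical, not Nvidia's actual API.

from abc import ABC, abstractmethod

class Renderer(ABC):
    @abstractmethod
    def render(self, scene: dict) -> bytes:
        """Return an encoded frame for the given scene description."""

class LocalGPURenderer(Renderer):
    def render(self, scene: dict) -> bytes:
        # In a real driver this would dispatch to the local GPU.
        return b"frame-rendered-locally"

class RemoteRenderer(Renderer):
    def __init__(self, host: str):
        self.host = host  # datacenter blade or cloud service

    def render(self, scene: dict) -> bytes:
        # A real implementation would serialize the scene or command stream,
        # ship it to self.host, and receive a compressed frame back.
        return b"frame-rendered-on-" + self.host.encode()

def draw(renderer: Renderer, scene: dict) -> bytes:
    # Application code is identical either way; that indirection is what
    # lets the renderer live out of the machine, or off the continent.
    return renderer.render(scene)

print(draw(LocalGPURenderer(), {"mesh": "teapot"}))
print(draw(RemoteRenderer("render-farm.example.com"), {"mesh": "teapot"}))
```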
So is this new breed of remote rendering solutions going to mean workstations will be replaced en masse by thin clients? Except in niche pockets, it’s probably not in the cards. Rather, it’s more likely that we will continue to see workstation clients using remote super-renderers (e.g. big clusters of CPUs and GPUs) to render the most complex visualizations with no-corners-cut quality. In essence, this is how workflows like Hollywood final-frame production operate today.
But more than that, we’re likely to see workstations (and, most near and dear to Nvidia’s heart, Quadro- and Tesla-powered workstations) acting as those remote renderers, or even as one component in some big, abstracted, virtualized renderer. In the end, some shift in rendering from the client to the server (wherever that server is and however it’s abstracted) isn’t likely to reduce the need for GPU computation on the client. On the contrary, it holds the potential to increase demand on workstation clients that may double as remote renderers.
Alex Herrera is a senior analyst for Jon Peddie Research.