3D graphics have hit a tipping point. Vendors are reaching out to users to help drive innovation.
By Kathleen Maher
For the longest time it seemed as if the game industry was defining the evolution of 3D computer graphics, and in fact it was. Great advances have come as a result of trying to get the best graphics bang for the buck from the lowest-cost graphics processors possible.
Fourteen years ago S3 introduced texture compression in hardware, a literal game-changer for the industry. Since then, texture compression has become such a fundamental capability for computer graphics that the Khronos Group has called on its community to establish an open format for texture compression. As usual, there is plenty of legal folderol behind it (see "Time for 3D imagery to have its own codec.")
There have been many other breakthroughs driven by the once huge revenues associated with the game industry. Bump mapping, anti-aliasing, and frame-buffer optimization have all been tweaked to improve the performance of graphics processors. But wait, I'm putting the cart before the horse. The graphics processors themselves, these monster parallel processors designed to create the most realistic, gorgeous images possible, have evolved to better serve games. It's the beauty in servitude to the beast. It's Lennie telling George what to do.
And somewhat inevitably, there really isn't all that much more that graphics processors can do for games. Games don't win or lose in the marketplace based on the quality of their graphics; they win or lose based on the quality of their content. And perhaps most ironic of all, game developers don't exactly break their necks updating their software to take advantage of new hardware.
At Siggraph 2012, however, it’s clear the great beasts are breaking free of their chains. The graphics processors are getting more raw meat to chew on.
Long-time readers know that the failure of software companies (game developers, CAD developers, and content-creation software vendors) to take advantage of the available hardware and actively participate in making better graphics has been a favorite topic for JPR. But the ISVs would say that it's more important for them to reach the largest number of customers than to adapt to new hardware. And a bit more of the truth is that the software vendors have a rough job supporting their long-time customers and turning out needed updates; radical architectural changes would just have to wait.
Things have changed. There’s been one helluva recession. Customers are well informed and getting impatient. And better tools are evolving rapidly.
The customers are calling the shots just as they did in the early days of computer technology, when customers wrote their own programs and told their suppliers exactly what they needed and when they needed it. The dynamic was dramatic at Siggraph this year, when some of the important software announcements came not from software vendors but from customers such as DreamWorks, Lightstorm Entertainment, Sony Pictures Imageworks, and Industrial Light and Magic. These customers are helping to push the vendors, and this year many of the breakthroughs were around better use of the graphics processors and CPUs in modern semiconductors. (See also the GraphicSpeak article, "Autodesk developing virtual production with James Cameron and Weta Digital.")
The movie industry has a long history of making its own tools, but there are important shifts now. As in the automotive, finance, simulation, and manufacturing industries, proprietary software was not shared; it was jealously guarded. Developers worked in a vacuum, and software was a highly personalized expression: poetic, maybe, but not guaranteed to work as efficiently as possible or even to work as expected. (We might want to have a look at the recent rogue algorithms that all but destroyed Wall Street's Knight Capital as an example.) Now, however, the movie industry is finding open source more productive, and sharing has become a strategy.
Another example might be all those renegade rendering companies, the ones building physics engines and fluid-dynamics solvers, that have been bubbling up out of the universities for the last five years or so. They too recognize the value of the hardware, and they're putting it to work in the service of science, beauty, and, well yes, games too. They're showing their larger, slower, more conservative counterparts how to suck eggs.
The biggest changes have been more social and economic than technical. It's become kind of obvious that it's more important to get work done efficiently using less money than it is to spend more money, work more slowly, and take the risk of having a giant failure you can keep all to yourself.
That’s something else the technology community has learned from the game community.
The next 18 months promise to be a spectacular period for computer graphics in all industries, and if we didn't know better it would seem as though change happened all at once. In reality, engineers on both the hardware and the software sides have been working for decades to get to this point.
Kathleen Maher is Editor-in-Chief of GraphicSpeak and Jon Peddie Research TechWatch.