By Bob Cramblitt
More than 25 years ago, the SPEC Graphics and Workstation Performance Group (SPEC/GWPG) developed the Viewperf benchmark and published its first reported results.
Viewperf (now SPECviewperf) was unique among synthetic benchmarks for its use of real-world datasets (called viewsets), tests, and weightings developed in cooperation with independent software vendors (ISVs). The original release featured five viewsets based on real applications: PTC’s CDRS, IBM Data Explorer, Intergraph DesignReview, Alias/Wavefront’s Advanced Visualizer, and the Lightscape Visualization System.
The benchmark ran across multiple OSs, processors, and windowing environments, encompassing a wide range of OpenGL features and rendering techniques. Perhaps best of all, it was available as a free download.
Through the years, SPECviewperf has grown from a 55 MB download back in 2000 to a 7 GB download for the current SPECviewperf 13, to an estimated 18 GB download for the upcoming SPECviewperf 2020. The escalation of download sizes reflects the increasing complexity of models and the more sophisticated rendering techniques employed in the applications represented within the benchmark.
Last year, nearly 14,000 copies of SPECviewperf were downloaded without charge to users worldwide; vendors who are not members of SPEC/GWPG must pay a licensing fee that goes toward ongoing development efforts. SPECviewperf results are cited in thousands of press releases each year from companies around the globe.
Bob Cramblitt, SPEC/GWPG communications director, spoke with Ross Cunniff, current chair of the SPECgpc subcommittee and long-time SPEC/GWPG member representative, about the evolution of the SPECviewperf benchmark, its significance to the computer graphics industry, and features expected in future versions of the benchmark.
Back in 1994, what did Viewperf give the industry that it didn’t have before?
Viewperf gave the industry a standard way of measuring performance that was far more consistent across OpenGL implementations and came with guarantees of visual quality. It also had higher complexity than the existing primitive-level benchmarks (GLperf, triangles per second, etc.) that dominated the press releases and marketing documentation at the time.
In May 2002, SPECgpc released SPECviewperf 7.0, which enabled general state changes to be made during frames. What was the significance of that development?
Real applications make state changes. Without them, the objects have a very uniform appearance: all shaded or all textured, for example. State changes can be computationally expensive if not implemented well. Including these operations in a widely used and quoted benchmark gave GPU vendors additional incentive to improve state-change performance.
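To illustrate the idea, here is a minimal sketch in legacy OpenGL (illustrative only, not code taken from SPECviewperf) of a frame that draws two objects with different state. The driver must tear down and revalidate the texturing state before the second draw, which is exactly the kind of per-frame work the benchmark began to exercise; tex_id is a hypothetical texture handle.

    /* Minimal sketch of per-frame state changes in legacy OpenGL.
     * Assumes a current OpenGL context. */
    #include <GL/gl.h>

    static GLuint tex_id;   /* hypothetical: filled in when a texture is created at startup */

    void draw_frame(void)
    {
        /* Object 1: a textured quad */
        glEnable(GL_TEXTURE_2D);                 /* state change */
        glBindTexture(GL_TEXTURE_2D, tex_id);    /* state change */
        glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex3f(-0.1f, -1.0f, 0.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex3f(-0.1f,  0.0f, 0.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  0.0f, 0.0f);
        glEnd();

        /* Object 2: a flat-colored quad; the texturing state must be
         * disabled and revalidated before this draw can proceed. */
        glDisable(GL_TEXTURE_2D);                /* state change */
        glColor3f(0.2f, 0.6f, 0.9f);
        glBegin(GL_QUADS);
        glVertex3f(0.1f, -1.0f, 0.0f);
        glVertex3f(1.0f, -1.0f, 0.0f);
        glVertex3f(1.0f,  0.0f, 0.0f);
        glVertex3f(0.1f,  0.0f, 0.0f);
        glEnd();
    }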
Five years later, SPECviewperf 10 added performance measurement for full-scene anti-aliasing (FSAA) and separated the benchmark framework from the viewsets for the first time. Why were these developments important?
Modeling applications had been moving toward FSAA for a while; SPECviewperf 10 caught up with that trend. FSAA stresses a different part of the graphics pipeline than non-FSAA rendering, and requires more memory, so it more accurately reflects the demands of real applications.
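As a rough illustration (not taken from SPECviewperf, and assuming OpenGL 1.3 or the ARB_multisample extension plus a GLUT-style windowing layer), requesting FSAA means asking the window system for a multisampled framebuffer and enabling multisample rasterization; the extra per-sample storage is one reason FSAA raises memory demands.

    /* Minimal sketch of requesting full-scene anti-aliasing (FSAA). */
    #include <GL/glut.h>

    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        /* ... draw the scene ... */
        glutSwapBuffers();
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        /* Ask for a framebuffer with multiple samples per pixel; this is
         * where the additional memory for FSAA is allocated. */
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_MULTISAMPLE);
        glutInitWindowSize(800, 600);
        glutCreateWindow("FSAA sketch");

        glEnable(GL_MULTISAMPLE);   /* enable multisample rasterization */

        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }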
The flexible architecture separating viewsets from the benchmark was an important precursor toward allowing APIs other than OpenGL to be benchmarked.
In June 2010, SPECviewperf 11 incorporated traces from eight different applications and benchmarked advanced OpenGL functionality such as shaders and vertex buffer objects (VBOs). Why was the ability to trace actual applications significant?
Even with the state-change tracking added in SPECviewperf 7, the API stream was not very close to what the real application did. The vertex commands were cooked into monolithic chunks of “draw this with a vertex array” or “draw that with immediate-mode calls.” The only state changes supported were those that had been hand-coded into SPECviewperf and could easily be grafted onto the geometry traces of the day.
Tracing the actual API commands issued by the applications overcame these issues. It also meant that any effort vendors made to improve SPECviewperf performance was almost certain to translate into actual performance increases in the original application.
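A rough sketch of the difference (illustrative only, not SPECviewperf code, assuming a current OpenGL 1.5-or-later context): the early viewsets replayed geometry in immediate-mode style, one call per vertex, while traces of modern applications capture vertex buffer object (VBO) drawing, where the geometry lives in a buffer object and each batch is a single draw call.

    /* Immediate mode: one API call per vertex, as in the earliest viewsets. */
    #define GL_GLEXT_PROTOTYPES    /* expose glGenBuffers and friends on some platforms */
    #include <GL/gl.h>

    void draw_immediate(void)
    {
        glBegin(GL_TRIANGLES);
        glVertex3f(-0.5f, -0.5f, 0.0f);
        glVertex3f( 0.5f, -0.5f, 0.0f);
        glVertex3f( 0.0f,  0.5f, 0.0f);
        glEnd();
    }

    /* VBO path: the geometry is uploaded to a buffer object and drawn with
     * a single call. In a real application the buffer would be created once
     * at load time, not every frame as in this sketch. */
    void draw_vbo(void)
    {
        static const GLfloat tri[] = {
            -0.5f, -0.5f, 0.0f,
             0.5f, -0.5f, 0.0f,
             0.0f,  0.5f, 0.0f,
        };
        GLuint vbo;

        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(tri), tri, GL_STATIC_DRAW);

        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, (const void *)0);
        glDrawArrays(GL_TRIANGLES, 0, 3);
        glDisableClientState(GL_VERTEX_ARRAY);

        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glDeleteBuffers(1, &vbo);
    }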
SPECviewperf 12 was released in December 2013. It added support for DirectX and expanded beyond product development and media and entertainment applications to include energy and medical viewsets. What was the significance of these updates?
SPECviewperf 12 cracked the door open for non-traditional uses of graphics. By allowing DirectX, the latest versions of 3ds Max could be translated into viewsets. The energy and medical viewsets explored new territory by using the GPU for something other than piles of triangles.
In May 2018, SPECviewperf 13, the current version, was released, adding support for 4K resolution and new workloads. Why is this release important?
We got out ahead of the industry with this release, as 4K resolution is becoming increasingly popular, but is still not the default mode for most computers. The new energy and medical workloads use actual data and rendering pipelines within the industry applications, moving beyond the synthetic nature of the previous versions.
About 10 months later we also released a Linux edition of the benchmark, with much of the same functionality as the Windows edition.
What are the biggest developments planned for the next version of SPECviewperf, anticipated for release later this year?
First, every viewset will have 4K support. Second, a new results manager will make it much easier to collate and submit testing results. Third, the applications from which the traces were captured have nearly all been upgraded.
Five of the applications represented in the benchmark (3ds Max, Catia, Creo, Maya, and Solidworks) have been updated to their most recent releases. Several of the viewsets have been compressed on disk to reduce the system footprint. And many small improvements to the user interface have led to a significantly better user experience, including a much better description of results with full annotations of the contents of each subtest, a variety of bug fixes, and a new results-generation manager.
Looking even further ahead, are there other ways SPECviewperf can get even closer to application performance while maintaining a self-contained package?
I think the next step, if possible, is to incorporate more actual rendering engines into SPECviewperf. This is already being done with the medical viewset, where the Tuvok rendering engine is incorporated. One advantage of this is that the benchmark can be much smaller—the source models are typically not nearly as large as the trace of the API commands required to render them. Another advantage is that modern APIs such as DX12 and Vulkan have significant parallelism built in, which is hard or impossible to replicate accurately with traces.
This future step will require a much closer partnership with the various ISVs who provide the applications, so it may or may not be possible in all cases.
Bob Cramblitt is communications director for SPEC. He writes frequently about performance issues and digital design, engineering, and manufacturing technologies. To find out more about graphics and workstation benchmarking, visit the SPEC/GWPG website, subscribe to the SPEC/GWPG e-newsletter, or join the Graphics and Workstation Benchmarking LinkedIn group: https://www.linkedin.com/groups/8534330.