Skin in the game: who are the best people to create benchmarks?

Graphics and workstation vendors have a major vested interest in performance.

By Bob Cramblitt

“It’s the fox guarding the henhouse.”

“Sounds like a case of collusion.”

“Conjures visions of a cabal of hooded individuals meeting secretively in a hotel basement.”

All these things have been said at one time or another about benchmark committees composed primarily of computer vendors. The Standard Performance Evaluation Corporation’s Graphics and Workstation Performance Group (SPEC/GWPG) is one of those committees. So how has it evolved over decades into the leading source of performance evaluation software based on professional graphics and workstation applications?

Motivation, resources, skills

First off, SPEC/GWPG fills a void. In theory, end users are the best people to develop benchmarks because, as SPEC/GWPG participant Peter Torvi says, “they feel the pain points every day when a system is under-powered or an application is designed inefficiently.”

Unfortunately, end users rarely have the motivation, experience, resources or skills to develop benchmarks specific to their day-to-day work. That brings us back to the organizations with the biggest vested interest in performance: graphics and workstation vendors.

“The best people to create and maintain these benchmarks are the people who understand the most about the technology and how people use it,” says Tom Fisher, chair of the SPEC workstation performance characterization (SPECwpc) group. “This is primarily the folks building the technology and interfacing with a wide range of users.”

Vendors feel pain too

Alex Shows, chair of the SPEC graphics performance characterization (SPECgpc) group, points out that it’s not just users who feel the pain of poorly constructed performance tests.

“SPEC is a valuable organization because it’s staffed by individuals committed to creating the best benchmarks in the world, because we feel the effects of bad benchmarking in our industry and what it does to productivity and buying decisions.”

Keeping it honest

But if vendors are placed at the wheel, won’t the resulting tests be characterized by excessive speeds and proprietary code optimized for benchmarks rather than real-world applications? Exactly the opposite, says Allen Jensen, vice chair of the SPEC application performance characterization (SPECapc) group.

[Image: Sven character animation model from the SPECapc for Maya 2017 benchmark. Realistic, accurate and repeatable benchmarks ensure that vendor tuning work improves real-world customer performance.]

“Our strength is that we are all competitors with a range of motivations,” he says. “Coming to an agreement among competitors keeps us all honest.”

It’s a classic case of checks and balances that has withstood the test of time.

Improving the customer’s life

All this “good for the industry” talk is inspiring, but don’t all vendors want a big, lofty number that they can quote in their press releases? Yes! Of course! But the numbers have to mean something to have value.

“As a graphics driver developer,” says Jensen, “I want good benchmarks. We all want the fastest, but being the fastest on a stupid benchmark is wasted time. The people designing the chips and systems depend on us to provide valid benchmarks of end-user workloads. If they are bogus, nobody wins.

“By developing realistic, accurate and repeatable benchmarks we ensure that our tuning work will improve the customer’s life.”

Bob Cramblitt is communications director for SPEC. He writes frequently about performance issues and digital design, engineering and manufacturing technologies. To find out more about graphics and workstation benchmarking, visit the SPEC/GWPG website, subscribe to the SPEC/GWPG e-newsletter or join the Graphics and Workstation Benchmarking LinkedIn group: https://www.linkedin.com/groups/8534330.