September 10, 2017

Authenticity is the key to good workstation performance benchmarks

Benchmarks should include a variety of tests to characterize performance across a wide spectrum of applications and use cases.

By Bob Cramblitt

Ask enough people what makes a good professional workstation benchmark and the answers will run the gamut, but you should find some common ground.

While not exhaustive, the list below is built to last: It’s based on 30 years of seeing first-hand how graphics and workstation benchmarks are built by the Standard Performance Evaluation Corporation’s Graphics and Workstation Performance Group (SPEC/GWPG).

Benchmark models should be the same size, variety and complexity as those used by professionals when running a targeted application. (Source: SPEC)

There’s no “I” in benchmark: A benchmark should be developed in a collective process involving different people with different perspectives to counterbalance biases.

Keep models real: Benchmark models should be the same size, variety, and complexity as those used by professionals when running a targeted application.

Be authentic, man: Workloads and weightings should be based as much as possible on how professionals exercise a certain application in the real world.
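To make the weighting idea concrete, here is a minimal sketch of how usage-based weights might feed into a composite score. The subtest names, scores, and weights are illustrative, not real SPEC data; the weighted geometric mean shown here is a common choice for benchmark composites because it keeps one dominant subtest from swamping the result.

```python
import math

# Hypothetical subtest scores (e.g., frames per second) paired with
# weights derived from how often professionals exercise each operation.
# Names and numbers are made up for illustration.
subtests = {
    "wireframe_rotate": (42.0, 0.25),
    "shaded_rotate":    (35.5, 0.50),
    "pan_zoom":         (58.2, 0.25),
}

def weighted_geometric_mean(scores_and_weights):
    """Composite score as a weighted geometric mean of subtest scores."""
    total_weight = sum(w for _, w in scores_and_weights.values())
    log_sum = sum(w * math.log(s) for s, w in scores_and_weights.values())
    return math.exp(log_sum / total_weight)

composite = weighted_geometric_mean(subtests)
print(f"composite score: {composite:.2f}")
```

Because the mean is geometric, doubling one lightly weighted subtest moves the composite far less than doubling a heavily weighted one, which is the point of grounding weights in real-world usage.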

You can relate: Results should be transferable to the real world; get good results for a benchmark based on a certain application and you should see similar results for that application in the real world.

Rules to benchmark by: Strong run rules need to be applied to ensure consistent results, discourage tampering, and make certain the playing field is level for all.

Don’t make me a number: Benchmark developers should resist the temptation to provide a single number to characterize performance across a wide spectrum of applications and use cases.

Be easy, but don’t compromise: The benchmark should be relatively easy to use without compromising the rigor of testing or accuracy of results. It should also be automated as much as possible to avoid mistakes by users.

Rinse and repeat: Benchmark results should be repeatable and consistent across multiple runs.
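One simple way to check that last point is to compare repeated runs. The sketch below (my own illustration, not SPEC's run rules) computes the coefficient of variation across runs and flags results that wander too much; the 3% threshold is an assumption for the example.

```python
import statistics

def run_to_run_variation(scores):
    """Coefficient of variation (stdev / mean) across repeated runs."""
    return statistics.stdev(scores) / statistics.mean(scores)

# Five hypothetical runs of the same test on the same machine.
runs = [41.8, 42.1, 41.9, 42.3, 42.0]
cv = run_to_run_variation(runs)
print(f"run-to-run variation: {cv:.2%}")
if cv > 0.03:  # 3% gate is illustrative, not an actual SPEC rule
    print("WARNING: results vary too much between runs to report")
```

A benchmark whose own subtests fail a check like this on stable hardware is telling you something about the benchmark, not the machine.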

Believe in evolution: Benchmarks must continuously evolve to stay current with new technologies, methodologies, and user practices.

A typical report from a SPEC/GWPG benchmark. (Source: SPEC)

Bob Cramblitt is communications director for SPEC. He writes frequently about performance issues and digital design, engineering and manufacturing technologies. To find out more about graphics and workstation benchmarking, visit the SPEC/GWPG website, or join the Graphics and Workstation Benchmarking LinkedIn group: https://www.linkedin.com/groups/8534330.

 
