Today marks the release of the first results from the MLPerf Inference benchmark, which measures the performance of 594 machine learning acceleration configurations across a variety of natural language and computer vision tasks. The benchmark is intended to create an industrywide standard for judging inference performance.

Each system takes a unique approach to inference and presents a trade-off between latency, throughput, power, and model quality, according to a white paper from organizers of the benchmark.

“The final results show a four-orders-of-magnitude performance variation ranging from embedded devices and smartphones to data-center systems,” the paper reads.

The analysis of CPUs, GPUs, and TPUs in data centers and edge devices is the product of more than 30 organizations and 200 machine learning engineers and practitioners from Alibaba, Facebook, Google, Intel, Microsoft, Qualcomm, and Samsung.

The first MLPerf Inference round measured the performance of machine learning deployment technology from 14 organizations, representing 44 systems in total.

Nvidia says it achieved top honors in all five benchmark categories across edge and data center use cases.

MLPerf Inference Working Group cochair David Kanter estimates more than 100 companies are producing or on the verge of producing optimized inference chips. By comparison, only about 20 companies target training.

“One of the things that I would say is really novel about this is we’re defining how we want to measure performance [in] these scenarios … in [an] industry standard way, and we clearly have buy-in from across the industry,” Kanter said. “The point of the benchmark is to allow everyone — academia, industry, marketing, sales, customers, vendors, engineers — a common target to aim for, and that’s going to push the whole industry forward.”

Today’s news follows the release of the MLPerf benchmark suite and data sets in June.

Machine learning training performance results were introduced by the MLPerf consortium in December 2018, with Nvidia again claiming top honors.