MLCommons™ Releases MLPerf™ Inference v1.0 Results with First Power Measurements
The latest benchmark includes 1,994 performance and 862 power efficiency results for leading ML inference systems
SAN FRANCISCO–(BUSINESS WIRE)–Today, MLCommons, an open engineering consortium, released results for MLPerf Inference v1.0, the organization’s machine learning inference performance benchmark suite. In this third round of submissions, the results measure how quickly a trained neural network can process new data for a wide range of applications across a variety of form factors and, for the first time, include measurements taken with a system power measurement methodology.
MLPerf Inference v1.0 is a cornerstone of MLCommons’ initiative to provide benchmarks and metrics that level the industry playing field through the comparison of ML systems, software, and solutions. The latest benchmark round received submissions from 17 organizations and released 1,994 peer-reviewed results for machine learning systems spanning from edge devices to data center servers. To view the results, please visit https://mlcommons.org/en/inference-datacenter-10/ and https://mlcommons.org/en/inference-edge-10/.
MLPerf Power Measurement – A new metric to understand system efficiency
The MLPerf Inference v1.0 suite introduces new power measurement techniques, tools, and metrics to complement the performance benchmarks. These new metrics enable reporting and comparing the energy consumption, performance, and power of submitting systems. In this round, the power measurement was optional for submission, and 862 power results were released. The power measurement methodology was developed in partnership with the Standard Performance Evaluation Corporation (SPEC), the leading provider of standardized benchmarks and tools for evaluating the performance of today’s computing systems. MLPerf adopted and built on the industry-standard SPEC PTDaemon power measurement interface.
“As we look at the accelerating adoption of machine learning, artificial intelligence, and the anticipated scale of ML projects, the ability to measure power consumption in ML environments will be critical for sustainability goals all around the world,” said Klaus-Dieter Lange, SPECpower Committee Chair. “MLCommons developed MLPerf in the best tradition of vendor-neutral standardized benchmarks, and SPEC was very excited to be a partner in their development process. We look forward to widespread adoption of this extremely valuable benchmark.”
“We are pleased to see the ongoing engagement from the machine learning community with MLPerf,” said Sachin Idgunji, Co-chair of the MLPerf Power Working Group. “The addition of a power methodology will highlight energy efficiency and bring a valuable, new level of transparency to the industry.”
“We wanted to add a metric that could showcase the power and energy cost of different levels of ML performance across workloads,” said Arun Tejusve, Co-chair of the MLPerf Power Working Group. “MLPerf Power v1.0 is a monumental step toward this goal, and it will help drive the creation of more energy-efficient algorithms and systems across the industry.”
Additional information about the Inference v1.0 benchmarks will be available at https://mlcommons.org/en/inference-datacenter-10/.
MLCommons is an open engineering consortium with a mission to accelerate machine learning innovation, raise all boats, and increase machine learning’s positive impact on society. The foundation for MLCommons began with the MLPerf benchmark in 2018, which rapidly scaled into a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. In collaboration with its 50+ founding member partners – global technology providers, academics, and researchers – MLCommons is focused on collaborative engineering work that builds tools for the entire machine learning industry through benchmarks and metrics, public datasets, and best practices.
PTDaemon® and SPEC® are trademarks of the Standard Performance Evaluation Corporation. All other product and company names herein may be trademarks of their respective owners.