MLPerf Inference 4.1 results show gains as Nvidia Blackwell makes its testing debut
MLPerf Inference 4.1 results show that AI inference keeps getting faster, with both new hardware and software optimizations.
The latest round of MLPerf Inference benchmarks, released by MLCommons, provides a comprehensive snapshot of the rapidly evolving AI hardware and software landscape. “We just have a tremendous breadth of diversity of submissions and that’s really exciting,” David Kanter, founder and head of MLPerf at MLCommons, said during a call discussing the results with press and analysts. This round introduces a mixture of experts (MoE) benchmark based on the Mixtral 8x7B model. He noted that the key goals were to better exercise the strengths of the MoE approach compared to a single-task benchmark and to showcase the capabilities of this emerging architectural trend in large language models and generative AI.
Or read this on VentureBeat