A new AI benchmark evaluates how quickly users’ queries are answered

File photo: Visitors look at a Go1 quadruped robot by Unitree Robotics during the World Artificial Intelligence Cannes Festival (WAICF) in Cannes, France, February 10, 2023. REUTERS/Eric Gaillard/File photo

The artificial intelligence benchmarking group MLCommons published new test results and findings on Wednesday, rating how quickly the latest hardware can execute AI apps and react to user input.

The two new benchmarks introduced by MLCommons gauge how quickly AI chips and systems can produce responses from powerful, data-packed AI models. The findings show, in essence, how fast an AI programme like ChatGPT can respond to a user’s query.

One of the new benchmarks measures the speed of a question-and-answer scenario for large language models. It uses Llama 2, a model with 70 billion parameters created by Meta Platforms. In addition, MLCommons officials added a second test to the MLPerf set of benchmarking tools: a text-to-image generation benchmark built on Stability AI’s Stable Diffusion XL model.
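At its core, a question-and-answer benchmark like this times how long a system takes to return a response to a query. A minimal sketch of that idea is below; the `answer_query` stub is a placeholder standing in for a real model call, not the MLPerf harness itself:

```python
import time

def answer_query(prompt: str) -> str:
    # Stand-in for a real LLM call; an actual benchmark would
    # invoke the model under test here.
    return f"Echo: {prompt}"

def measure_latency(prompt: str) -> tuple[str, float]:
    """Time a single question-and-answer round trip, in seconds."""
    start = time.perf_counter()
    response = answer_query(prompt)
    elapsed = time.perf_counter() - start
    return response, elapsed

response, seconds = measure_latency("What does MLPerf measure?")
print(f"Answered in {seconds * 1000:.3f} ms")
```

A real benchmark run would repeat this over many queries and report aggregate statistics (for example, throughput and tail latency) rather than a single timing.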

Servers built with Nvidia’s H100 chips by Supermicro, Alphabet’s Google and Nvidia itself easily set new performance records. A number of server builders also submitted designs based on the company’s less potent L40S chip.

Server builder Krai submitted a design for the image generation benchmark using a Qualcomm AI chip, which consumes far less energy than Nvidia’s state-of-the-art processors.

Intel also submitted a design based on its Gaudi2 accelerator chips. The company called the results “solid”.

Raw performance is not the only metric that matters when deploying AI applications. Advanced AI chips consume massive amounts of energy, so one of the biggest challenges for AI companies is deploying chips that deliver optimal performance while drawing as little power as possible. MLCommons measures power consumption in a separate benchmark category.
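Balancing speed against power draw is often summarised as performance per watt: how much work a system does for each unit of power it consumes. A toy illustration of that derived metric (the figures are invented for the example, not actual MLPerf results):

```python
def perf_per_watt(queries_per_second: float, avg_watts: float) -> float:
    """Queries completed per joule: throughput divided by average power draw."""
    return queries_per_second / avg_watts

# Hypothetical systems: a fast, power-hungry chip vs. a slower, efficient one.
fast_chip = perf_per_watt(queries_per_second=1000.0, avg_watts=500.0)
lean_chip = perf_per_watt(queries_per_second=400.0, avg_watts=150.0)
print(fast_chip, lean_chip)  # the efficient chip can win per watt
```

With these made-up numbers, the slower chip does more work per joule, which is why efficiency-focused submissions like Qualcomm’s can be competitive despite lower raw speed.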