Why Most AI Benchmarks Tell Us So Little

On Tuesday, the startup Anthropic released a family of generative AI models that it claims achieve best-in-class performance. Just a few days later, rival Inflection AI unveiled a model that it asserts comes close to matching some of the most capable models out there, including OpenAI’s GPT-4, in quality.

Anthropic and Inflection are by no means the first AI firms to contend that their models match or beat the competition by some objective measure. Google argued the same of its Gemini models at their release, and OpenAI said the same of GPT-4 and its predecessors, GPT-3, GPT-2 and GPT-1. The list goes on.

But what metrics are they talking about? When a vendor says a model achieves state-of-the-art performance or quality, what does that mean exactly? Perhaps more importantly, will a model that technically “performs” better than another model actually feel improved in a tangible way?

On that last question, the answer is: probably not.

The reason – or rather, the problem – lies with the benchmarks AI companies use to quantify a model’s strengths and weaknesses.

Benchmarks are Failing to Capture Real-World Interactions

The most commonly used benchmarks today for AI models, specifically chatbot-powering models like OpenAI’s ChatGPT and Anthropic’s Claude, do a poor job of capturing how the average person interacts with the models being tested. For example, one benchmark cited by Anthropic in its recent announcement, GPQA (“A Graduate-Level Google-Proof Q&A Benchmark”), contains hundreds of Ph.D.-level biology, physics, and chemistry questions. Most people, however, use chatbots for tasks like responding to emails, writing cover letters, and talking about their feelings.

“Benchmarks are typically static and narrowly focused on evaluating a single capability, like a model’s factuality in a single domain, or its ability to solve mathematical reasoning multiple-choice questions,” said Jesse Dodge, a scientist at the Allen Institute for AI, in an interview with TechCrunch. “Many benchmarks used for evaluation are three-plus years old, from when AI systems were mostly just used for research and didn’t have many real users. In addition, people use generative AI in many ways – they’re very creative.”

It’s not that the most-used benchmarks are completely useless. There are undoubtedly people who ask ChatGPT Ph.D.-level math questions. However, as generative AI models are increasingly positioned as mass-market, “do-it-all” systems, old benchmarks are becoming less applicable.
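
To make concrete what a static, narrowly focused benchmark looks like in practice, here is a minimal sketch of a fixed multiple-choice evaluation harness of the kind Dodge describes: a small set of items, each with a single gold letter answer, scored by aggregate accuracy. The Item class, the sample questions, and the model_answer stub are hypothetical placeholders for illustration; a real benchmark such as MMLU ships thousands of items and would call an actual model.

```python
# Minimal sketch of a static multiple-choice benchmark harness.
# The items and model_answer() below are illustrative placeholders,
# not drawn from any real benchmark or model API.

from dataclasses import dataclass


@dataclass
class Item:
    question: str
    choices: list[str]  # e.g. ["A) ...", "B) ...", "C) ...", "D) ..."]
    answer: str         # gold label, e.g. "B"


ITEMS = [
    Item("2 + 2 = ?", ["A) 3", "B) 4", "C) 5", "D) 22"], "B"),
    Item("Water's chemical formula?", ["A) CO2", "B) H2O", "C) NaCl", "D) O2"], "B"),
]


def model_answer(item: Item) -> str:
    """Stand-in for a call to a real model; here it always guesses 'B'."""
    return "B"


def accuracy(items: list[Item]) -> float:
    """Score the model the way static benchmarks do: fraction of exact matches."""
    correct = sum(model_answer(it) == it.answer for it in items)
    return correct / len(items)


if __name__ == "__main__":
    print(f"Accuracy: {accuracy(ITEMS):.0%}")
```

A loop like this only ever checks whether a letter matches a key; it observes nothing about how a model handles an open-ended request such as drafting an email, which is exactly the mismatch described above.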

Are Benchmarks Accurately Measuring What They Claim?

Beyond being misaligned with real-world use cases, some benchmarks may not even measure what they claim to measure. For instance, an analysis of HellaSwag, a test designed to evaluate commonsense reasoning in models, found that more than a third of the test questions contained typos and “nonsensical” writing. Further, the Massive Multitask Language Understanding (MMLU) benchmark – which has been cited by vendors like Google, OpenAI, and Anthropic as evidence of their models’ logical reasoning abilities – features questions that can be solved through rote memorization.

“[Benchmarks like MMLU are] more about memorizing and associating two keywords together,” noted David Widder, a postdoctoral researcher at Cornell studying AI and ethics, in an interview with TechCrunch. “I can find [a relevant] article fairly quickly and answer the question, but that doesn’t mean I understand the causal mechanism, or could use an understanding of this causal mechanism to actually reason through and solve new and complex problems in unforeseen contexts. A model can’t either.”

So, it’s clear that benchmarks are broken. But can they be fixed?

Can Benchmarks Be Improved?

Dodge believes that benchmarks can be improved with more human involvement.

“The right path forward, here, is a combination of evaluation benchmarks with human evaluation,” she said. “Prompting a model with a real user query and then hiring a person to rate how good the response is.”
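
As a rough sketch of what that combination could look like, the loop below sends real user-style queries to a model and asks a human rater to score each response, with the mean rating serving as the evaluation signal. The query_model and ask_human_rating functions, along with the sample queries, are hypothetical stand-ins rather than any real evaluation suite; in practice they would wrap a model API and an annotation platform.

```python
# Rough sketch of a human-in-the-loop evaluation pass.
# query_model() and ask_human_rating() are hypothetical stand-ins,
# not real APIs; swap in a model client and an annotation tool.

from statistics import mean

USER_QUERIES = [
    "Draft a polite reply declining a meeting invitation.",
    "Rewrite this cover letter opening to sound more confident.",
]


def query_model(prompt: str) -> str:
    """Placeholder for a real model call."""
    return f"(model response to: {prompt})"


def ask_human_rating(prompt: str, response: str) -> int:
    """Placeholder: a human rater scores the response on a 1-5 scale."""
    print(f"PROMPT: {prompt}\nRESPONSE: {response}")
    return int(input("Rate 1-5: "))


def run_eval(queries: list[str]) -> float:
    """Average human ratings over real user-style prompts."""
    ratings = [ask_human_rating(q, query_model(q)) for q in queries]
    return mean(ratings)


if __name__ == "__main__":
    print(f"Mean human rating: {run_eval(USER_QUERIES):.2f}")
```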

However, Widder is less optimistic about improving benchmarks to the point where they would be informative for the majority of generative AI model users. Instead, he suggests focusing on the downstream impacts of these models and whether those impacts, positive or negative, are perceived as desirable by those affected.

The Way Forward

“I’d ask which specific contextual goals we want AI models to be able to be used for and evaluate whether they’d be – or are – successful in such contexts,” Widder said. “And hopefully, too, that process involves evaluating whether we should be using AI in such contexts.”

The current benchmarks for AI models do not accurately capture their capabilities or their real-world usefulness. Gauging the potential and impact of generative AI will require a more comprehensive, human-centered approach to evaluation. Only then can we understand what these systems can and cannot do.
