As a tech reporter, I often get asked questions like “Is DeepSeek actually better than ChatGPT?” or “Is the Anthropic model any good?” If I don’t feel like turning that into an hour-long seminar, I’ll usually give the diplomatic answer: “They’re both solid in different ways.”
Most people asking aren’t defining “good” in any precise way, and that’s fair. It’s human to want to make sense of something new and seemingly powerful. But that simple question (“Is this model good?”) is really just the everyday version of a much more complicated technical problem.
So far, the main way we’ve tried to answer that question is through benchmarks. These give models a fixed set of questions to answer and grade them on how many they get right. But much like standardized exams such as the SAT (an admissions test used by many US colleges), these benchmarks don’t always reflect deeper abilities. Lately it feels as if a new AI model drops every week, and every time a company launches one, it arrives with fresh scores showing it beating its predecessors’ capabilities. On paper, everything seems to be getting better all the time.
In practice, it’s not so simple. Just as cramming for the SAT might boost your score without improving your critical thinking, models can be trained to optimize for benchmark results without actually getting smarter, as Russell Brandom explained in his piece for us. As OpenAI and Tesla AI veteran Andrej Karpathy recently put it, we’re living through an evaluation crisis: our scoreboard for AI no longer reflects what we really want to measure.
Benchmarks have grown stale for a few key reasons. First, the industry has learned to “teach to the test,” training AI models to score well rather than genuinely improve. Second, widespread data contamination means models may have already seen the benchmark questions, or even the answers, somewhere in their training data. And finally, many benchmarks are simply maxed out. On popular tests like SuperGLUE, models have already reached or surpassed 90% accuracy, making further gains feel more like statistical noise than meaningful improvement. At that point, the scores stop telling us anything useful. That’s especially true in high-skill domains like coding, reasoning, and complex STEM problem-solving.
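To make the “statistical noise” point concrete, here’s a rough back-of-the-envelope sketch in Python. The 1,000-question benchmark size and the two scores below are made-up numbers for illustration, not figures from any real leaderboard; the point is simply that a modest gain near a 90% ceiling can sit inside the test’s own sampling error.

import math

def accuracy_ci(accuracy, n_questions, z=1.96):
    """Normal-approximation 95% confidence interval for a benchmark score."""
    se = math.sqrt(accuracy * (1 - accuracy) / n_questions)
    return accuracy - z * se, accuracy + z * se

# Hypothetical numbers: a 1,000-question benchmark, an older model at 90.0%
# accuracy and a newer model at 91.5%.
low_a, high_a = accuracy_ci(0.900, 1000)
low_b, high_b = accuracy_ci(0.915, 1000)
print(f"Older model: 90.0% (95% CI {low_a:.1%} to {high_a:.1%})")
print(f"Newer model: 91.5% (95% CI {low_b:.1%} to {high_b:.1%})")
# The two intervals overlap heavily, so the 1.5-point "gain" could easily be noise.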
A growing number of teams around the world, however, are trying to tackle the AI evaluation crisis.
One result is a new benchmark called LiveCodeBench Pro. It draws problems from international algorithmic olympiads, competitions for elite high school and college programmers in which participants solve challenging problems without external tools. The top AI models currently manage only about 53% on their first attempt at medium-difficulty problems and 0% on the hardest ones. These are tasks where human experts routinely excel.
Zihan Zheng, a junior at NYU and a North America finalist in competitive coding, led the project to develop LiveCodeBench Pro with a team of olympiad medalists. They’ve published both the benchmark and a detailed study showing that top-tier models like GPT o4-mini-high and Google’s Gemini 2.5 perform at a level comparable to the top 10% of human competitors. Across the board, Zheng observed a pattern: AI excels at planning and executing tasks, but it struggles with nuanced algorithmic reasoning. “It shows that AI is still far from matching the best human coders,” he says.
LiveCodeBench Pro might define a new upper bar. But what about the floor? Earlier this month, a group of researchers from several universities argued that LLM agents should be evaluated primarily on the basis of their riskiness, not just on how well they perform. In real-world, application-driven environments, and especially with AI agents, unreliability, hallucinations, and brittleness are ruinous. One wrong move could spell disaster when money or safety is on the line.
There are other new attempts to address the problem. Some benchmarks, like ARC-AGI, now keep part of their data set private to prevent AI models from being optimized excessively for the test, a problem known as “overfitting.” Meta’s Yann LeCun has created LiveBench, a dynamic benchmark in which the questions evolve every six months. The goal is to evaluate models not just on knowledge but on adaptability.
Xbench, a Chinese benchmark project developed by HongShan Capital Group (formerly Sequoia China), is another of these efforts. I just wrote about it in a story. Xbench was initially built in 2022, right after ChatGPT’s launch, as an internal tool to evaluate models for investment research. Over time, the team expanded the system and brought in external collaborators. It made parts of its question set publicly available just last week.
Xbench is notable for its dual-track design, which tries to bridge the gap between lab-based tests and real-world utility. The first track evaluates technical reasoning skills, testing a model’s STEM knowledge and its ability to carry out Chinese-language research. The second track aims to assess practical usefulness: how well a model performs on tasks in fields like recruitment and marketing. For instance, one task asks an agent to identify five qualified battery engineer candidates; another has it match brands with relevant influencers from a pool of more than 800 creators.
The team behind Xbench has big ambitions. They plan to expand its testing capabilities into sectors like finance, law, and design, and they intend to update the test set quarterly to avoid stagnation.
This is something I often wonder about, because a model’s hardcore reasoning ability doesn’t necessarily translate into a fun, informative, and creative experience. Most queries from everyday users are probably not going to be rocket science. There isn’t much research yet on how to effectively evaluate a model’s creativity, but I’d love to know which model would be the best for creative writing or art projects.
Human preference testing has also emerged as an alternative to benchmarks. One increasingly popular platform is LMArena, which lets users submit questions and compare responses from different models side by side, then pick which one they like best. Still, this method has its flaws. Users sometimes reward the answer that sounds more flattering or agreeable, even when it’s wrong. That can incentivize “sweet-talking” models and skew results in favor of pandering.
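Leaderboards built on this kind of side-by-side voting typically fold the head-to-head results into an Elo-style rating, the same math used to rank chess players. Below is a minimal sketch of that idea; the model names and votes are hypothetical, and this is not LMArena’s actual implementation.

# Minimal Elo-style aggregation of pairwise preference votes.
K = 32  # how far a single vote can move a rating

def expected(r_a, r_b):
    """Probability that A beats B under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def record_vote(ratings, winner, loser):
    gain = K * (1 - expected(ratings[winner], ratings[loser]))
    ratings[winner] += gain
    ratings[loser] -= gain

# Hypothetical models and (winner, loser) votes from users.
ratings = {"model-x": 1000.0, "model-y": 1000.0, "model-z": 1000.0}
votes = [("model-x", "model-y"), ("model-x", "model-z"),
         ("model-y", "model-z"), ("model-x", "model-y")]
for winner, loser in votes:
    record_vote(ratings, winner, loser)

for name, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {rating:.0f}")

The catch is that a vote is just a vote: an answer that wins because it flatters the user moves the rating up exactly as much as one that wins because it’s right.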
AI researchers are beginning to realize, and admit, that the status quo of AI testing cannot continue. At the recent CVPR conference, NYU professor Saining Xie drew on historian James Carse’s Finite and Infinite Games to critique the hypercompetitive culture of AI research. An infinite game, he noted, is open-ended; the goal is to keep playing. But in AI, a dominant player often drops a big result, triggering a wave of follow-up papers chasing the same narrow topic. This race-to-publish culture puts enormous pressure on researchers and rewards speed over depth, short-term wins over long-term insight. “If academia chooses to play a finite game,” he warned, “it will lose everything.”
I found his framing powerful, and maybe it applies to benchmarks, too. So do we really have a comprehensive scoreboard for how good a model is? Not really. Many dimensions (social, emotional, interdisciplinary) still evade evaluation. But the wave of new benchmarks hints at a shift. As the field evolves, a bit of skepticism is probably healthy.
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
Correction: A previous version of this article mistakenly mentioned 4o-mini instead of ChatGPT o4-mini-high as a top-performing model on LiveCodeBench Pro.
