OpenAI’s o3 AI model scores lower on a benchmark than the company initially implied

A discrepancy between first- and third-party benchmark results for OpenAI's o3 AI model is raising questions about the company's transparency and model testing practices.

When OpenAI unveiled o3 in December, the company claimed the model could answer just over a fourth of questions on FrontierMath, a challenging set of math problems. That score blew the competition away; the next-best model managed to answer only around 2% of FrontierMath problems correctly.

"Today, all offerings out there have less than 2% [on FrontierMath]," Mark Chen, chief research officer at OpenAI, said during a livestream. "We're seeing [internally], with o3 in aggressive test-time compute settings, we're able to get over 25%."

As it turns out, that figure was likely an upper bound, achieved by a version of o3 with more computing power behind it than the model OpenAI publicly launched last week.

Epoch AI, the research institute behind FrontierMath, released the results of its independent benchmark tests of o3 on Friday. Epoch found that o3 scored around 10%, well below OpenAI's highest claimed score.

That doesn't mean OpenAI lied, per se. The benchmark results the company published in December show a lower-bound score that matches the score Epoch observed. Epoch also noted that its testing setup likely differs from OpenAI's, and that it used an updated release of FrontierMath for its evaluations.

"The difference between our results and OpenAI's might be due to OpenAI evaluating with a more powerful internal scaffold, using more test-time [computing], or because those results were run on a different subset of FrontierMath (the 180 problems in frontiermath-2024-11-26 vs the 290 problems in frontiermath-2025-02-28-private)," wrote Epoch.

According to a post on X from the ARC Prize Foundation, an organization that tested a pre-release version of o3, the public o3 model "is a different model […] tuned for chat/product use," corroborating Epoch's report.

"All released o3 compute tiers are smaller than the version we [benchmarked]," wrote ARC Prize. Generally speaking, bigger compute tiers can be expected to achieve better benchmark scores.

OpenAI's own Wenda Zhou, a member of the technical staff, said during a livestream last week that the o3 in production is "more optimized for real-world use cases" and speed versus the version of o3 demoed in December. As a result, it may exhibit benchmark "disparities," he added.

"[W]e've done [optimizations] to make the [model] more cost efficient [and] more useful in general," Zhou said. "We still hope that, we still think that, this is a much better model […] You won't have to wait as long when you're asking for an answer, which is a real thing with these [types of] models."

Granted, the fact that the public release of o3 falls short of OpenAI's testing promises is a bit of a moot point, since the company's o3-mini-high and o4-mini models outperform o3 on FrontierMath, and OpenAI plans to debut a more powerful o3 variant, o3-pro, in the coming weeks.

It is, however, another reminder that AI benchmarks are best not taken at face value, particularly when the source is a company with services to sell.

Benchmarking "controversies" are becoming a common occurrence in the AI industry as vendors race to capture headlines and mindshare with new models.

In January, Epoch was criticized for waiting to disclose funding from OpenAI until after the company announced o3. Many academics who contributed to FrontierMath weren't informed of OpenAI's involvement until it was made public.

More recently, Elon Musk's xAI was accused of publishing misleading benchmark charts for its latest AI model, Grok 3. Just this month, Meta admitted to touting benchmark scores for a version of a model that differed from the one the company made available to developers.

Updated 4:21 p.m. Pacific: Added comments from Wenda Zhou, a member of the OpenAI technical staff, from a livestream last week.
