A new AI coding challenge just published its first results – and they aren’t pretty

A new AI coding challenge has revealed its first winner, and set a new bar for AI-powered software engineers.

On Wednesday at 5 p.m. PST, the nonprofit Laude Institute announced the first winner of the K Prize, a multi-round AI coding challenge launched by Databricks and Perplexity co-founder Andy Konwinski. The winner was a Brazilian prompt engineer named Eduardo Rocha de Andrade, who will receive $50,000 for the prize. But more surprising than the win was his final score: he won with correct answers to just 7.5% of the questions on the test.

“We’re glad we built a benchmark that’s actually hard,” said Konwinski. “Benchmarks should be hard if they’re going to matter,” he continued, adding: “Scores would be different if the big labs had entered with their best models. But that’s kind of the point. K Prize runs offline with limited compute, so it favors smaller and open models. I like that. It levels the playing field.”

Konwinski has pledged $1 million to the first open-source model that can score higher than 90% on the test.

Similar to the well-known SWE-Bench system, the K Prize tests models against flagged issues from GitHub as a measure of how well they can handle real-world programming problems. But while SWE-Bench is based on a fixed set of problems that models can train against, the K Prize is designed as a “contamination-free version of SWE-Bench,” using a timed entry system to guard against any benchmark-specific training. For round one, models were due by March 12th. The K Prize organizers then built the test using only GitHub issues flagged after that date.

The 7.5% top score stands in marked contrast to SWE-Bench itself, which currently shows a 75% top score on its easier ‘Verified’ test and 34% on its harder ‘Full’ test. Konwinski still isn’t sure whether the disparity is due to contamination on SWE-Bench or just the difficulty of collecting new issues from GitHub, but he expects the K Prize project to answer the question soon.

“As we get more runs of the thing, we’ll have a better sense,” he told iinfoai, “because we expect people to adapt to the dynamics of competing on this every few months.”

It might seem like an odd place to fall short, given the wide range of AI coding tools already publicly available, but with benchmarks becoming too easy, many critics see projects like the K Prize as a necessary step toward solving AI’s growing evaluation problem.

“I’m quite bullish about building new tests for existing benchmarks,” says Princeton researcher Sayash Kapoor, who put forward a similar idea in a recent paper. “Without such experiments, we can’t actually tell if the issue is contamination, or even just targeting the SWE-Bench leaderboard with a human in the loop.”

For Konwinski, it’s not just a better benchmark, but an open challenge to the rest of the industry. “If you listen to the hype, it’s like we should be seeing AI doctors and AI lawyers and AI software engineers, and that’s just not true,” he says. “If we can’t even get more than 10% on a contamination-free SWE-Bench, that’s the reality check for me.”
