On Thursday, weeks after launching its most capable AI model yet, Gemini 2.5 Pro, Google published a technical report showing the results of its internal safety evaluations. However, the report is light on details, experts say, making it difficult to determine which risks the model might pose.
Technical reports provide useful, and at times unflattering, information that companies don't always widely advertise about their AI. By and large, the AI community sees these reports as good-faith efforts to support independent research and safety evaluations.
Google takes a different safety reporting approach than some of its AI rivals, publishing technical reports only once it considers a model to have graduated from the "experimental" stage. The company also doesn't include findings from all of its "dangerous capability" evaluations in these write-ups; it reserves those for a separate audit.
Several experts iinfoai spoke with were nonetheless disappointed by the sparsity of the Gemini 2.5 Pro report, which they noted doesn't mention Google's Frontier Safety Framework (FSF). Google introduced the FSF last year in what it described as an effort to identify future AI capabilities that could cause "severe harm."
"This [report] is very sparse, contains minimal information, and came out weeks after the model was already made available to the public," Peter Wildeford, co-founder of the Institute for AI Policy and Strategy, told iinfoai. "It's impossible to verify if Google is living up to its public commitments and thus impossible to assess the safety and security of their models."
Thomas Woodside, co-founder of the Secure AI Project, said that while he's glad Google released a report for Gemini 2.5 Pro, he's not convinced of the company's commitment to delivering timely supplemental safety evaluations. Woodside pointed out that the last time Google published the results of dangerous capability tests was in June 2024, for a model announced in February of that same year.
Not inspiring much confidence, Google hasn't made available a report for Gemini 2.5 Flash, a smaller, more efficient model the company announced last week. A spokesperson told iinfoai that a report for Flash is "coming soon."
"I hope this is a promise from Google to start publishing more frequent updates," Woodside told iinfoai. "Those updates should include the results of evaluations for models that haven't been publicly deployed yet, since those models could also pose serious risks."
Google may have been one of the first AI labs to propose standardized reports for models, but it's not the only one that has been accused of underdelivering on transparency lately. Meta released a similarly skimpy safety evaluation of its new Llama 4 open models, and OpenAI opted not to publish any report for its GPT-4.1 series.
Hanging over Google's head are assurances the tech giant made to regulators to maintain a high standard of AI safety testing and reporting. Two years ago, Google told the U.S. government it would publish safety reports for all "significant" public AI models "within scope." The company followed up that promise with similar commitments to other countries, pledging to "provide public transparency" around AI products.
Kevin Bankston, a senior adviser on AI governance at the Center for Democracy and Technology, called the trend of sporadic and vague reports a "race to the bottom" on AI safety.
"Combined with reports that competing labs like OpenAI have shaved their safety testing time before release from months to days, this meager documentation for Google's top AI model tells a troubling story of a race to the bottom on AI safety and transparency as companies rush their models to market," he told iinfoai.
Google has said in statements that, while the work is not detailed in its technical reports, it conducts safety testing and "adversarial red teaming" for models ahead of release.