This year’s International Mathematical Olympiad (IMO) saw impressive performances from both Gemini and ChatGPT, each achieving a gold medal-caliber score. Google DeepMind announced that its model participated officially, solving five of the six problems under the competition’s regulations and without human assistance. OpenAI, by contrast, entered an experimental research model whose solutions were graded independently, with results announced only after the graders reached unanimous agreement.
Gemini and ChatGPT Score 35/42 in IMO
In updates shared on X (formerly Twitter), Google DeepMind CEO Demis Hassabis and OpenAI member of technical staff Alexander Wei announced their respective models’ gold medal-level scores at the 2025 IMO. Both Gemini and ChatGPT scored 35 of 42 points by solving five of the six posed problems, enough to qualify for a gold medal. Gemini used the Deep Think model, while OpenAI relied on an unnamed experimental model.
The IMO, first held in Romania in 1959, is one of the longest-running annual mathematics competitions for secondary school students. It now draws participants from over 100 countries and centers on mathematical proofs rather than short-answer questions. Competitors must combine logical reasoning, mathematical theorems, and applied knowledge to construct complete proofs, which evaluators then grade for quality and score accordingly.
According to Hassabis, Gemini operated entirely in natural language, generating mathematical proofs directly from the posed problems within the strict 4.5-hour time limit. The enhanced Gemini Deep Think model will first be made available to select testers and mathematicians, and later to Google AI Ultra subscribers.
OpenAI was not an official participant in the competition. According to a TechCrunch report, the company enlisted three former IMO medalists to serve as independent evaluators and then shared the final scores with the IMO. Wei addressed this in his post, noting that the results were disclosed only after the graders reached unanimous consensus.
In a separate post, Hassabis pointed to OpenAI’s decision to bypass the IMO’s official protocols, referring to OpenAI’s premature announcement on Friday. He emphasized that Gemini’s announcement honored the IMO Board’s request that all AI labs wait for verification by independent experts before publicizing their results, ensuring that the student competitors received the recognition they had rightfully earned.