Simon Willison’s Weblog


Advanced version of Gemini with Deep Think officially achieves gold-medal standard at the International Mathematical Olympiad (via) OpenAI beat them to the punch in terms of publicity by publishing their results on Saturday, but a team from Google Gemini achieved an equally impressive result on this year's International Mathematical Olympiad, scoring a gold medal performance with their custom research model.

(I saw an unconfirmed rumor that the Gemini team had to wait until Monday for approval from Google PR - this turns out to be inaccurate, see update below.)

It's interesting that Gemini achieved the exact same score as OpenAI, 35/42, and was able to solve the same set of questions - 1 through 5, failing only to answer question 6, which is designed to be the hardest.

Each question is worth seven points, so 35/42 corresponds to full marks (5 × 7 = 35) on five of the six problems.

Only 6 of the 630 human contestants scored all 7 points for question 6 this year, and just 55 more scored more than 0 points on that question.

OpenAI claimed their model had not been optimized for IMO questions. Gemini's model was different - emphasis mine:

We achieved this year’s result using an advanced version of Gemini Deep Think – an enhanced reasoning mode for complex problems that incorporates some of our latest research techniques, including parallel thinking. This setup enables the model to simultaneously explore and combine multiple possible solutions before giving a final answer, rather than pursuing a single, linear chain of thought.

To make the most of the reasoning capabilities of Deep Think, we additionally trained this version of Gemini on novel reinforcement learning techniques that can leverage more multi-step reasoning, problem-solving and theorem-proving data. We also provided Gemini with access to a curated corpus of high-quality solutions to mathematics problems, and added some general hints and tips on how to approach IMO problems to its instructions.
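Google hasn't published how Deep Think's parallel thinking works internally, but the general shape - launch several independent reasoning attempts concurrently, then reconcile them into a single answer - can be sketched. This is a hypothetical illustration only, not Google's implementation; solve_candidate and reconcile are stand-ins for model calls and a verifier/selection step.

```python
import concurrent.futures


def solve_candidate(problem: str, seed: int) -> str:
    """Placeholder for one independent reasoning attempt.

    In a real system this would be a model call producing its own
    chain of thought and final answer."""
    return f"candidate solution {seed} for: {problem}"


def reconcile(candidates: list[str]) -> str:
    """Placeholder for combining or selecting among candidates,
    e.g. by majority vote or a verifier model."""
    return max(candidates, key=len)  # arbitrary stand-in selection rule


def parallel_think(problem: str, n_paths: int = 4) -> str:
    # Explore several solution paths concurrently rather than
    # pursuing a single, linear chain of thought.
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_paths) as pool:
        futures = [pool.submit(solve_candidate, problem, i) for i in range(n_paths)]
        candidates = [f.result() for f in futures]
    return reconcile(candidates)


if __name__ == "__main__":
    print(parallel_think("IMO 2025, Problem 1"))
```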

The Gemini team, like the OpenAI team, achieved this result with no tool use or internet access for the model.

Gemini's solutions are listed in this PDF. If you are mathematically inclined you can compare them with OpenAI's solutions on GitHub.

Last year Google DeepMind achieved a silver medal at the IMO, solving four of the six problems using custom models called AlphaProof and AlphaGeometry 2:

First, the problems were manually translated into formal mathematical language for our systems to understand. In the official competition, students submit answers in two sessions of 4.5 hours each. Our systems solved one problem within minutes and took up to three days to solve the others.

This year's result, scoring gold with a single model, within the allotted time and with no manual step to translate the problems first, is much more impressive.

Update: Concerning the timing of the news, DeepMind CEO Demis Hassabis says:

Btw as an aside, we didn’t announce on Friday because we respected the IMO Board's original request that all AI labs share their results only after the official results had been verified by independent experts & the students had rightly received the acclamation they deserved

We've now been given permission to share our results and are pleased to have been part of the inaugural cohort to have our model results officially graded and certified by IMO coordinators and experts, receiving the first official gold-level performance grading for an AI system!

OpenAI's Noam Brown:

Before we shared our results, we spoke with an IMO board member, who asked us to wait until after the award ceremony to make it public, a request we happily honored.

We announced at ~1am PT (6pm AEST), after the award ceremony concluded. At no point did anyone request that we announce later than that.

As far as I can tell the Gemini team was participating in an official capacity, while OpenAI were not. Noam again:

~2 months ago, the IMO emailed us about participating in a formal (Lean) version of the IMO. We’ve been focused on general reasoning in natural language without the constraints of Lean, so we declined. We were never approached about a natural language math option.

Neither OpenAI nor Gemini used Lean in their attempts, which would have counted as tool use.
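For context on what a formal (Lean) track would have involved: problems and proofs would need to be expressed as machine-checkable Lean statements, rather than natural-language arguments. The toy Lean 4 theorem below is purely illustrative of that style - it is not an IMO problem.

```lean
-- A toy Lean 4 statement and proof, illustrating the machine-checkable
-- style a formal (Lean) IMO track would require (not an actual IMO problem).
theorem add_comm_toy (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```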
