Simon Willison’s Weblog



July 21, 2025

An AI tool that gets gold on the IMO is obviously immensely impressive. Does it mean math is “solved”? Is an AI-generated proof of the Riemann hypothesis clearly on the horizon? Obviously not.

Worth keeping timescales in mind here: IMO competitors spend an average of 1.5 hrs on each problem. High-quality math research, by contrast, takes months or years.

What are the obstructions to AI performing high-quality autonomous math research? I don’t claim to know for sure, but I think they include many of the same obstructions that prevent it from doing many jobs: Long context, long-term planning, consistency, unclear rewards, lack of training data, etc.

It’s possible that some or all of these will be solved soon (or have been solved) but I think it’s worth being cautious about over-indexing on recent (amazing) progress.

Daniel Litt, Assistant Professor of mathematics, University of Toronto

# 3:13 pm / mathematics, ai, generative-ai, llms, daniel-litt

Advanced version of Gemini with Deep Think officially achieves gold-medal standard at the International Mathematical Olympiad (via) OpenAI beat them to the punch in terms of publicity by publishing their results on Saturday, but a team from Google Gemini achieved an equally impressive result on this year's International Mathematical Olympiad, scoring a gold medal performance with their custom research model.

(I saw an unconfirmed rumor that the Gemini team had to wait until Monday for approval from Google PR - this turns out to be inaccurate, see update below.)

It's interesting that Gemini achieved the exact same score as OpenAI, 35/42, and solved the same set of questions - 1 through 5 - failing only on question 6, which is designed to be the hardest.

Each question is worth seven points, so 35/42 corresponds to full marks on five out of the six problems.

Only 6 of the 630 human contestants scored all 7 points for question 6 this year, and just 55 more scored more than 0 points on that question.

OpenAI claimed their model had not been optimized for IMO questions. Gemini's model was different - emphasis mine:

We achieved this year’s result using an advanced version of Gemini Deep Think – an enhanced reasoning mode for complex problems that incorporates some of our latest research techniques, including parallel thinking. This setup enables the model to simultaneously explore and combine multiple possible solutions before giving a final answer, rather than pursuing a single, linear chain of thought.

To make the most of the reasoning capabilities of Deep Think, we additionally trained this version of Gemini on novel reinforcement learning techniques that can leverage more multi-step reasoning, problem-solving and theorem-proving data. We also provided Gemini with access to a curated corpus of high-quality solutions to mathematics problems, and added some general hints and tips on how to approach IMO problems to its instructions.
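Google haven't shared implementation details beyond that description, but here's a loose sketch of what "parallel thinking" might look like in practice - my own speculation, not Google's actual system, using my LLM library's async API purely as a stand-in: sample several candidate solutions concurrently, then have a final pass select and combine them.

import asyncio

import llm  # https://llm.datasette.io/ - purely an illustrative stand-in


async def solve(problem: str, n: int = 4) -> str:
    # Hypothetical model choice - any async-capable model would do
    model = llm.get_async_model("gpt-4.1-mini")

    async def attempt() -> str:
        # Each call explores an independent chain of thought
        return await model.prompt(problem).text()

    # "Parallel thinking": explore several candidate solutions concurrently...
    candidates = await asyncio.gather(*(attempt() for _ in range(n)))

    # ...then combine and select before committing to a final answer
    numbered = "\n\n".join(
        f"Candidate {i + 1}:\n{text}" for i, text in enumerate(candidates)
    )
    final = model.prompt(
        f"Problem:\n{problem}\n\nCandidate solutions:\n\n{numbered}\n\n"
        "Select the strongest reasoning and write one rigorous final answer."
    )
    return await final.text()


if __name__ == "__main__":
    print(asyncio.run(solve("Prove that the product of two odd integers is odd.")))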

The Gemini team, like the OpenAI team, achieved this result with no tool use or internet access for the model.

Gemini's solutions are listed in this PDF. If you are mathematically inclined you can compare them with OpenAI's solutions on GitHub.

Last year Google DeepMind achieved a silver medal at the IMO, solving four of the six problems using custom models called AlphaProof and AlphaGeometry 2:

First, the problems were manually translated into formal mathematical language for our systems to understand. In the official competition, students submit answers in two sessions of 4.5 hours each. Our systems solved one problem within minutes and took up to three days to solve the others.

This year's result, scoring gold with a single model, within the allotted time and with no manual step to translate the problems first, is much more impressive.
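For anyone unfamiliar with what that manual translation step involved: "formal mathematical language" means something like Lean, where the statement and the proof are both machine-checkable. A toy example, nowhere near IMO difficulty:

-- Toy Lean 4 formalization: the sum of two even naturals is even.
-- Real IMO formalizations are vastly harder; this just shows the shape
-- of the translation from prose mathematics into checkable code.
theorem even_add_even (a b : Nat)
    (ha : ∃ k, a = 2 * k) (hb : ∃ k, b = 2 * k) :
    ∃ k, a + b = 2 * k :=
  match ha, hb with
  | ⟨m, hm⟩, ⟨n, hn⟩ => ⟨m + n, by rw [hm, hn, Nat.mul_add]⟩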

Update: Concerning the timing of the news, DeepMind CEO Demis Hassabis says:

Btw as an aside, we didn’t announce on Friday because we respected the IMO Board's original request that all AI labs share their results only after the official results had been verified by independent experts & the students had rightly received the acclamation they deserved

We've now been given permission to share our results and are pleased to have been part of the inaugural cohort to have our model results officially graded and certified by IMO coordinators and experts, receiving the first official gold-level performance grading for an AI system!

OpenAI's Noam Brown:

Before we shared our results, we spoke with an IMO board member, who asked us to wait until after the award ceremony to make it public, a request we happily honored.

We announced at ~1am PT (6pm AEST), after the award ceremony concluded. At no point did anyone request that we announce later than that.

As far as I can tell the Gemini team was participating in an official capacity, while OpenAI were not. Noam again:

~2 months ago, the IMO emailed us about participating in a formal (Lean) version of the IMO. We’ve been focused on general reasoning in natural language without the constraints of Lean, so we declined. We were never approached about a natural language math option.

Neither OpenAI nor Gemini used Lean in their attempts, which would have counted as tool use.

# 7:14 pm / mathematics, ai, openai, generative-ai, llms, gemini, llm-reasoning

tidwall/pogocache (via) New project from Josh Baker, author of the excellent tg C geospatial library (covered previously) and various other interesting projects:

Pogocache is fast caching software built from scratch with a focus on low latency and cpu efficiency.

Faster: Pogocache is faster than Memcache, Valkey, Redis, Dragonfly, and Garnet. It has the lowest latency per request, providing the quickest response times. It's optimized to scale from one to many cores, giving you the best single-threaded and multithreaded performance.

Faster than Memcache and Redis is a big claim! The README includes a design details section that explains how the system achieves that performance, using a sharded hashmap inspired by Josh's shardmap project and clever application of threads.
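The core trick is easy to sketch. Here's an illustrative Python version of a sharded hashmap - pogocache itself is C, and this glosses over everything that actually makes it fast - where the keyspace is split across many independent maps, each with its own lock, so concurrent threads rarely fight over the same one:

import threading
from typing import Any


class ShardedMap:
    # Illustrative only - not pogocache's implementation
    def __init__(self, num_shards: int = 256):
        self._shards = [({}, threading.Lock()) for _ in range(num_shards)]

    def _shard(self, key: str):
        # The key's hash picks a shard; a real cache would use a fast,
        # stable hash rather than Python's built-in hash()
        return self._shards[hash(key) % len(self._shards)]

    def set(self, key: str, value: Any) -> None:
        data, lock = self._shard(key)
        with lock:  # only this one shard is locked; the rest stay available
            data[key] = value

    def get(self, key: str, default: Any = None) -> Any:
        data, lock = self._shard(key)
        with lock:
            return data.get(key, default)

With 256 shards, two threads only contend when their keys happen to hash to the same shard, rather than all serializing through a single global lock.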

Performance aside, the most interesting thing about Pogocache is the server interface it provides: it emulates the APIs for Redis and Memcached, provides a simple HTTP API and lets you talk to it over the PostgreSQL wire protocol as well!

psql -h localhost -p 9401
=> SET first Tom;
=> SET last Anderson;
=> SET age 37;

$ curl http://localhost:9401/last
Anderson

# 11:58 pm / c, caching, http, memcached, postgresql, redis

July 22, 2025

Textual v4.0.0: The Streaming Release. Will McGugan may no longer be running a commercial company around Textual, but that hasn't stopped his progress on the open source project.

He recently released v4 of his Python framework for building TUI command-line apps, and the signature feature is streaming Markdown support - super relevant in our current age of LLMs, most of which default to outputting a stream of Markdown via their APIs.

I took an example from one of his tests, spliced in my async LLM Python library and got some help from o3 to turn it into a streaming script for talking to models, which can be run like this:

uv run http://tools.simonwillison.net/python/streaming_textual_markdown.py \
'Markdown headers and tables comparing pelicans and wolves' \
-m gpt-4.1-mini

Running that prompt streams a Markdown table to my console.
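Here's a minimal sketch of the pattern, feeding fragments from the async LLM library into Textual's new Markdown streaming support. It's not the actual script above, and the MarkdownStream calls (Markdown.get_stream / write / stop) are my reading of the v4 release notes, so treat the exact API as an assumption:

import llm
from textual.app import App, ComposeResult
from textual.widgets import Markdown


class StreamingApp(App):
    def compose(self) -> ComposeResult:
        yield Markdown()

    async def on_mount(self) -> None:
        # Requires a configured API key for whichever model you pick
        model = llm.get_async_model("gpt-4.1-mini")
        stream = Markdown.get_stream(self.query_one(Markdown))
        try:
            # Feed each fragment to the widget as it arrives from the API
            async for chunk in model.prompt(
                "Markdown headers and tables comparing pelicans and wolves"
            ):
                await stream.write(chunk)
        finally:
            await stream.stop()


if __name__ == "__main__":
    StreamingApp().run()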

# 12:32 am / async, python, markdown, ai, will-mcgugan, generative-ai, llms, textual, llm, uv
