Simon Willison’s Weblog

Updated Anthropic model comparison table. A few details in here about Claude 4 that I hadn't spotted elsewhere:

  1. The training cut-off date for Claude Opus 4 and Claude Sonnet 4 is March 2025! That's the most recent cut-off for any of the current popular models, really impressive.
  2. Opus 4 has a max output of 32,000 tokens, while Sonnet 4 has a max output of 64,000 tokens. Claude 3.7 Sonnet was 64,000 tokens too, so Sonnet holds steady but stepping up to Opus 4 actually halves the maximum output length.
  3. The input limit for both of the Claude 4 models is still stuck at 200,000 tokens. I'm disappointed by this - I was hoping for a leap to a million to catch up with GPT-4.1 and the Gemini Pro series.
  4. Claude 3 Haiku is still in that table - it remains Anthropic's cheapest model, priced slightly lower than Claude 3.5 Haiku.

For pricing: Sonnet 4 is the same price as Sonnet 3.7 ($3/million input, $15/million output). Opus 4 matches the pricing of the older Opus 3 - $15/million for input and $75/million for output. I've updated llm-prices.com with the new models.
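
For quick cost estimates, the arithmetic is simple enough to sketch in a few lines of Python - the dictionary keys and token counts below are just illustrative labels, not official model IDs:

```python
# Prices in USD per million tokens, from Anthropic's comparison table
PRICES = {
    "sonnet-4": {"input": 3.00, "output": 15.00},
    "opus-4": {"input": 15.00, "output": 75.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single request from its token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 20,000 token prompt with a 1,500 token response on Opus 4
print(f"${estimate_cost('opus-4', 20_000, 1_500):.4f}")  # $0.4125
```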

I spotted a few more interesting details in Anthropic's Migrating to Claude 4 documentation:

Claude 4 models introduce a new refusal stop reason for content that the model declines to generate for safety reasons, due to the increased intelligence of Claude 4 models.
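
In practice that should show up as a new value of the stop_reason field on the response. Here's a minimal sketch using the Python SDK - the model ID and prompt are just placeholders:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize this document..."}],
)

# Claude 4 models can now finish with stop_reason == "refusal"
if response.stop_reason == "refusal":
    print("Claude declined to generate this content.")
else:
    print(response.content[0].text)
```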

Plus this note on the new summarized thinking feature:

With extended thinking enabled, the Messages API for Claude 4 models returns a summary of Claude’s full thinking process. Summarized thinking provides the full intelligence benefits of extended thinking, while preventing misuse.

While the API is consistent across Claude 3.7 and 4 models, streaming responses for extended thinking might return in a “chunky” delivery pattern, with possible delays between streaming events.

Summarization is processed by a different model than the one you target in your requests. The thinking model does not see the summarized output.
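
Here's a rough sketch of what that looks like from the Python SDK, assuming the standard extended thinking parameters - the budget_tokens value here is arbitrary:

```python
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-opus-4-20250514",
    max_tokens=16_000,
    # Extended thinking needs an explicit token budget (must be less than max_tokens)
    thinking={"type": "enabled", "budget_tokens": 8_000},
    messages=[{"role": "user", "content": "What is 27 * 453?"}],
)

# The content mixes "thinking" blocks (the summary) with regular "text" blocks
for block in response.content:
    if block.type == "thinking":
        print("[summarized thinking]", block.thinking)
    elif block.type == "text":
        print("[answer]", block.text)
```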

There's a new beta header, interleaved-thinking-2025-05-14, which turns on the "interleaved thinking" feature where tools can be called as part of the chain-of-thought. More details on that in the interleaved thinking documentation.
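
I haven't tried it yet, but based on those docs it should just be a matter of sending that header along with an extended thinking request that includes tools - something like this sketch, where the tool definition is an invented example:

```python
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-opus-4-20250514",
    max_tokens=16_000,
    thinking={"type": "enabled", "budget_tokens": 8_000},
    # Opt in to interleaved thinking: tool calls can happen mid chain-of-thought
    extra_headers={"anthropic-beta": "interleaved-thinking-2025-05-14"},
    tools=[{
        "name": "lookup_population",
        "description": "Return the population of a city",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
    messages=[{"role": "user", "content": "Which city is bigger, Lyon or Marseille?"}],
)
print(response.stop_reason)  # "tool_use" if the model decided to call the tool
```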

This is a frustrating note:

  • You’re charged for the full thinking tokens generated by the original request, not the summary tokens.
  • The billed output token count will not match the count of tokens you see in the response.

I initially misread that second bullet as meaning we would no longer be able to estimate costs based on the returned token counts, but it's just warning us that the billed output token count may not exactly match the number of visible tokens returned by the API.
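
So the safe approach is to read the billed counts from the usage object on the response rather than trying to count the visible output yourself. A small sketch, using the Opus 4 prices from above as defaults:

```python
def report_usage(response, input_price=15.00, output_price=75.00):
    """Print billed token counts and an estimated cost (Opus 4 prices by default)."""
    visible_text = "".join(
        block.text for block in response.content if block.type == "text"
    )
    usage = response.usage
    print("Billed input tokens: ", usage.input_tokens)
    print("Billed output tokens:", usage.output_tokens)  # includes full thinking tokens
    print("Visible answer length (chars):", len(visible_text))
    cost = (usage.input_tokens * input_price + usage.output_tokens * output_price) / 1_000_000
    print(f"Estimated cost: ${cost:.4f}")
```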