Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet (via) Big advances in the field of LLM interpretability from Anthropic, who managed to extract millions of understandable features from their production Claude 3 Sonnet model (the mid-point between the inexpensive Haiku and the GPT-4-class Opus).
Some delightful snippets in here, such as this one:
> We also find a variety of features related to sycophancy, such as an empathy / “yeah, me too” feature 34M/19922975, a sycophantic praise feature 1M/847723, and a sarcastic praise feature 34M/19415708.
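Those features come from dictionary learning: Anthropic train sparse autoencoders (with 1M, 4M and 34M features in the paper) on Claude 3 Sonnet's internal activations, and each learned dictionary direction becomes a candidate interpretable feature. Here's a minimal sketch of that setup in PyTorch, my own illustration rather than Anthropic's code, with stand-in sizes and a stand-in batch of activations:

```python
# Minimal sparse-autoencoder sketch: reconstruct activations through an
# overcomplete dictionary while penalising the L1 norm of the feature
# activations, so each input lights up only a handful of features.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)  # activation -> feature activations
        self.decoder = nn.Linear(n_features, d_model)  # feature activations -> reconstruction

    def forward(self, activations: torch.Tensor):
        # ReLU keeps feature activations non-negative; the L1 penalty in
        # the loss below pushes most of them to zero (sparsity).
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return reconstruction, features

# Hypothetical training step with toy sizes (the real runs use millions
# of features and activations captured from the production model).
sae = SparseAutoencoder(d_model=4096, n_features=16384)
acts = torch.randn(8, 4096)  # stand-in for captured internal activations
recon, feats = sae(acts)
loss = ((recon - acts) ** 2).mean() + 5.0 * feats.abs().mean()
loss.backward()
```

The "34M/19922975" style IDs in the quote above identify a specific feature index within one of those dictionary sizes.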