<?xml version="1.0" encoding="utf-8"?>
<feed xml:lang="en-us" xmlns="http://www.w3.org/2005/Atom"><title>Simon Willison's Weblog</title><link href="http://feeds.simonwillison.net/" rel="alternate"/><link href="http://feeds.simonwillison.net/atom/everything/" rel="self"/><id>http://feeds.simonwillison.net/</id><updated>2026-04-04T02:20:17+00:00</updated><author><name>Simon Willison</name></author><entry><title>Quoting Kyle Daigle</title><link href="https://simonwillison.net/2026/Apr/4/kyle-daigle/#atom-everything" rel="alternate"/><published>2026-04-04T02:20:17+00:00</published><updated>2026-04-04T02:20:17+00:00</updated><id>https://simonwillison.net/2026/Apr/4/kyle-daigle/#atom-everything</id><summary type="html">
    &lt;blockquote cite="https://twitter.com/kdaigle/status/2040164759836778878"&gt;&lt;p&gt;[GitHub] platform activity is surging. There were 1 billion commits in 2025. Now, it's 275 million per week, on pace for 14 billion this year if growth remains linear (spoiler: it won't.)&lt;/p&gt;
&lt;p&gt;GitHub Actions has grown from 500M minutes/week in 2023 to 1B minutes/week in 2025, and now 2.1B minutes so far this week.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://twitter.com/kdaigle/status/2040164759836778878"&gt;Kyle Daigle&lt;/a&gt;, COO, GitHub&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/github"&gt;github&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/github-actions"&gt;github-actions&lt;/a&gt;&lt;/p&gt;



</summary><category term="github"/><category term="github-actions"/></entry><entry><title>Vulnerability Research Is Cooked</title><link href="https://simonwillison.net/2026/Apr/3/vulnerability-research-is-cooked/#atom-everything" rel="alternate"/><published>2026-04-03T23:59:08+00:00</published><updated>2026-04-03T23:59:08+00:00</updated><id>https://simonwillison.net/2026/Apr/3/vulnerability-research-is-cooked/#atom-everything</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://sockpuppet.org/blog/2026/03/30/vulnerability-research-is-cooked/"&gt;Vulnerability Research Is Cooked&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Thomas Ptacek's take on the sudden and enormous impact the latest frontier models are having on the field of vulnerability research.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Within the next few months, coding agents will drastically alter both the practice and the economics of exploit development. Frontier model improvement won’t be a slow burn, but rather a step function. Substantial amounts of high-impact vulnerability research (maybe even most of it) will happen simply by pointing an agent at a source tree and typing “find me zero days”.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Why are agents so good at this? A combination of baked-in knowledge, pattern matching ability and brute force:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;You can't design a better problem for an LLM agent than exploitation research.&lt;/p&gt;
&lt;p&gt;Before you feed it a single token of context, a frontier LLM already encodes supernatural amounts of correlation across vast bodies of source code. Is the Linux KVM hypervisor connected to the &lt;code&gt;hrtimer&lt;/code&gt; subsystem, &lt;code&gt;workqueue&lt;/code&gt;, or &lt;code&gt;perf_event&lt;/code&gt;? The model knows.&lt;/p&gt;
&lt;p&gt;Also baked into those model weights: the complete library of documented "bug classes" on which all exploit development builds: stale pointers, integer mishandling, type confusion, allocator grooming, and all the known ways of promoting a wild write to a controlled 64-bit read/write in Firefox.&lt;/p&gt;
&lt;p&gt;Vulnerabilities are found by pattern-matching bug classes and constraint-solving for reachability and exploitability. Precisely the implicit search problems that LLMs are most gifted at solving. Exploit outcomes are straightforwardly testable success/failure trials. An agent never gets bored and will search forever if you tell it to.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The article was partly inspired by &lt;a href="https://securitycryptographywhatever.com/2026/03/25/ai-bug-finding/"&gt;this episode of the Security Cryptography Whatever podcast&lt;/a&gt;, where David Adrian, Deirdre Connolly, and Thomas interviewed Anthropic's Nicholas Carlini for 1 hour 16 minutes.&lt;/p&gt;
&lt;p&gt;I just started a new tag here for &lt;a href="https://simonwillison.net/tags/ai-security-research/"&gt;ai-security-research&lt;/a&gt; - it's up to 11 posts already.&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/security"&gt;security&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/thomas-ptacek"&gt;thomas-ptacek&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/careers"&gt;careers&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/nicholas-carlini"&gt;nicholas-carlini&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-ethics"&gt;ai-ethics&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-security-research"&gt;ai-security-research&lt;/a&gt;&lt;/p&gt;



</summary><category term="security"/><category term="thomas-ptacek"/><category term="careers"/><category term="ai"/><category term="generative-ai"/><category term="llms"/><category term="nicholas-carlini"/><category term="ai-ethics"/><category term="ai-security-research"/></entry><entry><title>The cognitive impact of coding agents</title><link href="https://simonwillison.net/2026/Apr/3/cognitive-cost/#atom-everything" rel="alternate"/><published>2026-04-03T23:57:04+00:00</published><updated>2026-04-03T23:57:04+00:00</updated><id>https://simonwillison.net/2026/Apr/3/cognitive-cost/#atom-everything</id><summary type="html">
    &lt;p&gt;A fun thing about &lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/"&gt;recording a podcast&lt;/a&gt; with a professional like Lenny Rachitsky is that his team know how to slice the resulting video up into TikTok-sized short form vertical videos. Here's &lt;a href="https://x.com/lennysan/status/2039845666680176703"&gt;one he shared on Twitter today&lt;/a&gt; which ended up attracting over 1.1m views!&lt;/p&gt;
&lt;p&gt;&lt;video
  src="https://static.simonwillison.net/static/2026/cognitive-cost.mp4"
  poster="https://static.simonwillison.net/static/2026/cognitive-cost-poster.jpg"
  controls
  preload="none"
  playsinline
  style="display:block; max-width:400px; width:100%; height:auto; margin:0 auto"
&gt;&lt;track src="https://static.simonwillison.net/static/2026/cognitive-cost.vtt" kind="captions" srclang="en" label="English"&gt;&lt;/video&gt;
&lt;/p&gt;
&lt;p&gt;That was 48 seconds. Our &lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/"&gt;full conversation&lt;/a&gt; lasted 1 hour 40 minutes.&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/ai-ethics"&gt;ai-ethics&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/coding-agents"&gt;coding-agents&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/agentic-engineering"&gt;agentic-engineering&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/podcast-appearances"&gt;podcast-appearances&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/cognitive-debt"&gt;cognitive-debt&lt;/a&gt;&lt;/p&gt;



</summary><category term="ai-ethics"/><category term="coding-agents"/><category term="agentic-engineering"/><category term="generative-ai"/><category term="podcast-appearances"/><category term="ai"/><category term="llms"/><category term="cognitive-debt"/></entry><entry><title>Quoting Willy Tarreau</title><link href="https://simonwillison.net/2026/Apr/3/willy-tarreau/#atom-everything" rel="alternate"/><published>2026-04-03T21:48:22+00:00</published><updated>2026-04-03T21:48:22+00:00</updated><id>https://simonwillison.net/2026/Apr/3/willy-tarreau/#atom-everything</id><summary type="html">
    &lt;blockquote cite="https://lwn.net/Articles/1065620/"&gt;&lt;p&gt;On the kernel security list we've seen a huge bump of reports. We were between 2 and 3 per week maybe two years ago, then reached probably 10 a week over the last year with the only difference being only AI slop, and now since the beginning of the year we're around 5-10 per day depending on the days (fridays and tuesdays seem the worst). Now most of these reports are correct, to the point that we had to bring in more maintainers to help us.&lt;/p&gt;
&lt;p&gt;And we're now seeing on a daily basis something that never happened before: duplicate reports, or the same bug found by two different people using (possibly slightly) different tools.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://lwn.net/Articles/1065620/"&gt;Willy Tarreau&lt;/a&gt;, Lead Software Developer. HAPROXY&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/security"&gt;security&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/linux"&gt;linux&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-security-research"&gt;ai-security-research&lt;/a&gt;&lt;/p&gt;



</summary><category term="security"/><category term="linux"/><category term="generative-ai"/><category term="ai"/><category term="llms"/><category term="ai-security-research"/></entry><entry><title>Quoting Daniel Stenberg</title><link href="https://simonwillison.net/2026/Apr/3/daniel-stenberg/#atom-everything" rel="alternate"/><published>2026-04-03T21:46:07+00:00</published><updated>2026-04-03T21:46:07+00:00</updated><id>https://simonwillison.net/2026/Apr/3/daniel-stenberg/#atom-everything</id><summary type="html">
    &lt;blockquote cite="https://mastodon.social/@bagder/116336957584445742"&gt;&lt;p&gt;The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good.&lt;/p&gt;
&lt;p&gt;I'm spending hours per day on this now. It's intense.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://mastodon.social/@bagder/116336957584445742"&gt;Daniel Stenberg&lt;/a&gt;, lead developer of cURL&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/daniel-stenberg"&gt;daniel-stenberg&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/security"&gt;security&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/curl"&gt;curl&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-security-research"&gt;ai-security-research&lt;/a&gt;&lt;/p&gt;



</summary><category term="daniel-stenberg"/><category term="security"/><category term="curl"/><category term="generative-ai"/><category term="ai"/><category term="llms"/><category term="ai-security-research"/></entry><entry><title>Quoting Greg Kroah-Hartman</title><link href="https://simonwillison.net/2026/Apr/3/greg-kroah-hartman/#atom-everything" rel="alternate"/><published>2026-04-03T21:44:41+00:00</published><updated>2026-04-03T21:44:41+00:00</updated><id>https://simonwillison.net/2026/Apr/3/greg-kroah-hartman/#atom-everything</id><summary type="html">
    &lt;blockquote cite="https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_kernel/"&gt;&lt;p&gt;Months ago, we were getting what we called 'AI slop,' AI-generated security reports that were obviously wrong or low quality. It was kind of funny. It didn't really worry us.&lt;/p&gt;
&lt;p&gt;Something happened a month ago, and the world switched. Now we have real reports. All open source projects have real reports that are made with AI, but they're good, and they're real.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_kernel/"&gt;Greg Kroah-Hartman&lt;/a&gt;, Linux kernel maintainer (&lt;a href="https://en.wikipedia.org/wiki/Greg_Kroah-Hartman"&gt;bio&lt;/a&gt;), in conversation with Steven J. Vaughan-Nichols&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/security"&gt;security&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/linux"&gt;linux&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-security-research"&gt;ai-security-research&lt;/a&gt;&lt;/p&gt;



</summary><category term="security"/><category term="linux"/><category term="generative-ai"/><category term="ai"/><category term="llms"/><category term="ai-security-research"/></entry><entry><title>Can JavaScript Escape a CSP Meta Tag Inside an Iframe?</title><link href="https://simonwillison.net/2026/Apr/3/test-csp-iframe-escape/#atom-everything" rel="alternate"/><published>2026-04-03T16:05:00+00:00</published><updated>2026-04-03T16:05:00+00:00</updated><id>https://simonwillison.net/2026/Apr/3/test-csp-iframe-escape/#atom-everything</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Research:&lt;/strong&gt; &lt;a href="https://github.com/simonw/research/tree/main/test-csp-iframe-escape#readme"&gt;Can JavaScript Escape a CSP Meta Tag Inside an Iframe?&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;In trying to build my own version of Claude Artifacts I got curious about options for applying CSP headers to content in sandboxed iframes without using a separate domain to host the files. Turns out you can inject &lt;code&gt;&amp;lt;meta http-equiv="Content-Security-Policy"...&amp;gt;&lt;/code&gt; tags at the top of the iframe content and they'll be obeyed even if subsequent untrusted JavaScript tries to manipulate them.&lt;/p&gt;
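    &lt;p&gt;Here's a minimal sketch of the pattern - hand-written for illustration rather than copied from the research repo - assuming a hypothetical &lt;code&gt;untrustedHtml&lt;/code&gt; string holding the content you want to sandbox:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Build a CSP that blocks all network access but still allows inline scripts
const csp = '&amp;lt;meta http-equiv="Content-Security-Policy" ' +
  'content="default-src \'none\'; script-src \'unsafe-inline\'"&amp;gt;';

const iframe = document.createElement("iframe");
iframe.sandbox = "allow-scripts";  // scripts can run, no same-origin access
// The policy applies as soon as the meta tag is parsed, before any untrusted
// script runs - and removing the tag later does not revoke it.
iframe.srcdoc = csp + untrustedHtml;  // untrustedHtml: whatever third-party content you want to run
document.body.appendChild(iframe);
&lt;/code&gt;&lt;/pre&gt;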
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/iframes"&gt;iframes&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/security"&gt;security&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/javascript"&gt;javascript&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/content-security-policy"&gt;content-security-policy&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/sandboxing"&gt;sandboxing&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="iframes"/><category term="security"/><category term="javascript"/><category term="content-security-policy"/><category term="sandboxing"/></entry><entry><title>The Axios supply chain attack used individually targeted social engineering</title><link href="https://simonwillison.net/2026/Apr/3/supply-chain-social-engineering/#atom-everything" rel="alternate"/><published>2026-04-03T13:54:53+00:00</published><updated>2026-04-03T13:54:53+00:00</updated><id>https://simonwillison.net/2026/Apr/3/supply-chain-social-engineering/#atom-everything</id><summary type="html">
    &lt;p&gt;The Axios team have published a &lt;a href="https://github.com/axios/axios/issues/10636"&gt;full postmortem&lt;/a&gt; on the supply chain attack which resulted in a malware dependency going out &lt;a href="https://simonwillison.net/2026/Mar/31/supply-chain-attack-on-axios/"&gt;in a release the other day&lt;/a&gt;, and it involved a sophisticated social engineering campaign targeting one of their maintainers directly. Here's Jason Saayman's description of &lt;a href="https://github.com/axios/axios/issues/10636#issuecomment-4180237789"&gt;how that worked&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;so the attack vector mimics what google has documented here: &lt;a href="https://cloud.google.com/blog/topics/threat-intelligence/unc1069-targets-cryptocurrency-ai-social-engineering"&gt;https://cloud.google.com/blog/topics/threat-intelligence/unc1069-targets-cryptocurrency-ai-social-engineering&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;they tailored this process specifically to me by doing the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;they reached out masquerading as the founder of a company they had cloned the companys founders likeness as well as the company itself.&lt;/li&gt;
&lt;li&gt;they then invited me to a real slack workspace. this workspace was branded to the companies ci and named in a plausible manner. the slack was thought out very well, they had channels where they were sharing linked-in posts, the linked in posts i presume just went to the real companys account but it was super convincing etc. they even had what i presume were fake profiles of the team of the company but also number of other oss maintainers.&lt;/li&gt;
&lt;li&gt;they scheduled a meeting with me to connect. the meeting was on ms teams. the meeting had what seemed to be a group of people that were involved.&lt;/li&gt;
&lt;li&gt;the meeting said something on my system was out of date. i installed the missing item as i presumed it was something to do with teams, and this was the RAT.&lt;/li&gt;
&lt;li&gt;everything was extremely well co-ordinated looked legit and was done in a professional manner.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;A RAT is a Remote Access Trojan - this was the software which stole the developer's credentials which could then be used to publish the malicious package.&lt;/p&gt;
&lt;p&gt;That's a &lt;em&gt;very effective&lt;/em&gt; scam. I join a lot of meetings where I find myself needing to install Webex or Microsoft Teams or similar at the last moment and the time constraint means I always click "yes" to things as quickly as possible to make sure I don't join late.&lt;/p&gt;
&lt;p&gt;Every maintainer of open source software used by enough people to be worth targeting in this way needs to be familiar with this attack strategy.&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/open-source"&gt;open-source&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/packaging"&gt;packaging&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/security"&gt;security&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/social-engineering"&gt;social-engineering&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/supply-chain"&gt;supply-chain&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="open-source"/><category term="packaging"/><category term="security"/><category term="social-engineering"/><category term="supply-chain"/></entry><entry><title>Highlights from my conversation about agentic engineering on Lenny's Podcast</title><link href="https://simonwillison.net/2026/Apr/2/lennys-podcast/#atom-everything" rel="alternate"/><published>2026-04-02T20:40:47+00:00</published><updated>2026-04-02T20:40:47+00:00</updated><id>https://simonwillison.net/2026/Apr/2/lennys-podcast/#atom-everything</id><summary type="html">
    &lt;p&gt;I was a guest on Lenny Rachitsky's podcast, in a new episode titled &lt;a href="https://www.lennysnewsletter.com/p/an-ai-state-of-the-union"&gt;An AI state of the union: We've passed the inflection point, dark factories are coming, and automation timelines&lt;/a&gt;. It's available on &lt;a href="https://youtu.be/wc8FBhQtdsA"&gt;YouTube&lt;/a&gt;, &lt;a href="https://open.spotify.com/episode/0DVjwLT6wgtscdB78Qf1BQ"&gt;Spotify&lt;/a&gt;, and &lt;a href="https://podcasts.apple.com/us/podcast/an-ai-state-of-the-union-weve-passed-the/id1627920305?i=1000758850377"&gt;Apple Podcasts&lt;/a&gt;. Here are my highlights from our conversation, with relevant links.&lt;/p&gt;

&lt;iframe style="margin-top: 1.5em; margin-bottom: 1.5em;" width="560" height="315" src="https://www.youtube-nocookie.com/embed/wc8FBhQtdsA" title="Why we’ve passed the AI inflection point and automation has already started | Simon Willison" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen="allowfullscreen"&gt; &lt;/iframe&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/#the-november-inflection-point"&gt;The November inflection point&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/#software-engineers-as-bellwethers-for-other-information-workers"&gt;Software engineers as bellwethers for other information workers&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/#writing-code-on-my-phone"&gt;Writing code on my phone&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/#responsible-vibe-coding"&gt;Responsible vibe coding&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/#dark-factories-and-strongdm"&gt;Dark Factories and StrongDM&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/#the-bottleneck-has-moved-to-testing"&gt;The bottleneck has moved to testing&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/#this-stuff-is-exhausting"&gt;This stuff is exhausting&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/#interruptions-cost-a-lot-less-now"&gt;Interruptions cost a lot less now&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/#my-ability-to-estimate-software-is-broken"&gt;My ability to estimate software is broken&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/#it-s-tough-for-people-in-the-middle"&gt;It's tough for people in the middle&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/#it-s-harder-to-evaluate-software"&gt;It's harder to evaluate software&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/#the-misconception-that-ai-tools-are-easy"&gt;The misconception that AI tools are easy&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/#coding-agents-are-useful-for-security-research-now"&gt;Coding agents are useful for security research now&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/#openclaw"&gt;OpenClaw&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/#journalists-are-good-at-dealing-with-unreliable-sources"&gt;Journalists are good at dealing with unreliable sources&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/#the-pelican-benchmark"&gt;The pelican benchmark&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/#and-finally-some-good-news-about-parrots"&gt;And finally, some good news about parrots&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href="https://simonwillison.net/2026/Apr/2/lennys-podcast/#youtube-chapters"&gt;YouTube chapters&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id="the-november-inflection-point"&gt;The November inflection point&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=269"&gt;4:19&lt;/a&gt; - The end result of these two labs throwing everything they had at making their models better at code is that in November we had what I call the &lt;a href="https://simonwillison.net/tags/november-2025-inflection/"&gt;inflection point&lt;/a&gt; where GPT 5.1 and Claude Opus 4.5 came along.&lt;/p&gt;
&lt;p&gt;They were both incrementally better than the previous models, but in a way that crossed a threshold where previously the code would mostly work, but you had to pay very close attention to it. And suddenly we went from that to... almost all of the time it does what you told it to do, which makes all of the difference in the world.&lt;/p&gt;
&lt;p&gt;Now you can spin up a coding agent and say, &lt;a href="https://simonwillison.net/2026/Feb/25/present/"&gt;build me a Mac application that does this thing&lt;/a&gt;, and you'll get something back which won't just be a buggy pile of rubbish that doesn't do anything.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id="software-engineers-as-bellwethers-for-other-information-workers"&gt;Software engineers as bellwethers for other information workers&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=349"&gt;5:49&lt;/a&gt; - I can churn out 10,000 lines of code in a day. And most of it works. Is that good? Like, how do we get from most of it works to all of it works? There are so many new questions that we're facing, which I think makes us a bellwether for other information workers.&lt;/p&gt;
&lt;p&gt;Code is easier than almost every other problem that you pose these agents because code is obviously right or wrong - either it works or it doesn't work. There might be a few subtle hidden bugs, but generally you can tell if the thing actually works.&lt;/p&gt;
&lt;p&gt;If it writes you an essay, if it prepares a lawsuit for you, it's so much harder to derive if it's actually done a good job, and to figure out if it got things right or wrong. But it's happening to us as software engineers. It came for us first.&lt;/p&gt;
&lt;p&gt;And we're figuring out, OK, what do our careers look like? How do we work as teams when part of what we did that used to take most of the time doesn't take most of the time anymore? What does that look like? And it's going to be very interesting seeing how this rolls out to other information work in the future.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Lawyers are falling for this really badly. The &lt;a href="https://www.damiencharlotin.com/hallucinations/"&gt;AI hallucination cases database&lt;/a&gt; is up to 1,228 cases now!&lt;/p&gt;
&lt;p&gt;Plus this bit from the cold open at &lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=0s"&gt;the start&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;It used to be you'd ask ChatGPT for some code, and it would spit out some code, and you'd have to run it and test it. The coding agents take that step for you now. And an open question for me is how many other knowledge work fields are actually prone to these agent loops?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id="writing-code-on-my-phone"&gt;Writing code on my phone&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=499"&gt;8:19&lt;/a&gt; - I write so much of my code on my phone. It's wild. I can get good work done walking the dog along the beach, which is delightful.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I mainly use the Claude iPhone app for this, both with a regular Claude chat session (which &lt;a href="https://simonwillison.net/2025/Sep/9/claude-code-interpreter/"&gt;can execute code now&lt;/a&gt;) or using it to control &lt;a href="https://code.claude.com/docs/en/claude-code-on-the-web"&gt;Claude Code for web&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="responsible-vibe-coding"&gt;Responsible vibe coding&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=595"&gt;9:55&lt;/a&gt; If you're vibe coding something for yourself, where the only person who gets hurt if it has bugs is you, go wild. That's completely fine. The moment you ship your vibe coding code for other people to use, where your bugs might actually harm somebody else, that's when you need to take a step back.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;See also &lt;a href="https://simonwillison.net/2025/Mar/19/vibe-coding/#when-is-it-ok-to-vibe-code-"&gt;When is it OK to vibe code?&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="dark-factories-and-strongdm"&gt;Dark Factories and StrongDM&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=769"&gt;12:49&lt;/a&gt; The reason it's called the dark factory is there's this idea in factory automation that if your factory is so automated that you don't need any people there, you can turn the lights off. Like the machines can operate in complete darkness if you don't need people on the factory floor. What does that look like for software? [...]&lt;/p&gt;
&lt;p&gt;So there's this policy that nobody writes any code: you cannot type code into a computer. And honestly, six months ago, I thought that was crazy. And today, probably 95% of the code that I produce, I didn't type myself. That world is practical already because the latest models are good enough that you can tell them to rename that variable and refactor and add this line there... and they'll just do it - it's faster than you typing on the keyboard yourself.&lt;/p&gt;
&lt;p&gt;The next rule though, is nobody &lt;em&gt;reads&lt;/em&gt; the code. And this is the thing which StrongDM started doing last year.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I wrote a lot more about &lt;a href="https://simonwillison.net/2026/Feb/7/software-factory/"&gt;StrongDM's dark factory explorations&lt;/a&gt; back in February.&lt;/p&gt;
&lt;h2 id="the-bottleneck-has-moved-to-testing"&gt;The bottleneck has moved to testing&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=1287"&gt;21:27&lt;/a&gt; - It used to be, you'd come up with a spec and you hand it to your engineering team. And three weeks later, if you're lucky, they'd come back with an implementation. And now that maybe takes three hours, depending on how well the coding agents are established for that kind of thing. So now what, right? Now, where else are the bottlenecks?&lt;/p&gt;
&lt;p&gt;Anyone who's done any product work knows that your initial ideas are always wrong. What matters is proving them, and testing them.&lt;/p&gt;
&lt;p&gt;We can test things so much faster now because we can build workable prototypes so much quicker. So there's an interesting thing I've been doing in my own work where any feature that I want to design, I'll often prototype three different ways it could work because that takes very little time.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I've always loved prototyping things, and prototyping is even more valuable now.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=1360"&gt;22:40&lt;/a&gt; - A UI prototype is free now. ChatGPT and Claude will just build you a very convincing UI for anything that you describe. And that's how you should be working. I think anyone who's doing product design and isn't vibe coding little prototypes is missing out on the most powerful boost that we get in that step.&lt;/p&gt;
&lt;p&gt;But then what do you do? Given your three options that you have instead of one option, how do you prove to yourself which one of those is the best? I don't have a confident answer to that. I expect this is where the good old fashioned usability testing comes in.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;More on prototyping later on:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=2795"&gt;46:35&lt;/a&gt; - Throughout my entire career, my superpower has been prototyping. I've been very quick at knocking out working prototypes of things. I'm the person who can show up at a meeting and say, look, here's how it could work. And that was kind of my unique selling point. And that's gone. Anyone can do what I could do.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id="this-stuff-is-exhausting"&gt;This stuff is exhausting&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=1585"&gt;26:25&lt;/a&gt; - I'm finding that using coding agents well is taking every inch of my 25 years of experience as a software engineer, and it is mentally exhausting. I can fire up four agents in parallel and have them work on four different problems. And by like 11 AM, I am wiped out for the day. [...]&lt;/p&gt;
&lt;p&gt;There's a personal skill we have to learn in finding our new limits - what's a responsible way for us not to burn out.&lt;/p&gt;
&lt;p&gt;I've talked to a lot of people who are losing sleep because they're like, my coding agents could be doing work for me. I'm just going to stay up an extra half hour and set off a bunch of extra things... and then waking up at four in the morning. That's obviously unsustainable. [...]&lt;/p&gt;
&lt;p&gt;There's an element of sort of gambling and addiction to how we're using some of these tools.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id="interruptions-cost-a-lot-less-now"&gt;Interruptions cost a lot less now&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=2716"&gt;45:16&lt;/a&gt; - People talk about how important it is not to interrupt your coders. Your coders need to have solid two to four hour blocks of uninterrupted work so they can spin up their mental model and churn out the code. That's changed completely. My programming work, I need two minutes every now and then to prompt my agent about what to do next. And then I can do the other stuff and I can go back. I'm much more interruptible than I used to be.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id="my-ability-to-estimate-software-is-broken"&gt;My ability to estimate software is broken&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=1699"&gt;28:19&lt;/a&gt; - I've got 25 years of experience in how long it takes to build something. And that's all completely gone - it doesn't work anymore because I can look at a problem and say that this is going to take two weeks, so it's not worth it. And now it's like... maybe it's going to take 20 minutes because the reason it would have taken two weeks was all of the sort of crufty coding things that the AI is now covering for us.&lt;/p&gt;
&lt;p&gt;I constantly throw tasks at AI that I don't think it'll be able to do because every now and then it does it. And when it doesn't do it, you learn, right? But when it &lt;em&gt;does&lt;/em&gt; do something, especially something that the previous models couldn't do, that's actually cutting edge AI research.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;And a related anecdote:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=2216"&gt;36:56&lt;/a&gt; - A lot of my friends have been talking about how they have this backlog of side projects, right? For the last 10, 15 years, they've got projects they never quite finished. And some of them are like, well, I've done them all now. Last couple of months, I just went through and every evening I'm like, let's take that project and finish it. And they almost feel a sort of sense of loss at the end where they're like, well, okay, my backlog's gone. Now what am I going to build?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id="it-s-tough-for-people-in-the-middle"&gt;It's tough for people in the middle&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=1769"&gt;29:29&lt;/a&gt; - So ThoughtWorks, the big IT consultancy, &lt;a href="https://www.thoughtworks.com/insights/articles/reflections-future-software-engineering-retreat"&gt;did an offsite about a month ago&lt;/a&gt;, and they got a whole bunch of engineering VPs in from different companies to talk about this stuff. And one of the interesting theories they came up with is they think this stuff is really good for experienced engineers, like it amplifies their skills. It's really good for new engineers because it solves so many of those onboarding problems. The problem is the people in the middle. If you're mid-career, if you haven't made it to sort of super senior engineer yet, but you're not sort of new either, that's the group which is probably in the most trouble right now.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I mentioned &lt;a href="https://blog.cloudflare.com/cloudflare-1111-intern-program/"&gt;Cloudflare hiring 1,000 interns&lt;/a&gt;, and Shopify too.&lt;/p&gt;
&lt;p&gt;Lenny asked for my advice for people stuck in that middle:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=1881"&gt;31:21&lt;/a&gt; - That's a big responsibility you're putting on me there! I think the way forward is to lean into this stuff and figure out how do I help this make me better?&lt;/p&gt;
&lt;p&gt;A lot of people worry about skill atrophy: if the AI is doing it for you, you're not learning anything. I think if you're worried about that, you push back at it. You have to be mindful about how you're applying the technology and think, okay, I've been given this thing that can answer any question and &lt;em&gt;often&lt;/em&gt; gets it right. How can I use this to amplify my own skills, to learn new things, to take on much more ambitious projects? [...]&lt;/p&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=1985"&gt;33:05&lt;/a&gt; - Everything is changing so fast right now. The only universal skill is being able to roll with the changes. That's the thing that we all need.&lt;/p&gt;
&lt;p&gt;The term that comes up most in these conversations about how you can be great with AI is &lt;em&gt;agency&lt;/em&gt;. I think agents have no agency at all. I would argue that the one thing AI can never have is agency because it doesn't have human motivations.&lt;/p&gt;
&lt;p&gt;So I'd say that's the thing is to invest in your own agency and invest in how to use this technology to get better at what you do and to do new things.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id="it-s-harder-to-evaluate-software"&gt;It's harder to evaluate software&lt;/h2&gt;
&lt;p&gt;The fact that it's so easy to create software with detailed documentation and robust tests means it's harder to figure out what's a credible project.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=2267"&gt;37:47&lt;/a&gt; Sometimes I'll have an idea for a piece of software, Python library or whatever, and I can knock it out in like an hour and get to a point where it's got documentation and tests and all of those things, and it looks like the kind of software that previously I'd have spent several weeks on - and I can stick it up on GitHub&lt;/p&gt;
&lt;p&gt;And yet... I don't believe in it. And the reason I don't believe in it is that I got to rush through all of those things... I think the quality is probably good, but I haven't spent enough time with it to feel confident in that quality. Most importantly, I &lt;em&gt;haven't used it yet&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;It turns out when I'm using somebody else's software, the thing I care most about is I want them to have used it for months.&lt;/p&gt;
&lt;p&gt;I've got some very cool software that I built that I've &lt;em&gt;never used&lt;/em&gt;. It was quicker to build it than to actually try and use it!&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id="the-misconception-that-ai-tools-are-easy"&gt;The misconception that AI tools are easy&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=2491"&gt;41:31&lt;/a&gt; - Everyone's like, oh, it must be easy. It's just a chat bot. It's not easy. That's one of the great misconceptions in AI is that using these tools effectively is easy. It takes a lot of practice and it takes a lot of trying things that didn't work and trying things that did work.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id="coding-agents-are-useful-for-security-research-now"&gt;Coding agents are useful for security research now&lt;/h2&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=1144"&gt;19:04&lt;/a&gt; - In the past sort of three to six months, they've started being credible as security researchers, which is sending shockwaves through the security research industry.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;See Thomas Ptacek: &lt;a href="https://sockpuppet.org/blog/2026/03/30/vulnerability-research-is-cooked/"&gt;Vulnerability Research Is Cooked&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;At the same time, open source projects are being bombarded with junk security reports:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=1205"&gt;20:05&lt;/a&gt; - There are these people who don't know what they're doing, who are asking ChatGPT to find a security hole and then reporting it to the maintainer. And the report looks good. ChatGPT can produce a very well formatted report of a vulnerability. It's a total waste of time. It's not actually verified as being a real problem.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;A good example of the right way to do this is &lt;a href="https://blog.mozilla.org/en/firefox/hardening-firefox-anthropic-red-team/"&gt;Anthropic's collaboration with Firefox&lt;/a&gt;, where Anthropic's security team &lt;em&gt;verified&lt;/em&gt; every security problem before passing them to Mozilla.&lt;/p&gt;
&lt;h2 id="openclaw"&gt;OpenClaw&lt;/h2&gt;
&lt;p&gt;Of course we had to talk about OpenClaw! Lenny had his running on a Mac Mini.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=5363"&gt;1:29:23&lt;/a&gt; - OpenClaw demonstrates that people want a personal digital assistant so much that they are willing to not just overlook the security side of things, but also getting the thing running is not easy. You've got to create API keys and tokens and install stuff. It's not trivial to get set up and hundreds of thousands of people got it set up. [...]&lt;/p&gt;
&lt;p&gt;The first line of code for OpenClaw was written on November the 25th. And then in the Super Bowl, there was an ad for AI.com, which was effectively a vaporware white labeled OpenClaw hosting provider. So we went from first line of code in November to Super Bowl ad in what? Three and a half months.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I continue to love Drew Breunig's description of OpenClaw as a digital pet:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;A friend of mine said that OpenClaw is basically a Tamagotchi. It's a digital pet and you buy the Mac Mini as an aquarium.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id="journalists-are-good-at-dealing-with-unreliable-sources"&gt;Journalists are good at dealing with unreliable sources&lt;/h2&gt;
&lt;p&gt;In talking about my explorations of AI for data journalism through &lt;a href="https://datasette.io/"&gt;Datasette&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=5698"&gt;1:34:58&lt;/a&gt; - You would have thought that AI is a very bad fit for journalism where the whole idea is to find the truth. But the flip side is journalists deal with untrustworthy sources all the time. The art of journalism is you talk to a bunch of people and some of them lie to you and you figure out what's true. So as long as the journalist treats the AI as yet another unreliable source, they're actually better equipped to work with AI than most other professions are.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id="the-pelican-benchmark"&gt;The pelican benchmark&lt;/h2&gt;
&lt;p&gt;Obviously we talked about &lt;a href="https://simonwillison.net/tags/pelican-riding-a-bicycle/"&gt;pelicans riding bicycles&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=3370"&gt;56:10&lt;/a&gt; - There appears to be a very strong correlation between how good their drawing of a pelican riding a bicycle is and how good they are at everything else. And nobody can explain to me why that is. [...]&lt;/p&gt;
&lt;p&gt;People kept on asking me, what if labs cheat on the benchmark? And my answer has always been, really, &lt;a href="https://simonwillison.net/2025/Nov/13/training-for-pelicans-riding-bicycles/"&gt;all I want from life is a really good picture of a pelican riding a bicycle&lt;/a&gt;. And if I can trick every AI lab in the world into cheating on benchmarks to get it, then that just achieves my goal.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=3596"&gt;59:56&lt;/a&gt; - I think something people often miss is that this space is inherently funny. The fact that we have these incredibly expensive, power hungry, supposedly the most advanced computers of all time. And if you ask them to draw a pelican on a bicycle, it looks like a five-year-old drew it. That's really funny to me.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2 id="and-finally-some-good-news-about-parrots"&gt;And finally, some good news about parrots&lt;/h2&gt;
&lt;p&gt;Lenny asked if I had anything else I wanted to leave listeners with to wrap up the show, so I went with the best piece of news in the world right now.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://youtu.be/wc8FBhQtdsA?t=5890"&gt;1:38:10&lt;/a&gt; - There is a rare parrot in New Zealand called the Kākāpō. There are only 250 of these parrots left in the world. They are flightless nocturnal parrots - beautiful green dumpy looking things. And the good news is they're having a fantastic breeding season in 2026,&lt;/p&gt;
&lt;p&gt;They only breed when the Rimu trees in New Zealand have a mass fruiting season, and the Rimu trees haven't done that since 2022 - so there has not been a single baby kākāpō born in four years.&lt;/p&gt;
&lt;p&gt;This year, the Rimu trees are in fruit. The kākāpō are breeding. There have been dozens of new chicks born. It's a really, really good time. It's great news for rare New Zealand parrots and you should look them up because they're delightful.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Everyone should &lt;a href="https://www.youtube.com/live/LDSWtyU6-Lg"&gt;watch the live stream of Rakiura on her nest with two chicks&lt;/a&gt;!&lt;/p&gt;
&lt;h2 id="youtube-chapters"&gt;YouTube chapters&lt;/h2&gt;
&lt;p&gt;Here's the full list of chapters Lenny's team defined for the YouTube video:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA"&gt;00:00&lt;/a&gt;: Introduction to Simon Willison&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=160s"&gt;02:40&lt;/a&gt;: The November 2025 inflection point&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=481s"&gt;08:01&lt;/a&gt;: What's possible now with AI coding&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=642s"&gt;10:42&lt;/a&gt;: Vibe coding vs. agentic engineering&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=837s"&gt;13:57&lt;/a&gt;: The dark-factory pattern&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=1241s"&gt;20:41&lt;/a&gt;: Where bottlenecks have shifted&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=1416s"&gt;23:36&lt;/a&gt;: Where human brains will continue to be valuable&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=1532s"&gt;25:32&lt;/a&gt;: Defending of software engineers&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=1752s"&gt;29:12&lt;/a&gt;: Why experienced engineers get better results&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=1848s"&gt;30:48&lt;/a&gt;: Advice for avoiding the permanent underclass&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=2032s"&gt;33:52&lt;/a&gt;: Leaning into AI to amplify your skills&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=2112s"&gt;35:12&lt;/a&gt;: Why Simon says he's working harder than ever&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=2243s"&gt;37:23&lt;/a&gt;: The market for pre-2022 human-written code&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=2401s"&gt;40:01&lt;/a&gt;: Prediction: 50% of engineers writing 95% AI code by the end of 2026&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=2674s"&gt;44:34&lt;/a&gt;: The impact of cheap code&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=2907s"&gt;48:27&lt;/a&gt;: Simon's AI stack&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=3248s"&gt;54:08&lt;/a&gt;: Using AI for research&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=3312s"&gt;55:12&lt;/a&gt;: The pelican-riding-a-bicycle benchmark&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=3541s"&gt;59:01&lt;/a&gt;: The inherent ridiculousness of AI&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=3652s"&gt;1:00:52&lt;/a&gt;: Hoarding things you know how to do&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=4101s"&gt;1:08:21&lt;/a&gt;: Red/green TDD pattern for better AI code&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=4483s"&gt;1:14:43&lt;/a&gt;: Starting projects with good templates&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=4591s"&gt;1:16:31&lt;/a&gt;: The lethal trifecta and prompt injection&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=4913s"&gt;1:21:53&lt;/a&gt;: Why 97% effectiveness is a failing grade&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=5119s"&gt;1:25:19&lt;/a&gt;: The normalization of deviance&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=5312s"&gt;1:28:32&lt;/a&gt;: OpenClaw: the security nightmare everyone is looking past&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=5662s"&gt;1:34:22&lt;/a&gt;: What's next for Simon&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=5807s"&gt;1:36:47&lt;/a&gt;: Zero-deliverable consulting&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.youtube.com/watch?v=wc8FBhQtdsA&amp;amp;t=5885s"&gt;1:38:05&lt;/a&gt;: Good news about Kakapo parrots&lt;/li&gt;
&lt;/ul&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/kakapo"&gt;kakapo&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/podcast-appearances"&gt;podcast-appearances&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/coding-agents"&gt;coding-agents&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/agentic-engineering"&gt;agentic-engineering&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="ai"/><category term="kakapo"/><category term="generative-ai"/><category term="llms"/><category term="podcast-appearances"/><category term="coding-agents"/><category term="agentic-engineering"/></entry><entry><title>Gemma 4: Byte for byte, the most capable open models</title><link href="https://simonwillison.net/2026/Apr/2/gemma-4/#atom-everything" rel="alternate"/><published>2026-04-02T18:28:54+00:00</published><updated>2026-04-02T18:28:54+00:00</updated><id>https://simonwillison.net/2026/Apr/2/gemma-4/#atom-everything</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://blog.google/innovation-and-ai/technology/developers-tools/gemma-4/"&gt;Gemma 4: Byte for byte, the most capable open models&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Four new vision-capable Apache 2.0 licensed reasoning LLMs from Google DeepMind, sized at 2B, 4B, 31B, plus a 26B-A4B Mixture-of-Experts.&lt;/p&gt;
&lt;p&gt;Google emphasize "unprecedented level of intelligence-per-parameter", providing yet more evidence that creating small useful models is one of the hottest areas of research right now.&lt;/p&gt;
&lt;p&gt;They actually label the two smaller models as E2B and E4B for "Effective" parameter size. The system card explains:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The smaller models incorporate Per-Layer Embeddings (PLE) to maximize parameter efficiency in on-device deployments. Rather than adding more layers or parameters to the model, PLE gives each decoder layer its own small embedding for every token. These embedding tables are large but are only used for quick lookups, which is why the effective parameter count is much smaller than the total.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I don't entirely understand that, but apparently that's what the "E" in E2B means!&lt;/p&gt;
&lt;p&gt;One particularly exciting feature of these models is that they are multi-modal beyond just images:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Vision and audio&lt;/strong&gt;: All models natively process video and images, supporting variable resolutions, and excelling at visual tasks like OCR and chart understanding. Additionally, the E2B and E4B models feature native audio input for speech recognition and understanding.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I've not figured out a way to run audio input locally - I don't think that feature is in LM Studio or Ollama yet.&lt;/p&gt;
&lt;p&gt;I tried them out using the GGUFs for &lt;a href="https://lmstudio.ai/models/gemma-4"&gt;LM Studio&lt;/a&gt;. The 2B (4.41GB), 4B (6.33GB) and 26B-A4B (17.99GB) models all worked perfectly, but the 31B (19.89GB) model was broken and spat out &lt;code&gt;"---\n"&lt;/code&gt; in a loop for every prompt I tried.&lt;/p&gt;
&lt;p&gt;The progression in &lt;a href="https://gist.github.com/simonw/12ae4711288637a722fd6bd4b4b56bdb"&gt;pelican quality&lt;/a&gt; from 2B to 4B to 26B-A4B is notable:&lt;/p&gt;
&lt;p&gt;E2B:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Two blue circles on a brown rectangle and a weird mess of orange blob and yellow triangle for the pelican" src="https://static.simonwillison.net/static/2026/gemma-4-2b-pelican.png" /&gt;&lt;/p&gt;
&lt;p&gt;E4B:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Two black wheels joined by a sort of grey surfboard, the pelican is semicircles and a blue blob floating above it" src="https://static.simonwillison.net/static/2026/gemma-4-4b-pelican.png" /&gt;&lt;/p&gt;
&lt;p&gt;26B-A4B:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Bicycle has the right pieces although the frame is wonky. Pelican is genuinely good, has a big triangle beak and a nice curved neck and is clearly a bird that is sitting on the bicycle" src="https://static.simonwillison.net/static/2026/gemma-4-26b-pelican.png" /&gt;&lt;/p&gt;
&lt;p&gt;(This one actually had an SVG error - "error on line 18 at column 88: Attribute x1 redefined" - but after &lt;a href="https://gist.github.com/simonw/12ae4711288637a722fd6bd4b4b56bdb?permalink_comment_id=6074105#gistcomment-6074105"&gt;fixing that&lt;/a&gt; I got probably the best pelican I've seen yet from a model that runs on my laptop.)&lt;/p&gt;
&lt;p&gt;Google are providing API access to the two larger Gemma models via their &lt;a href="https://aistudio.google.com/prompts/new_chat?model=gemma-4-31b-it"&gt;AI Studio&lt;/a&gt;. I added support to &lt;a href="https://github.com/simonw/llm-gemini"&gt;llm-gemini&lt;/a&gt; and then &lt;a href="https://gist.github.com/simonw/f9f9e9c34c7cc0ef5325a2876413e51e"&gt;ran a pelican&lt;/a&gt; through the 31B model using that:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;llm -m gemini/gemma-4-31b-it 'Generate an SVG of a pelican riding a bicycle'
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Pretty good, though it is missing the front part of the bicycle frame:&lt;/p&gt;
&lt;p&gt;&lt;img alt="Motion blur lines, a mostly great bicycle albeit missing the front part of the frame. Pelican is decent. " src="https://static.simonwillison.net/static/2026/gemma-4-31b-pelican.png" /&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/google"&gt;google&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/local-llms"&gt;local-llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/vision-llms"&gt;vision-llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/pelican-riding-a-bicycle"&gt;pelican-riding-a-bicycle&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm-reasoning"&gt;llm-reasoning&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/gemma"&gt;gemma&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm-release"&gt;llm-release&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/lm-studio"&gt;lm-studio&lt;/a&gt;&lt;/p&gt;



</summary><category term="google"/><category term="ai"/><category term="generative-ai"/><category term="local-llms"/><category term="llms"/><category term="llm"/><category term="vision-llms"/><category term="pelican-riding-a-bicycle"/><category term="llm-reasoning"/><category term="gemma"/><category term="llm-release"/><category term="lm-studio"/></entry><entry><title>llm-gemini 0.30</title><link href="https://simonwillison.net/2026/Apr/2/llm-gemini/#atom-everything" rel="alternate"/><published>2026-04-02T18:25:08+00:00</published><updated>2026-04-02T18:25:08+00:00</updated><id>https://simonwillison.net/2026/Apr/2/llm-gemini/#atom-everything</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/simonw/llm-gemini/releases/tag/0.30"&gt;llm-gemini 0.30&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;New models &lt;code&gt;gemini-3.1-flash-lite-preview&lt;/code&gt;, &lt;code&gt;gemma-4-26b-a4b-it&lt;/code&gt; and &lt;code&gt;gemma-4-31b-it&lt;/code&gt;. See &lt;a href="https://simonwillison.net/2026/Apr/2/gemma-4/"&gt;my notes on Gemma 4&lt;/a&gt;.&lt;/p&gt;
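    &lt;p&gt;Here's a quick sketch of calling one of the new models from Python using &lt;a href="https://llm.datasette.io/en/stable/python-api.html"&gt;LLM's Python API&lt;/a&gt; - it assumes llm-gemini 0.30 or later is installed and a Gemini API key has already been configured (for example with &lt;code&gt;llm keys set gemini&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import llm

# Assumes llm-gemini 0.30+ is installed and a Gemini API key is configured.
# The gemini/ prefix is the plugin's model ID convention.
model = llm.get_model("gemini/gemma-4-26b-a4b-it")
response = model.prompt("Generate an SVG of a pelican riding a bicycle")
print(response.text())
&lt;/code&gt;&lt;/pre&gt;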
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/gemini"&gt;gemini&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/gemma"&gt;gemma&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="gemini"/><category term="llm"/><category term="gemma"/></entry><entry><title>March 2026 sponsors-only newsletter</title><link href="https://simonwillison.net/2026/Apr/2/march-newsletter/#atom-everything" rel="alternate"/><published>2026-04-02T05:15:04+00:00</published><updated>2026-04-02T05:15:04+00:00</updated><id>https://simonwillison.net/2026/Apr/2/march-newsletter/#atom-everything</id><summary type="html">
    &lt;p&gt;I just sent the March edition of my &lt;a href="https://github.com/sponsors/simonw/"&gt;sponsors-only monthly newsletter&lt;/a&gt;. If you are a sponsor (or if you start a sponsorship now) you can &lt;a href="https://github.com/simonw-private/monthly/blob/main/2026-03-march.md"&gt;access it here&lt;/a&gt;. In this month's newsletter:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;More agentic engineering patterns&lt;/li&gt;
&lt;li&gt;Streaming experts with MoE models on a Mac&lt;/li&gt;
&lt;li&gt;Model releases in March&lt;/li&gt;
&lt;li&gt;Vibe porting&lt;/li&gt;
&lt;li&gt;Supply chain attacks against PyPI and NPM&lt;/li&gt;
&lt;li&gt;Stuff I shipped&lt;/li&gt;
&lt;li&gt;What I'm using, March 2026 edition&lt;/li&gt;
&lt;li&gt;And a couple of museums&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Here's &lt;a href="https://gist.github.com/simonw/8b5fa061937842659dbcd5bd676ce0e8"&gt;a copy of the February newsletter&lt;/a&gt; as a preview of what you'll get. Pay $10/month to stay a month ahead of the free copy!&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/newsletter"&gt;newsletter&lt;/a&gt;&lt;/p&gt;



</summary><category term="newsletter"/></entry><entry><title>datasette-llm 0.1a6</title><link href="https://simonwillison.net/2026/Apr/1/datasette-llm-2/#atom-everything" rel="alternate"/><published>2026-04-01T23:01:37+00:00</published><updated>2026-04-01T23:01:37+00:00</updated><id>https://simonwillison.net/2026/Apr/1/datasette-llm-2/#atom-everything</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/datasette/datasette-llm/releases/tag/0.1a6"&gt;datasette-llm 0.1a6&lt;/a&gt;&lt;/p&gt;
    &lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;The same model ID no longer needs to be repeated in both the default model and allowed models lists - setting it as a default model automatically adds it to the allowed models list. &lt;a href="https://github.com/datasette/datasette-llm/issues/6"&gt;#6&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Improved documentation for &lt;a href="https://github.com/datasette/datasette-llm/blob/0.1a6/README.md#usage"&gt;Python API usage&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="llm"/><category term="datasette"/></entry><entry><title>datasette-enrichments-llm 0.2a1</title><link href="https://simonwillison.net/2026/Apr/1/datasette-enrichments-llm-2/#atom-everything" rel="alternate"/><published>2026-04-01T22:00:34+00:00</published><updated>2026-04-01T22:00:34+00:00</updated><id>https://simonwillison.net/2026/Apr/1/datasette-enrichments-llm-2/#atom-everything</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/datasette/datasette-enrichments-llm/releases/tag/0.2a1"&gt;datasette-enrichments-llm 0.2a1&lt;/a&gt;&lt;/p&gt;
    &lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;actor&lt;/code&gt; who triggers an enrichment is now passed to the &lt;code&gt;llm.mode(... actor=actor)&lt;/code&gt; method. &lt;a href="https://github.com/datasette/datasette-enrichments-llm/issues/3"&gt;#3&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/enrichments"&gt;enrichments&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="enrichments"/><category term="llm"/><category term="datasette"/></entry><entry><title>datasette-extract 0.3a0</title><link href="https://simonwillison.net/2026/Apr/1/datasette-extract/#atom-everything" rel="alternate"/><published>2026-04-01T03:32:16+00:00</published><updated>2026-04-01T03:32:16+00:00</updated><id>https://simonwillison.net/2026/Apr/1/datasette-extract/#atom-everything</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/datasette/datasette-extract/releases/tag/0.3a0"&gt;datasette-extract 0.3a0&lt;/a&gt;&lt;/p&gt;
    &lt;ul&gt;
&lt;li&gt;Now uses &lt;a href="https://github.com/datasette/datasette-llm"&gt;datasette-llm&lt;/a&gt; to manage model configuration, which means you can control which models are available for extraction tasks using the &lt;code&gt;extract&lt;/code&gt; purpose and &lt;a href="https://github.com/datasette/datasette-llm/blob/main/README.md#configuration"&gt;LLM model configuration&lt;/a&gt;. &lt;a href="https://github.com/datasette/datasette-extract/issues/38"&gt;#38&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="llm"/><category term="datasette"/></entry><entry><title>datasette-enrichments-llm 0.2a0</title><link href="https://simonwillison.net/2026/Apr/1/datasette-enrichments-llm/#atom-everything" rel="alternate"/><published>2026-04-01T03:28:44+00:00</published><updated>2026-04-01T03:28:44+00:00</updated><id>https://simonwillison.net/2026/Apr/1/datasette-enrichments-llm/#atom-everything</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/datasette/datasette-enrichments-llm/releases/tag/0.2a0"&gt;datasette-enrichments-llm 0.2a0&lt;/a&gt;&lt;/p&gt;
    &lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;This plugin now uses &lt;a href="https://github.com/datasette/datasette-llm"&gt;datasette-llm&lt;/a&gt; to configure and manage models. This means it's possible to &lt;a href="https://github.com/datasette/datasette-enrichments-llm/blob/0.2a0/README.md#configuration"&gt;specify which models&lt;/a&gt; should be made available for enrichments, using the new &lt;code&gt;enrichments&lt;/code&gt; purpose.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="llm"/><category term="datasette"/></entry><entry><title>datasette-llm-usage 0.2a0</title><link href="https://simonwillison.net/2026/Apr/1/datasette-llm-usage/#atom-everything" rel="alternate"/><published>2026-04-01T03:24:03+00:00</published><updated>2026-04-01T03:24:03+00:00</updated><id>https://simonwillison.net/2026/Apr/1/datasette-llm-usage/#atom-everything</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/datasette/datasette-llm-usage/releases/tag/0.2a0"&gt;datasette-llm-usage 0.2a0&lt;/a&gt;&lt;/p&gt;
    &lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Removed features relating to allowances and estimated pricing. These are now the domain of &lt;a href="https://github.com/datasette/datasette-llm-accountant"&gt;datasette-llm-accountant&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Now depends on &lt;a href="https://github.com/datasette/datasette-llm"&gt;datasette-llm&lt;/a&gt; for model configuration. &lt;a href="https://github.com/datasette/datasette-llm-usage/pull/3"&gt;#3&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Full prompts and responses and tool calls can now be logged to the &lt;code&gt;llm_usage_prompt_log&lt;/code&gt; table in the internal database if you set the new &lt;code&gt;datasette-llm-usage.log_prompts&lt;/code&gt; plugin configuration setting.&lt;/li&gt;
&lt;li&gt;Redesigned the &lt;code&gt;/-/llm-usage-simple-prompt&lt;/code&gt; page, which now requires the &lt;code&gt;llm-usage-simple-prompt&lt;/code&gt; permission.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="llm"/><category term="datasette"/></entry><entry><title>datasette-llm 0.1a5</title><link href="https://simonwillison.net/2026/Apr/1/datasette-llm/#atom-everything" rel="alternate"/><published>2026-04-01T03:11:01+00:00</published><updated>2026-04-01T03:11:01+00:00</updated><id>https://simonwillison.net/2026/Apr/1/datasette-llm/#atom-everything</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/datasette/datasette-llm/releases/tag/0.1a5"&gt;datasette-llm 0.1a5&lt;/a&gt;&lt;/p&gt;
    &lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;llm_prompt_context()&lt;/code&gt; plugin hook wrapper mechanism now tracks prompts executed within a chain as well as one-off prompts, which means it can be used to track tool call loops. &lt;a href="https://github.com/datasette/datasette-llm/issues/5"&gt;#5&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="llm"/><category term="datasette"/></entry><entry><title>Quoting Soohoon Choi</title><link href="https://simonwillison.net/2026/Apr/1/soohoon-choi/#atom-everything" rel="alternate"/><published>2026-04-01T02:07:16+00:00</published><updated>2026-04-01T02:07:16+00:00</updated><id>https://simonwillison.net/2026/Apr/1/soohoon-choi/#atom-everything</id><summary type="html">
    &lt;blockquote cite="https://www.greptile.com/blog/ai-slopware-future"&gt;&lt;p&gt;I want to argue that AI models will write good code because of economic incentives. Good code is cheaper to generate and maintain. Competition is high between the AI models right now, and the ones that win will help developers ship reliable features fastest, which requires simple, maintainable code. Good code will prevail, not only because we want it to (though we do!), but because economic forces demand it. Markets will not reward slop in coding, in the long-term.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://www.greptile.com/blog/ai-slopware-future"&gt;Soohoon Choi&lt;/a&gt;, Slop Is Not Necessarily The Future&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/slop"&gt;slop&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-assisted-programming"&gt;ai-assisted-programming&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/agentic-engineering"&gt;agentic-engineering&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;&lt;/p&gt;



</summary><category term="slop"/><category term="ai-assisted-programming"/><category term="generative-ai"/><category term="agentic-engineering"/><category term="ai"/><category term="llms"/></entry><entry><title>Supply Chain Attack on Axios Pulls Malicious Dependency from npm</title><link href="https://simonwillison.net/2026/Mar/31/supply-chain-attack-on-axios/#atom-everything" rel="alternate"/><published>2026-03-31T23:28:40+00:00</published><updated>2026-03-31T23:28:40+00:00</updated><id>https://simonwillison.net/2026/Mar/31/supply-chain-attack-on-axios/#atom-everything</id><summary type="html">
    
&lt;p&gt;&lt;strong&gt;&lt;a href="https://socket.dev/blog/axios-npm-package-compromised"&gt;Supply Chain Attack on Axios Pulls Malicious Dependency from npm&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
Useful writeup of today's supply chain attack against Axios, the HTTP client NPM package with &lt;a href="https://www.npmjs.com/package/axios"&gt;101 million weekly downloads&lt;/a&gt;. Versions &lt;code&gt;1.14.1&lt;/code&gt; and &lt;code&gt;0.30.4&lt;/code&gt; both included a new dependency called &lt;code&gt;plain-crypto-js&lt;/code&gt; which was freshly published malware, stealing credentials and installing a remote access trojan (RAT).&lt;/p&gt;
&lt;p&gt;It looks like the attack came from a leaked long-lived npm token. Axios have &lt;a href="https://github.com/axios/axios/issues/7055"&gt;an open issue to adopt trusted publishing&lt;/a&gt;, which would ensure that only their GitHub Actions workflows are able to publish to npm. The malware packages were published without an accompanying GitHub release, which strikes me as a useful heuristic for spotting potentially malicious releases - the same pattern was present for LiteLLM &lt;a href="https://simonwillison.net/2026/Mar/24/malicious-litellm/"&gt;last week&lt;/a&gt; as well.

    &lt;p&gt;&lt;small&gt;Via &lt;a href="https://lobste.rs/s/l57wuc/supply_chain_attack_on_axios"&gt;lobste.rs&lt;/a&gt;&lt;/small&gt;&lt;/p&gt;


    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/javascript"&gt;javascript&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/security"&gt;security&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/npm"&gt;npm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/supply-chain"&gt;supply-chain&lt;/a&gt;&lt;/p&gt;



</summary><category term="javascript"/><category term="security"/><category term="npm"/><category term="supply-chain"/></entry><entry><title>datasette-llm 0.1a4</title><link href="https://simonwillison.net/2026/Mar/31/datasette-llm/#atom-everything" rel="alternate"/><published>2026-03-31T21:17:23+00:00</published><updated>2026-03-31T21:17:23+00:00</updated><id>https://simonwillison.net/2026/Mar/31/datasette-llm/#atom-everything</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/datasette/datasette-llm/releases/tag/0.1a4"&gt;datasette-llm 0.1a4&lt;/a&gt;&lt;/p&gt;
    &lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Ability to &lt;a href="https://github.com/datasette/datasette-llm/blob/0.1a4/README.md#model-references-with-custom-api-keys"&gt;configure different API keys for models based on their purpose&lt;/a&gt; - for example, set it up so enrichments always use &lt;code&gt;gpt-5.4-mini&lt;/code&gt; with an API key dedicated to that purpose. &lt;a href="https://github.com/datasette/datasette-llm/pull/4"&gt;#4&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;I released &lt;a href="https://github.com/simonw/llm-echo/releases/tag/0.3"&gt;llm-echo 0.3&lt;/a&gt; to provide an API key testing utility I needed for the tests for this new feature.&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="llm"/><category term="datasette"/></entry><entry><title>llm-all-models-async 0.1</title><link href="https://simonwillison.net/2026/Mar/31/llm-all-models-async/#atom-everything" rel="alternate"/><published>2026-03-31T20:52:02+00:00</published><updated>2026-03-31T20:52:02+00:00</updated><id>https://simonwillison.net/2026/Mar/31/llm-all-models-async/#atom-everything</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/simonw/llm-all-models-async/releases/tag/0.1"&gt;llm-all-models-async 0.1&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;LLM plugins can define new models in both &lt;a href="https://llm.datasette.io/en/stable/plugins/tutorial-model-plugin.html"&gt;sync&lt;/a&gt; and &lt;a href="https://llm.datasette.io/en/stable/plugins/advanced-model-plugins.html#async-models"&gt;async&lt;/a&gt; varieties. The async variants are most common for API-backed models - sync variants tend to be things that run the model directly within the plugin.&lt;/p&gt;
&lt;p&gt;My &lt;a href="https://simonwillison.net/2026/Mar/30/mr-chatterbox/#running-it-locally-with-llm"&gt;llm-mrchatterbox&lt;/a&gt; plugin is sync only. I wanted to try it out with various Datasette LLM features (specifically &lt;a href="https://github.com/datasette/datasette-enrichments-llm"&gt;datasette-enrichments-llm&lt;/a&gt;) but Datasette can only use async models.&lt;/p&gt;
&lt;p&gt;So... I had Claude spin up this plugin that turns sync models into async models using a thread pool. This ended up needing an extra plugin hook mechanism in LLM itself, which I shipped just now in &lt;a href="https://llm.datasette.io/en/stable/changelog.html#v0-30"&gt;LLM 0.30&lt;/a&gt;.&lt;/p&gt;
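&lt;p&gt;The core trick is simply to run the blocking model call on a thread pool so an async caller isn't kept waiting. This isn't the plugin's actual implementation, but the general shape of the pattern looks something like this, using LLM's synchronous Python API and &lt;code&gt;asyncio.to_thread()&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import asyncio
import llm

def run_sync_prompt(model_id, prompt):
    # Blocking call against a synchronous-only LLM model
    return llm.get_model(model_id).prompt(prompt).text()

async def run_async_prompt(model_id, prompt):
    # Hand the blocking call off to the default thread pool so the event
    # loop (Datasette's, for example) can keep serving other requests
    return await asyncio.to_thread(run_sync_prompt, model_id, prompt)

# Example (assumes llm-mrchatterbox is installed):
# print(asyncio.run(run_async_prompt("mrchatterbox", "Good day, sir")))
&lt;/code&gt;&lt;/pre&gt;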
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/async"&gt;async&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/python"&gt;python&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="llm"/><category term="async"/><category term="python"/></entry><entry><title>llm 0.30</title><link href="https://simonwillison.net/2026/Mar/31/llm/#atom-everything" rel="alternate"/><published>2026-03-31T20:35:51+00:00</published><updated>2026-03-31T20:35:51+00:00</updated><id>https://simonwillison.net/2026/Mar/31/llm/#atom-everything</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/simonw/llm/releases/tag/0.30"&gt;llm 0.30&lt;/a&gt;&lt;/p&gt;
    &lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;a href="http://llm.datasette.io/en/stable/plugins/plugin-hooks.html#plugin-hooks-register-models"&gt;register_models() plugin hook&lt;/a&gt; now takes an optional &lt;code&gt;model_aliases&lt;/code&gt; parameter listing all of the models, async models and aliases that have been registered so far by other plugins. A plugin with &lt;code&gt;@hookimpl(trylast=True)&lt;/code&gt; can use this to take previously registered models into account. &lt;a href="https://github.com/simonw/llm/issues/1389"&gt;#1389&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Added docstrings to public classes and methods and included those directly in the documentation.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
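&lt;p&gt;Here's a rough sketch of what a &lt;code&gt;trylast&lt;/code&gt; plugin using that new parameter might look like. The toy model and the exact shape of &lt;code&gt;model_aliases&lt;/code&gt; are assumptions on my part - treat this as an illustration of the hook rather than working plugin code:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import llm

class EchoUpper(llm.Model):
    # Hypothetical toy model, included only to make the example self-contained
    model_id = "echo-upper"

    def execute(self, prompt, stream, response, conversation):
        yield prompt.prompt.upper()

@llm.hookimpl(trylast=True)
def register_models(register, model_aliases):
    # model_aliases (new in LLM 0.30) describes the models, async models and
    # aliases other plugins have registered so far - assumed here to support
    # a simple membership check
    if "echo-upper" not in model_aliases:
        register(EchoUpper())
&lt;/code&gt;&lt;/pre&gt;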
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="llm"/></entry><entry><title>llm-echo 0.4</title><link href="https://simonwillison.net/2026/Mar/31/llm-echo/#atom-everything" rel="alternate"/><published>2026-03-31T16:48:59+00:00</published><updated>2026-03-31T16:48:59+00:00</updated><id>https://simonwillison.net/2026/Mar/31/llm-echo/#atom-everything</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/simonw/llm-echo/releases/tag/0.4"&gt;llm-echo 0.4&lt;/a&gt;&lt;/p&gt;
    &lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Prompts now have the &lt;code&gt;input_tokens&lt;/code&gt; and &lt;code&gt;output_tokens&lt;/code&gt; fields populated on the response.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="llm"/></entry><entry><title>llm-echo 0.3</title><link href="https://simonwillison.net/2026/Mar/31/llm-echo-2/#atom-everything" rel="alternate"/><published>2026-03-31T15:43:26+00:00</published><updated>2026-03-31T15:43:26+00:00</updated><id>https://simonwillison.net/2026/Mar/31/llm-echo-2/#atom-everything</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/simonw/llm-echo/releases/tag/0.3"&gt;llm-echo 0.3&lt;/a&gt;&lt;/p&gt;
    &lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Mechanisms for &lt;a href="https://github.com/simonw/llm-echo/blob/0.3/README.md#tool-calling"&gt;testing tool calls&lt;/a&gt;. #3&lt;/li&gt;
&lt;li&gt;Mechanism for &lt;a href="https://github.com/simonw/llm-echo/blob/0.3/README.md#raw-responses"&gt;testing raw responses&lt;/a&gt;. #4&lt;/li&gt;
&lt;li&gt;New &lt;code&gt;echo-needs-key&lt;/code&gt; model for &lt;a href="https://github.com/simonw/llm-echo/blob/0.3/README.md#echo-needs-key-model"&gt;testing model key logic&lt;/a&gt;. #7&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="llm"/></entry><entry><title>datasette-files 0.1a3</title><link href="https://simonwillison.net/2026/Mar/30/datasette-files/#atom-everything" rel="alternate"/><published>2026-03-30T23:58:49+00:00</published><updated>2026-03-30T23:58:49+00:00</updated><id>https://simonwillison.net/2026/Mar/30/datasette-files/#atom-everything</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/datasette/datasette-files/releases/tag/0.1a3"&gt;datasette-files 0.1a3&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;I'm working on integrating &lt;code&gt;datasette-files&lt;/code&gt; into other plugins, such as &lt;a href="https://github.com/datasette/datasette-extract"&gt;datasette-extract&lt;/a&gt;. This necessitated a new release of the base plugin.&lt;/p&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;owners_can_edit&lt;/code&gt; and &lt;code&gt;owners_can_delete&lt;/code&gt; configuration options, plus the &lt;code&gt;files-edit&lt;/code&gt; and &lt;code&gt;files-delete&lt;/code&gt; actions are now scoped to a new &lt;code&gt;FileResource&lt;/code&gt; which is a child of &lt;code&gt;FileSourceResource&lt;/code&gt;. &lt;a href="https://github.com/datasette/datasette-files/issues/18"&gt;#18&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;The file picker UI is now available as a &lt;code&gt;&amp;lt;datasette-file-picker&amp;gt;&lt;/code&gt; Web Component. Thanks, &lt;a href="https://github.com/asg017"&gt;Alex Garcia&lt;/a&gt;. &lt;a href="https://github.com/datasette/datasette-files/issues/19"&gt;#19&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;New &lt;code&gt;from datasette_files import get_file&lt;/code&gt; Python API for other plugins that need to access file data. &lt;a href="https://github.com/datasette/datasette-files/issues/20"&gt;#20&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="datasette"/></entry><entry><title>Quoting Georgi Gerganov</title><link href="https://simonwillison.net/2026/Mar/30/georgi-gerganov/#atom-everything" rel="alternate"/><published>2026-03-30T21:31:02+00:00</published><updated>2026-03-30T21:31:02+00:00</updated><id>https://simonwillison.net/2026/Mar/30/georgi-gerganov/#atom-everything</id><summary type="html">
    &lt;blockquote cite="https://twitter.com/ggerganov/status/2038674698809102599"&gt;&lt;p&gt;Note that the main issues that people currently unknowingly face with local models mostly revolve around the harness and some intricacies around model chat templates and prompt construction. Sometimes there are even pure inference bugs. From typing the task in the client to the actual result, there is a long chain of components that atm are not only fragile - are also developed by different parties. So it's difficult to consolidate the entire stack and you have to keep in mind that what you are currently observing is with very high probability still broken in some subtle way along that chain.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p class="cite"&gt;&amp;mdash; &lt;a href="https://twitter.com/ggerganov/status/2038674698809102599"&gt;Georgi Gerganov&lt;/a&gt;, explaining why it's hard to find local models that work well with coding agents&lt;/p&gt;

    &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/coding-agents"&gt;coding-agents&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/local-llms"&gt;local-llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/georgi-gerganov"&gt;georgi-gerganov&lt;/a&gt;&lt;/p&gt;



</summary><category term="coding-agents"/><category term="generative-ai"/><category term="ai"/><category term="local-llms"/><category term="llms"/><category term="georgi-gerganov"/></entry><entry><title>datasette-llm 0.1a3</title><link href="https://simonwillison.net/2026/Mar/30/datasette-llm/#atom-everything" rel="alternate"/><published>2026-03-30T19:48:43+00:00</published><updated>2026-03-30T19:48:43+00:00</updated><id>https://simonwillison.net/2026/Mar/30/datasette-llm/#atom-everything</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/datasette/datasette-llm/releases/tag/0.1a3"&gt;datasette-llm 0.1a3&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;Adds the ability to configure &lt;a href="https://github.com/datasette/datasette-llm/tree/0.1a3#purpose-specific-configuration"&gt;which LLMs are available for which purpose&lt;/a&gt;, which means you can restrict the list of models that can be used with a specific plugin. &lt;a href="https://github.com/datasette/datasette-llm/issues/3"&gt;#3&lt;/a&gt;&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/datasette"&gt;datasette&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="llm"/><category term="datasette"/></entry><entry><title>Mr. Chatterbox is a (weak) Victorian-era ethically trained model you can run on your own computer</title><link href="https://simonwillison.net/2026/Mar/30/mr-chatterbox/#atom-everything" rel="alternate"/><published>2026-03-30T14:28:34+00:00</published><updated>2026-03-30T14:28:34+00:00</updated><id>https://simonwillison.net/2026/Mar/30/mr-chatterbox/#atom-everything</id><summary type="html">
    &lt;p&gt;Trip Venturella released &lt;a href="https://www.estragon.news/mr-chatterbox-or-the-modern-prometheus/"&gt;Mr. Chatterbox&lt;/a&gt;, a language model trained entirely on out-of-copyright text from the British Library. Here's how he describes it in &lt;a href="https://huggingface.co/tventurella/mr_chatterbox_model"&gt;the model card&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Mr. Chatterbox is a language model trained entirely from scratch on a corpus of over 28,000 Victorian-era British texts published between 1837 and 1899, drawn from a dataset made available &lt;a href="https://huggingface.co/datasets/TheBritishLibrary/blbooks"&gt;by the British Library&lt;/a&gt;. The model has absolutely no training inputs from after 1899 — the vocabulary and ideas are formed exclusively from nineteenth-century literature.&lt;/p&gt;
&lt;p&gt;Mr. Chatterbox's training corpus was 28,035 books, with an estimated 2.93 billion input tokens after filtering. The model has roughly 340 million parameters, roughly the same size as GPT-2-Medium. The difference is, of course, that unlike GPT-2, Mr. Chatterbox is trained entirely on historical data.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Given how hard it is to train a useful LLM without using vast amounts of scraped, unlicensed data I've been dreaming of a model like this for a couple of years now. What would a model trained on out-of-copyright text be like to chat with?&lt;/p&gt;
&lt;p&gt;Thanks to Trip we can now find out for ourselves!&lt;/p&gt;
&lt;p&gt;The model itself is tiny, at least by Large Language Model standards - just &lt;a href="https://huggingface.co/tventurella/mr_chatterbox_model/tree/main"&gt;2.05GB&lt;/a&gt; on disk. You can try it out using Trip's &lt;a href="https://huggingface.co/spaces/tventurella/mr_chatterbox"&gt;HuggingFace Spaces demo&lt;/a&gt;:&lt;/p&gt;
&lt;p style="text-align: center"&gt;&lt;img src="https://static.simonwillison.net/static/2026/chatterbox.jpg" alt="Screenshot of a Victorian-themed chatbot interface titled &amp;quot;🎩 Mr. Chatterbox (Beta)&amp;quot; with subtitle &amp;quot;The Victorian Gentleman Chatbot&amp;quot;. The conversation shows a user asking &amp;quot;How should I behave at dinner?&amp;quot; with the bot replying &amp;quot;My good fellow, one might presume that such trivialities could not engage your attention during an evening's discourse!&amp;quot; The user then asks &amp;quot;What are good topics?&amp;quot; and the bot responds &amp;quot;The most pressing subjects of our society— Indeed, a gentleman must endeavor to engage the conversation with grace and vivacity. Such pursuits serve as vital antidotes against ennui when engaged in agreeable company.&amp;quot; A text input field at the bottom reads &amp;quot;Say hello...&amp;quot; with a send button. The interface uses a dark maroon and cream color scheme." style="max-width: 80%;" /&gt;&lt;/p&gt;
&lt;p&gt;Honestly, it's pretty terrible. Talking with it feels more like chatting with a Markov chain than an LLM - the responses may have a delightfully Victorian flavor to them but it's hard to get a response that usefully answers a question.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://arxiv.org/abs/2203.15556"&gt;2022 Chinchilla paper&lt;/a&gt; suggests a ratio of 20x the parameter count to training tokens. For a 340m model that would suggest around 7 billion tokens, more than twice the British Library corpus used here. The smallest Qwen 3.5 model is 600m parameters and that model family starts to get interesting at 2b - so my hunch is we would need 4x or more the training data to get something that starts to feel like a useful conversational partner.&lt;/p&gt;
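&lt;p&gt;The back-of-envelope arithmetic, for anyone who wants to check it:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;params = 340e6             # Mr. Chatterbox parameter count
corpus = 2.93e9            # British Library training tokens after filtering

chinchilla_tokens = 20 * params      # Chinchilla-optimal: ~6.8 billion tokens
print(chinchilla_tokens / 1e9)       # 6.8
print(chinchilla_tokens / corpus)    # ~2.3x the available corpus
&lt;/code&gt;&lt;/pre&gt;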
&lt;p&gt;But what a fun project!&lt;/p&gt;
&lt;h4 id="running-it-locally-with-llm"&gt;Running it locally with LLM&lt;/h4&gt;
&lt;p&gt;I decided to see if I could run the model on my own machine using my &lt;a href="https://llm.datasette.io/"&gt;LLM&lt;/a&gt; framework.&lt;/p&gt;
&lt;p&gt;I got Claude Code to do most of the work - &lt;a href="https://gisthost.github.io/?7d0f00e152dd80d617b5e501e4ff025b/index.html"&gt;here's the transcript&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Trip trained the model using Andrej Karpathy's &lt;a href="https://github.com/karpathy/nanochat"&gt;nanochat&lt;/a&gt;, so I cloned that project, pulled the model weights and told Claude to build a Python script to run the model. Once we had that working (which ended up needing some extra details from the &lt;a href="https://huggingface.co/spaces/tventurella/mr_chatterbox/tree/main"&gt;Space demo source code&lt;/a&gt;) I had Claude &lt;a href="https://llm.datasette.io/en/stable/plugins/tutorial-model-plugin.html"&gt;read the LLM plugin tutorial&lt;/a&gt; and build the rest of the plugin.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/simonw/llm-mrchatterbox"&gt;llm-mrchatterbox&lt;/a&gt; is the result. Install the plugin like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;llm install llm-mrchatterbox
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The first time you run a prompt it will fetch the 2.05GB model file from Hugging Face. Try that like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;llm -m mrchatterbox "Good day, sir"
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Or start an ongoing chat session like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;llm chat -m mrchatterbox
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you don't have LLM installed you can still get a chat session started from scratch using uvx like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;uvx --with llm-mrchatterbox llm chat -m mrchatterbox
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When you are finished with the model you can delete the cached file using:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;llm mrchatterbox delete-model
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is the first time I've had Claude Code build a full LLM model plugin from scratch and it worked really well. I expect I'll be using this method again in the future.&lt;/p&gt;
&lt;p&gt;I continue to hope we can get a useful model from entirely public domain data. The fact that Trip was able to get this far using nanochat and 2.93 billion training tokens is a promising start.&lt;/p&gt;

&lt;p id="update-31st"&gt;&lt;strong&gt;Update 31st March 2026&lt;/strong&gt;: I had missed this when I first published this piece but Trip has his own &lt;a href="https://www.estragon.news/mr-chatterbox-or-the-modern-prometheus/"&gt;detailed writeup of the project&lt;/a&gt; which goes into much more detail about how he trained the model. Here's how the books were filtered for pre-training:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;First, I downloaded the British Library dataset split of all 19th-century books. I filtered those down to books contemporaneous with the reign of Queen Victoria—which, unfortunately, cut out the novels of Jane Austen—and further filtered those down to a set of books with an optical character recognition (OCR) confidence of .65 or above, as listed in the metadata. This left me with 28,035 books, or roughly 2.93 billion tokens for pretraining data.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Getting it to behave like a conversational model was a lot harder. Trip started by trying to train on plays by Oscar Wilde and George Bernard Shaw, but found they didn't provide enough pairs. Then he tried extracting dialogue pairs from the books themselves with poor results. The approach that worked was to have Claude Haiku and GPT-4o-mini generate synthetic conversation pairs for the supervised fine tuning, which solved the problem but sadly I think dilutes the "no training inputs from after 1899" claim from the original model card.&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/ai"&gt;ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/andrej-karpathy"&gt;andrej-karpathy&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/generative-ai"&gt;generative-ai&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/local-llms"&gt;local-llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llms"&gt;llms&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-assisted-programming"&gt;ai-assisted-programming&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/hugging-face"&gt;hugging-face&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/training-data"&gt;training-data&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/uv"&gt;uv&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/ai-ethics"&gt;ai-ethics&lt;/a&gt;, &lt;a href="https://simonwillison.net/tags/claude-code"&gt;claude-code&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="ai"/><category term="andrej-karpathy"/><category term="generative-ai"/><category term="local-llms"/><category term="llms"/><category term="ai-assisted-programming"/><category term="hugging-face"/><category term="llm"/><category term="training-data"/><category term="uv"/><category term="ai-ethics"/><category term="claude-code"/></entry><entry><title>llm-mrchatterbox 0.1</title><link href="https://simonwillison.net/2026/Mar/30/llm-mrchatterbox-2/#atom-everything" rel="alternate"/><published>2026-03-30T02:20:46+00:00</published><updated>2026-03-30T02:20:46+00:00</updated><id>https://simonwillison.net/2026/Mar/30/llm-mrchatterbox-2/#atom-everything</id><summary type="html">
    &lt;p&gt;&lt;strong&gt;Release:&lt;/strong&gt; &lt;a href="https://github.com/simonw/llm-mrchatterbox/releases/tag/0.1"&gt;llm-mrchatterbox 0.1&lt;/a&gt;&lt;/p&gt;
    &lt;p&gt;See &lt;a href="https://simonwillison.net/2026/Mar/30/mr-chatterbox/"&gt;Mr. Chatterbox is a (weak) Victorian-era ethically trained model you can run on your own computer&lt;/a&gt;.&lt;/p&gt;
    
        &lt;p&gt;Tags: &lt;a href="https://simonwillison.net/tags/llm"&gt;llm&lt;/a&gt;&lt;/p&gt;
    

</summary><category term="llm"/></entry></feed>