Weeknotes: A new llm CLI tool, plus automating my weeknotes and newsletter
4th April 2023
I started publishing weeknotes in 2019 partly as a way to hold myself accountable but mainly as a way to encourage myself to write more.
Now that I’m writing multiple posts a week (mainly about AI)—and sending them out as a newsletter—my weeknotes are feeling a little less necessary. Here’s everything I’ve written here since my last weeknotes on 22nd March:
- I built a ChatGPT plugin to answer questions about data hosted in Datasette
- AI-enhanced development makes me more ambitious with my projects—and for another illustrative example of that effect, see my TIL Reading thermometer temperatures over time from a video
- What AI can do for you on the Theory of Change podcast
- Think of language models like ChatGPT as a “calculator for words”
- Semi-automating a Substack newsletter with an Observable notebook
(That list was created using this SQL query.)
I’m going to keep them going though: I’ve had so much value out of the habit that I don’t feel it’s time to stop.
The llm CLI tool
This is one new piece of software I’ve released in the past few weeks that I haven’t written about yet.
I built the first version of llm, a command-line tool for running prompts against large language models (currently just ChatGPT and GPT-4), getting the results back on the command-line and also storing the prompt and response in a SQLite database.
It’s still pretty experimental, but it’s already looking like it will be a fun playground for trying out new things.
Here’s the 30s version of how to start using it:
# Install the tool
pipx install llm
# Put an OpenAI API key somewhere it can find it
echo 'your-OpenAI-API-key' > ~/.openai-api-key.txt
# Or you can set it as an environment variable:
# export OPENAI_API_KEY='...'
# Run a prompt
llm 'Ten names for cheesecakes'
This will output the response to that prompt directly to the terminal.
Add the -s or --stream option to stream results instead.
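For example, re-running the same prompt with streaming enabled:
# Stream the response to the terminal as it is generated
llm 'Ten names for cheesecakes' --stream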
Prompts are run against ChatGPT’s inexpensive gpt-3.5-turbo model by default. You can use -4 to run against the GPT-4 model instead (if you have access to it), or --model X to run against another named OpenAI model.
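For example (treat the gpt-4 model name here as an illustration of the --model X option described above):
# Run the prompt against GPT-4 instead of the default gpt-3.5-turbo
llm -4 'Ten names for cheesecakes'
# Or name an OpenAI model explicitly
llm --model gpt-4 'Ten names for cheesecakes'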
If a SQLite database file exists at ~/.llm/log.db, any prompts you run will be automatically recorded to that database, which you can then explore using datasette ~/.llm/log.db.
The following command will create that database if it does not yet exist:
llm init-db
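Putting those pieces together, a typical logging session looks something like this:
# Create ~/.llm/log.db if it does not exist yet
llm init-db
# Prompts run from now on are recorded to that database
llm 'Ten names for cheesecakes'
# Browse the logged prompts and responses
datasette ~/.llm/log.db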
There’s more in the README.
There are plenty of other tools for running LLM prompts on your own machine, including some that work on the command-line and some that record your results. llm is probably less useful than those alternatives, but it’s a fun space for me to try out new ideas.
Automating my weeknotes
I wrote at length about how I automated most of my newsletter using an Observable notebook and some Datasette tricks.
I realized the same trick could work for my weeknotes. The “releases this week” and “TILs this week” sections had previously been assembled by hand, so I applied the technique from the newsletter notebook to automate them as well.
observablehq.com/@simonw/weeknotes is the notebook. It fetches TILs from my TILs Datasette, then grabs releases from this page on GitHub.
It also fetches the full text of my most recent weeknotes post from my blog’s Datasette backup so it can calculate which releases and TILs are new since last time.
It uses various regular expression and array tricks to filter that content to just the new stuff, then assembles me a markdown string which I can use as the basis of my new post.
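The notebook itself runs JavaScript in Observable, but here’s a rough shell sketch of the “new TILs since the last post” step using Datasette’s JSON API (the URL, table and column names are assumptions for illustration, not the notebook’s actual code):
# Date of the previous weeknotes post
SINCE='2023-03-22'
# Fetch recent TILs as JSON and keep only the ones newer than that date
# (database/table/column names are assumptions)
curl -s 'https://til.simonwillison.net/tils/til.json?_shape=array&_sort_desc=created' \
  | jq -r --arg since "$SINCE" \
      '.[] | select(.created > $since) | "- \(.title) - \(.created[:10])"'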
Here’s what that generated for me this week:
Releases since last time
- datasette-explain 0.1a1—2023-04-04
  Explain and validate SQL queries as you type them into Datasette
- llm 0.2—2023-04-01
  Access large language models from the command-line
- datasette-graphql 2.2—2023-03-23
  Datasette plugin providing an automatic GraphQL API for your SQLite databases
TIL since last time
- Copy tables between SQLite databases—2023-04-03
- Reading thermometer temperatures over time from a video—2023-04-02
- Using the ChatGPT streaming API from Python—2023-04-01
- Interactive row selection prototype with Datasette—2023-03-30
- Using jq in an Observable notebook—2023-03-26
- Convert git log output to JSON using jq—2023-03-25