Simon Willison’s Weblog


3 items tagged “will-mcgugan”

2024

Anatomy of a Textual User Interface. Will McGugan used Textual and my LLM Python library to build a delightful TUI for talking to a simulation of Mother, the AI from the Aliens movies:

[Animated screenshot of a terminal app called MotherApp. Mother: INTERFACE 2037 READY FOR INQUIRY. I type "Who is onboard?" and Mother streams back a numbered roster of the Nostromo's crew: Dallas, Kane, Ripley, Lambert, Ash, Brett and Parker.]

The entire implementation is just 77 lines of code. It includes PEP 723 inline dependency information:

# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "llm",
#     "textual",
# ]
# ///

That means you can run it in a dedicated environment with the correct dependencies installed, using uv run like this:

wget 'https://gist.githubusercontent.com/willmcgugan/648a537c9d47dafa59cb8ece281d8c2c/raw/7aa575c389b31eb041ae7a909f2349a96ffe2a48/mother.py'
export OPENAI_API_KEY='sk-...'
uv run mother.py

I found the send_prompt() method particularly interesting. Textual uses asyncio for its event loop, but LLM currently only supports synchronous execution and can block for several seconds while executing a prompt.

Will used the Textual @work(thread=True) decorator, documented here, to run that operation in a thread:

@work(thread=True)
def send_prompt(self, prompt: str, response: Response) -> None:
    response_content = ""
    # This blocking LLM call runs in a worker thread, off the event loop
    llm_response = self.model.prompt(prompt, system=SYSTEM)
    for chunk in llm_response:
        response_content += chunk
        # Hand the widget update back to the UI thread
        self.call_from_thread(response.update, response_content)

Looping through the response like that and calling self.call_from_thread(response.update, response_content) with the accumulated string is all it takes to implement streaming responses in the Textual UI. That Response object subclasses textual.widgets.Markdown, so any Markdown in the response is rendered using Rich.
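
Here's a minimal, self-contained sketch of that pattern (my example, not Will's code: the hard-coded list of chunks stands in for a streaming LLM response):

from textual import work
from textual.app import App, ComposeResult
from textual.widgets import Markdown


class Response(Markdown):
    """Markdown widget that displays a streamed response."""


class StreamingApp(App):
    def compose(self) -> ComposeResult:
        yield Response()

    def on_mount(self) -> None:
        # Kick off the worker thread, passing it the widget to update
        self.stream_response(self.query_one(Response))

    @work(thread=True)
    def stream_response(self, response: Response) -> None:
        import time

        content = ""
        # Simulated stream of chunks, in place of a real LLM response
        for chunk in ["Hello", ", ", "**world**", "!"]:
            content += chunk
            time.sleep(0.5)
            # Widget updates must happen on the event loop thread
            self.call_from_thread(response.update, content)


if __name__ == "__main__":
    StreamingApp().run()

Accumulating the full string and re-rendering it on each chunk is simpler than appending, and it lets the Markdown widget re-parse partially complete markup as it arrives.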

# 2nd September 2024, 4:39 pm / python, will-mcgugan, textual, llm, uv

2021

The thing about semver major version numbers are that they don't mean new stuff, they're a permanent reminder of how many times you got the API wrong. Semver doesn't mean MAJOR.MINOR.PATCH, it means FAILS.FEATURES.BUGS

Will McGugan

# 6th August 2021, 4:17 pm / versioning, semver, will-mcgugan

2020

How to get Rich with Python (a terminal rendering library). Will McGugan introduces Rich, his new Python library for rendering content on the terminal. This is a very cool piece of software—out of the box it supports coloured text, emoji, tables, rendering Markdown, syntax highlighting code, rendering Python tracebacks, progress bars and more. “pip install rich” and then “python -m rich” to render a “test card” demo demonstrating the features of the library.
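
As a taste of the API, here's a minimal sketch (my example, not from Will's post) exercising a few of the features listed above:

from rich.console import Console
from rich.markdown import Markdown
from rich.table import Table

console = Console()

# Styled text and emoji via console markup
console.print("[bold magenta]Hello[/bold magenta] :sparkles:")

# A simple table
table = Table(title="Nostromo crew")
table.add_column("Name")
table.add_column("Role")
table.add_row("Ripley", "Warrant Officer")
table.add_row("Ash", "Science Officer")
console.print(table)

# Rendered Markdown
console.print(Markdown("# Heading\n\nSome *emphasis* and `code`."))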

# 4th May 2020, 11:27 pm / cli, python, will-mcgugan