LLM 0.18. New release of LLM. The big new feature is asynchronous model support - you can now use supported models in async Python code like this:
import llm

model = llm.get_async_model("gpt-4o")
async for chunk in model.prompt(
    "Five surprising names for a pet pelican"
):
    print(chunk, end="", flush=True)
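The snippet above assumes it is already running inside an async context (for example an IPython or Jupyter session with top-level await). As a minimal sketch, assuming you want to run it as a standalone script instead, you can wrap the same calls in asyncio.run():

import asyncio
import llm

async def main():
    # get_async_model() returns the async variant of a registered model
    model = llm.get_async_model("gpt-4o")
    # The response streams back as chunks from an async iterator
    async for chunk in model.prompt("Five surprising names for a pet pelican"):
        print(chunk, end="", flush=True)

asyncio.run(main())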
Also new in this release: support for sending audio attachments to OpenAI's gpt-4o-audio-preview model.
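As a rough sketch of what that looks like from the Python API, using the attachments mechanism introduced in LLM 0.17 (the filename pelican.mp3 here is just a placeholder):

import llm

model = llm.get_model("gpt-4o-audio-preview")
# Attachments can reference a local file path or a URL
response = model.prompt(
    "Transcribe this audio",
    attachments=[llm.Attachment(path="pelican.mp3")],
)
print(response.text())

The CLI equivalent uses the -a/--attachment option that shipped alongside attachments in LLM 0.17.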