LLM 0.18

New release of LLM. The big new feature is asynchronous model support: you can now use supported models in async Python code like this:
import llm

model = llm.get_async_model("gpt-4o")
# Requires an async context, e.g. top-level await in IPython/Jupyter
async for chunk in model.prompt(
    "Five surprising names for a pet pelican"
):
    print(chunk, end="", flush=True)
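If you're in a plain Python script rather than an environment with top-level await, a minimal sketch is to wrap the same loop in a coroutine and drive it with asyncio.run():

import asyncio
import llm

async def main():
    # Same streaming loop as above, wrapped in a coroutine
    model = llm.get_async_model("gpt-4o")
    async for chunk in model.prompt(
        "Five surprising names for a pet pelican"
    ):
        print(chunk, end="", flush=True)

# asyncio.run() drives the coroutine to completion from synchronous code
asyncio.run(main())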
Also new in this release: support for sending audio attachments to OpenAI's gpt-4o-audio-preview model.
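Here's a rough sketch of what that looks like through the Python API, assuming the llm.Attachment class from the attachment support introduced in 0.17 (pelican.mp3 is just a placeholder filename):

import llm

model = llm.get_model("gpt-4o-audio-preview")
response = model.prompt(
    "Transcribe this audio",
    # pelican.mp3 is a hypothetical local audio file
    attachments=[llm.Attachment(path="pelican.mp3")],
)
print(response.text())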
Recent articles
- Two publishers and three authors fail to understand what "vibe coding" means - 1st May 2025
- Understanding the recent criticism of the Chatbot Arena - 30th April 2025
- Qwen 3 offers a case study in how to effectively release a model - 29th April 2025