We've adjusted prompt caching so that you now only need to specify cache write points in your prompts - we'll automatically check for cache hits at previous positions. No more manual tracking of read locations needed.
— Alex Albert, Anthropic
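In practice this means a single `cache_control` breakpoint on the large shared prefix is enough. Here's a minimal sketch using the Anthropic Python SDK - the model alias and file path are placeholder assumptions, and the point is simply that you mark where to *write* the cache and the API handles hit detection at earlier positions:

```python
import anthropic

# Hypothetical large shared prefix worth caching (placeholder path)
long_document = open("document.txt").read()

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model alias
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": long_document,
            # Cache write point: with the updated behavior, cache hits
            # at earlier positions are checked automatically - no
            # separate read markers are needed.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Summarize the document above."}],
)

# usage reports cache_creation_input_tokens on the first call and
# cache_read_input_tokens on subsequent calls that hit the cache
print(response.usage)
```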