I recently spoke with the CTO of a popular AI note-taking app who told me something surprising: they spend twice as much on vector search as they do on OpenAI API calls. Think about that for a second. Running the retrieval layer costs them more than paying for the LLM itself.
— James Luan, Engineering Architect of Milvus