I think prompt engineering can be divided into "context engineering" (selecting and preparing relevant context for a task) and "prompt programming" (writing clear instructions). For an LLM search application like Perplexity, both matter a lot, but only the final, presentation-oriented stage of prompt programming is vulnerable to being echoed back.