93 posts tagged “prompt-injection”
Prompt Injection is a security attack against applications built on top of Large Language Models, introduced here and further described in this series of posts.
2022
Twitter pranksters derail GPT-3 bot with newly discovered “prompt injection” hack. I’m quoted in this Ars Technica article about prompt injection and the Remoteli.io Twitter bot.
I don’t know how to solve prompt injection
Some extended thoughts about prompt injection attacks against software built on top of AI language models such a GPT-3. This post started as a Twitter thread but I’m promoting it to a full blog entry here.
[... 581 words]
«« first
« previous
page 4 / 4