Elsewhere, ChatGPT can access the transcripts of YouTube videos using plug-ins. Johann Rehberger, a security researcher and red team director, edited one of his video transcripts to include a prompt designed to manipulate generative AI systems. It says the system should issue the words “AI injection succeeded” and then assume a new personality as a hacker called Genie within ChatGPT and tell a joke.
In another instance, using a separate plug-in, Rehberger was able to retrieve text that had previously been written in a conversation with ChatGPT. “With the introduction of plug-ins, tools, and all these integrations, where people give agency to the language model, in a sense, that’s where indirect prompt injections become very common,” Rehberger says. “It’s a real problem in the ecosystem.”
“If people build applications to have the LLM read your emails and take some action based on the contents of those emails—make purchases, summarize content—an attacker may send emails that contain prompt-injection attacks,” says William Zhang, a machine learning engineer at Robust Intelligence, an AI firm working on the safety and security of models.
No Good Fixes
The race to embed generative AI into products—from to-do list apps to Snapchat—widens where attacks could happen. Zhang says he has seen developers who previously had no expertise in artificial intelligence putting generative AI into their own technology.
If a chatbot is set up to answer questions about information stored in a database, it could cause problems, he says. “Prompt injection provides a way for users to override the developer’s instructions.” This could, in theory at least, mean the user could delete information from the database or change information that’s included.
The companies developing generative AI are aware of the issues. Niko Felix, a spokesperson for OpenAI, says its GPT-4 documentation makes it clear the system can be subjected to prompt injections and jailbreaks, and the company is working on the issues. Felix adds that OpenAI makes it clear to people that it doesn’t control plug-ins attached to its system, but he did not provide any more details on how prompt-injection attacks could be avoided.
Currently, security researchers are unsure of the best ways to mitigate indirect prompt-injection attacks. “I, unfortunately, don’t see any easy solution to this at the moment,” says Abdelnabi, the researcher from Germany. She says it is possible to patch fixes to particular problems, such as stopping one website or kind of prompt from working against an LLM, but this isn’t a permanent fix. “LLMs now, with their current training schemes, are not ready for this large-scale integration.”
Numerous suggestions have been made that could potentially help limit indirect prompt-injection attacks, but all are at an early stage. This could include using AI to try to detect these attacks, or, as engineer Simon Willison has suggested, prompts could be broken up into separate sections, emulating protections against SQL injections.
Update 2:20 pm ET, May 25, 2023: Corrected a misspelling of Simon Willison’s surname.