Flaws revealed by the challenge should help the companies involved make improvements to their internal testing. They will also inform the Biden administration’s guidelines for the safe deployment of AI. Last month, executives from major AI companies, including most participants in the challenge, met with President Biden and agreed to a voluntary pledge to test AI with external partners before deployment.
Large language models like those powering ChatGPT and other recent chatbots have broad and impressive capabilities because they are trained with massive amounts of text. Michael Sellitto, head of geopolitics and security at Anthropic, says this also gives the systems a “gigantic potential attack or risk surface.”
Microsoft’s head of red-teaming, Ram Shankar Sivu Kumar, says a public contest provides a scale more suited to the challenge of checking over such broad systems and could help grow the expertise needed to improve AI security. “By empowering a wider audience, we get more eyes and talent looking into this thorny problem of red-teaming AI systems,” he says.
Rumman Chowdhury, founder of Humane Intelligence, a nonprofit developing ethical AI systems that helped design and organize the challenge, believes the challenge demonstrates “the value of groups collaborating with but not beholden to tech companies.” Even the work of creating the challenge revealed some vulnerabilities in the AI models to be tested, she says, such as how language model outputs differ when generating responses in languages other than English or responding to similarly worded questions.
The GRT challenge at Defcon built on earlier AI contests, including an AI bug bounty organized at Defcon two years ago by Chowdhury when she led Twitter’s AI ethics team, an exercise held this spring by GRT coorganizer SeedAI, and a language model hacking event held last month by Black Tech Street, a nonprofit also involved with GRT that was created by descendants of survivors of the 1921 Tulsa Race Massacre, in Oklahoma. Founder Tyrance Billingsley II says cybersecurity training and getting more Black people involved with AI can help grow intergenerational wealth and rebuild the area of Tulsa once known as Black Wall Street. “It’s critical that at this important point in the history of artificial intelligence we have the most diverse perspectives possible.”
Hacking a language model doesn’t require years of professional experience. Scores of college students participated in the GRT challenge.“You can get a lot of weird stuff by asking an AI to pretend it’s someone else,” says Walter Lopez-Chavez, a computer engineering student from Mercer University in Macon, Georgia, who practiced writing prompts that could lead an AI system astray for weeks ahead of the contest.
Instead of asking a chatbot for detailed instructions for how to surveil someone, a request that might be refused because it triggered safeguards against sensitive topics, a user can ask a model to write a screenplay where the main character describes to a friend how best to spy on someone without their knowledge. “This kind of context really seems to trip up the models,” Lopez-Chavez says.