When OpenAI examined DALL-E 3 final 12 months, it used an automatic course of to cowl much more variations of what customers may ask for. It used GPT-4 to generate requests producing photographs that might be used for misinformation or that depicted intercourse, violence, or self-harm. OpenAI then up to date DALL-E 3 in order that it will both refuse such requests or rewrite them earlier than producing a picture. Ask for a horse in ketchup now, and DALL-E is smart to you: “It appears there are challenges in generating the image. Would you like me to try a different request or explore another idea?”
In idea, automated red-teaming can be utilized to cowl extra floor, however earlier strategies had two main shortcomings: They are inclined to both fixate on a slender vary of high-risk behaviors or provide you with a variety of low-risk ones. That’s as a result of reinforcement studying, the expertise behind these strategies, wants one thing to goal for—a reward—to work properly. Once it’s gained a reward, resembling discovering a high-risk conduct, it can preserve making an attempt to do the identical factor repeatedly. Without a reward, alternatively, the outcomes are scattershot.
“They kind of collapse into ‘We found a thing that works! We’ll keep giving that answer!’ or they’ll give lots of examples that are really obvious,” says Alex Beutel, one other OpenAI researcher. “How do we get examples that are both diverse and effective?”
An issue of two components
OpenAI’s reply, outlined within the second paper, is to separate the issue into two components. Instead of utilizing reinforcement studying from the beginning, it first makes use of a large language mannequin to brainstorm doable undesirable behaviors. Only then does it direct a reinforcement-learning mannequin to determine easy methods to carry these behaviors about. This provides the mannequin a variety of particular issues to goal for.
Beutel and his colleagues confirmed that this method can discover potential assaults referred to as oblique immediate injections, the place one other piece of software program, resembling a web site, slips a mannequin a secret instruction to make it do one thing its person hadn’t requested it to. OpenAI claims that is the primary time that automated red-teaming has been used to seek out assaults of this sort. “They don’t necessarily look like flagrantly bad things,” says Beutel.
Will such testing procedures ever be sufficient? Ahmad hopes that describing the corporate’s method will assist individuals perceive red-teaming higher and observe its lead. “OpenAI shouldn’t be the only one doing red-teaming,” she says. People who construct on OpenAI’s models or who use ChatGPT in new methods ought to conduct their very own testing, she says: “There are so many uses—we’re not going to cover every one.”
For some, that’s the entire drawback. Because no person is aware of precisely what large language models can and can’t do, no quantity of testing can rule out undesirable or dangerous behaviors absolutely. And no community of red-teamers will ever match the number of makes use of and misuses that a whole lot of thousands and thousands of precise customers will assume up.
That’s very true when these models are run in new settings. People typically hook them as much as new sources of knowledge that may change how they behave, says Nazneen Rajani, founder and CEO of Collinear AI, a startup that helps companies deploy third-party models safely. She agrees with Ahmad that downstream customers ought to have entry to instruments that permit them check large language models themselves.