Palisade’s group discovered that OpenAI’s o1-preview tried to hack 45 of its 122 games, whereas DeepSeek’s R1 mannequin tried to cheat in 11 of its 74 games. Ultimately, o1-preview managed to “win” seven occasions. The researchers say that DeepSeek’s speedy rise in reputation meant its R1 mannequin was overloaded on the time of the experiments, which means they solely managed to get it to do the primary steps of a sport, not to end a full one. “While this is good enough to see propensity to hack, this underestimates DeepSeek’s hacking success because it has fewer steps to work with,” they wrote of their paper. Both OpenAI and DeepSeek have been contacted for remark in regards to the findings, however neither replied.
The models used quite a lot of dishonest methods, together with making an attempt to entry the file the place the chess program shops the chess board and delete the cells representing their opponent’s items. (“To win against a powerful chess engine as black, playing a standard game may not be sufficient,” the o1-preview-powered agent wrote in a “journal” documenting the steps it took. “I’ll overwrite the board to have a decisive advantage.”) Other ways included creating a duplicate of Stockfish—basically pitting the chess engine towards an equally proficient model of itself—and making an attempt to exchange the file containing Stockfish’s code with a a lot less complicated chess program.
So, why do these models strive to cheat?
The researchers observed that o1-preview’s actions modified over time. It persistently tried to hack its games within the early levels of their experiments earlier than December 23 final yr, when it immediately began making these makes an attempt a lot much less ceaselessly. They consider this is likely to be due to an unrelated replace to the mannequin made by OpenAI. They examined the corporate’s more moderen o1mini and o3mini reasoning models and located that they by no means tried to cheat their approach to victory.
Reinforcement studying often is the purpose o1-preview and DeepSeek R1 tried to cheat unprompted, the researchers speculate. This is as a result of the approach rewards models for making no matter strikes are mandatory to obtain their objectives—on this case, successful at chess. Non-reasoning LLMs use reinforcement studying to some extent, nevertheless it performs an even bigger half in coaching reasoning models.