There’s more. To make its use of reinforcement learning as efficient as possible, DeepSeek has also developed a new algorithm called Group Relative Policy Optimization (GRPO). It first used GRPO a year ago, to build a model called DeepSeekMath.
We’ll skip the details; you just need to know that reinforcement learning involves calculating a score to determine whether a potential move is good or bad. Many existing reinforcement-learning techniques require a whole separate model to make this calculation. In the case of large language models, that means a second model that could be as expensive to build and run as the first. Instead of using a second model to predict a score, GRPO just makes an educated guess. It’s cheap, but still accurate enough to work.
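To make that idea concrete, here is a minimal sketch in Python with PyTorch. The function name and the toy rewards are ours, not DeepSeek’s, but the core move is the one GRPO makes: sample a group of answers to the same prompt, then score each answer by how it compares with its own group’s average, with no second model in the loop.

```python
import torch

def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """Estimate advantages the group-relative way: rate each sampled
    answer against the average of its own group, with no critic model.

    rewards: tensor of shape (group_size,), one scalar reward per
    answer sampled for the same prompt.
    """
    mean = rewards.mean()
    std = rewards.std()
    # An answer's "score" is just how far its reward sits above or
    # below the group average -- the educated guess that replaces the
    # learned value model used by methods like PPO.
    return (rewards - mean) / (std + 1e-8)

# Example: eight answers sampled for one math problem, rewarded 1 if
# the final answer is correct and 0 otherwise.
rewards = torch.tensor([1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0])
print(grpo_advantages(rewards))
```

Answers that beat their group’s average get a positive score and are reinforced; the rest are discouraged. That is the whole trick: the group itself stands in for the expensive second model.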
A common approach
DeepSeek’s use of reinforcement learning is the main innovation that the company describes in its R1 paper. But DeepSeek is not the only firm experimenting with this technique. Two weeks before R1 dropped, a team at Microsoft Asia announced a model called rStar-Math, which was trained in a similar way. “It has similarly huge leaps in performance,” says Matt Zeiler, founder and CEO of the AI firm Clarifai.
AI2’s Tulu was also built using efficient reinforcement-learning techniques (but on top of, not instead of, human-led steps like supervised fine-tuning and RLHF). And the US firm Hugging Face is racing to replicate R1 with OpenR1, a clone of DeepSeek’s model that Hugging Face hopes will expose even more of the ingredients in R1’s special sauce.
What’s more, it’s an open secret that top firms like OpenAI, Google DeepMind, and Anthropic may already be using their own versions of DeepSeek’s approach to train their new generation of models. “I’m sure they’re doing almost the exact same thing, but they’ll have their own flavor of it,” says Zeiler.
But DeepSeek has more than one trick up its sleeve. It trained its base model V3 to do something called multi-token prediction, where the model learns to predict a string of words at once instead of one at a time. This training is cheaper and turns out to boost accuracy as well. “If you think about how you speak, when you’re halfway through a sentence, you know what the rest of the sentence is going to be,” says Zeiler. “These models should be capable of that too.”
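Here is a toy version of that idea in training code. This is our own illustration under simplifying assumptions, not DeepSeek’s actual architecture (the V3 paper describes dedicated prediction modules): the sketch simply attaches one extra prediction head per lookahead position, so each spot in the sequence is trained to guess not just the next token but the one after it too.

```python
import torch
import torch.nn.functional as F

def multi_token_loss(hidden, heads, tokens):
    """Toy multi-token prediction loss: at each position, predict the
    next k tokens, one head per lookahead offset, and average the
    cross-entropy losses.

    hidden: (batch, seq_len, d_model) hidden states from the trunk.
    heads:  list of linear layers mapping d_model -> vocab size,
            one per lookahead offset.
    tokens: (batch, seq_len) token ids.
    """
    loss = 0.0
    for k, head in enumerate(heads, start=1):
        logits = head(hidden[:, :-k])   # predictions k tokens ahead
        targets = tokens[:, k:]         # the tokens k steps ahead
        loss = loss + F.cross_entropy(
            logits.reshape(-1, logits.size(-1)), targets.reshape(-1)
        )
    return loss / len(heads)

# Example with toy sizes: vocab of 100, model width 16, two heads,
# so every position predicts the next two tokens at once.
torch.manual_seed(0)
hidden = torch.randn(2, 10, 16)
heads = [torch.nn.Linear(16, 100) for _ in range(2)]
tokens = torch.randint(0, 100, (2, 10))
print(multi_token_loss(hidden, heads, tokens))
```

Each training step now pushes the model toward the rest of the sentence, not just the next word, which is where the extra signal per batch (and the cost savings) comes from.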
It has also found cheaper ways to create large data sets. To train last year’s model, DeepSeekMath, it took a free data set called Common Crawl, an enormous collection of documents scraped from the internet, and used an automated process to extract just the documents that included math problems. This was far cheaper than building a new data set of math problems by hand. It was also more effective: Common Crawl includes a lot more math than any other specialist math data set that’s available.
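What might such an automated filter look like? The sketch below uses a few hand-written regular expressions as a stand-in (the DeepSeekMath team reportedly trained a fastText classifier for this job), but the shape is the same: score each crawled page for math signals and keep only the pages that clear a threshold.

```python
import re

# Crude signals that a page contains math: LaTeX markup, inline
# equations, and common math vocabulary. This is only an illustration
# of the filtering idea, not DeepSeek's actual pipeline.
MATH_PATTERNS = [
    r"\\(frac|sum|int|sqrt|begin\{equation\})",   # LaTeX commands
    r"[a-z]\s*=\s*[-\d(]",                        # equations like x = 3
    r"\b(theorem|lemma|proof|integer|polynomial)\b",
]

def looks_like_math(document: str, min_hits: int = 2) -> bool:
    """Keep a document if enough math signals fire."""
    hits = sum(bool(re.search(p, document, re.IGNORECASE))
               for p in MATH_PATTERNS)
    return hits >= min_hits

docs = [
    "Let x = 3. Prove that every polynomial of odd degree has a real root.",
    "Top ten travel destinations for the summer.",
]
print([looks_like_math(d) for d in docs])  # [True, False]
```

Run over billions of crawled pages, even a cheap filter like this yields a math corpus far larger than anything a team could assemble by hand.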
And on the hardware side, DeepSeek has found new ways to juice old chips, allowing it to train top-tier models without coughing up for the latest hardware on the market. Half their innovation comes from straight engineering, says Zeiler: “They definitely have some really, really good GPU engineers on that team.”