In September 2024, OpenAI released its o1 model, trained on large-scale reinforcement learning, giving it “advanced reasoning” capabilities. Unfortunately, the details of how they pulled this off were never shared publicly. Today, however, DeepSeek (an AI research lab) has replicated this reasoning behavior and published the full technical details of their approach. In this article, I’ll discuss the key ideas behind this innovation and describe how they work under the hood.
OpenAI’s o1 model marked a new paradigm for training large language models (LLMs). It introduced so-called “thinking” tokens, which enable a sort of scratch pad that the model can use to think through problems and user queries.
The biggest insight from o1 was that performance improved with increased test-time compute. That’s just a fancy way of saying that the more tokens a model generates, the better its response. The figure below, reproduced from OpenAI’s blog, captures this point well.

In the plots above, the y-axes are model performance on AIME (math problems), while the x-axes are various compute times. The left plot depicts the well-known neural scaling laws that kicked off the LLM rush of 2023. In other words, the longer a model is trained (i.e. train-time compute), the better its performance.
On the right, however, we see a new type of scaling law. Here, the more tokens a model generates (i.e. test-time compute), the better its performance.
“Thinking” tokens
A key feature of o1 is its so-called “thinking” tokens. These are special tokens introduced during post-training, which delimit the model’s chain of thought (CoT) reasoning (i.e., thinking through the problem). These special tokens are important for two reasons.
One, they clearly demarcate where the model’s “thinking” starts and stops, so it can be easily parsed when spinning up a UI. And two, they produce a human-interpretable readout of how the model “thinks” through the problem.
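As a simple illustration of the parsing point, here is a minimal sketch using the <think> tags from DeepSeek’s prompt template (shown later in this article). The helper function is hypothetical, not code from either lab.

```python
import re

def split_reasoning(response: str) -> tuple[str, str]:
    """Split a response into (thinking, answer), assuming the chain of thought
    is wrapped in <think>...</think> tags as in DeepSeek-R1's prompt template.
    Purely illustrative."""
    match = re.search(r"<think>(.*?)</think>", response, flags=re.DOTALL)
    thinking = match.group(1).strip() if match else ""
    answer = re.sub(r"<think>.*?</think>", "", response, count=1, flags=re.DOTALL).strip()
    return thinking, answer

demo = "<think>1.5 h per codebase = 2/3 per hour ...</think> The answer is 72/19 hours."
thinking, answer = split_reasoning(demo)
print(thinking)  # the model's scratch-pad reasoning
print(answer)    # the user-facing answer
```

A UI can then hide or collapse the “thinking” portion while displaying the final answer.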
Although OpenAI disclosed that they used reinforcement learning to produce this ability, the exact details of how they did it were not shared. Today, however, we have a pretty good idea thanks to a recent publication from DeepSeek.
DeepSeek’s paper
In January 2025, DeepSeek published “DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning” [2]. While this paper caused its fair share of pandemonium, its central contribution was unveiling the secrets behind o1.
It introduces two models: DeepSeek-R1-Zero and DeepSeek-R1. The former was trained exclusively with reinforcement learning (RL), and the latter was a mixture of supervised fine-tuning (SFT) and RL.
Although the headlines (and title of the paper) were about DeepSeek-R1, the former model is important because, one, it generated training data for R1, and two, it demonstrates striking emergent reasoning abilities that were never taught to the model.
In other words, R1-Zero discovers CoT and test-time compute scaling through RL alone! Let’s discuss how it works.
DeepSeek-R1-Zero (RL only)
Reinforcement learning (RL) is a machine learning approach in which, rather than training models on explicit examples, models learn through trial and error [3]. It works by passing a reward signal to the model that has no explicit functional relationship with the model’s parameters.
This is similar to how we often learn in the real world. For example, if I apply for a job and don’t get a response, I have to figure out what I did wrong and how to improve. This is in contrast to supervised learning, which, in this analogy, would be like the recruiter giving me specific feedback on what I did wrong and how to improve.
While using RL to train R1-Zero involves many technical details, I want to highlight three key ones: the prompt template, the reward signal, and GRPO (Group Relative Policy Optimization).
1) Prompt template
The template used for training is given below, where {prompt} is replaced with a question from a dataset of (presumably) complex math, coding, and logic problems. Notice the inclusion of <think> and <answer> tags through simple prompting.
A conversation between User and Assistant. The user asks a question, and the
Assistant solves it. The assistant first thinks about the reasoning process in
the mind and then provides the user with the answer. The reasoning process and
answer are enclosed within <think> </think> and <answer> </answer> tags,
respectively, i.e., <think> reasoning process here </think> <answer>
answer here </answer>. User: {prompt}. Assistant:
Something that stands out here is the minimal and relaxed prompting strategy. This was an intentional choice by DeepSeek to avoid biasing model responses and to observe the model’s natural evolution during RL.
2) Reward signal
The RL reward has two components: an accuracy reward and a format reward. Since the training dataset consists of questions with clear right answers, a simple rule-based strategy is used to evaluate response accuracy. Similarly, a rule-based formatting reward is used to ensure reasoning tokens are generated between the thinking tags.
The authors note that a neural reward model isn’t used (i.e. rewards are not computed by a neural net) because such models may be prone to reward hacking. In other words, the LLM learns how to trick the reward model into maximizing rewards while degrading downstream performance.
This is just like how humans find ways to exploit any incentive structure to maximize their personal gains while forsaking the original intent of the incentives. This highlights the difficulty of producing good rewards (whether for humans or computers).
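To make the idea concrete, here is a rough sketch of what such a rule-based reward could look like. This is illustrative Python, not DeepSeek’s code; the equal weighting and the exact-string answer check are my assumptions.

```python
import re

def format_reward(response: str) -> float:
    """1.0 if the response follows the <think>...</think> <answer>...</answer>
    format from the prompt template, else 0.0."""
    pattern = r"<think>.*?</think>\s*<answer>.*?</answer>"
    return 1.0 if re.fullmatch(pattern, response.strip(), flags=re.DOTALL) else 0.0

def accuracy_reward(response: str, ground_truth: str) -> float:
    """1.0 if the text inside the <answer> tags matches the known correct answer.
    Real checks are fancier (e.g., normalizing math expressions or running test
    cases for code), but the point is a deterministic rule, not a learned model."""
    match = re.search(r"<answer>(.*?)</answer>", response, flags=re.DOTALL)
    if match is None:
        return 0.0
    return 1.0 if match.group(1).strip() == ground_truth.strip() else 0.0

def total_reward(response: str, ground_truth: str) -> float:
    # Assumed weighting: simply add the two components.
    return accuracy_reward(response, ground_truth) + format_reward(response)
```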
3) GRPO (Group Relative Policy Optimization)
The final detail is how rewards are translated into model parameter updates. This section is quite technical, so feel free to skip ahead.
GRPO is an RL approach that combines a group of responses to update model parameters. To encourage stable training, the authors also incorporate clipping and KL-divergence regularization terms into the loss function. Clipping ensures optimization steps are not too big, and regularization ensures the model’s predictions do not change too abruptly.
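To make the “group relative” part concrete, here is a minimal sketch of the advantage computation, i.e., how each response’s reward is normalized against the other responses sampled for the same prompt. This paraphrases the paper’s formula; it is not their code.

```python
import numpy as np

def group_relative_advantages(rewards: list[float], eps: float = 1e-8) -> np.ndarray:
    """Advantage of each response relative to its group:
    A_i = (r_i - mean(r)) / std(r).
    Responses scoring above the group average are reinforced; the rest are
    discouraged. The small eps guards against a zero std (my addition)."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# e.g., four sampled responses to one prompt, scored by the rule-based reward
print(group_relative_advantages([2.0, 1.0, 0.0, 1.0]))  # ~[ 1.41,  0., -1.41,  0.]
```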
Here is the complete loss function with some (hopefully) helpful annotations.
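In symbols, the GRPO objective reported in [2] has roughly the following form (notation lightly simplified):

$$
\mathcal{J}_{GRPO}(\theta) = \mathbb{E}\!\left[\frac{1}{G}\sum_{i=1}^{G}\left(\min\!\left(\frac{\pi_\theta(o_i\mid q)}{\pi_{\theta_{old}}(o_i\mid q)}A_i,\ \operatorname{clip}\!\left(\frac{\pi_\theta(o_i\mid q)}{\pi_{\theta_{old}}(o_i\mid q)},\,1-\varepsilon,\,1+\varepsilon\right)A_i\right)-\beta\, D_{KL}\!\left(\pi_\theta\,\|\,\pi_{ref}\right)\right)\right]
$$

where the expectation is over prompts $q$ and groups of $G$ responses $\{o_i\}$ sampled from the old policy, and $A_i = \frac{r_i - \operatorname{mean}(\{r_1,\dots,r_G\})}{\operatorname{std}(\{r_1,\dots,r_G\})}$ is the group-relative advantage. The clip term keeps each update step small, and the KL term keeps the new policy close to a reference policy.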

Results (emergent abilities)
The most striking result of R1-Zero is that, despite its minimal guidance, it develops effective reasoning strategies that we might recognize.
For example, it learns implicitly through RL to improve responses via test-time compute (recall the earlier insight from o1). This is depicted in the plot below from the R1 paper [2].

Another notable ability is reflecting on past responses with an uncanny, human-like internal monologue. Here is an example from R1-Zero. Notice the long CoT and the additional verification step before generating the final answer (I omitted some parts because the response was very long).
User prompt: If it takes me 1.5 hours to write a codebase and my intern 8
hours. How long will it take us both to write 3 codebases?
To solve for how long it will take both you and your intern to write
three codebases together, we need to first find out how many codebases each of
you can write per hour individually.
(long CoT) ...
So, it will take approximately 3.7895 hours for you and your intern to write
three codebases together.
To make sure everything makes sense, let us round the answer to a reasonable
number of decimal places (e.g., round to 2 decimal places):
t ≈ 3.79 hours
However, if you need a more "human-friendly" answer, you could convert the
decimal part into minutes. Since 0.7895 of an hour is:
0.7895 × 60 minutes ≈ 47.37 minutes
So, 3.7895 hours is approximately 3 hours and 47.37 minutes (or roughly 3
hours and 47 minutes if rounding to the nearest minute).
## Verification (optional step for verification)
To verify, let us check how many codebases you can individually write in t =
72/19 hours:
Your work in 72/19 hours
= (2/3 codebases per hour) × (72/19 hours)
= (2 × 72) / (3 × 19)
= 144 / 57
= 48/19 codebases.
(verification process) ...
However, a more "exact" answer should be kept in fractional form unless
specified otherwise. Thus, the answer is: 72/19 hours, which is approximately
3.7895 hours or about 3.79 hours when rounded to 2 decimal places.
72/19 hours (or approximately 3.79 hours).
Problems with R1-Zero
Although the thinking tokens from R1-Zero give a human-readable window into the model’s “thought process,” the authors report some issues. Specifically, the learned CoT sometimes suffers from readability problems and language mixing, suggesting (perhaps) that its reasoning starts to veer away from something easily interpretable by humans.
DeepSeek-R1 (SFT + RL)
To mitigate R1-Zero’s interpretability issues, the authors explore a multi-step training strategy that uses both supervised fine-tuning (SFT) and RL. This strategy results in DeepSeek-R1, a better-performing model that is getting more attention today. The entire training process can be broken down into four steps.
Step 1: SFT with reasoning data
To help get the model on the right track when it comes to learning how to reason, the authors start with SFT. This leverages thousands of long CoT examples from various sources, including few-shot prompting (i.e., showing examples of how to think through problems), directly prompting the model to use reflection and verification, and refining synthetic data from R1-Zero [2].
The two key advantages of this are, one, the desired response format can be explicitly shown to the model, and two, seeing curated reasoning examples unlocks better performance for the final model.
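To make this concrete, here is a hypothetical sketch of what a single curated SFT example could look like once serialized into the template’s format. The helper, field names, and exact phrasing are my assumptions, not from the paper.

```python
def to_sft_example(question: str, cot: str, answer: str) -> dict:
    """Turn one curated long-CoT sample into a prompt/completion pair that
    matches the <think>/<answer> format from the R1-Zero template.
    Hypothetical helper for illustration only."""
    prompt = (
        "A conversation between User and Assistant. The user asks a question, "
        "and the Assistant solves it. ...\n"
        f"User: {question}. Assistant:"
    )
    completion = f" <think> {cot} </think> <answer> {answer} </answer>"
    return {"prompt": prompt, "completion": completion}

example = to_sft_example(
    question="What is 12 * 7?",
    cot="10 * 7 = 70 and 2 * 7 = 14, so 12 * 7 = 70 + 14 = 84. Verifying: 84 / 7 = 12.",
    answer="84",
)
print(example["completion"])
```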
Step 2: R1-Zero style RL (+ language consistency reward)
Next, an RL training step is applied to the model after SFT. This is done in the same way as R1-Zero, with an added component to the reward signal that incentivizes language consistency. This was added to the reward because R1-Zero tended to mix languages, making it difficult to read its generations.
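As a crude sketch of such a language-consistency term: the paper measures the proportion of target-language words in the CoT; the ASCII check below is my simplification, for illustration only.

```python
def language_consistency_reward(cot: str) -> float:
    """Fraction of whitespace-separated tokens that are plain ASCII, used as a
    rough proxy for 'the reasoning stayed in English'. The actual reward in [2]
    is the proportion of target-language words, not an ASCII check."""
    tokens = cot.split()
    if not tokens:
        return 0.0
    return sum(t.isascii() for t in tokens) / len(tokens)

print(language_consistency_reward("Let me think step by step ..."))  # 1.0
```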
Step 3: SFT with mixed data
At this point, the model likely has performance on par with (or better than) R1-Zero on reasoning tasks. However, this intermediate model wouldn’t be very practical because it wants to reason about any input it receives (e.g., “hi there”), which is unnecessary for factual Q&A, translation, and creative writing. That’s why another SFT round is performed with both reasoning (600k examples) and non-reasoning (200k examples) data.
The reasoning data here is generated from the resulting model of Step 2. Additionally, examples are included that use an LLM judge to compare model predictions to ground truth answers.
The non-reasoning data comes from two places. First, the SFT dataset used to train DeepSeek-V3 (the base model). Second, synthetic data generated by DeepSeek-V3. Note that examples are included that don’t use CoT so that the model doesn’t use thinking tokens for every response.
Step 4: RL + RLHF
Finally, another RL round is done, which includes (again) R1-Zero style reasoning training and RL on human feedback (RLHF). This latter component helps improve the model’s helpfulness and harmlessness.
The result of this entire pipeline is DeepSeek-R1, which excels at reasoning tasks and is an AI assistant you can chat with normally.
Accessing R1-Zero and R1
Another key contribution from DeepSeek is that the weights of the two models described above (and many other distilled versions of R1) were made publicly available. This means there are many ways to access these models, whether using an inference provider or running them locally.
Here are a few places where I’ve seen these models; a minimal sketch of running a distilled model locally follows the list.
- DeepSeek (DeepSeek-V3 and DeepSeek-R1)
- Together (DeepSeek-V3, DeepSeek-R1, and distillations)
- Hyperbolic (DeepSeek-V3, DeepSeek-R1-Zero, and DeepSeek-R1)
- Ollama (local) (DeepSeek-V3, DeepSeek-R1, and distillations)
- Hugging Face (local) (all of the above)
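For example, here is a minimal sketch of running one of the distilled models locally with Hugging Face transformers. The repo name and generation settings are assumptions; check the DeepSeek organization on the Hub for the current list of checkpoints.

```python
# Minimal local-inference sketch using Hugging Face transformers.
# Requires the transformers and accelerate packages (and ideally a GPU).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",  # assumed repo id
    device_map="auto",
)

messages = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]
result = generator(messages, max_new_tokens=1024)
print(result[0]["generated_text"])  # includes the model's <think> reasoning
```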
Conclusions
The release of o1 introduced a new dimension along which LLMs can be improved: test-time compute. Although OpenAI didn’t release its secret sauce for doing this, five months later, DeepSeek was able to replicate this reasoning behavior and publish the technical details of its approach.
While current reasoning models have limitations, this is a promising research direction because it has demonstrated that reinforcement learning (without humans) can produce models that learn independently. This (potentially) breaks the implicit limitations of current models, which can only recall and remix information previously seen on the internet (i.e., existing human knowledge).
The promise of this new RL approach is that models can surpass human understanding (on their own), leading to new scientific and technological breakthroughs that might take us decades to discover (on our own).
References
[1] OpenAI, “Learning to Reason with LLMs,” OpenAI blog, September 2024.
[2] DeepSeek-AI, “DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning,” arXiv:2501.12948 [cs.CL], 2025.