LLM Alignment: DPO

This blog post introduces a streamlined alternative to RLHF called DPO. Like RLHF, DPO is designed to align model outputs with human preferences, but it stands apart with its simplicity and lower resource demands. In scenarios where project resources are limited, DPO emerges as a highly attractive and practical solution worth exploring.

Notations

| Symbol | Meaning |
|---|---|
| \( x \) | User input (Prompt): the question the model needs to answer |
| \( y \) | Model-generated response (Response / Completion): the text output by the model |
| \( \pi_\theta(y \mid x) \) | Actor model: the trainable policy used to generate response \(y\); parameterized by \(\theta\) |
| \( \pi_{\mathrm{ref}}(y \mid x) \) | Reference model: the frozen SFT (Supervised Fine-Tuning) model, serving as the alignment baseline |
| \( r_\phi(x,y) \) | Reward model: a reward function (with parameter \(\phi\)) used to evaluate the quality of response \(y\) |
| \( V_\psi(x) \) | Critic model: a value function (with parameter \(\psi\)) used to estimate the future cumulative reward given \(x\) |
| \( \pi^*(y \mid x) \) | Optimal policy distribution, determined via the reference model and reward function |
| \( r_\theta(x,y) \) | Reward derived from the Actor model, constructed from \(\pi_\theta\) and \(\pi_{\mathrm{ref}}\) |
| \(\beta\) | Hyperparameter that controls the weight of the KL penalty or the log-ratio difference term |
| \(\mathbb{D}_{\mathrm{KL}}[P \| Q]\) | KL divergence, a measure of the difference between probability distributions \(P\) and \(Q\) |
| \(\sigma(z)\) | Sigmoid function, defined as \(\sigma(z)=\frac{1}{1+e^{-z}}\) |
| \(\log\) | Logarithm function |
| \(\mathbb{E}\) | Expectation operator, used to compute the average value of a random variable |
| \( (y_w, y_l) \) | A pair of preference data, where \( y_w \) is the preferred (better quality) response and \( y_l \) is the less preferred one |
| \( P\left(y_w \succ y_l \mid x\right) \) | The probability that response \( y_w \) is preferred over \( y_l \) given input \(x\) |
| \( Z(x) \) | Partition function, which normalizes the probability distribution over all responses \(y\) |
| \( \mathcal{L}_{\mathrm{DPO}} \) | The loss function of DPO |

From RLHF to DPO

RLHF

OpenAI primarily leverages Reinforcement Learning from Human Feedback (RLHF) (Christiano et al., 2017) to train InstructGPT (Ouyang et al., 2022), which forms the basis for LLMs such as ChatGPT, Llama, etc. The entire training process generally comprises the following three main steps: ...
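
For orientation, the standard DPO loss can be written in the notation above; here \(\mathcal{D}\) denotes the preference dataset of triples \((x, y_w, y_l)\), a symbol not listed in the table:

\[
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}}\left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]
\]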

2025-02-08 · 13 min · 2577 words · Yue Shui

Normalization in Deep Learning

Introduction

In deep learning, the design of network architectures significantly impacts model performance and training efficiency. As model depth increases, training deep neural networks faces numerous challenges, such as the vanishing and exploding gradient problems. To address these challenges, residual connections and various normalization methods have been introduced and are widely used in modern deep learning models. This article will first introduce residual connections and two architectures: Pre-Norm and Post-Norm. Then, it will describe four common normalization methods: Batch Normalization, Layer Normalization, Weight Normalization, and RMS Normalization, and analyze why current mainstream large models tend to adopt an architecture combining RMSNorm and Pre-Norm. ...
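
As a quick reference for one of the methods discussed, RMSNorm drops the mean-centering step of Layer Normalization and rescales by the root mean square alone; for a \(d\)-dimensional input \(x\), with learnable gain \(g\) and a small constant \(\epsilon\) (notation introduced here for illustration):

\[
\mathrm{RMSNorm}(x)_i = \frac{x_i}{\sqrt{\frac{1}{d}\sum_{j=1}^{d} x_j^2 + \epsilon}} \; g_i
\]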

2025-02-01 · 13 min · 2576 words · Yue Shui

OpenAI o1 Replication Progress: DeepSeek-R1

DeepSeek AI recently released DeepSeek-R1 (DeepSeek-AI, 2025), whose reasoning performance on multiple benchmarks approaches the level of OpenAI’s o1 (OpenAI, 2024), marking a significant step for the open-source community toward successfully replicating o1. Relevant code can be found in Hugging Face’s open-source replication project, open-r1. While previous research has often relied on massive amounts of supervised data to enhance the performance of Large Language Models (LLMs), the success of DeepSeek-R1 and its earlier experiment, DeepSeek-R1-Zero, demonstrates the potential of pure large-scale reinforcement learning for improving the reasoning capabilities of LLMs. This success reinforces the profound insight proposed by Richard Sutton in “The Bitter Lesson”: ...

2025-01-27 · 48 min · 10156 words · Yue Shui

Attention Mechanisms in Transformers: Comparing MHA, MQA, and GQA

Background

The Transformer (Vaswani et al., 2017) is a model based on the encoder-decoder architecture. It has demonstrated outstanding performance in natural language processing (NLP) and has inspired a series of optimized models, such as the encoder-only BERT (Devlin et al., 2018), the decoder-only GPT series (Radford et al., 2018), and subsequent large language models (LLMs) like LLaMA (Touvron et al., 2023) and GPT-4 (OpenAI et al., 2024), most of which adopt a decoder-only architecture. ...
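
For reference, MHA, MQA, and GQA all build on the same scaled dot-product attention from the original Transformer; they differ only in how many key/value heads the query heads share (one per query head in MHA, a single shared set in MQA, and one per group of query heads in GQA). Per head, with queries \(Q\), keys \(K\), values \(V\), and key dimension \(d_k\) (symbols introduced here):

\[
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^\top}{\sqrt{d_k}}\right) V
\]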

2025-01-16 · 29 min · 6139 words · Yue Shui

Building Domain-Specific LLMs

Background

With the widespread application of Large Language Models (LLMs) across various industries, enterprises and research teams face an urgent need to adapt general-purpose models to specific domains. Foundational LLMs often fail to meet deep domain-specific requirements when handling specialized tasks. For example, when applied to closed-source programming languages, existing open-source models lack sufficient understanding of their syntax and semantics, leading to poor performance on tasks such as code generation and error correction. Therefore, injecting domain knowledge and training dedicated LLMs has become a key step toward improving development efficiency and code quality. ...

2025-01-05 · 21 min · 4340 words · Yue Shui