👋 Welcome to Yue’s blog

Hi, this is Yue Shui, an LLM Algorithm Engineer at PwC. My work focuses on researching and applying LLMs in areas such as finance, auditing, and code generation. This blog is a space to document and share insights from my work and learning journey. Any grammar mistakes in the posts might hint at ChatGPT’s involvement 😉 (let me know if you spot any!). My interests include model training, RAG, and agents. Recently, I’ve been learning how to use RL to train reasoning models. Feel free to connect!

Large Language Model Agents

Since OpenAI released ChatGPT in November 2022, and with the subsequent emergence of projects such as AutoGPT and AgentGPT, LLM-based agents have gradually become a research hotspot and a promising direction for practical AI applications. This article introduces the basic concepts of agents, their core technologies, and the latest advances in their applications. Large Language Model Agents (LLM agents) use an LLM as the system’s brain, combined with modules for planning, memory, and external tool use, to automate the execution of complex tasks. ...
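A minimal sketch of such an agent loop, assuming a hypothetical `llm` callable and `tools` dictionary (neither is from the post; this is an illustration of the planning–memory–tool pattern, not the post's actual implementation):

```python
def run_agent(task: str, llm, tools: dict, max_steps: int = 5) -> str:
    """Toy LLM agent loop: plan with the LLM, call tools, keep a short-term memory."""
    memory = []  # short-term memory: (decision, observation) pairs
    for _ in range(max_steps):
        # Planning: ask the LLM for the next action given the task and memory so far.
        prompt = f"Task: {task}\nHistory: {memory}\nReply 'FINISH: <answer>' or '<tool>: <input>'."
        decision = llm(prompt)
        if decision.startswith("FINISH"):
            return decision
        # Tool use: dispatch to an external tool and store the observation in memory.
        name, arg = decision.split(":", 1)
        observation = tools[name.strip()](arg.strip())
        memory.append((decision, observation))
    return "Stopped after reaching the step limit."
```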

2025-03-27 · 32 min · 6788 words · Yue Shui

Parallelism and Memory Optimization Techniques for Training Large Models

Recently, the number of parameters in large models has kept growing, from the initial billions to today’s hundreds of billions or even trillions. While these models have delivered unprecedented results in applications, they have also introduced severe challenges in computing resources, memory management, and training stability. This blog therefore summarizes commonly used distributed parallel training and memory management techniques, in the hope of helping readers train and optimize large models more effectively. ...
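As one concrete example of the data-parallel side, here is a minimal PyTorch DistributedDataParallel sketch; it assumes a single node launched with `torchrun --nproc_per_node=<num_gpus>` and uses a toy model in place of a real LLM (the post covers many more techniques than this):

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")          # reads rank/world size from torchrun env vars
local_rank = dist.get_rank()                     # single-node assumption: rank == local GPU index
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(1024, 1024).cuda()       # toy model standing in for a large transformer
ddp_model = DDP(model, device_ids=[local_rank])  # replicates weights, all-reduces gradients

optimizer = torch.optim.AdamW(ddp_model.parameters(), lr=1e-4)
x = torch.randn(8, 1024, device="cuda")
loss = ddp_model(x).pow(2).mean()
loss.backward()                                  # gradient all-reduce overlaps with backward
optimizer.step()
dist.destroy_process_group()
```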

2025-03-01 · 60 min · 12755 words · Yue Shui

LLMs Alignment: DPO

This blog post introduces a streamlined alternative to RLHF called DPO. Like RLHF, DPO is designed to align model outputs with human preferences, but it stands apart through its simplicity and lower resource demands. In scenarios where project resources are limited, DPO is a highly attractive and practical option worth exploring.

Notations

| Symbol | Meaning |
| --- | --- |
| \( x \) | User input (prompt): the question the model needs to answer |
| \( y \) | Model-generated response (completion): the text output by the model |
| \( \pi_\theta(y \mid x) \) | Actor model: the trainable policy used to generate response \(y\), parameterized by \(\theta\) |
| \( \pi_{\mathrm{ref}}(y \mid x) \) | Reference model: the frozen SFT (supervised fine-tuning) model, serving as the alignment baseline |
| \( r_\phi(x,y) \) | Reward model: a reward function (with parameters \(\phi\)) used to evaluate the quality of response \(y\) |
| \( V_\psi(x) \) | Critic model: a value function (with parameters \(\psi\)) used to estimate the future cumulative reward given \(x\) |
| \( \pi^*(y \mid x) \) | Optimal policy distribution, determined by the reference model and the reward function |
| \( r_\theta(x,y) \) | Reward derived from the actor model, constructed from \(\pi_\theta\) and \(\pi_{\mathrm{ref}}\) |
| \( \beta \) | Hyperparameter controlling the weight of the KL penalty or the log-ratio difference term |
| \( \mathbb{D}_{\mathrm{KL}}[P \,\Vert\, Q] \) | KL divergence, a measure of the difference between probability distributions \(P\) and \(Q\) |
| \( \sigma(z) \) | Sigmoid function, defined as \(\sigma(z)=\frac{1}{1+e^{-z}}\) |
| \( \log \) | Logarithm function |
| \( \mathbb{E} \) | Expectation operator, used to compute the average value of a random variable |
| \( (y_w, y_l) \) | A pair of preference data, where \(y_w\) is the preferred (higher-quality) response and \(y_l\) the dispreferred one |
| \( P(y_w \succ y_l \mid x) \) | The probability that response \(y_w\) is preferred over \(y_l\) given input \(x\) |
| \( Z(x) \) | Partition function, which normalizes the probability distribution over all responses \(y\) |
| \( \mathcal{L}_{\mathrm{DPO}} \) | The DPO loss function |

From RLHF to DPO

OpenAI primarily leverages Reinforcement Learning from Human Feedback (RLHF) (Christiano et al., 2017) to train InstructGPT (Ouyang et al., 2022), which forms the basis of LLMs such as ChatGPT and Llama. The entire training process generally comprises three main steps: ...
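As a quick companion to the notation table, the DPO objective is \( \mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x, y_w, y_l)}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right] \). Below is a minimal PyTorch sketch of this loss; the function and argument names are illustrative, and the inputs are assumed to be summed per-response log-probabilities under the actor and the frozen reference model:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_w, policy_logp_l, ref_logp_w, ref_logp_l, beta: float = 0.1):
    # Log-ratios log(pi_theta / pi_ref) for the preferred (y_w) and dispreferred (y_l) responses.
    logratio_w = policy_logp_w - ref_logp_w
    logratio_l = policy_logp_l - ref_logp_l
    # L_DPO = -log sigmoid(beta * (logratio_w - logratio_l)), averaged over the batch.
    logits = beta * (logratio_w - logratio_l)
    return -F.logsigmoid(logits).mean()
```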

2025-02-08 · 13 min · 2577 words · Yue Shui

Normalization in Deep Learning

In deep learning, the design of network architectures significantly affects model performance and training efficiency. As models get deeper, training deep neural networks faces numerous challenges, such as vanishing and exploding gradients. To address these challenges, residual connections and various normalization methods have been introduced and are now widely used in modern deep learning models. This article first introduces residual connections and the two architectures Pre-Norm and Post-Norm, then describes four common normalization methods: Batch Normalization, Layer Normalization, Weight Normalization, and RMS Normalization, and analyzes why current mainstream large models tend to adopt an architecture combining RMSNorm with Pre-Norm. ...
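For reference, here is a minimal PyTorch sketch of RMSNorm (class and parameter names are illustrative): unlike LayerNorm, it rescales features by their root mean square without subtracting the mean, which is slightly cheaper and pairs well with Pre-Norm blocks.

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))  # learnable per-feature gain

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scale by 1 / RMS over the last dimension; no mean subtraction, unlike LayerNorm.
        inv_rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * x * inv_rms
```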

2025-02-01 · 13 min · 2576 words · Yue Shui

Deep Reinforcement Learning (Ongoing Updates)

Note: This article is currently being updated. The content is a draft and may change; please check back for the latest version.

Notations

| Symbol | Meaning |
| --- | --- |
| \(s, s', S_t, S_{t+1}\) | State, next state, state at time \(t\), state at time \(t+1\) |
| \(o, o_t\) | Observation, observation at time \(t\) |
| \(a, a', A_t, A_{t+1}\) | Action, next action, action at time \(t\), action at time \(t+1\) |
| \(r, r_t\) | Immediate reward, reward at time \(t\) |
| \(G_t\) | Return at time \(t\) |
| \(R(\tau)\) | Return of a trajectory \(\tau\) |
| \(\mathcal{S}\) | Set of all possible states |
| \(\mathcal{A}\) | Set of all possible actions |
| \(\mathcal{R}\) | Set of all possible rewards |
| \(\pi(a\mid s), \pi_\theta(a\mid s)\) | Stochastic policy, parameterized policy |
| \(\mu(s), \mu_\theta(s)\) | Deterministic policy, parameterized policy |
| \(\theta, \phi, w\) | Policy or value-function parameters |
| \(\gamma\) | Discount factor |
| \(J(\pi)\) | Expected return of policy \(\pi\) |
| \(V_\pi(s)\) | State-value function for policy \(\pi\) |
| \(Q_\pi(s,a)\) | Action-value function for policy \(\pi\) |
| \(V_*(s)\) | Optimal state-value function |
| \(Q_*(s,a)\) | Optimal action-value function |
| \(A_\pi(s,a)\) | Advantage function for policy \(\pi\) |
| \(P(s'\mid s,a)\) | Transition probability function |
| \(R(s,a,s')\) | Reward function |
| \(\rho_0(s)\) | Start-state distribution |
| \(\tau\) | Trajectory |
| \(D\) | Replay memory |
| \(\alpha\) | Learning rate; temperature parameter (in SAC) |
| \(\lambda\) | Eligibility trace parameter |
| \(\epsilon\) | Exploration parameter (e.g., in \(\epsilon\)-greedy); clipping parameter (in PPO) |

What is Reinforcement Learning?

Definition ...
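As a quick companion to the notation above, the return and the value functions it refers to follow the standard definitions (stated here for reference; the post's own derivation may add detail):

\[
G_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k+1}, \qquad
V_\pi(s) = \mathbb{E}_\pi\!\left[G_t \mid S_t = s\right], \qquad
Q_\pi(s,a) = \mathbb{E}_\pi\!\left[G_t \mid S_t = s, A_t = a\right].
\]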

2025-01-31 · 34 min · 7096 words · Yue Shui