Online Causal Kalman Filtering for Stable and Effective Policy Optimization
Abstract
Online Causal Kalman Filtering addresses high-variance token-level importance sampling (IS) ratios in reinforcement learning for large language models by modeling the IS ratio as an evolving latent state and applying Kalman filtering for stable policy optimization.
Reinforcement learning for large language models suffers from high-variance token-level importance sampling (IS) ratios, which can destabilize policy optimization at scale. To improve stability, recent methods typically either apply a single sequence-level IS ratio to all tokens in a sequence or adjust each token's IS ratio independently, thereby neglecting the temporal structure of off-policy deviation across tokens in a sequence. In this paper, we first empirically show that local off-policy deviation is structurally inconsistent at the token level, which can distort policy-gradient updates across adjacent tokens and lead to training collapse. To address this issue, we propose Online Causal Kalman Filtering for stable and effective Policy Optimization (KPO). Concretely, we model the desired IS ratio as a latent state that evolves across tokens and apply a Kalman filter to update this state online and autoregressively from the states of past tokens, without looking ahead to future tokens. The resulting filtered IS ratios preserve local, structure-aware variation across tokens while strongly suppressing noise spikes, yielding more stable and effective policy updates. Experimentally, KPO achieves superior results on challenging math reasoning datasets compared with state-of-the-art counterparts.
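For intuition, here is a minimal sketch of the core idea: treat the per-token IS ratio as a latent state that follows a random walk and causally smooth the noisy raw ratios with a one-dimensional Kalman filter. The random-walk transition, the noise variances `q` and `r`, and filtering the ratio directly (rather than, say, its logarithm) are illustrative assumptions, not details taken from the paper.

```python
import torch

def kalman_filter_is_ratios(raw_ratios: torch.Tensor,
                            q: float = 1e-3,
                            r: float = 1e-1) -> torch.Tensor:
    """Causally smooth token-level IS ratios with a 1-D Kalman filter.

    raw_ratios: shape (T,), raw per-token ratios pi_theta(a_t|s_t) / pi_old(a_t|s_t).
    q: assumed process-noise variance (how quickly the latent ratio may drift).
    r: assumed observation-noise variance (how noisy each raw ratio is).
    Both q and r are illustrative hyperparameters, not values from the paper.
    """
    x = raw_ratios[0]      # latent-state estimate, initialized at the first observation
    p = r                  # estimate variance (simple, arbitrary initialization)
    filtered = [x]
    for t in range(1, raw_ratios.shape[0]):
        # Predict: a random-walk transition keeps the mean and inflates the variance.
        p_pred = p + q
        # Update: blend the prediction with the new noisy observation.
        k = p_pred / (p_pred + r)            # Kalman gain in (0, 1)
        x = x + k * (raw_ratios[t] - x)      # filtered IS ratio; spikes are damped by (1 - k)
        p = (1.0 - k) * p_pred
        filtered.append(x)
    return torch.stack(filtered)
```

Because the filter uses only past and current tokens, it can run online over a response without peeking at future tokens, matching the causal design described in the abstract.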
Community
(Work in progress) We are adding more comparison methods and models for KPO and will open-source it soon.
This paper introduces KPO: Online Causal Kalman Filtering to stabilize RL for LLMs by modeling importance sampling ratios as evolving latent states, effectively smoothing out high-variance noise.
By applying an autoregressive Kalman filter, KPO preserves key token-level structural information while mitigating volatility, achieving superior results on challenging math reasoning benchmarks!
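To show where such filtered ratios would plug in, the sketch below wires them into a standard PPO-style clipped surrogate for a single response. This is a hypothetical integration that reuses `kalman_filter_is_ratios` from the previous sketch; KPO's actual objective (clipping ranges, sequence aggregation, advantage estimation) may differ.

```python
import torch

def clipped_surrogate_with_filtered_ratios(logp_new: torch.Tensor,
                                           logp_old: torch.Tensor,
                                           advantages: torch.Tensor,
                                           clip_eps: float = 0.2) -> torch.Tensor:
    """PPO-style clipped loss computed on Kalman-filtered token-level IS ratios.

    All tensors have shape (T,) for one sampled response. Assumes the
    kalman_filter_is_ratios function defined in the previous sketch.
    """
    raw_ratios = torch.exp(logp_new - logp_old)       # raw token-level IS ratios
    ratios = kalman_filter_is_ratios(raw_ratios)      # causal smoothing of ratio spikes
    unclipped = ratios * advantages
    clipped = torch.clamp(ratios, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.mean(torch.min(unclipped, clipped)) # negate: optimizers minimize
```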
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Orchestrating Tokens and Sequences: Dynamic Hybrid Policy Optimization for RLVR (2026)
- A Step Back: Prefix Importance Ratio Stabilizes Policy Optimization (2026)
- ECHO: Entropy-Confidence Hybrid Optimization for Test-Time Reinforcement Learning (2026)
- DISPO: Enhancing Training Efficiency and Stability in Reinforcement Learning for Large Language Model Mathematical Reasoning (2026)
- Clipping-Free Policy Optimization for Large Language Models (2026)
- SOUP: Token-level Single-sample Mix-policy Reinforcement Learning for Large Language Models (2026)
- Ratio-Variance Regularized Policy Optimization for Efficient LLM Fine-tuning (2026)