arxiv:2604.08865

SPPO: Sequence-Level PPO for Long-Horizon Reasoning Tasks

Published on Apr 10 · Submitted by Yixia Li on Apr 15
#3 Paper of the day

Abstract

Sequence-Level PPO (SPPO) addresses instability in long chain-of-thought reasoning by reformulating generation as a sequence-level contextual bandit problem with a decoupled scalar value function, improving training efficiency.

AI-generated summary

Proximal Policy Optimization (PPO) is central to aligning Large Language Models (LLMs) in reasoning tasks with verifiable rewards. However, standard token-level PPO struggles in this setting due to the instability of temporal credit assignment over long Chain-of-Thought (CoT) horizons and the prohibitive memory cost of the value model. While critic-free alternatives like GRPO mitigate these issues, they incur significant computational overhead by requiring multiple samples for baseline estimation, severely limiting training throughput. In this paper, we introduce Sequence-Level PPO (SPPO), a scalable algorithm that harmonizes the sample efficiency of PPO with the stability of outcome-based updates. SPPO reformulates the reasoning process as a Sequence-Level Contextual Bandit problem, employing a decoupled scalar value function to derive low-variance advantage signals without multi-sampling. Extensive experiments on mathematical benchmarks demonstrate that SPPO significantly surpasses standard PPO and matches the performance of computation-heavy group-based methods, offering a resource-efficient framework for aligning reasoning LLMs.

Community

Paper author and submitter

We introduce SPPO (Sequence-Level PPO), a scalable RL algorithm for aligning reasoning LLMs that resolves the fundamental tension between PPO's unstable credit assignment and GRPO's costly multi-sampling.

Standard token-level PPO struggles in long Chain-of-Thought (CoT) reasoning due to the "Tail Effect": the critic overfits positional cues and fails to propagate sparse rewards across thousands of tokens. While GRPO sidesteps this with group-based baselines, it demands N > 1 samples per prompt, severely bottlenecking training throughput.
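To make the throughput cost concrete, here is an illustrative sketch (not the paper's code) of the group-relative baseline GRPO computes: every prompt needs N rollouts just to normalize rewards into advantages.

```python
import statistics

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages for N completions of one prompt.

    rewards: list of outcome rewards, one per sampled completion.
    Returns (r - mean) / (std + eps) for each rollout, so the group
    itself serves as the baseline -- hence the N > 1 requirement.
    """
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# N = 4 rollouts of a single prompt, two of which solved the task
adv = grpo_advantages([1.0, 0.0, 0.0, 1.0])
```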

Our key insight: GRPO's success stems from implicitly treating reasoning as a Sequence-Level Contextual Bandit. SPPO makes this explicit by collapsing the entire reasoning chain into a single atomic action and employing a learned scalar value function V(s_p) to estimate prompt solvability, enabling stable single-sample (N=1) updates.
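A minimal sketch of this idea, assuming a clipped PPO-style surrogate applied once per whole sequence (function and variable names are ours, not the paper's): the critic's scalar estimate V(s_p) replaces the group mean as the baseline, so one sample suffices.

```python
import math

def sppo_loss(seq_logp, seq_logp_old, reward, value, clip_eps=0.2):
    """Sequence-level clipped surrogate plus critic regression (illustrative).

    seq_logp / seq_logp_old: summed log-probs of the full chain under the
        current and behavior policies -- one "action", one ratio, no per-token
        credit assignment.
    reward: verifiable outcome reward (e.g. 1.0 if the answer checks out).
    value:  scalar critic estimate V(s_p) of prompt solvability.
    """
    advantage = reward - value                 # single-sample baseline
    ratio = math.exp(seq_logp - seq_logp_old)  # one ratio per sequence
    unclipped = ratio * advantage
    clipped = max(min(ratio, 1 + clip_eps), 1 - clip_eps) * advantage
    policy_loss = -min(unclipped, clipped)     # PPO-style pessimistic bound
    value_loss = (value - reward) ** 2         # decoupled scalar critic
    return policy_loss + 0.5 * value_loss
```

The ratio and clipping operate on the whole sequence rather than each token, which is what removes the long-horizon credit-assignment problem from the objective.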

Highlights:

  • ๐Ÿ† Outperforms standard PPO and matches GRPO (N=8) on AIME24/25, AMC23, MATH500, and Minerva Math at both 1.5B and 7B scales
  • โšก 5.9ร— training speedup over GRPO with single-sample efficiency
  • ๐Ÿง  Decoupled Critic: a lightweight 1.5B critic successfully aligns a 7B policy, reducing VRAM by 12.8% while achieving the highest average score (58.56)
  • ๐Ÿ”ฌ Validated beyond LLMs on classic control tasks (CartPole, Hopper, MountainCar, LunarLander, Pendulum) under the RLVR framework

📄 Paper (ACL 2026 Main): https://arxiv.org/abs/2604.08865
💻 Code: https://github.com/sustech-nlp/SPPO



