arxiv:2602.22839

DeepPresenter: Environment-Grounded Reflection for Agentic Presentation Generation

Published on Feb 26 · Submitted by Zheng Hao on Mar 9
Abstract

DeepPresenter is an agentic framework for presentation generation that adaptively plans and refines slide artifacts through environment-grounded reflection, achieving state-of-the-art performance with reduced computational costs.

AI-generated summary

Presentation generation requires deep content research, coherent visual design, and iterative refinement based on observation. However, existing presentation agents often rely on predefined workflows and fixed templates. To address this, we present DeepPresenter, an agentic framework that adapts to diverse user intents, enables effective feedback-driven refinement, and generalizes beyond a scripted pipeline. Specifically, DeepPresenter autonomously plans, renders, and revises intermediate slide artifacts to support long-horizon refinement with environmental observations. Furthermore, rather than relying on self-reflection over internal signals (e.g., reasoning traces), our environment-grounded reflection conditions the generation process on perceptual artifact states (e.g., rendered slides), enabling the system to identify and correct presentation-specific issues during execution. Results on an evaluation set covering diverse presentation-generation scenarios show that DeepPresenter achieves state-of-the-art performance, and the fine-tuned 9B model remains highly competitive at substantially lower cost. Our project is available at: https://github.com/icip-cas/PPTAgent
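The plan-render-reflect-revise loop described in the abstract can be illustrated with a minimal sketch. All names here (`Slide`, `render`, `reflect`, `revise`, the toy layout constraints) are hypothetical stand-ins, not the paper's actual implementation; the point is only that each revision is conditioned on the observed state of the rendered artifact rather than on internal reasoning traces.

```python
from dataclasses import dataclass, field


@dataclass
class Slide:
    title: str
    bullets: list[str] = field(default_factory=list)


def render(slide: Slide) -> dict:
    """Simulate rendering: produce a perceptual artifact state."""
    return {
        "title_len": len(slide.title),
        "n_bullets": len(slide.bullets),
        "overflow": len(slide.bullets) > 5,  # toy layout constraint
    }


def reflect(state: dict) -> list[str]:
    """Environment-grounded reflection: inspect the rendered state,
    not the generator's internal trace, for presentation issues."""
    issues = []
    if state["overflow"]:
        issues.append("too_many_bullets")
    if state["title_len"] > 60:
        issues.append("title_too_long")
    return issues


def revise(slide: Slide, issues: list[str]) -> Slide:
    """Apply a correction for each detected issue."""
    if "too_many_bullets" in issues:
        slide = Slide(slide.title, slide.bullets[:5])
    if "title_too_long" in issues:
        slide = Slide(slide.title[:57] + "...", slide.bullets)
    return slide


def generate(slide: Slide, max_rounds: int = 3) -> Slide:
    """Render -> reflect -> revise until no issues remain."""
    for _ in range(max_rounds):
        issues = reflect(render(slide))
        if not issues:
            break
        slide = revise(slide, issues)
    return slide
```

For example, `generate(Slide("Intro", [str(i) for i in range(8)]))` detects bullet overflow from the rendered state and trims the slide until reflection reports no remaining issues.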


Models citing this paper: 2

Datasets citing this paper: 1

Spaces citing this paper: 0


Collections including this paper: 1