arxiv:2603.22003

VP-VLA: Visual Prompting as an Interface for Vision-Language-Action Models

Published on Mar 23 · Submitted by Yuqi Liu on Mar 25

Abstract

Vision-Language-Action (VLA) models typically map visual observations and linguistic instructions directly to robotic control signals. This "black-box" mapping forces a single forward pass to simultaneously handle instruction interpretation, spatial grounding, and low-level control, often leading to poor spatial precision and limited robustness in out-of-distribution scenarios. To address these limitations, we propose VP-VLA, a dual-system framework that decouples high-level reasoning and low-level execution via a structured visual prompting interface. Specifically, a "System 2 Planner" decomposes complex instructions into sub-tasks and identifies the relevant target objects and goal locations. These spatial anchors are then overlaid directly onto the visual observations as structured visual prompts, such as crosshairs and bounding boxes. Guided by these prompts, and enhanced by a novel auxiliary visual grounding objective during training, a "System 1 Controller" reliably generates precise low-level motions. Experiments on the RoboCasa-GR1-Tabletop benchmark and in SimplerEnv simulation demonstrate that VP-VLA improves success rates by 5% and 8.3%, respectively, surpassing competitive baselines including QwenOFT and GR00T-N1.6.

AI-generated summary

VP-VLA is a dual-system framework that decouples high-level reasoning from low-level robotic control through structured visual prompting, improving spatial precision and robustness in vision-language-action tasks.
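To make the visual prompting interface concrete, below is a minimal sketch (not the authors' implementation) of how a planner's spatial anchors might be rendered onto an observation as a crosshair and a bounding box before the image is handed to the low-level controller. The function and argument names (draw_visual_prompts, target_xy, goal_box) are illustrative assumptions, and Pillow is used only for drawing.

```python
# Minimal sketch of the "structured visual prompt" idea from the abstract:
# a high-level planner proposes a target point and a goal region, and we
# overlay them on the observation as a crosshair and a bounding box before
# passing the image to the low-level controller.
# All names here are hypothetical, not taken from the paper's code.

from PIL import Image, ImageDraw


def draw_visual_prompts(obs: Image.Image,
                        target_xy: tuple[int, int],
                        goal_box: tuple[int, int, int, int],
                        size: int = 12) -> Image.Image:
    """Overlay a crosshair at target_xy and a bounding box goal_box
    (x0, y0, x1, y1) onto a copy of the RGB observation."""
    img = obs.copy()
    draw = ImageDraw.Draw(img)

    # Crosshair marking the target object identified by the planner.
    x, y = target_xy
    draw.line([(x - size, y), (x + size, y)], fill=(255, 0, 0), width=3)
    draw.line([(x, y - size), (x, y + size)], fill=(255, 0, 0), width=3)

    # Bounding box marking the goal location for the current sub-task.
    draw.rectangle(goal_box, outline=(0, 255, 0), width=3)
    return img


if __name__ == "__main__":
    # Dummy 224x224 observation; in practice this would be a camera frame.
    obs = Image.new("RGB", (224, 224), color=(128, 128, 128))
    prompted = draw_visual_prompts(obs, target_xy=(100, 120),
                                   goal_box=(150, 60, 200, 110))
    prompted.save("prompted_obs.png")
```

In the paper's framing, the System 2 Planner would produce the target point and goal region from the instruction, and the prompted image (rather than the raw observation) would be the System 1 Controller's visual input.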

Community

Paper author · Paper submitter

We propose VP-VLA, a dual-system framework that decouples high-level reasoning and low-level execution via a structured visual prompting interface.
