arXiv:2603.07901

NaviDriveVLM: Decoupling High-Level Reasoning and Motion Planning for Autonomous Driving

Published on Mar 9 · Submitted by pardis on Mar 10

Abstract

NaviDriveVLM is a decoupled vision-language-model framework for autonomous driving that separates high-level reasoning from motion planning, outperforming large VLM baselines in end-to-end driving while reducing training costs.

AI-generated summary

Vision-language models (VLMs) have emerged as a promising direction for end-to-end autonomous driving (AD) by jointly modeling visual observations, driving context, and language-based reasoning. However, existing VLM-based systems face a trade-off between high-level reasoning and motion planning: large models offer strong semantic understanding but are costly to adapt for precise control, whereas smaller VLMs can be fine-tuned efficiently but often exhibit weaker reasoning. We propose NaviDriveVLM, a decoupled framework that separates reasoning from action generation using a large-scale Navigator and a lightweight trainable Driver. This design preserves reasoning ability, reduces training cost, and provides an explicit, interpretable intermediate representation for downstream planning. Experiments on the nuScenes benchmark show that NaviDriveVLM outperforms large VLM baselines in end-to-end motion planning.
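
The decoupling described above lends itself to a short sketch. Below is a minimal PyTorch illustration of the pattern, assuming a frozen large model (the Navigator) that emits an intermediate plan representation and a small trainable head (the Driver) that converts it into future waypoints. The class names, feature dimensions, and plan format are placeholders for illustration, not the paper's actual interface.

```python
import torch
import torch.nn as nn

class Navigator(nn.Module):
    """Stand-in for the large, frozen reasoning VLM."""
    def __init__(self, feat_dim=512, plan_dim=256):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, plan_dim)  # placeholder for a real VLM backbone
        for p in self.parameters():
            p.requires_grad = False  # frozen: high-level reasoning is not fine-tuned

    @torch.no_grad()
    def forward(self, scene_feats):
        # Emit the intermediate plan representation consumed by the Driver.
        return self.encoder(scene_feats)

class Driver(nn.Module):
    """Lightweight, trainable head mapping the plan to future waypoints."""
    def __init__(self, plan_dim=256, horizon=6):
        super().__init__()
        self.horizon = horizon
        self.policy = nn.Sequential(
            nn.Linear(plan_dim, 128),
            nn.ReLU(),
            nn.Linear(128, horizon * 2),  # (x, y) offset per future timestep
        )

    def forward(self, plan):
        return self.policy(plan).view(-1, self.horizon, 2)

navigator, driver = Navigator(), Driver()
scene = torch.randn(1, 512)           # placeholder visual features
waypoints = driver(navigator(scene))  # gradients flow only through the Driver
print(waypoints.shape)                # torch.Size([1, 6, 2])
```

Because the Navigator is frozen, only the lightweight Driver receives gradients during fine-tuning; under this reading, the training-cost reduction comes from reusing the expensive reasoning model as-is while adapting only the small planning head.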
