Papers
arxiv:2603.19076

DROID-SLAM in the Wild

Published on Mar 19 · Submitted by Moyang Li on Mar 23
AI-generated summary

A real-time RGB SLAM system uses differentiable uncertainty-aware bundle adjustment to handle dynamic environments by estimating per-pixel uncertainty from multi-view visual features, achieving state-of-the-art performance in cluttered scenes while maintaining real-time processing.

Abstract

We present a robust, real-time RGB SLAM system that handles dynamic environments by leveraging differentiable Uncertainty-aware Bundle Adjustment. Traditional SLAM methods typically assume static scenes, leading to tracking failures in the presence of motion. Recent dynamic SLAM approaches attempt to address this challenge using predefined dynamic priors or uncertainty-aware mapping, but they remain limited when confronted with unknown dynamic objects or highly cluttered scenes where geometric mapping becomes unreliable. In contrast, our method estimates per-pixel uncertainty by exploiting multi-view visual feature inconsistency, enabling robust tracking and reconstruction even in real-world environments. The proposed system achieves state-of-the-art camera poses and scene geometry in cluttered dynamic scenarios while running in real time at around 10 FPS. Code and datasets are available at https://github.com/MoyangLi00/DROID-W.git.
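The core idea described in the abstract — down-weighting bundle-adjustment residuals by a predicted per-pixel uncertainty so that dynamic pixels contribute little to the optimized cost — can be sketched as follows. This is a minimal illustration under assumed conventions, not the paper's implementation; the function name `weighted_ba_residuals`, the toy error values, and the sigma values are all hypothetical.

```python
import numpy as np

def weighted_ba_residuals(reproj_errors, uncertainty):
    """Whiten reprojection residuals by per-pixel uncertainty.

    reproj_errors: (N, 2) pixel reprojection errors
    uncertainty:   (N,) predicted per-pixel uncertainty (sigma)
    Returns residuals whose squared sum is the weighted BA cost.
    """
    eps = 1e-6
    w = 1.0 / np.maximum(uncertainty, eps)  # confidence weight = 1 / sigma
    return reproj_errors * w[:, None]

# Toy example: a dynamic pixel has a large, multi-view-inconsistent
# reprojection error, so it is assigned high uncertainty and its
# contribution to the BA cost is suppressed.
errors = np.array([[0.5, -0.2],   # static pixel, consistent across views
                   [8.0,  6.0]])  # dynamic pixel, inconsistent
sigma = np.array([1.0, 20.0])     # uncertainty predicted per pixel
r = weighted_ba_residuals(errors, sigma)
cost = float((r ** 2).sum())      # dynamic pixel adds only (0.4^2 + 0.3^2)
```

In the paper's system this weighting is differentiable, so the uncertainty prediction can be trained end-to-end through the bundle-adjustment layer; the sketch above only shows the static weighting step.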

Community

Paper author · Paper submitter

Our approach delivers high-quality dynamic point cloud reconstruction, accurate camera pose estimation, and dynamic uncertainty estimation. It robustly handles arbitrary real-world videos, including challenging film clips, and also supports static Gaussian Splatting mapping. Check our webpage for more results!

