DriveFusion-Data
Training data for DriveFusion, an autonomous driving vision-language model for scenario understanding and decision reasoning.
DriveFusion-Data is a large-scale multimodal autonomous driving dataset collected in the CARLA simulator using a privileged rule-based expert policy (PDM-Lite). The dataset contains rich sensor data, vehicle measurements, and language annotations for training vision-language-action (VLA) models.
This dataset is part of the DriveFusion project.
Dataset Overview
DriveFusion-Data provides a comprehensive multimodal dataset for autonomous driving research, including:
- RGB camera images from 360° multi-camera coverage (front, front-left, front-right, back-left, back-right)
- LiDAR point clouds
- Semantic segmentation maps
- Depth maps
- Bounding boxes
- Vehicle and simulator measurements
- Natural language annotations (VQA, commentary, instruction following)
The dataset is generated using a CARLA-based data collection framework with multi-town, multi-scenario, and multi-sensor configurations.
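To get started, the raw files can be fetched directly from the Hub. Below is a minimal download sketch using `huggingface_hub` (the repo id is taken from the citation URL at the bottom of this card; `local_dir` is an arbitrary choice):

```python
# Download the raw dataset files from the Hugging Face Hub.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="DriveFusion/DriveFusion-Data",  # from the citation URL below
    repo_type="dataset",                     # dataset repo, not a model repo
    local_dir="./DriveFusion-Data",
)
print(f"Dataset downloaded to: {local_path}")
```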
Data Collection Framework
The data was collected using the DriveFusion CARLA Data Collection Framework, which provides:
- Rule-based expert driving using PDM-Lite
- Multi-camera 360° sensor recording and LiDAR (a sensor-rig sketch follows this list)
- Weather and lighting augmentation
- Scenario-based route execution
- Automated batch data generation on clusters (SLURM)
- Format conversion and dataset validation tools
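As referenced above, a 360° multi-camera rig can be set up in a few lines of the CARLA Python API. The following is an illustrative sketch only: the mounting positions, yaw angles, and image resolution are assumptions, not the framework's actual configuration (see the collection repository for that):

```python
# Sketch of a five-camera 360-degree rig in CARLA. Mounting poses and
# resolutions below are assumed values, not the framework's configuration.
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()
bp_lib = world.get_blueprint_library()

# Spawn an ego vehicle at the first available spawn point.
vehicle_bp = bp_lib.filter("vehicle.*")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

cam_bp = bp_lib.find("sensor.camera.rgb")
cam_bp.set_attribute("image_size_x", "1024")
cam_bp.set_attribute("image_size_y", "512")

# Hypothetical yaws covering front, front-left/right, back-left/right.
yaws = {"front": 0.0, "front_left": -60.0, "front_right": 60.0,
        "back_left": -140.0, "back_right": 140.0}

cameras = []
for name, yaw in yaws.items():
    transform = carla.Transform(carla.Location(x=1.3, z=2.3),
                                carla.Rotation(yaw=yaw))
    cam = world.spawn_actor(cam_bp, transform, attach_to=vehicle)
    # Bind n=name as a default argument so each callback keeps its own
    # camera name instead of all closures capturing the last loop value.
    cam.listen(lambda image, n=name: image.save_to_disk(
        f"out/{n}/{image.frame:06d}.png"))
    cameras.append(cam)
```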
Collection code repository:
https://github.com/DriveFusion/carla-data-collection
Dataset Sources and Attribution
DriveFusion-Data builds upon several open-source frameworks and datasets:
Core Simulation:
- CARLA simulator (with the CARLA Leaderboard and Scenario Runner)
Reference Methods:
- DriveLM (PDM-Lite autopilot and VQA generation)
Language Dataset Reference:
- SimLingo
Users must comply with the licenses of all referenced frameworks and datasets.
Dataset Format
Two main formats are provided:
Pre-DriveFusion Format
- Raw sensor data and measurements stored in compressed JSON and sensor files.
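For example, a single measurement record can be read with the standard library; the path and key names below are hypothetical placeholders for whatever schema the collection framework emits:

```python
# Sketch of reading one compressed-JSON measurement record from the
# pre-DriveFusion format. File layout and key names are hypothetical;
# consult the collection repository for the actual schema.
import gzip
import json

with gzip.open("route_0000/measurements/0000.json.gz", "rt") as f:
    record = json.load(f)

# e.g. expert speed and control state at this frame (assumed keys)
print(record.get("speed"), record.get("steer"), record.get("throttle"))
```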
DriveFusion Format
- Standardized multimodal structure for end-to-end VLA training.
- Includes aligned sensor data and language annotations.
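To make the alignment concrete, here is one plausible shape for a single sample; every field name is an assumption for illustration, and the authoritative schema is defined by the framework's format-conversion tools:

```python
# Hypothetical shape of one aligned DriveFusion-format sample.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DriveFusionSample:
    frame_id: int
    images: Dict[str, str]      # camera name -> path to RGB image
    lidar: str                  # path to the LiDAR point cloud
    semantic: Dict[str, str]    # camera name -> segmentation map path
    depth: Dict[str, str]       # camera name -> depth map path
    boxes: List[dict]           # per-object bounding boxes
    measurements: dict          # vehicle/simulator state at this frame
    language: Dict[str, str] = field(default_factory=dict)  # VQA, commentary, instructions
```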
Intended Use
This dataset is designed for:
- Vision-Language-Action (VLA) model training
- Autonomous driving research and benchmarking
- Multimodal perception and planning research
- Language grounding in driving environments
- Embodied AI and robotics research
License and Attribution
This dataset is derived from simulation and public frameworks. Users must comply with:
- CARLA license
- CARLA Leaderboard and Scenario Runner licenses (MIT)
- DriveLM license
- SimLingo license
The DriveFusion framework code is released under Apache 2.0. Language annotations and third-party components may have additional license restrictions.
Citation
If you use DriveFusion-Data, please cite:
@misc{drivefusiondata2026,
  title={DriveFusion-Data: A Large-Scale Multimodal Dataset for Autonomous Driving},
  author={Samir, Omar and DriveFusion Team},
  year={2026},
  url={https://huggingface.co/datasets/DriveFusion/DriveFusion-Data}
}