## Dataset Description
This dataset is used to train the Mask Splitter neural network, a key component of the NSER-IBVS visual servoing framework for autonomous drone control. The network learns to split a vehicle segmentation mask into front and back regions, enabling the analytical IBVS controller to compute precise velocity commands.
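To make the controller's use of the split concrete, here is a small illustrative sketch (not part of the released code) that computes the centroids of the front and back masks; the two centroids are the kind of image features an IBVS-style law can convert into velocity commands:

```python
import numpy as np

def mask_centroid(mask):
    """Return the (x, y) centroid of a binary mask, or None if it is empty."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Toy 360x640 example: split a synthetic vehicle mask into left/right halves.
mask = np.zeros((360, 640), dtype=np.uint8)
mask[150:210, 200:440] = 1            # full vehicle mask
front, back = mask.copy(), mask.copy()
front[:, :320] = 0                    # keep the right half as "front"
back[:, 320:] = 0                     # keep the left half as "back"
print(mask_centroid(front), mask_centroid(back))
```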
### Associated Resources
| Resource | Link |
|---|---|
| Paper | ICCV 2025 Workshop |
| arXiv | 2507.19878 |
| Models | nser-ibvs-models |
| Annotation and Training | mask-splitter |
| Drone NN Integration | nser-ibvs-drone |
| Project | Website |
| Demo | Hugging Face Space |
### Sample Data

(Sample images: RGB Image | Segmentation | Front Mask | Back Mask)
## Intended Use
This dataset is intended for:
- Training and evaluating mask-splitting or part-aware segmentation models
- Visual servoing and robotics perception research
- Simulation-to-real transfer studies
Out of scope:
- Generic object detection benchmarks
- Autonomous driving datasets
## Dataset Structure

```
data/
├── sim/                       # Simulation data (UE4 Bunker environment)
│   ├── train/
│   │   ├── images/            # RGB frames
│   │   │   ├── around-car-30-45-60-75-90-high-quality/
│   │   │   ├── around-car-30-45-60-75-90-low-quality/
│   │   │   ├── around-car-90-75-60-45-30-low-quality/
│   │   │   ├── just-environment-high-quality/
│   │   │   └── just-environment-low-quality/
│   │   ├── labels/            # Front/back masks, manually annotated with the mask-splitter tooling
│   │   │   └── <scene>/
│   │   │       ├── front/
│   │   │       └── back/
│   │   └── segmented/         # Full segmentation masks
│   └── validation/
│       ├── images/
│       ├── labels/
│       └── segmented/
└── real/                      # Real-world data
    ├── train/
    └── validation/
```
### Data Format

| Component | Format | Resolution | Description |
|---|---|---|---|
| `images/` | PNG | 640x360 | RGB frames from drone camera |
| `segmented/` | PNG (binary) | 640x360 | Full vehicle mask |
| `labels/front/` | PNG (binary) | 640x360 | Manually annotated front vehicle region |
| `labels/back/` | PNG (binary) | 640x360 | Manually annotated back vehicle region |
**Naming convention:** files share the same name across the `images/`, `segmented/`, and `labels/` directories, so correspondence is straightforward (e.g., `frame_000000_1076195.png`).
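Thanks to the shared names, aligned samples can be gathered with a simple directory walk; a minimal sketch, assuming a local copy with the layout above and that `segmented/` mirrors the per-scene layout of `images/`:

```python
from pathlib import Path

root = Path("mask-splitter-data/data/sim/train")
scene = "around-car-30-45-60-75-90-high-quality"

samples = []
for img_path in sorted((root / "images" / scene).glob("*.png")):
    seg_path = root / "segmented" / scene / img_path.name
    front_path = root / "labels" / scene / "front" / img_path.name
    back_path = root / "labels" / scene / "back" / img_path.name
    # Environment-only scenes carry no front/back labels; keep complete tuples.
    if seg_path.exists() and front_path.exists() and back_path.exists():
        samples.append((img_path, seg_path, front_path, back_path))

print(f"{len(samples)} aligned samples in {scene}")
```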
## Scenes

### Simulation (UE4 Bunker Environment)
Training scenes:
- `around-car-30-45-60-75-90-high-quality` - Various angles, high render quality
- `around-car-30-45-60-75-90-low-quality` - Various angles, low render quality
- `around-car-90-75-60-45-30-low-quality` - Reverse angle sequence
- `just-environment-high-quality` - Environment-only frames (negatives)
- `just-environment-low-quality` - Environment-only frames (negatives)
Validation scenes:
- `around-car-45-high-quality`
- `around-car-45-low-quality`
- `around-car-45-low-quality-car-at-45`
### Real-World
Captured with Parrot Anafi 4K drone tracking a real vehicle.
Training scenes:
- `real-30-45-60-75-90` - Various angles
- `just-environment-real` - Environment-only frames (negatives)
Validation scenes:
- `real-val`
## Usage

### With git

```bash
git lfs install
git clone https://huggingface.co/datasets/brittleru/nser-ibvs-mask-splitter-dataset
```
### Direct File Access

```python
from huggingface_hub import snapshot_download

# Download the entire dataset
snapshot_download(
    repo_id="brittleru/nser-ibvs-mask-splitter-dataset",
    repo_type="dataset",
    local_dir="./mask-splitter-data",
)

# Or download a specific subset
snapshot_download(
    repo_id="brittleru/nser-ibvs-mask-splitter-dataset",
    repo_type="dataset",
    local_dir="./mask-splitter-data-sim",
    allow_patterns="data/sim/*",
)
```
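If you are unsure which paths exist before downloading a subset, you can list the repository contents first with `list_repo_files` from `huggingface_hub`; the path indexing below assumes the directory layout shown above:

```python
from huggingface_hub import list_repo_files

files = list_repo_files(
    "brittleru/nser-ibvs-mask-splitter-dataset",
    repo_type="dataset",
)

# Distinct scene directories under data/sim/train/images/
scenes = sorted({
    f.split("/")[4]
    for f in files
    if f.startswith("data/sim/train/images/")
})
print(scenes)
```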
### Load with Hugging Face Datasets

```python
from datasets import load_dataset

# Load a specific config and split
ds_sim_train = load_dataset("brittleru/nser-ibvs-mask-splitter-dataset", "sim", split="train")
ds_sim_val = load_dataset("brittleru/nser-ibvs-mask-splitter-dataset", "sim", split="validation")
ds_real_train = load_dataset("brittleru/nser-ibvs-mask-splitter-dataset", "real", split="train")

example = ds_sim_val[0]
print(example["scene"])           # Scene name
example["image"].show()           # RGB image (PIL)
example["front_mask"].show()      # Front mask (PIL)
example["back_mask"].show()       # Back mask (PIL)
```
### PyTorch DataLoader

```python
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader
from torchvision import transforms

ds = load_dataset("brittleru/nser-ibvs-mask-splitter-dataset", "sim", split="train")

transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])

def collate_fn(batch):
    images = []
    targets = []
    for x in batch:
        # RGB image (3 channels)
        img = transform(x["image"].convert("RGB"))
        # Segmentation mask (1 channel)
        seg_mask = transform(x["segmentation_mask"].convert("L"))
        # Concatenate to get the 4-channel network input
        input_4ch = torch.cat([img, seg_mask], dim=0)
        images.append(input_4ch)
        # Stack front and back masks as the target (2 channels)
        front = transform(x["front_mask"].convert("L"))
        back = transform(x["back_mask"].convert("L"))
        target = torch.cat([front, back], dim=0)
        targets.append(target)
    return torch.stack(images), torch.stack(targets)

dataloader = DataLoader(ds, batch_size=16, shuffle=True, collate_fn=collate_fn, num_workers=4)
```
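For completeness, a minimal training-loop sketch on top of this loader; the `nn.Conv2d` stand-in and the `BCEWithLogitsLoss` choice are placeholders for illustration only, while the actual Mask Splitter architecture and loss live in the mask-splitter repository:

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder model: any network mapping 4 input channels (RGB + segmentation)
# to 2 output channels (front/back logits) fits this training skeleton.
model = nn.Conv2d(4, 2, kernel_size=1).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()  # per-pixel loss over the two mask channels

model.train()
for images, targets in dataloader:
    images, targets = images.to(device), targets.to(device)
    logits = model(images)            # (B, 2, 256, 256) front/back logits
    loss = criterion(logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```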
## Training the Mask Splitter

```bash
# Clone the main repository
git clone https://github.com/SpaceTime-Vision-Robotics-Laboratory/mask-splitter.git

# Install the requirements, then run the training script
# (see the mask-splitter README.md for additional arguments)
python runnable/train_splitter_network.py --data_dir=/path/to/your/data
```
## Inference Example

```python
import cv2

from mask_splitter.nn.infer import MaskSplitterInference

splitter = MaskSplitterInference(
    model_path="path/to/mask_splitter.pt",
    device="cuda",
)

image = cv2.imread("frame.png")
mask = cv2.imread("segmented.png", cv2.IMREAD_GRAYSCALE)

front_mask, back_mask = splitter.infer(image, mask)
splitter.visualize(image, front_mask, back_mask)
```
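To persist the predictions alongside the input frame, a small follow-up sketch; the dtype handling is an assumption (it treats the returned masks as array-like and rescales them to 8-bit before saving):

```python
import numpy as np

# Binarize and scale to 8-bit so the masks save as standard grayscale PNGs.
cv2.imwrite("front_pred.png", (np.asarray(front_mask) > 0).astype(np.uint8) * 255)
cv2.imwrite("back_pred.png", (np.asarray(back_mask) > 0).astype(np.uint8) * 255)
```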
## Dataset Statistics
| Split | Domain | Images | With Vehicle | Environment Only |
|---|---|---|---|---|
| Train | Sim | 14,693 | 10,114 | 4,579 |
| Train | Real | 13,760 | 9,118 | 4,642 |
| Val | Sim | 1,123 | 1,123 | - |
| Val | Real | 1,084 | 1,084 | - |
| Total | - | 30,660 | 21,439 | 9,221 |
### Simulation Data

Train:

| Scene | Images | Type |
|---|---|---|
| `around-car-30-45-60-75-90-high-quality` | 2,034 | Vehicle |
| `around-car-30-45-60-75-90-low-quality` | 3,212 | Vehicle |
| `around-car-90-75-60-45-30-low-quality` | 4,868 | Vehicle |
| `just-environment-high-quality` | 2,129 | Environment |
| `just-environment-low-quality` | 2,450 | Environment |

Subtotal: 14,693 images
Validation:

| Scene | Images | Type |
|---|---|---|
| `around-car-45-high-quality` | 343 | Vehicle |
| `around-car-45-low-quality` | 391 | Vehicle |
| `around-car-45-low-quality-car-at-45` | 389 | Vehicle |

Subtotal: 1,123 images
### Real-World Data

Train:

| Scene | Images | Type |
|---|---|---|
| `just-environment-real` | 4,642 | Environment |
| `real-30-45-60-75-90` | 9,118 | Vehicle |

Subtotal: 13,760 images
Validation:

| Scene | Images | Type |
|---|---|---|
| `real-val` | 1,084 | Vehicle |

Subtotal: 1,084 images
## Limitations and Biases
- Vehicle category is limited primarily to toy cars.
- Camera viewpoint is drone-mounted (top-down / oblique).
- Lighting conditions are limited by the simulator and indoor real-world captures.
- No nighttime data is included.
## Citation

If you use this dataset in your research, please cite:

```bibtex
@InProceedings{Mocanu_2025_ICCV,
    author    = {Mocanu, Sebastian and Nae, Sebastian-Ion and Barbu, Mihai-Eugen and Leordeanu, Marius},
    title     = {Efficient Self-Supervised Neuro-Analytic Visual Servoing for Real-time Quadrotor Control},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2025},
    pages     = {1744-1753}
}
```