# ScanQA 3R Data Processing
## 04.22 update
- Cleaned up the outputs and added batch downloading
- Use the QA pairs from ScanQA
### QA
Path: `ScanQA/data/qa/ScanQA_v1.0_train.json`

The validation set lives in the same directory.
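To drive the batch download below, the scan IDs referenced by the QA file have to be collected into `scan_id.txt`. A minimal sketch of that step, assuming each QA entry carries a `scene_id` field (check the key name against the actual JSON):

```python
# Hypothetical helper: collect the unique scene IDs referenced by the QA
# entries, one per line, ready to be written to scan_id.txt.
def collect_scan_ids(qa_entries):
    """Return the sorted set of scene IDs appearing in the QA entries."""
    return sorted({entry["scene_id"] for entry in qa_entries})

# Inline data mimicking ScanQA_v1.0_train.json entries (illustrative only):
qa_entries = [
    {"scene_id": "scene0000_00", "question": "What color is the chair?"},
    {"scene_id": "scene0000_00", "question": "Where is the table?"},
    {"scene_id": "scene0011_01", "question": "How many sofas are there?"},
]
scan_ids = collect_scan_ids(qa_entries)
print("\n".join(scan_ids))
```

In practice you would `json.load` the real QA file and write the result to `../data/qa/scan_id.txt`.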
### Batch downloading the data
Downloading the ScanNet dataset. Note that this dataset only covers 800 scenes.
```bash
cd preprocessing
# -o sets the output path
python download_from_scan_id_txt.py -o ../data/raw_data/scannet --ids_file ../data/qa/scan_id.txt --type .sens --type _vh_clean_2.0.010000.segs.json --type .aggregation.json --type _vh_clean_2.ply
```
If you need one specific scan ID:
```bash
python download-scannetv2.py -o ../data/raw_data/scannet --type .sens --type _vh_clean_2 --type .0.010000.segs.json --type .aggregation.json --type _vh_clean_2.ply --id scene0000_01
```
### Sampling images
```bash
# num_frames can be set to 4-8
python export_sampled_frames.py \
    --scans_dir ../data/raw_data/scannet/scans \
    --output_dir ../data/processed_data/ScanNet \
    --train_val_splits_path ./Benchmark \
    --num_frames 4 \
    --max_workers 8 \
    --image_size 480 640
```
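The core of frame sampling is choosing `num_frames` indices out of a scan's full frame sequence. A minimal sketch of one plausible strategy (evenly spaced indices); the actual `export_sampled_frames.py` may sample differently:

```python
import numpy as np

def sample_frame_indices(total_frames, num_frames):
    """Evenly spaced, unique frame indices in [0, total_frames)."""
    idx = np.linspace(0, total_frames - 1, num=num_frames)
    return np.unique(idx.round().astype(int))

# e.g. pick 4 frames out of a 1800-frame .sens sequence
print(sample_frame_indices(1800, 4))
```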
### Generating voxels
```bash
python ScanNet200/preprocess_scannet200.py \
    --dataset_root ../data/raw_data/scannet/scans \
    --output_root ../data/processed_data/scannet/point_cloud \
    --label_map_file ScanNet200/scannetv2-labels.combined.tsv \
    --train_val_splits_path ScanNet200/Tasks \
    --num_workers 4 \
    --voxel_size 0.01 \
    --normalize_pointcloud
```
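The voxelization the script performs (divide coordinates by `voxel_size`, floor to grid indices, keep one point per occupied cell) can be sketched as follows. This is a simplified stand-in for `preprocess_scannet200.py`, not its actual implementation:

```python
import numpy as np

def voxelize(points, colors, labels, instances, voxel_size=0.01):
    """Quantize points to a grid and keep one point per occupied voxel.

    Returns an N x 8 array [x, y, z, r, g, b, label, instance], where
    x/y/z are integer grid indices, not meters.
    """
    grid = np.floor(points / voxel_size).astype(np.int64)
    # np.unique(..., return_index=True) keeps the FIRST point that fell
    # into each voxel: "first come, first kept" for color/label/instance.
    _, keep = np.unique(grid, axis=0, return_index=True)
    keep = np.sort(keep)
    return np.hstack([
        grid[keep],
        colors[keep],
        labels[keep, None],
        instances[keep, None],
    ])
```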
## 04.09 update
On 04.09 I processed the metadata for `scene0000_00` in the ScanNet dataset. After running steps 1 and 2, I directly filtered the QA pairs for `scene0000_00` from VLM-3R-DATA, and noticed that these QA pairs are quite complex: there are many question types, and the input is video.
## Data structure
### Image/video data
`data/processed_data/ScanNet/videos/train`
### Voxel data
Under `VLM-3R/vlm_3r_data_process/data/processed_data/ScanNet/point_cloud/train`.
The suffix of `scene0000_00_voxel_0.1.ply` corresponds to the `voxel_size` used.
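Since the voxel size is encoded in the filename, it can be recovered with simple string handling. A small illustrative helper (not part of the repo):

```python
def voxel_size_from_name(filename):
    """Recover the voxel size from a name like 'scene0000_00_voxel_0.1.ply'."""
    stem = filename.rsplit(".ply", 1)[0]      # drop the extension
    return float(stem.split("_voxel_")[-1])   # text after "_voxel_"

print(voxel_size_from_name("scene0000_00_voxel_0.1.ply"))  # → 0.1
```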
Voxel layout:
Each of the N rows represents one voxel. Because of the `np.hstack` concatenation, the 8 values in each row mean:
- `[:, 0:3]` (X, Y, Z): the spatial coordinates of the voxel. After the earlier processing (typically dividing by `voxel_size` and flooring), these are discrete integer indices. Think of them as the row/column/layer of a 3D grid (e.g. `[10, 5, -2]`), not physical meters.
- `[:, 3:6]` (R, G, B): the color channels, i.e. the color this voxel displays (usually integers in 0-255). If a voxel cell originally contained multiple real points, then because `np.unique(..., return_index=True)` was used, the color of the first point that fell into the cell is kept.
- `[:, 6]` (Label): the semantic label (semantic ID), e.g. 3 for chair, 4 for table; used for semantic segmentation. The category mapping is in `VLM-3R/vlm_3r_data_process/datasets/ScanNet200/scannet200_constants.py`.
- `[:, 7]` (Instance): the instance ID, used to distinguish individual objects of the same class. For example, two chairs in a scene both have Label 3, but their instance IDs might be 101 and 102.
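As a quick sanity check on this layout, the label and instance columns can be combined to count objects per class. A self-contained example on a synthetic N x 8 array (real data would come from the reader below):

```python
import numpy as np

# Synthetic N x 8 voxel array: [x, y, z, r, g, b, label, instance]
voxels = np.array([
    [0, 0, 0, 200, 10, 10, 3, 101],   # chair instance 101
    [1, 0, 0, 190, 12, 11, 3, 101],   # chair instance 101
    [5, 2, 0,  20, 20, 200, 3, 102],  # chair instance 102
    [8, 8, 1,  90, 80, 60, 4, 201],   # table instance 201
])

labels = voxels[:, 6]
instances = voxels[:, 7]
# Distinct instances carrying label 3 ("chair" in this sketch):
chairs = np.unique(instances[labels == 3])
print(len(chairs))  # → 2
```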
### Reading the voxels
```python
import numpy as np
from plyfile import PlyData

def read_custom_ply(filepath):
    """Step 1: read a PLY file containing the custom label and instance_id fields."""
    print(f"Reading file: {filepath}")
    with open(filepath, 'rb') as f:
        plydata = PlyData.read(f)
    vertex_data = plydata['vertex'].data
    # Extract each field
    x = vertex_data['x']
    y = vertex_data['y']
    z = vertex_data['z']
    r = vertex_data['red']
    g = vertex_data['green']
    b = vertex_data['blue']
    label = vertex_data['label']
    instance = vertex_data['instance_id']
    # Reassemble into an N x 8 matrix
    voxel_pc = np.vstack((x, y, z, r, g, b, label, instance)).T
    return voxel_pc
```
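Once read, the N x 8 array is sliced by column to get back the individual fields. Shown here on a synthetic array so the example needs no `.ply` file on disk:

```python
import numpy as np

# Stand-in for read_custom_ply(...) output: rows of [x, y, z, r, g, b, label, instance]
voxel_pc = np.array([
    [10.0, 5.0, -2.0, 128.0, 64.0, 32.0, 3.0, 101.0],
    [11.0, 5.0, -2.0, 120.0, 60.0, 30.0, 3.0, 101.0],
])

coords = voxel_pc[:, 0:3].astype(int)      # integer grid indices
colors = voxel_pc[:, 3:6].astype(np.uint8) # 0-255 color values
labels = voxel_pc[:, 6].astype(int)
instances = voxel_pc[:, 7].astype(int)

mask = instances == 101                    # select all voxels of instance 101
print(coords[mask].shape)  # → (2, 3)
```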
For visualization, run:
```bash
python vis_data_my.py
```
If none of the above meets your requirements, rerun the preprocessing with a modified `voxel_size`:
```bash
python datasets/ScanNet200/preprocess_scannet200.py \
    --dataset_root ./data/raw_data/scannet/scans \
    --output_root ./data/processed_data/ScanNet/point_cloud \
    --label_map_file ./data/raw_data/scannet/scannetv2-labels.combined.tsv \
    --train_val_splits_path datasets/ScanNet200/Tasks \
    --num_workers 4 \
    --voxel_size 0.1
```
# ScanQA: 3D Question Answering for Spatial Scene Understanding

This is the official repository of our paper ScanQA: 3D Question Answering for Spatial Scene Understanding (CVPR 2022) by Daichi Azuma, Taiki Miyanishi, Shuhei Kurita, and Motoaki Kawanabe.
## Abstract
We propose a new 3D spatial understanding task for 3D question answering (3D-QA). In the 3D-QA task, models receive visual information from the entire 3D scene of a rich RGB-D indoor scan and answer given textual questions about the 3D scene. Unlike the 2D-question answering of visual question answering, the conventional 2D-QA models suffer from problems with spatial understanding of object alignment and directions and fail in object localization from the textual questions in 3D-QA. We propose a baseline model for 3D-QA, called the ScanQA model, which learns a fused descriptor from 3D object proposals and encoded sentence embeddings. This learned descriptor correlates language expressions with the underlying geometric features of the 3D scan and facilitates the regression of 3D bounding boxes to determine the described objects in textual questions. We collected human-edited question-answer pairs with free-form answers grounded in 3D objects in each 3D scene. Our new ScanQA dataset contains over 41k question-answer pairs from 800 indoor scenes obtained from the ScanNet dataset. To the best of our knowledge, ScanQA is the first large-scale effort to perform object-grounded question answering in 3D environments.
## Installation
Please refer to the installation guide.
## Dataset
Please refer to data preparation for preparing the ScanNet v2 and ScanQA datasets.
## Usage
### Training
Start training the ScanQA model with RGB values:
```bash
python scripts/train.py --use_color --tag <tag_name>
```
For more training options, please run `scripts/train.py -h`.
### Inference
Evaluation of trained ScanQA models with the val dataset:
```bash
python scripts/eval.py --folder <folder_name> --qa --force
```
`<folder_name>` corresponds to the folder under `outputs/` with the timestamp + `<tag_name>`.
Scoring with the val dataset:
```bash
python scripts/score.py --folder <folder_name>
```
Prediction with the test dataset:
```bash
python scripts/predict.py --folder <folder_name> --test_type test_w_obj (or test_wo_obj)
```
The ScanQA benchmark is hosted on EvalAI. Please submit `outputs/<folder_name>/pred.test_w_obj.json` and `pred.test_wo_obj.json` to this site for evaluation of the test sets with and without objects.
## Citation
If you find our work helpful for your research, please consider citing our paper:
```bibtex
@inproceedings{azuma_2022_CVPR,
  title={ScanQA: 3D Question Answering for Spatial Scene Understanding},
  author={Azuma, Daichi and Miyanishi, Taiki and Kurita, Shuhei and Kawanabe, Motoaki},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}
```
## Acknowledgements
We would like to thank facebookresearch/votenet for the 3D object detection and daveredrum/ScanRefer for the 3D localization codebase.
## License
ScanQA is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
Copyright (c) 2022 Daichi Azuma, Taiki Miyanishi, Shuhei Kurita, Motoaki Kawanabe