JeasLee committed
Commit 8fc7a55 · verified · 1 Parent(s): b54b9ae

Upload README.md with huggingface_hub

Files changed (1): README.md (+57 −0)
# RoboInterVLM: Vision-Language Model Checkpoints for the RoboInter Manipulation Suite

Model checkpoints of **RoboInterVLM**, developed as part of the [RoboInter](https://github.com/InternRobotics/RoboInter) project. These models are fine-tuned on the [RoboInter-VQA](https://huggingface.co/datasets/InternRobotics/RoboInter-VQA) dataset for intermediate representation understanding and generation in robotic manipulation.

## Available Checkpoints

| Checkpoint | Base Model | Architecture | Parameters | Description |
|---|---|---|---|---|
| `RoboInterVLM_qwenvl25_3b` | [Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) | Qwen2.5-VL | ~3B | Lightweight Qwen2.5-VL model, suitable for efficient deployment |
| `RoboInterVLM_qwenvl25_7b` | [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) | Qwen2.5-VL | ~7B | Larger Qwen2.5-VL backbone for stronger performance |
| `RoboInterVLM_llava_one_vision_7B` | [LLaVA-OneVision-Qwen2-7B](https://huggingface.co/lmms-lab/llava-onevision-qwen2-7b-ov) | LLaVA-OneVision (SigLIP + Qwen2) | ~7B | LLaVA-OneVision backbone with SigLIP vision encoder |

All checkpoints are stored in `safetensors` format with `bfloat16` precision.

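If you prefer to fetch a checkpoint ahead of time (for example for offline or cluster deployment), the snippet below is a minimal sketch using `huggingface_hub`. The repo id matches the loading example in the Usage section; the target directory is an arbitrary placeholder.

```python
# Minimal download sketch (assumes `huggingface_hub` is installed and that each
# checkpoint is hosted as its own model repo, as in the Usage example below).
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="InternRobotics/RoboInterVLM_qwenvl25_3b",  # or RoboInterVLM_qwenvl25_7b
    local_dir="./RoboInterVLM_qwenvl25_3b",  # placeholder target directory
)
print(f"Checkpoint downloaded to: {local_path}")
```
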
## Supported Tasks

These models are jointly trained on general VQA and three categories of our curated VQA tasks:

- **Generation**: Predicting intermediate representations such as trajectory waypoints, gripper bounding boxes, contact points/boxes, object bounding boxes (current & final), etc.; a sketch for parsing such outputs follows this list.
- **Understanding**: Multiple-choice visual reasoning about contact states, grasp poses, object grounding, trajectory selection, movement directions, etc.
- **Task Planning**: High-level planning, including next-step prediction, action primitive recognition, success determination, etc.

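The helper below is a purely illustrative sketch of how numeric intermediate representations (e.g. waypoints or box corners) might be pulled out of a model response. The bracketed `[x, y]` / `[x1, y1, x2, y2]` convention assumed here is only a common formatting choice, not a confirmed specification; the actual output schema is defined by RoboInter-VQA and the RoboInterVLM codebases.

```python
import re
from typing import List


def extract_coordinate_groups(response: str) -> List[List[float]]:
    """Parse bracketed number groups such as "[120, 85]" or "[0.1, 0.3, 0.5, 0.6]".

    Hypothetical helper: the real RoboInterVLM output format may differ; consult
    the RoboInterVLM-QwenVL / RoboInterVLM-LLaVAOV codebases for the exact schema.
    """
    groups = []
    for inner in re.findall(r"\[([^\[\]]+)\]", response):
        numbers = re.findall(r"-?\d+(?:\.\d+)?", inner)
        if numbers:
            groups.append([float(n) for n in numbers])
    return groups


# Example: a response listing trajectory waypoints as pixel coordinates.
print(extract_coordinate_groups("Waypoints: [120, 85], [134, 92], [150, 101]"))
# -> [[120.0, 85.0], [134.0, 92.0], [150.0, 101.0]]
```
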
## Usage

### Qwen2.5-VL Checkpoints

For loading and inference with the Qwen2.5-VL checkpoints, please refer to the [RoboInterVLM-QwenVL](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterVLM/RoboInterVLM-QwenVL) codebase. A quick loading example:

```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor

model_path = "InternRobotics/RoboInterVLM_qwenvl25_3b"  # or RoboInterVLM_qwenvl25_7b
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_path, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_path)
```
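
Once the model and processor are loaded, a single-image query can be run as in the sketch below. This is a generic, hedged example: the image path and prompt are placeholders, and the task-specific prompt formats expected by RoboInterVLM are defined in the RoboInterVLM-QwenVL codebase.

```python
# Hypothetical single-image query; "example_scene.jpg" and the prompt text are
# placeholders, not part of the official RoboInterVLM interface.
from PIL import Image

image = Image.open("example_scene.jpg")  # placeholder manipulation-scene image
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Which object should the gripper grasp next?"},
        ],
    }
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```

The same pattern applies to the 7B checkpoint by changing `model_path`.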

### LLaVA-OneVision Checkpoint

For loading and inference with the LLaVA-OneVision checkpoint, please refer to the [RoboInterVLM-LLaVAOV](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterVLM/RoboInterVLM-LLaVAOV) codebase, as it requires custom model classes.

### Training & Evaluation

For full training and evaluation pipelines, please refer to:

- **Qwen2.5-VL models**: [RoboInterVLM-QwenVL](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterVLM/RoboInterVLM-QwenVL)
- **LLaVA-OneVision model**: [RoboInterVLM-LLaVAOV](https://github.com/InternRobotics/RoboInter/tree/main/RoboInterVLM/RoboInterVLM-LLaVAOV)
- **VQA Dataset**: [RoboInter-VQA](https://huggingface.co/datasets/InternRobotics/RoboInter-VQA)

## Related Resources

- **Project**: [RoboInter](https://github.com/InternRobotics/RoboInter)
- **Annotation Data**: [RoboInter-Data](https://huggingface.co/datasets/InternRobotics/RoboInter-Data)
- **VQA Dataset**: [RoboInter-VQA](https://huggingface.co/datasets/InternRobotics/RoboInter-VQA)

## License

Please refer to the original licenses of [RoboInter](https://github.com/InternRobotics/RoboInter), [Qwen2.5-VL](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct), and [LLaVA-OneVision](https://huggingface.co/lmms-lab/llava-onevision-qwen2-7b-ov).