---
license: mit
library_name: diffusers
pipeline_tag: image-to-3d
base_model: wgsxm/PartCrafter
tags:
- partrag
- partcrafter
- diffusers
- image-to-3d
- 3d-generation
- part-level-3d-generation
- retrieval-augmented-generation
- part-retrieval
- rectified-flow
- arxiv:2602.17033
---

# PartRAG: Retrieval-Augmented Part-Level 3D Generation and Editing

This repository hosts trained PartRAG weights for the paper:

> **PartRAG: Retrieval-Augmented Part-Level 3D Generation and Editing**  
> Peize Li, Zeyu Zhang, Hao Tang  
> arXiv:2602.17033

PartRAG is a retrieval-augmented framework for single-image part-level 3D generation and editing. It builds on the open-source [PartCrafter](https://github.com/wgsxm/PartCrafter) implementation and extends it with the PartRAG retrieval and editing pipeline from the official code repository.

## Links

- Paper: https://arxiv.org/abs/2602.17033
- Project page: https://aigeeksgroup.github.io/PartRAG/
- Code: https://github.com/AIGeeksGroup/PartRAG
- Base project: https://github.com/wgsxm/PartCrafter

## Repository Contents

This Hugging Face repository contains model weights and Diffusers metadata. The runnable code, training scripts, inference scripts, retrieval database builder, editing pipeline, and dataset preprocessing tools are maintained in the official GitHub repository:

```text
https://github.com/AIGeeksGroup/PartRAG
```

The metadata in this repository is aligned with the PartRAG codebase:

- pipeline: `src.pipelines.pipeline_partrag.PartragPipeline`
- transformer: `src.models.transformers.partrag_transformer.PartragDiTModel`
- scheduler: `src.schedulers.scheduling_rectified_flow.RectifiedFlowScheduler`
- VAE: `src.models.autoencoders.autoencoder_kl_triposg.TripoSGVAEModel`
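These dotted class paths resolve relative to the root of the cloned PartRAG repository, so scripts must be run from (or with `PYTHONPATH` pointing at) that checkout. As a small illustration, the mapping can be kept as plain data and split into module and class names; the helper below is hypothetical and not part of the repo:

```python
# Component class paths as declared in this repository's metadata.
PARTRAG_COMPONENTS = {
    "pipeline": "src.pipelines.pipeline_partrag.PartragPipeline",
    "transformer": "src.models.transformers.partrag_transformer.PartragDiTModel",
    "scheduler": "src.schedulers.scheduling_rectified_flow.RectifiedFlowScheduler",
    "vae": "src.models.autoencoders.autoencoder_kl_triposg.TripoSGVAEModel",
}

def split_class_path(dotted: str) -> tuple[str, str]:
    """Split a dotted path into (module path, class name), e.g. for importlib."""
    module, _, cls = dotted.rpartition(".")
    return module, cls
```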

## Model Description

PartRAG generates structured 3D objects from a single RGB image by producing multiple object parts. The framework augments part-level generation with retrieval and contrastive learning:

- part-level image-to-3D generation using a diffusion transformer;
- retrieval-augmented generation over part-level exemplars;
- contrastive objectives for stronger part and object representations;
- masked part-level editing that preserves non-target parts and part transforms.

## Installation

Use the official code repository:

```bash
git clone https://github.com/AIGeeksGroup/PartRAG.git
cd PartRAG
```

Install dependencies following the repository setup:

```bash
bash settings/setup.sh
```

If you prefer to install dependencies manually:

```bash
pip install torch-cluster -f https://data.pyg.org/whl/torch-2.5.1+cu124.html
pip install -r settings/requirements.txt
sudo apt-get install libegl1 libegl1-mesa libgl1-mesa-dev -y
```

## Download Weights

Download this checkpoint into the path expected by the PartRAG scripts:

```bash
huggingface-cli download michaelpopo/PartRAG \
  --local-dir pretrained_weights/PartRAG
```

## Inference

Run inference with the PartRAG checkpoint script from the GitHub repository:

```bash
python scripts/inference_partrag_with_checkpoint.py \
  --image_path <input_image> \
  --num_parts 4 \
  --pretrained_model_path pretrained_weights/PartRAG \
  --checkpoint_path pretrained_weights/PartRAG \
  --output_dir results \
  --render
```

The script exports individual part meshes as `part_XX.glb` and a merged object mesh as `object.glb`.

## Retrieval Database

The retrieval database is not included in this weights repository. Build it with the official PartRAG script:

```bash
python scripts/build_partrag_retrieval_database.py \
  --config configs/partrag_stage1.yaml \
  --output_dir retrieval_database_high_quality \
  --subset_size 1236 \
  --build_faiss
```
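Conceptually, the FAISS index maps exemplar embeddings to their nearest neighbors, and retrieval amounts to a top-k similarity search over the database. A NumPy sketch of the idea only, not the repo's actual implementation (the embedding dimension and data below are made up):

```python
import numpy as np

def retrieve_top_k(query: np.ndarray, database: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k database embeddings most similar to the query
    (cosine similarity; both inputs are L2-normalized first)."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    scores = db @ q
    return np.argsort(scores)[::-1][:k]

# Toy usage: 5 exemplar embeddings of dimension 4; a slightly perturbed
# copy of exemplar 2 should retrieve exemplar 2 first.
rng = np.random.default_rng(0)
db = rng.normal(size=(5, 4))
idx = retrieve_top_k(db[2] + 0.01 * rng.normal(size=4), db, k=3)
```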

Then enable retrieval during checkpoint inference:

```bash
python scripts/inference_partrag_with_checkpoint.py \
  --image_path <input_image> \
  --num_parts 4 \
  --pretrained_model_path pretrained_weights/PartRAG \
  --checkpoint_path pretrained_weights/PartRAG \
  --use_retrieval \
  --database_path retrieval_database_high_quality \
  --num_retrieved_images 3 \
  --output_dir results \
  --render
```

## Editing

Part-level masked editing is provided by the official code repository:

```bash
python scripts/edit_partrag.py \
  --checkpoint_path pretrained_weights/PartRAG \
  --pretrained_path pretrained_weights/PartRAG \
  --input_image <input_image> \
  --num_parts 4 \
  --target_parts 1,3 \
  --edit_text "replace legs" \
  --retrieval_db retrieval_database_high_quality \
  --render
```
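The `--target_parts` flag takes a comma-separated list of part indices. If you drive the editing script programmatically, a tiny parser like the following can build that value; the helper is hypothetical, not from the repo:

```python
def parse_target_parts(spec: str) -> list[int]:
    """Parse a comma-separated part-index list like "1,3" into sorted, unique ints."""
    return sorted({int(tok) for tok in spec.split(",") if tok.strip()})
```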

## Training Configuration

The checkpoint metadata is provided in `params.yaml`. The paper-protocol training configs in the code repository are:

- `configs/partrag_stage1.yaml`
- `configs/partrag_stage2.yaml`

## Citation

If you use this model, please cite PartRAG:

```bibtex
@article{li2026partrag,
  title={PartRAG: Retrieval-Augmented Part-Level 3D Generation and Editing},
  author={Li, Peize and Zhang, Zeyu and Tang, Hao},
  journal={arXiv preprint arXiv:2602.17033},
  year={2026}
}
```

## Attribution

PartRAG builds on the open-source PartCrafter implementation. Upstream-derived components keep the same general module layout and are extended in the official PartRAG codebase with retrieval and editing-specific logic.