---
license: mit
library_name: diffusers
pipeline_tag: image-to-3d
base_model: wgsxm/PartCrafter
tags:
- partrag
- partcrafter
- diffusers
- image-to-3d
- 3d-generation
- part-level-3d-generation
- retrieval-augmented-generation
- part-retrieval
- rectified-flow
- arxiv:2602.17033
---

# PartRAG: Retrieval-Augmented Part-Level 3D Generation and Editing

This repository hosts trained PartRAG weights for the paper:

> **PartRAG: Retrieval-Augmented Part-Level 3D Generation and Editing**
> Peize Li, Zeyu Zhang, Hao Tang
> arXiv:2602.17033

PartRAG is a retrieval-augmented framework for single-image part-level 3D generation and editing. It builds on the open-source [PartCrafter](https://github.com/wgsxm/PartCrafter) implementation and extends it with the PartRAG retrieval and editing pipeline from the official code repository.

## Links

- Paper: https://arxiv.org/abs/2602.17033
- Project page: https://aigeeksgroup.github.io/PartRAG/
- Code: https://github.com/AIGeeksGroup/PartRAG
- Base project: https://github.com/wgsxm/PartCrafter

## Repository Contents

This Hugging Face repository contains the model weights and Diffusers metadata. The runnable code, training scripts, inference scripts, retrieval database builder, editing pipeline, and dataset preprocessing tools are maintained in the official GitHub repository:

```text
https://github.com/AIGeeksGroup/PartRAG
```

The metadata in this repository is aligned with the PartRAG codebase:

- pipeline: `src.pipelines.pipeline_partrag.PartragPipeline`
- transformer: `src.models.transformers.partrag_transformer.PartragDiTModel`
- scheduler: `src.schedulers.scheduling_rectified_flow.RectifiedFlowScheduler`
- VAE: `src.models.autoencoders.autoencoder_kl_triposg.TripoSGVAEModel`

## Model Description

PartRAG generates structured 3D objects from a single RGB image by producing multiple object parts. The framework augments part-level generation with retrieval and contrastive learning:

- part-level image-to-3D generation using a diffusion transformer;
- retrieval-augmented generation over part-level exemplars;
- contrastive objectives for stronger part and object representations;
- masked part-level editing that preserves non-target parts and part transforms.
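
The `RectifiedFlowScheduler` listed in the metadata follows the rectified-flow formulation: samples move along (near-)straight paths between noise and data, and sampling integrates a learned velocity field with plain Euler steps. The toy numpy sketch below shows only that integration idea; the function names and the oracle constant velocity are assumptions for illustration, not PartRAG code:

```python
import numpy as np

def euler_sample(x1, velocity_fn, num_steps=10):
    """Integrate dx/dt = v(x, t) from t=1 (noise) down to t=0 (data)."""
    x = x1.copy()
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = 1.0 - i * dt
        x = x - dt * velocity_fn(x, t)  # Euler step toward t=0
    return x

rng = np.random.default_rng(0)
x0 = np.array([1.0, -2.0, 0.5])  # "data" sample
x1 = rng.standard_normal(3)      # "noise" sample

# On a rectified (straight) path x_t = (1-t)*x0 + t*x1,
# the true velocity is constant: v = x1 - x0.
v = lambda x, t: x1 - x0

x_hat = euler_sample(x1, v, num_steps=10)
print(np.allclose(x_hat, x0))  # True: constant velocity makes Euler exact
```

The straightness of the path is the point of rectified flow: the straighter the learned trajectories, the fewer Euler steps are needed for accurate samples.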

## Installation

Use the official code repository:

```bash
git clone https://github.com/AIGeeksGroup/PartRAG.git
cd PartRAG
```

Install dependencies following the repository setup:

```bash
bash settings/setup.sh
```

If you prefer to install dependencies manually:

```bash
pip install torch-cluster -f https://data.pyg.org/whl/torch-2.5.1+cu124.html
pip install -r settings/requirements.txt
sudo apt-get install -y libegl1 libegl1-mesa libgl1-mesa-dev
```

## Download Weights

Download this checkpoint into the path expected by the PartRAG scripts:

```bash
huggingface-cli download michaelpopo/PartRAG \
  --local-dir pretrained_weights/PartRAG
```

## Inference

Run inference with the PartRAG checkpoint script from the GitHub repository:

```bash
python scripts/inference_partrag_with_checkpoint.py \
  --image_path <input_image> \
  --num_parts 4 \
  --pretrained_model_path pretrained_weights/PartRAG \
  --checkpoint_path pretrained_weights/PartRAG \
  --output_dir results \
  --render
```

The script exports individual part meshes as `part_XX.glb` and a merged object mesh as `object.glb`.
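
Conceptually, the merged `object.glb` is just the part meshes concatenated into one mesh: vertex arrays are stacked, and face indices of later parts are offset by the vertex counts of earlier parts. A minimal numpy sketch of that bookkeeping (illustrative only; this is not the exporter PartRAG actually uses):

```python
import numpy as np

def merge_parts(parts):
    """Merge (vertices, faces) part meshes into a single mesh.

    Each part is (V_i, F_i): V_i is an (n_i, 3) float array of vertices,
    F_i an (m_i, 3) int array of triangle indices into V_i.
    """
    verts, faces, offset = [], [], 0
    for v, f in parts:
        verts.append(v)
        faces.append(f + offset)  # re-index into the merged vertex array
        offset += len(v)
    return np.vstack(verts), np.vstack(faces)

# Two single-triangle "parts"
part_a = (np.eye(3), np.array([[0, 1, 2]]))
part_b = (np.eye(3) * 2.0, np.array([[0, 1, 2]]))

V, F = merge_parts([part_a, part_b])
print(V.shape, F.tolist())  # (6, 3) [[0, 1, 2], [3, 4, 5]]
```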

## Retrieval Database

The retrieval database is not included in this weights repository. Build it with the official PartRAG script:

```bash
python scripts/build_partrag_retrieval_database.py \
  --config configs/partrag_stage1.yaml \
  --output_dir retrieval_database_high_quality \
  --subset_size 1236 \
  --build_faiss
```
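
At its core, the database stores exemplar embeddings, and `--build_faiss` indexes them for fast nearest-neighbor search; retrieval at inference time is then a top-k similarity lookup against the query image's embedding. The pure-numpy stand-in below shows that lookup only; FAISS itself and the embedding model are omitted, and all names are illustrative:

```python
import numpy as np

def top_k(database, query, k=3):
    """Indices of the k database rows most similar to query (cosine)."""
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    scores = db @ q                  # cosine similarity to every exemplar
    return np.argsort(-scores)[:k]   # best-first

rng = np.random.default_rng(1)
db = rng.standard_normal((100, 64))              # 100 exemplar embeddings
query = db[42] + 0.01 * rng.standard_normal(64)  # query near exemplar 42

idx = top_k(db, query, k=3)
print(idx[0])  # exemplar 42 comes back as the nearest neighbor
```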

Then enable retrieval during checkpoint inference:

```bash
python scripts/inference_partrag_with_checkpoint.py \
  --image_path <input_image> \
  --num_parts 4 \
  --pretrained_model_path pretrained_weights/PartRAG \
  --checkpoint_path pretrained_weights/PartRAG \
  --use_retrieval \
  --database_path retrieval_database_high_quality \
  --num_retrieved_images 3 \
  --output_dir results \
  --render
```

## Editing

Part-level masked editing is provided by the official code repository:

```bash
python scripts/edit_partrag.py \
  --checkpoint_path pretrained_weights/PartRAG \
  --pretrained_path pretrained_weights/PartRAG \
  --input_image <input_image> \
  --num_parts 4 \
  --target_parts 1,3 \
  --edit_text "replace legs" \
  --retrieval_db retrieval_database_high_quality \
  --render
```
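
The masking idea behind `--target_parts` can be sketched in a few lines: regenerate only the selected parts and copy everything else through the part mask, so non-target parts are preserved exactly. The toy numpy illustration below uses one scalar per part as a stand-in for the real per-part latents; none of this is PartRAG's actual tensor layout:

```python
import numpy as np

def masked_edit(original, edited, target_mask):
    """Take edited values where the mask is True, keep originals elsewhere."""
    return np.where(target_mask, edited, original)

parts = np.arange(4.0)    # toy per-part latents: one scalar per part
edited = parts + 100.0    # a fully re-generated version of every part
mask = np.array([False, True, False, True])  # edit parts 1 and 3 only

out = masked_edit(parts, edited, mask)
print(out.tolist())  # [0.0, 101.0, 2.0, 103.0]
```

Parts 0 and 2 pass through untouched, which is what "preserves non-target parts" means in the model description.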

## Training Configuration

The checkpoint metadata is provided in `params.yaml`. The paper-protocol training configs in the code repository are:

- `configs/partrag_stage1.yaml`
- `configs/partrag_stage2.yaml`

## Citation

If you use this model, please cite PartRAG:

```bibtex
@article{li2026partrag,
  title={PartRAG: Retrieval-Augmented Part-Level 3D Generation and Editing},
  author={Li, Peize and Zhang, Zeyu and Tang, Hao},
  journal={arXiv preprint arXiv:2602.17033},
  year={2026}
}
```

## Attribution

PartRAG builds on the open-source PartCrafter implementation. Upstream-derived components keep the same general module layout and are extended in the official PartRAG codebase with retrieval- and editing-specific logic.