Tags: Diffusers Β· stable-diffusion-xl Β· inpainting Β· virtual try-on

Use from the Diffusers library
pip install -U diffusers transformers accelerate

import torch
from diffusers import DiffusionPipeline

# Load the pipeline weights in bfloat16 to reduce memory use
pipe = DiffusionPipeline.from_pretrained(
    "yisol/IDM-VTON-DC", torch_dtype=torch.bfloat16
)
pipe = pipe.to("cuda")  # use "mps" on Apple silicon

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
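The snippet above loads the weights in torch.bfloat16, which halves per-parameter storage relative to the float32 default. A quick sanity check of the byte sizes (standard PyTorch API, independent of this model):

```python
import torch

# element_size() reports bytes per element for a given dtype
x32 = torch.zeros(1000, dtype=torch.float32)
x16 = torch.zeros(1000, dtype=torch.bfloat16)
print(x32.element_size(), x16.element_size())  # prints "4 2"
```

So a multi-billion-parameter pipeline occupies roughly half the GPU memory in bfloat16 compared with float32, at the cost of reduced mantissa precision.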

Check out more code examples in our GitHub repository!

IDM-VTON : Improving Diffusion Models for Authentic Virtual Try-on in the Wild

This is the official implementation of the paper 'Improving Diffusion Models for Authentic Virtual Try-on in the Wild'.

🤗 Try our Hugging Face demo


TODO List

  • demo model
  • inference code
  • training code

Acknowledgements

For the demo, GPUs are provided by ZeroGPU, and the automatic mask generation code is based on OOTDiffusion and DCI-VTON.
Parts of the code are based on IP-Adapter.

Citation

@article{choi2024improving,
  title={Improving Diffusion Models for Virtual Try-on},
  author={Choi, Yisol and Kwak, Sangkyung and Lee, Kyungmin and Choi, Hyungwon and Shin, Jinwoo},
  journal={arXiv preprint arXiv:2403.05139},
  year={2024}
}

License

The code and checkpoints in this repository are released under the CC BY-NC-SA 4.0 license.
