
doa12/ip2p_lora_without_MoE

Diffusers · Safetensors

Instructions to use doa12/ip2p_lora_without_MoE with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

  • Libraries
  • Diffusers

    How to use doa12/ip2p_lora_without_MoE with Diffusers:

    pip install -U diffusers transformers accelerate

    import torch
    from diffusers import StableDiffusionInstructPix2PixPipeline
    from diffusers.utils import load_image

    # This repo contains only LoRA weights, so load a base InstructPix2Pix
    # pipeline (assumed here to be timbrooks/instruct-pix2pix) and attach
    # the adapter on top of it.
    pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
        "timbrooks/instruct-pix2pix", torch_dtype=torch.bfloat16
    ).to("cuda")  # switch to "mps" for Apple devices
    pipe.load_lora_weights("doa12/ip2p_lora_without_MoE")

    # InstructPix2Pix edits an existing image following a text instruction.
    input_image = load_image("input.png")  # placeholder: the image to edit
    prompt = "Make the colors colder and more muted"
    image = pipe(prompt, image=input_image).images[0]
  • Notebooks
  • Google Colab
  • Kaggle
ip2p_lora_without_MoE
1 contributor · History: 2 commits

Latest commit: 9a14bef (verified) "End of training" by doa12, 9 months ago
  • checkpoint-5000/ (End of training, 9 months ago)
  • .gitattributes · 1.52 kB (initial commit, 9 months ago)
  • pytorch_lora_weights.safetensors · 12.8 MB (End of training, 9 months ago)