
Duplicated from patrolli/AnimateAnyone

codingggasdfasf/video-animator

Diffusers
ONNX
Instructions to use codingggasdfasf/video-animator with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

  • Libraries
  • Diffusers

    How to use codingggasdfasf/video-animator with Diffusers:

    pip install -U diffusers transformers accelerate

    import torch
    from diffusers import DiffusionPipeline

    # Load the pipeline in bfloat16; switch "cuda" to "mps" on Apple devices
    pipe = DiffusionPipeline.from_pretrained(
        "codingggasdfasf/video-animator",
        torch_dtype=torch.bfloat16,
    ).to("cuda")

    prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
    image = pipe(prompt).images[0]
  • Notebooks
  • Google Colab
  • Kaggle
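The Diffusers snippet above hardcodes "cuda" and notes switching to "mps" on Apple hardware. A minimal sketch of choosing the device automatically instead (the `device` variable is illustrative, not part of the original snippet):

```python
import torch

# Prefer a CUDA GPU, then Apple's Metal backend ("mps"), else fall back to CPU
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

print(device)
```

The resulting string can then be passed to `.to(device)` on the loaded pipeline.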
video-animator / src / pipelines / __pycache__
  • 3 contributors
History: 1 commit
root · setting up model · dd31ccf · almost 2 years ago
  • __init__.cpython-310.pyc · 138 Bytes · setting up model, almost 2 years ago
  • context.cpython-310.pyc · 2.05 kB · setting up model, almost 2 years ago
  • pipeline_pose2vid_long.cpython-310.pyc · 11.7 kB · setting up model, almost 2 years ago
  • utils.cpython-310.pyc · 1.02 kB · setting up model, almost 2 years ago