
cerspense/zeroscope_v1_320s

Diffusers · TextToVideoSDPipeline · Text-to-Video
Instructions for using cerspense/zeroscope_v1_320s with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • Diffusers

    How to use cerspense/zeroscope_v1_320s with Diffusers:

    pip install -U diffusers transformers accelerate

    import torch
    from diffusers import DiffusionPipeline
    from diffusers.utils import export_to_video

    # load in half precision; switch "cuda" to "mps" for Apple devices
    pipe = DiffusionPipeline.from_pretrained(
        "cerspense/zeroscope_v1_320s", torch_dtype=torch.float16
    ).to("cuda")

    # this is a text-to-video pipeline: the output is a list of video
    # frames, not a single image
    prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
    video_frames = pipe(prompt).frames[0]
    export_to_video(video_frames, "output.mp4")
  • Notebooks
  • Google Colab
  • Kaggle
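The snippet above hard-codes "cuda" and notes that Apple devices should use "mps". A minimal, hypothetical sketch of that device-selection logic is below; in a real script the two flags would come from torch.cuda.is_available() and torch.backends.mps.is_available(), but they are plain parameters here so the sketch runs without torch installed.

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the device string to pass to pipe.to(...).

    Hypothetical helper: the flags stand in for
    torch.cuda.is_available() and torch.backends.mps.is_available().
    """
    if cuda_available:
        return "cuda"  # NVIDIA GPU
    if mps_available:
        return "mps"   # Apple Silicon GPU
    return "cpu"       # slow fallback

print(pick_device(True, False))  # prints "cuda"
```

With this helper, the pipeline line becomes `pipe.to(pick_device(...))`, so the same script works on NVIDIA, Apple, and CPU-only machines.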
zeroscope_v1_320s
  • 2 contributors
History: 8 commits
Latest commit: Update README.md by cerspense (97c6804, almost 3 years ago)
  • scheduler
    init commit almost 3 years ago
  • text_encoder
    init commit almost 3 years ago
  • tokenizer
    init commit almost 3 years ago
  • unet
    init commit almost 3 years ago
  • vae
    init commit almost 3 years ago
  • .gitattributes
    1.48 kB
    initial commit almost 3 years ago
  • README.md
    398 Bytes
    Update README.md almost 3 years ago
  • model_index.json
    384 Bytes
    changed null to diffusers almost 3 years ago
  • zeroscope_v1_320s.pth
    2.82 GB
    init commit almost 3 years ago
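The component directories listed above (scheduler, text_encoder, tokenizer, unet, vae) are the ones Diffusers expects model_index.json to declare for a TextToVideoSDPipeline. A hypothetical sketch of that file's layout is shown below; the exact class names and version string are assumptions for illustration, not the repository's actual contents.

```json
{
  "_class_name": "TextToVideoSDPipeline",
  "_diffusers_version": "0.17.0",
  "scheduler": ["diffusers", "DDIMScheduler"],
  "text_encoder": ["transformers", "CLIPTextModel"],
  "tokenizer": ["transformers", "CLIPTokenizer"],
  "unet": ["diffusers", "UNet3DConditionModel"],
  "vae": ["diffusers", "AutoencoderKL"]
}
```

Each entry maps a subdirectory to the library and class that loads it, which is how DiffusionPipeline.from_pretrained assembles the full pipeline from the repository. The commit message "changed null to diffusers" suggests these library fields were originally null and had to be filled in for loading to work.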