Instructions to use stabilityai/stable-diffusion-3.5-medium with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use stabilityai/stable-diffusion-3.5-medium with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-medium",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
CLIP maximum sequence length
#36
by simepy - opened
My prompt is long but still under 512 tokens, yet I get this error:
Token indices sequence length is longer than the specified maximum sequence length for this model (151 > 77). Running this sequence through the model will result in indexing errors
The following part of your input was truncated because CLIP can only handle sequences up to 77 tokens: ["......, ......."]
This is my pipeline call:
```python
output = shared_pipeline.pipeline(
    image_input.prompt,
    negative_prompt="",
    height=512,
    width=512,
    num_inference_steps=20,
    guidance_scale=3,
    generator=shared_pipeline.generator,
    max_sequence_length=512,
)
```
I added max_sequence_length=512 but nothing changed.
Do you know how I can fix this issue?
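A likely explanation (not an official answer): SD 3.5 uses two CLIP text encoders plus a T5 encoder, and `max_sequence_length` is only forwarded to the T5 encoder. The CLIP encoders have a fixed 77-token context window, so the warning refers to CLIP truncation and is expected for long prompts; the full prompt still reaches T5. A minimal sketch of the fixed cap, where `clip_truncate` is a hypothetical stand-in for what the CLIP tokenizer does internally:

```python
# clip_truncate is hypothetical: it mimics the CLIP tokenizer's hard cap,
# which is why max_sequence_length=512 (a T5-only setting) does not
# silence the warning.
CLIP_MAX_TOKENS = 77

def clip_truncate(token_ids):
    """CLIP's context window is fixed at 77 tokens; the rest is dropped."""
    return token_ids[:CLIP_MAX_TOKENS]

# A 151-token prompt (as in the warning above) loses 74 tokens at CLIP,
# regardless of max_sequence_length:
ids = list(range(151))
kept = clip_truncate(ids)
print(len(kept))  # 77
```

If the truncated part matters for your image, `StableDiffusion3Pipeline` also accepts per-encoder prompts: you can pass a short (under 77 tokens) summary as `prompt` for the CLIP encoders and the full long text as `prompt_3`, which is consumed by T5.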