# Unconditional image generation

Unconditional image generation produces images that look like random samples from the data the model was trained on, because the denoising process is not guided by any additional context such as a text prompt or an input image.

To get started, use the `DiffusionPipeline` to load the `anton-l/ddpm-butterflies-128` checkpoint to generate images of butterflies. The `DiffusionPipeline` downloads and caches all the model components required to generate an image.

```py
from diffusers import DiffusionPipeline

generator = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128").to("cuda")
image = generator().images[0]
image
```

Want to generate images of something else? Take a look at the training guide to learn how to train a model to generate your own images.

The output image is a `PIL.Image` object that can be saved:

```py
image.save("generated_image.png")
```
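Since the pipeline output is a standard Pillow image, the usual `PIL.Image` methods are available for inspecting it before saving. A minimal sketch, using a blank 128x128 image as a stand-in for real pipeline output (the `ddpm-butterflies-128` checkpoint generates 128x128 images):

```python
from PIL import Image

# Stand-in for pipeline output; the real `image` returned by the
# pipeline is also a PIL.Image and supports the same methods.
image = Image.new("RGB", (128, 128))

print(image.size)  # (128, 128)
print(image.mode)  # RGB

image.save("generated_image.png")
```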

You can also try experimenting with the `num_inference_steps` parameter, which controls the number of denoising steps. More denoising steps typically produce higher quality images, but generation takes longer. Feel free to play around with this parameter to see how it affects the image quality.

```py
image = generator(num_inference_steps=100).images[0]
image
```

