Text-to-(RGB, depth)
LDM3D was proposed in LDM3D: Latent Diffusion Model for 3D by Gabriela Ben Melech Stan, Diana Wofk, Scottie Fox, Alex Redden, Will Saxton, Jean Yu, Estelle Aflalo, Shao-Yen Tseng, Fabio Nonato, Matthias Muller, and Vasudev Lal. Unlike existing text-to-image diffusion models such as Stable Diffusion, which only generate an image, LDM3D generates both an image and a depth map from a given text prompt. With almost the same number of parameters, LDM3D creates a latent space that can compress both the RGB images and the depth maps.
Two checkpoints are available for use:
- ldm3d-original. The original checkpoint used in the paper.
- ldm3d-4c. The new version of LDM3D, using 4-channel inputs instead of 6-channel inputs and finetuned on higher-resolution images (see the loading sketch below).
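Both checkpoints load through the same pipeline class. A minimal sketch, assuming the original checkpoint lives at the Hub repository Intel/ldm3d (the 4-channel one is Intel/ldm3d-4c, as used in the example further below):

>>> from diffusers import StableDiffusionLDM3DPipeline

>>> # Original 6-channel checkpoint from the paper (repo ID assumed to be "Intel/ldm3d")
>>> pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d")

>>> # Newer 4-channel checkpoint, finetuned on higher-resolution images
>>> pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c")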
The abstract from the paper is:
This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that generates both image and depth map data from a given text prompt, allowing users to generate RGBD images from text prompts. The LDM3D model is fine-tuned on a dataset of tuples containing an RGB image, depth map and caption, and validated through extensive experiments. We also develop an application called DepthFusion, which uses the generated RGB images and depth maps to create immersive and interactive 360-degree-view experiences using TouchDesigner. This technology has the potential to transform a wide range of industries, from entertainment and gaming to architecture and design. Overall, this paper presents a significant contribution to the field of generative AI and computer vision, and showcases the potential of LDM3D and DepthFusion to revolutionize content creation and digital experiences. A short video summarizing the approach can be found at this url.
Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!
StableDiffusionLDM3DPipeline[[diffusers.StableDiffusionLDM3DPipeline]]
__call__[[diffusers.StableDiffusionLDM3DPipeline.__call__]]
( prompt: str | list[str] = None, height: int | None = None, width: int | None = None, num_inference_steps: int = 49, timesteps: list = None, sigmas: list = None, guidance_scale: float = 5.0, negative_prompt: str | list[str] | None = None, num_images_per_prompt: int | None = 1, eta: float = 0.0, generator: torch.Generator | list[torch.Generator] | None = None, latents: torch.Tensor | None = None, prompt_embeds: torch.Tensor | None = None, negative_prompt_embeds: torch.Tensor | None = None, ip_adapter_image: PIL.Image.Image | numpy.ndarray | torch.Tensor | list[PIL.Image.Image] | list[numpy.ndarray] | list[torch.Tensor] | None = None, ip_adapter_image_embeds: list[torch.Tensor] | None = None, output_type: str | None = 'pil', return_dict: bool = True, cross_attention_kwargs: dict[str, Any] | None = None, guidance_rescale: float = 0.0, clip_skip: int | None = None, callback_on_step_end: Callable[[int, int], None] | None = None, callback_on_step_end_tensor_inputs: list = ['latents'], **kwargs )
The call function to the pipeline for generation.
Examples:
>>> from diffusers import StableDiffusionLDM3DPipeline
>>> pipe = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c")
>>> pipe = pipe.to("cuda")
>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> output = pipe(prompt)
>>> rgb_image, depth_image = output.rgb, output.depth
>>> rgb_image[0].save("astronaut_ldm3d_rgb.jpg")
>>> depth_image[0].save("astronaut_ldm3d_depth.png")
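If you prefer raw arrays over PIL images, the output type can be switched as in other Stable Diffusion pipelines; a sketch:

>>> output = pipe(prompt, output_type="np")
>>> rgb_array, depth_array = output.rgb, output.depth  # NumPy arrays of shape (batch, height, width, channels)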
Parameters:
prompt (str or list[str], optional) : The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds.
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) : The height in pixels of the generated image.
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) : The width in pixels of the generated image.
num_inference_steps (int, optional, defaults to 49) : The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
timesteps (list[int], optional) : Custom timesteps to use for the denoising process with schedulers which support a timesteps argument in their set_timesteps method. If not defined, the default behavior when num_inference_steps is passed will be used. Must be in descending order.
sigmas (list[float], optional) : Custom sigmas to use for the denoising process with schedulers which support a sigmas argument in their set_timesteps method. If not defined, the default behavior when num_inference_steps is passed will be used.
guidance_scale (float, optional, defaults to 5.0) : A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
negative_prompt (str or list[str], optional) : The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
num_images_per_prompt (int, optional, defaults to 1) : The number of images to generate per prompt.
eta (float, optional, defaults to 0.0) : Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers.
generator (torch.Generator or list[torch.Generator], optional) : A torch.Generator to make generation deterministic.
latents (torch.Tensor, optional) : Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator.
prompt_embeds (torch.Tensor, optional) : Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.
negative_prompt_embeds (torch.Tensor, optional) : Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
ip_adapter_image (PipelineImageInput, optional) : Optional image input to work with IP Adapters.
ip_adapter_image_embeds (list[torch.Tensor], optional) : Pre-generated image embeddings for IP-Adapter. It should be a list of length same as number of IP-adapters. Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding if do_classifier_free_guidance is set to True. If not provided, embeddings are computed from the ip_adapter_image input argument.
output_type (str, optional, defaults to "pil") : The output format of the generated image. Choose between PIL.Image or np.array.
return_dict (bool, optional, defaults to True) : Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
cross_attention_kwargs (dict, optional) : A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined in self.processor.
clip_skip (int, optional) : Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.
callback_on_step_end (Callable, optional) : A function that is called at the end of each denoising step during inference. The function is called with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs (a usage sketch follows the Returns section below).
callback_on_step_end_tensor_inputs (list, optional) : The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.
Returns:
[StableDiffusionPipelineOutput](/docs/diffusers/pr_12652/en/api/pipelines/stable_diffusion/gligen#diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput) or tuple
If return_dict is True, StableDiffusionPipelineOutput is returned,
otherwise a tuple is returned where the first element is a list with the generated images and the
second element is a list of bools indicating whether the corresponding generated image contains
"not-safe-for-work" (nsfw) content.
encode_prompt[[diffusers.StableDiffusionLDM3DPipeline.encode_prompt]]
Encodes the prompt into text encoder hidden states.
Parameters:
prompt (str or list[str], optional) : prompt to be encoded
device (torch.device) : torch device
num_images_per_prompt (int) : number of images that should be generated per prompt
do_classifier_free_guidance (bool) : whether to use classifier free guidance or not
negative_prompt (str or list[str], optional) : The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
prompt_embeds (torch.Tensor, optional) : Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument.
negative_prompt_embeds (torch.Tensor, optional) : Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input argument.
lora_scale (float, optional) : A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
clip_skip (int, optional) : Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.
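For instance, embeddings can be precomputed once and reused across calls; a minimal sketch, assuming the method returns the pair (prompt_embeds, negative_prompt_embeds):

>>> prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
...     "a photo of an astronaut riding a horse on mars",
...     device=pipe.device,
...     num_images_per_prompt=1,
...     do_classifier_free_guidance=True,
... )
>>> output = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds)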
get_guidance_scale_embedding[[diffusers.StableDiffusionLDM3DPipeline.get_guidance_scale_embedding]]
Parameters:
w (torch.Tensor) : Generate embedding vectors with a specified guidance scale to subsequently enrich timestep embeddings.
embedding_dim (int, optional, defaults to 512) : Dimension of the embeddings to generate.
dtype (torch.dtype, optional, defaults to torch.float32) : Data type of the generated embeddings.
Returns:
torch.Tensor
Embedding vectors with shape (len(w), embedding_dim).
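A minimal sketch of calling this helper directly (it is normally used internally when the UNet is conditioned on the guidance scale):

>>> import torch
>>> emb = pipe.get_guidance_scale_embedding(torch.tensor([7.5]), embedding_dim=256)
>>> emb.shape
torch.Size([1, 256])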
LDM3DPipelineOutput[[diffusers.pipelines.stable_diffusion_ldm3d.pipeline_stable_diffusion_ldm3d.LDM3DPipelineOutput]]
Output class for Stable Diffusion pipelines.
__call__[[diffusers.pipelines.stable_diffusion_ldm3d.pipeline_stable_diffusion_ldm3d.LDM3DPipelineOutput.__call__]]( *args, **kwargs ) : Call self as a function.
Parameters:
rgb (list[PIL.Image.Image] or np.ndarray) : list of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels).
depth (list[PIL.Image.Image] or np.ndarray) : list of denoised PIL depth maps of length batch_size or NumPy array of shape (batch_size, height, width, num_channels).
nsfw_content_detected (list[bool]) : list indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or None if safety checking could not be performed.
Upscaler
LDM3D-VR is an extended version of LDM3D.
The abstract from the paper is:
Latent diffusion models have proven to be state-of-the-art in the creation and manipulation of visual outputs. However, as far as we know, the generation of depth maps jointly with RGB is still limited. We introduce LDM3D-VR, a suite of diffusion models targeting virtual reality development that includes LDM3D-pano and LDM3D-SR. These models enable the generation of panoramic RGBD based on textual prompts and the upscaling of low-resolution inputs to high-resolution RGBD, respectively. Our models are fine-tuned from existing pretrained models on datasets containing panoramic/high-resolution RGB images, depth maps and captions. Both models are evaluated in comparison to existing related methods.
Two checkpoints are available for use:
- ldm3d-pano. This checkpoint enables the generation of panoramic images and requires the StableDiffusionLDM3DPipeline to be used.
- ldm3d-sr. This checkpoint enables the upscaling of RGB and depth images. It can be used in cascade after the original LDM3D pipeline via the StableDiffusionUpscaleLDM3DPipeline from the community pipelines (see the sketch after this list).
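A sketch of the cascade, assuming the community pipeline pipeline_stable_diffusion_upscale_ldm3d and its rgb, depth, and target_res arguments:

>>> from PIL import Image
>>> from diffusers import StableDiffusionLDM3DPipeline, DiffusionPipeline

>>> # Generate a low-resolution RGB/depth pair with the base pipeline
>>> pipe_ldm3d = StableDiffusionLDM3DPipeline.from_pretrained("Intel/ldm3d-4c").to("cuda")
>>> output = pipe_ldm3d("A picture of some lemons on a table")
>>> output.rgb[0].save("lemons_ldm3d_rgb.jpg")
>>> output.depth[0].save("lemons_ldm3d_depth.png")

>>> # Upscale the pair with the community upscaler pipeline
>>> pipe_upscale = DiffusionPipeline.from_pretrained(
...     "Intel/ldm3d-sr", custom_pipeline="pipeline_stable_diffusion_upscale_ldm3d"
... ).to("cuda")
>>> low_res_rgb = Image.open("lemons_ldm3d_rgb.jpg").convert("RGB")
>>> low_res_depth = Image.open("lemons_ldm3d_depth.png").convert("L")
>>> outputs = pipe_upscale(
...     prompt="high quality high resolution uhd 4k image",
...     rgb=low_res_rgb,
...     depth=low_res_depth,
...     num_inference_steps=50,
...     target_res=[1024, 1024],
... )
>>> outputs.rgb[0].save("upscaled_lemons_rgb.png")
>>> outputs.depth[0].save("upscaled_lemons_depth.png")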