# HiDreamImageTransformer2DModel

A Transformer model for image-like data from HiDream-I1.
The model can be loaded with the following code snippet.

```py
import torch

from diffusers import HiDreamImageTransformer2DModel

transformer = HiDreamImageTransformer2DModel.from_pretrained("HiDream-ai/HiDream-I1-Full", subfolder="transformer", torch_dtype=torch.bfloat16)
```
## Loading GGUF quantized checkpoints for HiDream-I1

GGUF checkpoints for the `HiDreamImageTransformer2DModel` can be loaded using [`~FromOriginalModelMixin.from_single_file`].
```py
import torch

from diffusers import GGUFQuantizationConfig, HiDreamImageTransformer2DModel

ckpt_path = "https://huggingface.co/city96/HiDream-I1-Dev-gguf/blob/main/hidream-i1-dev-Q2_K.gguf"
transformer = HiDreamImageTransformer2DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
```
## HiDreamImageTransformer2DModel
## Transformer2DModelOutput
The output of [`Transformer2DModel`].

**Parameters:**

- **sample** (`torch.Tensor` of shape `(batch_size, num_channels, height, width)`, or `(batch_size, num_vector_embeds - 1, num_latent_pixels)` if [`Transformer2DModel`] is discrete) — The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability distributions for the unnoised latent pixels.
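To make the two `sample` shapes above concrete without loading any weights, here is a minimal, hypothetical stand-in for the output dataclass (the real `Transformer2DModelOutput` holds a `torch.Tensor`; shape tuples are used here purely for illustration, and the dimension sizes are made up):

```python
from dataclasses import dataclass


@dataclass
class OutputSketch:
    # Stand-in for Transformer2DModelOutput: a single `sample` field.
    # The real field is a torch.Tensor; a shape tuple is used here instead.
    sample: tuple


# Continuous (image-like) case: (batch_size, num_channels, height, width),
# e.g. a batch of 2 latents with 16 channels at 128x128 (illustrative sizes).
continuous = OutputSketch(sample=(2, 16, 128, 128))

# Discrete case: (batch_size, num_vector_embeds - 1, num_latent_pixels),
# i.e. one probability distribution over codebook entries per latent pixel.
num_vector_embeds = 8192  # hypothetical codebook size
discrete = OutputSketch(sample=(2, num_vector_embeds - 1, 1024))

print(continuous.sample)
print(discrete.sample)
```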