# JoyImageEditTransformer3DModel

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import JoyImageEditTransformer3DModel

transformer = JoyImageEditTransformer3DModel.from_pretrained(
    "jdopensource/JoyAI-Image-Edit-Diffusers", subfolder="transformer", torch_dtype=torch.bfloat16
)
```

## JoyImageEditTransformer3DModel[[diffusers.JoyImageEditTransformer3DModel]]

#### diffusers.JoyImageEditTransformer3DModel[[diffusers.JoyImageEditTransformer3DModel]]

[Source](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/transformers/transformer_joyimage.py#L372)

JoyImage Transformer model for image generation and editing.

A dual-stream DiT architecture with WAN-style conditioning embeddings and custom rotary position embeddings.
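For intuition, rotary position embeddings (RoPE) encode a token's position by rotating consecutive feature pairs by position-dependent angles. The sketch below is a generic, minimal RoPE illustration in plain Python; it is not JoyImage's exact custom variant, and the function names and `base` value are assumptions for the example only.

```python
import math

def rope_angles(pos, dim, base=10000.0):
    # Standard RoPE frequencies: one rotation angle per feature pair.
    # NOTE: generic illustration, not the model's exact custom variant.
    return [pos / base ** (2 * i / dim) for i in range(dim // 2)]

def apply_rope(vec, pos, base=10000.0):
    # Rotate consecutive feature pairs (x0, x1), (x2, x3), ...
    dim = len(vec)
    out = []
    for i, theta in enumerate(rope_angles(pos, dim, base)):
        x0, x1 = vec[2 * i], vec[2 * i + 1]
        c, s = math.cos(theta), math.sin(theta)
        out += [x0 * c - x1 * s, x0 * s + x1 * c]
    return out

# Position 0 gives zero rotation, so the vector is unchanged.
print(apply_rope([1.0, 0.0, 1.0, 0.0], pos=0))
```

Because each pair is rotated rather than translated, RoPE preserves vector norms and makes attention scores depend on relative positions.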

## Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

#### diffusers.models.modeling_outputs.Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

[Source](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/modeling_outputs.py#L21)

The output of [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel).

**Parameters:**

sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch_size, num_vector_embeds - 1, num_latent_pixels)` if [Transformer2DModel](/docs/diffusers/main/en/api/models/transformer2d#diffusers.Transformer2DModel) is discrete) : The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability distributions for the unnoised latent pixels.

