CODE:
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained("FireRedTeam/FireRed-Image-Edit-1.1", dtype=torch.bfloat16, device_map="cuda")

prompt = "Turn this cat into a dog"
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")
image = pipe(image=input_image, prompt=prompt).images[0]
```

ERROR:
```
Traceback (most recent call last):
  File "/tmp/FireRedTeam_FireRed-Image-Edit-1.1_0iZMQUp.py", line 28, in <module>
    pipe = DiffusionPipeline.from_pretrained("FireRedTeam/FireRed-Image-Edit-1.1", dtype=torch.bfloat16, device_map="cuda")
  File "/tmp/.cache/uv/environments-v2/5f991ed4fb3ad9cc/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
    return fn(*args, **kwargs)
  File "/tmp/.cache/uv/environments-v2/5f991ed4fb3ad9cc/lib/python3.13/site-packages/diffusers/pipelines/pipeline_utils.py", line 1049, in from_pretrained
    loaded_sub_model = load_sub_model(
        library_name=library_name,
        ...<22 lines>...
        quantization_config=quantization_config,
    )
  File "/tmp/.cache/uv/environments-v2/5f991ed4fb3ad9cc/lib/python3.13/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 885, in load_sub_model
    loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
  File "/tmp/.cache/uv/environments-v2/5f991ed4fb3ad9cc/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
    return fn(*args, **kwargs)
  File "/tmp/.cache/uv/environments-v2/5f991ed4fb3ad9cc/lib/python3.13/site-packages/diffusers/models/modeling_utils.py", line 1312, in from_pretrained
    ) = cls._load_pretrained_model(
        model,
        ...<14 lines>...
        disable_mmap=disable_mmap,
    )
  File "/tmp/.cache/uv/environments-v2/5f991ed4fb3ad9cc/lib/python3.13/site-packages/diffusers/models/modeling_utils.py", line 1652, in _load_pretrained_model
    _caching_allocator_warmup(model, expanded_device_map, dtype, hf_quantizer)
  File "/tmp/.cache/uv/environments-v2/5f991ed4fb3ad9cc/lib/python3.13/site-packages/diffusers/models/model_loading_utils.py", line 758, in _caching_allocator_warmup
    _ = torch.empty(warmup_elems, dtype=dtype, device=device, requires_grad=False)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 38.05 GiB. GPU 0 has a total capacity of 22.03 GiB of which 5.91 GiB is free. Including non-PyTorch memory, this process has 16.11 GiB memory in use. Of the allocated memory 15.93 GiB is allocated by PyTorch, and 2.12 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
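The failure is a CUDA out-of-memory error during checkpoint loading: the bf16 weights need roughly 38 GiB while the GPU only has 22 GiB, so `device_map="cuda"` cannot place the whole pipeline on the device. A minimal sketch of a possible workaround, assuming this pipeline supports the standard diffusers CPU-offload hooks (untested for this model):

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# Load on CPU first (no device_map="cuda"), then let diffusers move one
# pipeline component at a time onto the GPU as it is needed during inference.
pipe = DiffusionPipeline.from_pretrained(
    "FireRedTeam/FireRed-Image-Edit-1.1",  # same checkpoint as above
    dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # trades speed for a much smaller peak VRAM footprint

prompt = "Turn this cat into a dog"
input_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png"
)
image = pipe(image=input_image, prompt=prompt).images[0]
```

If whole-component offload still does not fit, `pipe.enable_sequential_cpu_offload()` offloads at a finer granularity (slower, but lighter on VRAM), and quantized loading (e.g. 4-bit via bitsandbytes) is another option, provided the model supports it.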