CODE:
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# switch to "mps" on Apple devices
pipe = DiffusionPipeline.from_pretrained("FireRedTeam/FireRed-Image-Edit-1.0", dtype=torch.bfloat16, device_map="cuda")

prompt = "Turn this cat into a dog"
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")

image = pipe(image=input_image, prompt=prompt).images[0]
```

ERROR:
```
Traceback (most recent call last):
  File "/tmp/FireRedTeam_FireRed-Image-Edit-1.0_04YcQbF.py", line 28, in <module>
    pipe = DiffusionPipeline.from_pretrained("FireRedTeam/FireRed-Image-Edit-1.0", dtype=torch.bfloat16, device_map="cuda")
  File "/tmp/.cache/uv/environments-v2/9df62dfb1ee82937/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
    return fn(*args, **kwargs)
  File "/tmp/.cache/uv/environments-v2/9df62dfb1ee82937/lib/python3.13/site-packages/diffusers/pipelines/pipeline_utils.py", line 1021, in from_pretrained
    loaded_sub_model = load_sub_model(
        library_name=library_name,
    ...<21 lines>...
        quantization_config=quantization_config,
    )
  File "/tmp/.cache/uv/environments-v2/9df62dfb1ee82937/lib/python3.13/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 876, in load_sub_model
    loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
  File "/tmp/.cache/uv/environments-v2/9df62dfb1ee82937/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
    return fn(*args, **kwargs)
  File "/tmp/.cache/uv/environments-v2/9df62dfb1ee82937/lib/python3.13/site-packages/diffusers/models/modeling_utils.py", line 1296, in from_pretrained
    ) = cls._load_pretrained_model(
        model,
    ...<13 lines>...
        is_parallel_loading_enabled=is_parallel_loading_enabled,
    )
  File "/tmp/.cache/uv/environments-v2/9df62dfb1ee82937/lib/python3.13/site-packages/diffusers/models/modeling_utils.py", line 1635, in _load_pretrained_model
    _caching_allocator_warmup(model, expanded_device_map, dtype, hf_quantizer)
  File "/tmp/.cache/uv/environments-v2/9df62dfb1ee82937/lib/python3.13/site-packages/diffusers/models/model_loading_utils.py", line 751, in _caching_allocator_warmup
    _ = torch.empty(warmup_elems, dtype=dtype, device=device, requires_grad=False)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 38.05 GiB. GPU 0 has a total capacity of 22.03 GiB of which 21.84 GiB is free. Including non-PyTorch memory, this process has 186.00 MiB memory in use. Of the allocated memory 0 bytes is allocated by PyTorch, and 0 bytes is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
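The failure is a plain capacity problem, not fragmentation: the caching-allocator warmup tries a single 38.05 GiB allocation on a 22.03 GiB GPU. Since bf16 stores 2 bytes per element, that one allocation alone corresponds to roughly 20.4 billion elements. A quick sanity check, using only the two figures quoted in the traceback:

```python
# Figures taken from the OutOfMemoryError message above.
GIB = 2**30
requested_gib = 38.05     # size of the warmup allocation that failed
gpu_capacity_gib = 22.03  # total capacity of GPU 0

# bf16 is 2 bytes per element, so the allocation covers this many elements:
elements = requested_gib * GIB / 2
print(f"~{elements / 1e9:.1f}B bf16 elements requested")  # ~20.4B

# The single allocation exceeds the entire card, so no allocator tuning can help:
print(f"shortfall: {requested_gib - gpu_capacity_gib:.2f} GiB")  # shortfall: 16.02 GiB
```

Typical mitigations in diffusers are loading without `device_map` and calling `pipe.enable_model_cpu_offload()` so submodules occupy the GPU only while they execute, or loading a quantized checkpoint; whether either fits this 22 GiB card depends on the model's full footprint, which the traceback does not state.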