Model type should be one of AriaConfig, AyaVisionConfig, BlipConfig, Blip2Config, ChameleonConfig, DeepseekVLConfig, DeepseekVLHybridConfig, Emu3Config, EvollaConfig, FuyuConfig, Gemma3Config, Gemma3nConfig, GitConfig, Glm4vConfig, GotOcr2Config, IdeficsConfig, Idefics2Config, Idefics3Config, InstructBlipConfig, Intern...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/.cache/uv/environments-v2/5826d58e4a076461/lib/python3.13/site-packages/transformers/pipelines/base.py", line 310, in infer_framework_load_model
    model = model_class.from_pretrained(model, **fp32_kwargs)
  File "/tmp/.cache/uv/environments-v2/5826d58e4a076461/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 603, in from_pretrained
    raise ValueError(
    ...<2 lines>...
    )
ValueError: Unrecognized configuration class <class 'transformers_modules.kakaocorp.kanana-1.5-v-3b-instruct.5dc44a339df77ebc6107411826db82820ef0f36a.configuration.KananaVConfig'> for this kind of AutoModel: AutoModelForImageTextToText.
Model type should be one of AriaConfig, AyaVisionConfig, BlipConfig, Blip2Config, ChameleonConfig, DeepseekVLConfig, DeepseekVLHybridConfig, Emu3Config, EvollaConfig, FuyuConfig, Gemma3Config, Gemma3nConfig, GitConfig, Glm4vConfig, GotOcr2Config, IdeficsConfig, Idefics2Config, Idefics3Config, InstructBlipConfig, Intern...
Traceback (most recent call last):
  File "/tmp/kakaocorp_kanana-1.5-v-3b-instruct_1M9iNX0.py", line 12, in <module>
    model = AutoModelForVision2Seq.from_pretrained("kakaocorp/kanana-1.5-v-3b-instruct", trust_remote_code=True, torch_dtype="auto"),
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/.cache/uv/environments-v2/510ad75317545b08/lib/python3.13/site-packages/transformers/models/auto/modeling_auto.py", line 2165, in from_pretrained
    return super().from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
           ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/.cache/uv/environments-v2/510ad75317545b08/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 582, in from_pretrained
    model_class = get_class_from_dynamic_module(
        class_ref, pretrained_model_name_or_path, code_revision=code_revision, **hub_kwargs, **kwargs
    )
  File "/tmp/.cache/uv/environments-v2/510ad75317545b08/lib/python3.13/site-packages/transformers/dynamic_module_utils.py", line 570, in get_class_from_dynamic_module
    final_module = get_cached_module_file(
        repo_id,
        ...<8 lines>...
        repo_type=repo_type,
    )
  File "/tmp/.cache/uv/environments-v2/510ad75317545b08/lib/python3.13/site-packages/transformers/dynamic_module_utils.py", line 393, in get_cached_module_file
    modules_needed = check_imports(resolved_module_file)
  File "/tmp/.cache/uv/environments-v2/510ad75317545b08/lib/python3.13/site-packages/transformers/dynamic_module_utils.py", line 225, in check_imports
    raise ImportError(
    ...<2 lines>...
    )
ImportError: This modeling file requires the following packages that were not found in your environment: einops, timm. Run `pip install einops timm`
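
The two kanana-1.5-v-3b-instruct failures above are distinct: in the first run, AutoModelForImageTextToText's config mapping has no entry for the remote KananaVConfig, and in the second run the repository's remote modeling code cannot even be imported until einops and timm are installed. A minimal retry sketch, assuming `pip install einops timm` has been run as the ImportError instructs, and assuming the repo's remote code is reachable through the generic AutoModel entry point (an assumption, not verified against the repository):

# Hypothetical retry; assumes `pip install einops timm` has already run.
from transformers import AutoModel, AutoProcessor

model_id = "kakaocorp/kanana-1.5-v-3b-instruct"

# Load through the generic AutoModel entry point with remote code enabled,
# since AutoModelForImageTextToText rejects the custom KananaVConfig.
# (That AutoModel accepts this config is an assumption about the remote code.)
model = AutoModel.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype="auto",
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

Note also that the failing script's from_pretrained call ends with a stray trailing comma, so even without the ImportError it would have bound `model` to a one-element tuple.
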
Traceback (most recent call last):
  File "/tmp/lmms-lab_LLaVA-OneVision-1.5-8B-Instruct_08WPyxp.py", line 16, in <module>
    pipe = pipeline("image-text-to-text", model="lmms-lab/LLaVA-OneVision-1.5-8B-Instruct", trust_remote_code=True)
  File "/tmp/.cache/uv/environments-v2/6bec991289c4b833/lib/python3.13/site-packages/transformers/pipelines/__init__.py", line 1028, in pipeline
    framework, model = infer_framework_load_model(
                       ~~~~~~~~~~~~~~~~~~~~~~~~~~^
        adapter_path if adapter_path is not None else model,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        ...<5 lines>...
        **model_kwargs,
        ^^^^^^^^^^^^^^^
    )
    ^
  File "/tmp/.cache/uv/environments-v2/6bec991289c4b833/lib/python3.13/site-packages/transformers/pipelines/base.py", line 333, in infer_framework_load_model
    raise ValueError(
        f"Could not load model {model} with any of the following classes: {class_tuple}. See the original errors:\n\n{error}\n"
    )
ValueError: Could not load model lmms-lab/LLaVA-OneVision-1.5-8B-Instruct with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForImageTextToText'>,). See the original errors:

while loading with AutoModelForImageTextToText, an error is thrown:
Traceback (most recent call last):
  File "/tmp/.cache/uv/environments-v2/6bec991289c4b833/lib/python3.13/site-packages/transformers/pipelines/base.py", line 293, in infer_framework_load_model
    model = model_class.from_pretrained(model, **kwargs)
  File "/tmp/.cache/uv/environments-v2/6bec991289c4b833/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 607, in from_pretrained
    raise ValueError(
    ...<2 lines>...
    )
ValueError: Unrecognized configuration class <class 'transformers_modules.lmms-lab.LLaVA-OneVision-1.5-8B-Instruct.ad0b3fcb0ae3453201072ab280550f3fcf0e806b.configuration_llavaonevision1_5.Llavaonevision1_5Config'> for this kind of AutoModel: AutoModelForImageTextToText.
Model type should be one of AriaConfig, AyaVisionConfig, BlipConfig, Blip2Config, ChameleonConfig, Cohere2VisionConfig, DeepseekVLConfig, DeepseekVLHybridConfig, Emu3Config, EvollaConfig, Florence2Config, FuyuConfig, Gemma3Config, Gemma3nConfig, GitConfig, Glm4vConfig, Glm4vMoeConfig, GotOcr2Config, IdeficsConfig, Idef...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/.cache/uv/environments-v2/6bec991289c4b833/lib/python3.13/site-packages/transformers/pipelines/base.py", line 311, in infer_framework_load_model
    model = model_class.from_pretrained(model, **fp32_kwargs)
  File "/tmp/.cache/uv/environments-v2/6bec991289c4b833/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 607, in from_pretrained
    raise ValueError(
    ...<2 lines>...
    )
ValueError: Unrecognized configuration class <class 'transformers_modules.lmms-lab.LLaVA-OneVision-1.5-8B-Instruct.ad0b3fcb0ae3453201072ab280550f3fcf0e806b.configuration_llavaonevision1_5.Llavaonevision1_5Config'> for this kind of AutoModel: AutoModelForImageTextToText.
Model type should be one of AriaConfig, AyaVisionConfig, BlipConfig, Blip2Config, ChameleonConfig, Cohere2VisionConfig, DeepseekVLConfig, DeepseekVLHybridConfig, Emu3Config, EvollaConfig, Florence2Config, FuyuConfig, Gemma3Config, Gemma3nConfig, GitConfig, Glm4vConfig, Glm4vMoeConfig, GotOcr2Config, IdeficsConfig, Idef...
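
The LLaVA-OneVision-1.5 failure is the same config-mapping problem: pipeline("image-text-to-text") tries only AutoModelForImageTextToText, which rejects the remote Llavaonevision1_5Config. One possible workaround is to construct the model and processor first and hand the objects to pipeline(), so infer_framework_load_model never has to guess a class. This is a sketch under the assumption that the repo's remote code loads via the generic AutoModel entry point (not verified):

# Sketch of a workaround; AutoModel support is an assumption about the
# repository's remote code, not something the traceback confirms.
from transformers import AutoModel, AutoProcessor, pipeline

model_id = "lmms-lab/LLaVA-OneVision-1.5-8B-Instruct"

model = AutoModel.from_pretrained(model_id, trust_remote_code=True, torch_dtype="auto")
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Passing a preloaded model object skips the class inference that raised
# "Could not load model ... with any of the following classes" above.
pipe = pipeline("image-text-to-text", model=model, processor=processor)
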
Everything was good in lmms-lab_LLaVA-OneVision-1.5-8B-Instruct_1.txt
No suitable GPU found for meituan-longcat/LongCat-Flash-Chat-FP8 | 2721.14 GB VRAM requirement
No suitable GPU found for meituan-longcat/LongCat-Flash-Chat | 1360.52 GB VRAM requirement
No suitable GPU found for meituan-longcat/LongCat-Flash-Thinking | 1360.52 GB VRAM requirement
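
The LongCat figures behave like a weight-only estimate of parameter count times bytes per parameter in decimal gigabytes: 1360.52 GB at 2 bytes per parameter implies roughly 680B parameters, and the FP8 repo's 2721.14 GB matches the same count at 4 bytes, as if the estimator upcast fp8 weights to fp32. A back-of-the-envelope check, where the 680B figure is inferred from these numbers rather than taken from any model card:

# Weight-only VRAM estimate: ignores KV cache, activations, and overhead.
def weight_vram_gb(num_params: float, bytes_per_param: float) -> float:
    return num_params * bytes_per_param / 1e9  # decimal GB, as in the log

params = 680e9  # inferred from the logged figures, not from the model card

print(weight_vram_gb(params, 2))  # 1360.0 -- close to the 1360.52 GB logged
print(weight_vram_gb(params, 4))  # 2720.0 -- close to the 2721.14 GB logged
                                  # for the FP8 repo (fp32-upcast reading)
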
Everything was good in meta-llama_Llama-3.2-3B-Instruct_0.txt
Everything was good in meta-llama_Llama-3.2-3B-Instruct_1.txt
Everything was good in meta-llama_Meta-Llama-3-8B-Instruct_0.txt
Traceback (most recent call last):
  File "/tmp/microsoft_Phi-4-mini-flash-reasoning_0WYtVxq.py", line 13, in <module>
    pipe = pipeline("text-generation", model="microsoft/Phi-4-mini-flash-reasoning", trust_remote_code=True)
  File "/tmp/.cache/uv/environments-v2/4926ae9e86f11c3f/lib/python3.13/site-packages/transformers/pipelines/__init__.py", line 1008, in pipeline
    framework, model = infer_framework_load_model(
                       ~~~~~~~~~~~~~~~~~~~~~~~~~~^
        adapter_path if adapter_path is not None else model,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        ...<5 lines>...