runtime error
Exit code: 1. Reason:
    cls(config, *model_args, **model_kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/nvidia/llama_hyphen_nemotron_hyphen_embed_hyphen_vl_hyphen_1b_hyphen_v2/964a06114b12eeae956ab70726dd829c30687a2f/modeling_llama_nemotron_vl.py", line 256, in __init__
    super().__init__(config)
    ~~~~~~~~~~~~~~~~^^^^^^^^
  File "/usr/local/lib/python3.13/site-packages/transformers/modeling_utils.py", line 2076, in __init__
    self.config._attn_implementation_internal = self._check_and_adjust_attn_implementation(
                                                ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
        self.config._attn_implementation, is_init_check=True
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/usr/local/lib/python3.13/site-packages/transformers/modeling_utils.py", line 2686, in _check_and_adjust_attn_implementation
    applicable_attn_implementation = self.get_correct_attn_implementation(
        applicable_attn_implementation, is_init_check
    )
  File "/usr/local/lib/python3.13/site-packages/transformers/modeling_utils.py", line 2725, in get_correct_attn_implementation
    raise e
  File "/usr/local/lib/python3.13/site-packages/transformers/modeling_utils.py", line 2722, in get_correct_attn_implementation
    self._sdpa_can_dispatch(is_init_check)
    ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.13/site-packages/transformers/modeling_utils.py", line 2574, in _sdpa_can_dispatch
    raise ValueError(
    ...<3 lines>...
    )
ValueError: LlamaNemotronVLModel does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please request the support for this architecture: https://github.com/huggingface/transformers/issues/28005. If you believe this error is a bug, please open an issue in Transformers GitHub repository and load your model with the argument `attn_implementation="eager"` meanwhile. Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="eager")`
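The workaround the error message itself proposes is to force the eager attention implementation so transformers never runs the SDPA dispatch check that raises here. A minimal sketch, assuming the repo id is nvidia/llama-nemotron-embed-vl-1b-v2 (reconstructed from the cached module path above, where hyphens appear sanitized as "_hyphen_") and that the repo's custom modeling code requires trust_remote_code=True:

    from transformers import AutoModel

    # Assumed repo id, reconstructed from the cached module path in the
    # traceback; verify against the actual model page before relying on it.
    MODEL_ID = "nvidia/llama-nemotron-embed-vl-1b-v2"

    model = AutoModel.from_pretrained(
        MODEL_ID,
        trust_remote_code=True,       # the repo ships custom modeling code
        attn_implementation="eager",  # bypass the unsupported SDPA path
    )

If the custom modeling code cannot be changed, pinning transformers to a version under which this model previously loaded may also avoid the check, but the eager fallback is the only fix the traceback explicitly recommends.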