This page explains how to use hf-internal-testing/tiny-random-LlavaForConditionalGeneration-chat-template with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Transformers
How to use hf-internal-testing/tiny-random-LlavaForConditionalGeneration-chat-template with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "hf-internal-testing/tiny-random-LlavaForConditionalGeneration-chat-template",
    dtype="auto",
)
```

A fuller chat-template inference sketch follows the notebook links below.

- Notebooks
  - Google Colab
  - Kaggle
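Since this checkpoint is a LLaVA model that ships a chat template, a minimal end-to-end sketch looks like the following. It uses standard Transformers APIs (`AutoProcessor`, `LlavaForConditionalGeneration`, `apply_chat_template`); the prompt text and the synthetic input image are illustrative and not from the source page:

```python
# Hedged sketch of chat-template inference with this tiny random
# checkpoint; the prompt and synthetic image are illustrative only.
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "hf-internal-testing/tiny-random-LlavaForConditionalGeneration-chat-template"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id, dtype="auto")

# The chat template inserts the "<image>" placeholder for image content.
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is in this image?"},
        ],
    }
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

# A synthetic image; the processor resizes it to the model's input size.
image = Image.new("RGB", (224, 224))
inputs = processor(images=image, text=prompt, return_tensors="pt")

# The weights are random, so the decoded text is meaningless noise.
output_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```

Because the weights are random, this checkpoint is only useful for exercising the pipeline (for example in CI tests), not for producing meaningful output.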
Processor configuration:

```json
{
  "image_token": "<image>",
  "num_additional_image_tokens": 0,
  "patch_size": 14,
  "processor_class": "LlavaProcessor",
  "vision_feature_select_strategy": "full"
}
```
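These fields determine how many `<image>` placeholder tokens the processor expands per image. A minimal sketch of that arithmetic, mirroring the logic in recent transformers releases (the 224×224 input size is an assumed example, not part of the config):

```python
# How LlavaProcessor derives the per-image placeholder count from this
# config: a patch grid over the image, plus any additional tokens, minus
# the CLS feature only when the select strategy is "default".
def num_image_tokens(
    height: int,
    width: int,
    patch_size: int = 14,
    num_additional_image_tokens: int = 0,
    vision_feature_select_strategy: str = "full",
) -> int:
    tokens = (height // patch_size) * (width // patch_size)
    tokens += num_additional_image_tokens
    # "default" drops the CLS feature; "full" keeps every feature.
    if vision_feature_select_strategy == "default":
        tokens -= 1
    return tokens

print(num_image_tokens(224, 224))  # (224 // 14) ** 2 = 256
```

With `vision_feature_select_strategy` set to `"full"` and `num_additional_image_tokens` at 0, the count is just the number of 14×14 patches that fit in the processed image.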