Part of the Eagle collection: a family of frontier vision-language models built with data-centric strategies, supporting both HD image and long-context video input.
How to use nvidia/VideoITG-8B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="nvidia/VideoITG-8B")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoModelForImageTextToText

model = AutoModelForImageTextToText.from_pretrained("nvidia/VideoITG-8B", dtype="auto")
```
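Neither snippet above runs generation end to end. Below is a minimal sketch of the full round trip, assuming the checkpoint loads through the standard Auto classes and ships a processor with a chat template (both are assumptions; the model may require `trust_remote_code=True`, so check the files on the Hub first):

```python
# Minimal end-to-end sketch: chat template -> generate -> decode.
# Assumes this checkpoint works with the standard Auto classes and that
# its processor supports the usual chat-template API; verify before relying on it.
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "nvidia/VideoITG-8B"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, dtype="auto", device_map="auto")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]

# Tokenize the chat (including the image) and move tensors to the model's device.
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=64)
# Strip the prompt tokens so only the newly generated answer is decoded.
answer = processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```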
How to use nvidia/VideoITG-8B with vLLM:

```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "nvidia/VideoITG-8B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nvidia/VideoITG-8B",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {
            "type": "image_url",
            "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}
          }
        ]
      }
    ]
  }'
```
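Because the endpoint is OpenAI-compatible, the same request can also be made from Python. A small sketch using the openai client (the `openai` package is an extra dependency here, not something vLLM installs for you):

```python
# Same request as the curl call above, via the OpenAI-compatible API.
# Assumes `pip install openai`; the local server needs no real API key,
# so any placeholder string works.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="nvidia/VideoITG-8B",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```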
How to use nvidia/VideoITG-8B with SGLang:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "nvidia/VideoITG-8B" \
  --host 0.0.0.0 \
  --port 30000
```
```bash
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nvidia/VideoITG-8B",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {
            "type": "image_url",
            "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}
          }
        ]
      }
    ]
  }'
```

Alternatively, launch the server with Docker instead of installing from pip:

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "nvidia/VideoITG-8B" \
  --host 0.0.0.0 \
  --port 30000
```
Call the Dockerized server with the same curl request shown above.
How to use nvidia/VideoITG-8B with Docker Model Runner:
```bash
docker model run hf.co/nvidia/VideoITG-8B
```
[🌐Homepage] [💻GitHub] [📜Tech Report] [🤗VideoITG-40K]
VideoITG-8B is a multimodal video understanding model trained with instructed temporal grounding. It enhances Video Large Language Models through intelligent frame selection, tackling the complexities of real-world video scenarios by aligning frame sampling with user instructions. Please check our paper for more details.
| Model | Base Model | Frames | LongVideoBench | MLVU | VideoMME | CG-Bench |
|---|---|---|---|---|---|---|
| VideoITG-7B | InternVL2.5-8B | 32 | 61.9 (+2.9%) | 75.0 (+7.8%) | 67.3 (+4.0%) | 46.7 (+7.0%) |
| VideoITG-7B | InternVL2.5-26B | 32 | 63.0 (+1.0%) | 78.9 (+6.1%) | 69.9 (+2.5%) | 48.7 (+6.0%) |
| VideoITG-7B | LLaVA-Video-7B | 32 | 61.6 (+3.6%) | 74.6 (+8.6%) | 66.1 (+3.0%) | 42.8 (+9.0%) |
| VideoITG-7B | LLaVA-Video-7B | 64 | 60.9 (+7.4%) | 76.3 (+7.6%) | 66.4 (+1.9%) | 42.9 (+8.1%) |
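To make the frame-selection idea concrete, here is an illustrative sketch of the loop the description implies: uniformly decode candidate frames, score each against the user's instruction, and pass only the top-ranked frames (restored to temporal order) to a Video LLM. The function `score_frame_relevance` is a hypothetical placeholder for VideoITG's grounding model, not a published API:

```python
# Illustrative sketch of instruction-guided frame selection, the general
# idea behind VideoITG. Requires `pip install opencv-python`.
import cv2


def score_frame_relevance(frame, instruction: str) -> float:
    """Hypothetical placeholder: in VideoITG this scoring is performed by
    the grounding model itself. Substitute a real relevance score here."""
    raise NotImplementedError("plug in the VideoITG grounding model")


def sample_candidate_frames(video_path: str, num_candidates: int = 128):
    """Uniformly decode candidate frames as (frame_index, image) pairs."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    step = max(total // num_candidates, 1)
    frames = []
    for idx in range(0, total, step):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if ok:
            frames.append((idx, frame))
    cap.release()
    return frames


def select_frames(video_path: str, instruction: str, k: int = 32):
    """Keep the k frames most relevant to the instruction, in temporal order."""
    candidates = sample_candidate_frames(video_path)
    scored = [(score_frame_relevance(f, instruction), idx, f) for idx, f in candidates]
    top = sorted(scored, key=lambda t: t[0], reverse=True)[:k]
    # Restore temporal order before handing the frames to the Video LLM.
    return [f for _, _, f in sorted(top, key=lambda t: t[1])]
```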
If you find this project useful, please cite our work:
```bibtex
@article{wang2025videoitg,
  title   = {VideoITG: Multimodal Video Understanding with Instructed Temporal Grounding},
  author  = {Shihao Wang and Guo Chen and De-An Huang and Zhiqi Li and Minghan Li and Guilin Liu and Jose M. Alvarez and Lei Zhang and Zhiding Yu},
  journal = {arXiv preprint arXiv:2507.13353},
  year    = {2025}
}
```