# FOCUS: Efficient Keyframe Selection for Long Video Understanding

> 🎉 **NEWS**: Our paper has been accepted by ICLR 2026! 
> 
> 📄 [Read the paper on OpenReview](https://openreview.net/forum?id=1OQKqLFcbB)

![FOCUS Framework](fig/framework.png)

Multimodal large language models (MLLMs) represent images and video frames as visual tokens. Scaling from single images to hour-long videos, however, inflates the token budget far beyond practical limits. Popular pipelines therefore either subsample frames uniformly or perform keyframe selection with retrieval-style scoring from smaller vision-language models. Yet these keyframe selection methods still pre-filter frames before selection to keep inference cost down, and can miss the most informative moments.

We propose **FOCUS**, *Frame-Optimistic Confidence Upper-bound Selection*, a training-free, model-agnostic keyframe selection module that selects query-relevant frames under a strict token budget. FOCUS formulates keyframe selection as a combinatorial pure-exploration (CPE) problem in multi-armed bandits: it treats short temporal clips as arms, and uses empirical means together with Bernstein confidence radii to identify informative regions while continuing to explore uncertain ones. The resulting two-stage exploration-exploitation procedure is derived from a sequential policy with theoretical guarantees: it first identifies high-value temporal regions, then selects the top-scoring frames within each region.
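
For intuition, here is a minimal, self-contained sketch of the optimistic scoring idea, not the repository's implementation: the function names, the confidence parameters (`delta`, `score_range`), and the equal budget split across clips are all illustrative assumptions. Each clip's score is its empirical mean plus an empirical-Bernstein confidence radius, so under-sampled or high-variance clips remain candidates for exploration.

```python
import numpy as np

def bernstein_ucb(scores, delta=0.05, score_range=1.0):
    """Optimistic value of one clip (arm): empirical mean plus an
    empirical-Bernstein confidence radius computed from the frame
    relevance scores sampled so far from this clip."""
    scores = np.asarray(scores, dtype=float)
    n = len(scores)
    log_term = np.log(3.0 / delta)
    radius = np.sqrt(2.0 * scores.var() * log_term / n) \
             + 3.0 * score_range * log_term / n
    return scores.mean() + radius

def select_keyframes(clip_scores, clip_frame_ids, num_keyframes, num_clips=8):
    """Two-stage sketch: (1) rank clips by their Bernstein upper bound,
    (2) take the top-scoring frames inside the chosen clips."""
    # Stage 1: score each clip optimistically (exploitation + exploration).
    ucbs = [bernstein_ucb(s) for s in clip_scores]
    best_clips = np.argsort(ucbs)[::-1][:num_clips]
    # Stage 2: split the frame budget across the chosen clips (a naive
    # equal split here; the paper's allocation rule may differ) and keep
    # each clip's best frames by empirical score.
    budget = max(1, num_keyframes // num_clips)
    selected = []
    for c in best_clips:
        scores = np.asarray(clip_scores[c])
        top = np.argsort(scores)[::-1][:budget]
        selected.extend(clip_frame_ids[c][i] for i in top)
    return sorted(selected[:num_keyframes])

# Toy example: 4 clips, 5 sampled frames each, with global frame indices.
rng = np.random.default_rng(0)
clip_scores = [rng.random(5) for _ in range(4)]
clip_frame_ids = [list(range(c * 100, c * 100 + 5)) for c in range(4)]
print(select_keyframes(clip_scores, clip_frame_ids, num_keyframes=8, num_clips=4))
```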

On two long-video question-answering benchmarks, FOCUS delivers substantial accuracy improvements while processing less than 2% of video frames. For videos longer than 20 minutes, it achieves an 11.9% gain in accuracy on LongVideoBench, demonstrating its effectiveness as a keyframe selection method and providing a simple and general solution for scalable long-video understanding with MLLMs.


## Installation

1. First, follow the installation instructions from the [AKS repository](https://github.com/ncTimTang/AKS) to set up the environment and dependencies.

2. Then install the additional requirements:
```bash
pip install -r requirements.txt
```

## Usage

Run FOCUS keyframe extraction on LongVideoBench:

```bash
python select_keyframe.py \
    --dataset_name longvideobench \
    --dataset_path ./datasets/longvideobench \
    --output_dir focus_blip \
    --num_keyframes 64 \
    --batch_size 32 \
    --blip_model large
```
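
This command selects 64 keyframes per video, scoring frames with the large BLIP variant, and writes results to `focus_blip/`.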

## Evaluation

For evaluation, please follow the evaluation setup from the [lmms-eval repository](https://github.com/EvolvingLMMs-Lab/lmms-eval) and use the evaluation scripts provided in the [AKS repository](https://github.com/ncTimTang/AKS).

## Output

FOCUS generates the following outputs:

- `selected_frames.json`: Selected keyframe indices for each video
- `sampling_details.json`: Detailed sampling information including:
  - Coarse and fine sampling results
  - Arm information and FOCUS scores
  - Arm selection probabilities
  - Video metadata
- `extraction_stats.json`: Statistics about the extraction process
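
Downstream code can then load the selected indices directly. A small sketch is shown below; the exact JSON schema (video id mapped to a list of frame indices) is an assumption, so check the files produced by your own run:

```python
import json

# Load the per-video keyframe indices produced by select_keyframe.py.
# NOTE: the schema assumed here (video id -> list of frame indices) is
# illustrative; inspect your output files for the precise structure.
with open("focus_blip/selected_frames.json") as f:
    selected = json.load(f)

for video_id, frame_indices in selected.items():
    print(video_id, frame_indices[:5], "...")
```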


## Citation

If you find FOCUS useful for your research, please cite our paper:

```bibtex
@inproceedings{ziruiz2026focus,
  title={{FOCUS}: Efficient Keyframe Selection for Long Video Understanding},
  author={Zirui Zhu and Hailun Xu and Yang Luo and Yong Liu and Kanchan Sarkar and Zhenheng Yang and Yang You},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=1OQKqLFcbB}
}
```

## Acknowledgments

This work builds upon the excellent research from:
- [AKS: Adaptive Keyframe Sampling](https://github.com/ncTimTang/AKS) for the evaluation framework
- [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval) for multimodal evaluation