
LIBRA: Long Input Benchmark for Russian Analysis


LIBRA (Long Input Benchmark for Russian Analysis) is designed to evaluate the capabilities of large language models (LLMs) in understanding and processing long texts in Russian. This benchmark includes 18 datasets adapted for different tasks and complexities. The tasks are divided into four complexity groups and allow evaluation across various context lengths ranging from 4k up to 512k tokens.

For model comparison and results, see the LIBRA Leaderboard. The benchmark is described in detail in our paper.

NOTE: This is a new benchmark version released in May 2026. The original version (described in the paper) can be found here. We strongly encourage using the current version as it contains cleaned and extended datasets with ensured data quality.

LIBRA Mini

Running a full LIBRA evaluation can be prohibitively expensive and time-consuming due to the large number of datasets and long context lengths involved. Moreover, some of the included tasks have become less informative as benchmarks, with modern models achieving near-saturated scores on them.

To address this, we introduce LIBRA Mini — a compact, curated subset of 6 datasets selected from the full benchmark. These datasets represent the most challenging and diagnostically informative tasks in LIBRA, covering diverse task types and complexity levels (see Task Description for more information):

  • ruBABILongQA3 — multi-fact reasoning over long contexts
  • ruSciPassageCount — counting unique paragraphs in extended scientific texts
  • LibrusecMHQA — multi-hop QA with information spread across multiple text parts
  • LongContextMultiQ — multi-hop QA based on Wikidata and Wikipedia
  • ru2WikiMultihopQA — multi-hop reasoning across multiple Wikipedia articles
  • MatreshkaNames — identifying persons in dialogues based on discussed topics

LIBRA Mini uses the same Exact Match (EM) metric and evaluation methodology as the full benchmark. Results for LIBRA Mini are reported in a dedicated section on the leaderboard.

We recommend using LIBRA Mini as the primary evaluation suite for model comparisons, while the full LIBRA benchmark remains available for comprehensive analysis.

Dataset Structure

The datasets are divided into subsets based on context lengths. The table below shows the number of examples per context length for each dataset. Datasets included in LIBRA Mini are highlighted in bold. Note that not all datasets cover the full range of context lengths — some are designed for specific length ranges that best suit their task type.

| Task | 4k | 8k | 16k | 32k | 64k | 128k | 256k | 512k | Total |
|---|---|---|---|---|---|---|---|---|---|
| — Group I — | | | | | | | | | |
| passkey | 200 | 200 | 200 | 200 | 200 | 200 | 200 | 200 | 1600 |
| passkey_with_librusec | 200 | 200 | 200 | 200 | 200 | 200 | 200 | 200 | 1600 |
| — Group II — | | | | | | | | | |
| librusec_history | - | 32 | 32 | 32 | 32 | - | - | - | 128 |
| **matreshka_names** | 150 | 150 | 145 | 150 | 45 | - | - | - | 640 |
| matreshka_yes_no | 300 | 300 | 300 | 300 | 290 | 280 | - | - | 1770 |
| ru_quality | - | 18 | 184 | - | - | - | - | - | 202 |
| ru_sci_abstract_retrieval | 209 | 210 | 210 | 206 | 185 | 200 | 200 | - | 1420 |
| ru_sci_fi | - | - | - | 216 | 213 | - | - | - | 429 |
| ru_tpo | - | 900 | - | - | - | - | - | - | 900 |
| — Group III — | | | | | | | | | |
| **librusec_mhqa** | - | 384 | - | - | - | - | - | - | 384 |
| **long_context_multiq** | 158 | 121 | 83 | 109 | 41 | - | - | - | 512 |
| **ru_2wikimultihopqa** | - | 147 | 384 | 369 | - | - | - | - | 900 |
| ru_babilong_qa1 | 99 | 99 | 99 | 94 | 91 | 99 | 200 | 98 | 879 |
| ru_babilong_qa2 | 85 | 76 | 77 | 73 | 65 | 99 | 200 | 98 | 773 |
| **ru_babilong_qa3** | 60 | 68 | 69 | 65 | 65 | 100 | 198 | 98 | 723 |
| ru_babilong_qa4 | 78 | 85 | 83 | 87 | 75 | 99 | 200 | 98 | 805 |
| ru_babilong_qa5 | 99 | 99 | 98 | 96 | 96 | 99 | 200 | 98 | 885 |
| — Group IV — | | | | | | | | | |
| **ru_sci_passage_count** | 104 | 99 | 100 | 87 | 91 | 90 | 43 | 60 | 674 |
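
For programmatic access, the sketch below loads one task and keeps only the examples of a single context length. It assumes the task names from the table double as dataset configurations, that a test split is available, and that examples expose length, context, input and positive_outputs fields; adjust these names if the actual schema differs.

from datasets import load_dataset

# Minimal sketch (config name, split and field names are assumptions):
# load one LIBRA task and keep only the 8k-context examples.
ds = load_dataset("ai-forever/LIBRA", "librusec_history", split="test")
subset_8k = ds.filter(lambda ex: ex["length"] == "8k")

sample = subset_8k[0]
print(sample["input"])             # question asked over the long context
print(sample["positive_outputs"])  # list of acceptable answers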

Task Description

The benchmark tasks are organized into four complexity groups, ranging from simple retrieval to complex reasoning. The grouping reflects both the cognitive difficulty of the task and the degree to which models must integrate information across the full context window. Group I serves as a basic sanity check, verifying that a model can process long inputs at all. Groups II and III progressively require deeper language understanding, multi-step reasoning, and the ability to locate and combine information from distant parts of the context. Group IV represents the most demanding tasks, requiring complex reasoning that goes beyond standard question answering formats. The total score on the leaderboard is computed across all four groups.

Group I: Simple Information Retrieval (sanity check)

This group includes the simplest tasks, which serve as a sanity check that a model can handle inputs of this length at all.

  • Passkey: Extract a hidden numeric passkey from a long text fragment. Based on the original PassKey test from the LongLLaMA GitHub repo.
  • PasskeyWithLibrusec: Similar to Passkey, but with added noise from Librusec texts. A sketch of how such needle-in-a-haystack prompts are typically constructed follows this list.
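
For intuition, here is a minimal sketch of how a passkey-style example can be constructed: a short numeric key is buried at a random position inside filler text, and the model is asked to return it. The filler sentence, prompt wording, and field names are illustrative and are not the exact texts used in LIBRA.

import random

# Illustrative passkey-style example (not the exact LIBRA prompts):
# hide a numeric key at a random position inside repeated filler text.
def make_passkey_example(n_filler: int = 2000, seed: int = 0) -> dict:
    rng = random.Random(seed)
    passkey = str(rng.randint(10000, 99999))
    filler = "Трава зелёная, небо голубое, солнце светит ярко. "
    needle = f"Запомни это число: {passkey}. "
    position = rng.randint(0, n_filler)
    context = filler * position + needle + filler * (n_filler - position)
    question = "Какое число нужно было запомнить?"
    return {"context": context, "input": question, "positive_outputs": [passkey]}

example = make_passkey_example()
assert example["positive_outputs"][0] in example["context"]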

Group II: Question Answering and Multiple Choice

This group consists of standard QA and multiple choice tasks adapted for the long-context setting.

  • MatreshkaNames: Identify the person in dialogues based on the discussed topic. We used the Matreshka dataset and the Russian Names dataset to create this task and the next one.
  • MatreshkaYesNo: Indicate whether a specific topic was mentioned in the dialogue.
  • LibrusecHistory: Answer questions based on historical texts. Conceptually similar to the PassageRetrieval dataset from LongBench.
  • ruSciFi: Answer true/false questions based on the context and general world knowledge. A translation of the SciFi dataset from L-Eval, which was originally based on SF-Gram.
  • ruSciAbstractRetrieval: Retrieve relevant paragraphs from scientific abstracts.
  • ruTPO: Multiple-choice questions similar to those in TOEFL exams. A translation of the TPO dataset from L-Eval.
  • ruQuALITY: Multiple-choice QA tasks based on detailed texts. Created by translating the QuALITY dataset from L-Eval.

Group III: Multi-hop Question Answering

This group includes long-context multi-hop QA problems where the answer requires combining multiple pieces of information distributed across the context.

  • ruBABILongQA (1-5): Five long-context QA tasks that require reasoning over facts hidden among irrelevant information.
  • LongContextMultiQ: Multi-hop QA based on Wikidata and Wikipedia.
  • LibrusecMHQA: Multi-hop QA requiring information distributed across several text parts.
  • ru2WikiMultihopQA: Translation of the 2WikiMultihopQA dataset from LongBench.

Group IV: Complex Reasoning and Mathematical Problems

This group includes the most complex long-context tasks, which go beyond multiple-choice and multi-hop QA. At this point, Group IV comprises only one task, and we invite the community to contribute new tasks to it.

  • ruSciPassageCount: Count unique paragraphs in a long text. Uses the basic idea of the original PassageCount dataset from LongBench.

Metrics

We use Exact Match (EM) as the primary metric for all tasks. EM evaluates the accuracy of the model's responses by comparing the predicted answers to the ground truth.
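
A typical EM check normalizes both strings (lowercasing, stripping punctuation, collapsing whitespace) and accepts a prediction if it matches any of the reference answers. The normalization below is a plausible sketch, not necessarily the exact procedure used by LIBRA.

import re
import string

# Illustrative Exact Match scorer; LIBRA's actual normalization may differ.
def normalize(text: str) -> str:
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return re.sub(r"\s+", " ", text).strip()

def exact_match(prediction: str, references: list[str]) -> float:
    return float(any(normalize(prediction) == normalize(ref) for ref in references))

print(exact_match("Восемь.", ["восемь", "8"]))  # -> 1.0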

Changes from the Original Version

This version of LIBRA includes both automatic and manual improvements over the original release. All datasets underwent automatic quality filtering to ensure consistency and reliability of annotations. In addition, several datasets were manually revised and extended with the help of human annotators: LibrusecMHQA, LongContextMultiQ, MatreshkaNames, MatreshkaYesNo, ru2WikiMultihopQA, ruSciFi, and ruTPO received targeted corrections and additional examples. The datasets ruGSM100 and ruQasper were removed from the benchmark as they did not meet the updated quality criteria.

The maximum supported context length has been extended from 128k to 512k tokens. The following datasets now include examples at longer context lengths not present in the original version: ruBABILongQA (1–5), ruSciAbstractRetrieval, ruSciPassageCount, LongContextMultiQ, MatreshkaYesNo, Passkey, and PasskeyWithLibrusec.

Evaluation

Starting from this version, LIBRA supports evaluation via lm-evaluation-harness — a widely adopted framework for standardized LLM evaluation. Both full LIBRA and compact LIBRA Mini evaluations are supported.

To get started:

pip install lm-eval[vllm]

Run evaluation on the full LIBRA benchmark:

lm_eval --model vllm \
        --model_args pretrained=Qwen/Qwen3-30B-A3B,max_model_len=262144 \
        --tasks libra \
        --apply_chat_template \
        --device cuda:0

Run evaluation on LIBRA Mini only:

lm_eval --model vllm \
        --model_args pretrained=Qwen/Qwen3-30B-A3B,max_model_len=262144 \
        --tasks libra_mini \
        --apply_chat_template \
        --device cuda:0

For the full list of configuration options and instructions on adding new models, please refer to the lm-evaluation-harness documentation.

Citation

@misc{churin2024longinputbenchmarkrussian,
      title={Long Input Benchmark for Russian Analysis}, 
      author={Igor Churin and Murat Apishev and Maria Tikhonova and Denis Shevelev and Aydar Bulatov and Yuri Kuratov and Sergei Averkiev and Alena Fenogenova},
      year={2024},
      eprint={2408.02439},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2408.02439}, 
}