---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: task_type
      dtype: string
    - name: difficulty
      dtype: string
    - name: source
      dtype: string
    - name: context
      dtype: string
    - name: question
      dtype: string
    - name: reference_answer
      dtype: string
    - name: source_document
      dtype: string
  splits:
    - name: test
      num_bytes: 296135
      num_examples: 324
    - name: dev
      num_bytes: 73010
      num_examples: 82
  download_size: 165242
  dataset_size: 369145
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
      - split: dev
        path: data/dev-*
license: cc-by-4.0
task_categories:
  - question-answering
  - text-classification
language:
  - en
tags:
  - finance
  - legal
  - regulatory
  - india
  - benchmark
  - llm-evaluation
  - sebi
  - rbi
pretty_name: IndiaFinBench
size_categories:
  - n<1K
---

# IndiaFinBench

**An Evaluation Benchmark for Large Language Model Performance on Indian Financial Regulatory Text**

Rajveer Singh Pall — Gyan Ganga Institute of Technology and Sciences, Jabalpur, India



## Dataset Summary

IndiaFinBench is, to our knowledge, the first publicly available evaluation benchmark for assessing large language model (LLM) performance on Indian financial regulatory text. Existing financial NLP benchmarks draw exclusively from Western corpora — SEC filings, US earnings reports, and English-language financial news — leaving a significant gap in coverage of non-Western regulatory frameworks.

IndiaFinBench addresses this gap with 406 expert-annotated question-answer pairs drawn from 192 regulatory documents sourced directly from the Securities and Exchange Board of India (SEBI) and the Reserve Bank of India (RBI), spanning documents from 1992 to 2026.

The benchmark covers four task types that probe distinct reasoning capabilities required for Indian regulatory text:

| Task Type | Code | Items | Description |
|---|---|---|---|
| Regulatory Interpretation | REG | 174 | Identify correct rules, thresholds, or scope from regulatory passages |
| Numerical Reasoning | NUM | 92 | Perform arithmetic over figures embedded in regulatory text |
| Contradiction Detection | CON | 62 | Determine whether two regulatory passages contradict each other |
| Temporal Reasoning | TMP | 78 | Order regulatory events and identify which circular was operative at a given time |
| **Total** | | **406** | |

## Key Results

Twelve models evaluated under zero-shot conditions on the full 406-item benchmark:

| Model | REG | NUM | CON | TMP | Overall |
|---|---|---|---|---|---|
| Gemini 2.5 Flash | 93.1% | 84.8% | 88.7% | 88.5% | 89.7% |
| Qwen3-32B | 85.1% | 77.2% | 90.3% | 92.3% | 85.5% |
| LLaMA-3.3-70B | 86.2% | 75.0% | 95.2% | 79.5% | 83.7% |
| Llama 4 Scout 17B | 86.2% | 66.3% | 98.4% | 84.6% | 83.3% |
| Kimi K2 | 89.1% | 65.2% | 91.9% | 75.6% | 81.5% |
| LLaMA-3-8B | 79.9% | 64.1% | 93.5% | 78.2% | 78.1% |
| GPT-OSS 120B | 79.9% | 59.8% | 95.2% | 76.9% | 77.1% |
| GPT-OSS 20B | 79.9% | 58.7% | 95.2% | 76.9% | 76.8% |
| Gemini 2.5 Pro | 89.7% | 48.9% | 93.5% | 64.1% | 76.1% |
| Mistral-7B | 79.9% | 66.3% | 80.6% | 74.4% | 75.9% |
| DeepSeek R1 70B | 72.4% | 69.6% | 96.8% | 70.5% | 75.1% |
| Gemma 4 E4B | 83.9% | 50.0% | 72.6% | 62.8% | 70.4% |
| Human Baseline (non-specialist) | 55.6% | 44.4% | 83.3% | 66.7% | 60.0% |

All models substantially outperform the non-specialist human baseline. Numerical reasoning is the most discriminative task (35.9 percentage-point spread across models).


## Dataset Details

### Source Documents

| Source | Documents | Types |
|---|---|---|
| SEBI (sebi.gov.in) | 92 | Circulars, master circulars, regulations, orders |
| RBI (rbi.org.in) | 100 | Circulars, monetary policy statements, master directions |
| **Total** | **192** | |

### Difficulty Distribution

| Difficulty | Items | Description |
|---|---|---|
| Easy | 160 (39.4%) | Single-step extraction from context |
| Medium | 182 (44.8%) | Multi-clause reasoning or calculation |
| Hard | 64 (15.8%) | Multi-instrument tracking or complex arithmetic |
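The percentage shares above follow directly from the item counts and the 406-item total; a quick arithmetic check:

```python
# Sanity check of the difficulty distribution reported in the table above.
counts = {"easy": 160, "medium": 182, "hard": 64}

total = sum(counts.values())
print(total)  # → 406

# Shares as percentages, rounded to one decimal place.
shares = {k: round(100 * v / total, 1) for k, v in counts.items()}
print(shares)  # → {'easy': 39.4, 'medium': 44.8, 'hard': 15.8}
```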

### Splits

The dataset is split into test (324 items, 79.8%) and dev (82 items, 20.2%).

| Split | Items |
|---|---|
| test | 324 |
| dev | 82 |
| **Total** | **406** |

### Data Fields

| Field | Type | Description |
|---|---|---|
| `id` | string | Unique item identifier (e.g., `REG_001`, `NUM_042`) |
| `task_type` | string | One of: `regulatory_interpretation`, `numerical_reasoning`, `contradiction_detection`, `temporal_reasoning` |
| `difficulty` | string | One of: `easy`, `medium`, `hard` |
| `source` | string | Regulatory body: `SEBI` or `RBI` |
| `context` | string | Regulatory passage(s) provided to the model (80–500 words). For contradiction detection items, contains Passage A and Passage B separated by a delimiter |
| `question` | string | The question to be answered from the context |
| `reference_answer` | string | Gold-standard reference answer |
| `source_document` | string | Filename of the source regulatory document |
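The schema can be exercised against a minimal record; the field values below are invented placeholders, not actual dataset items:

```python
# A hypothetical record matching the IndiaFinBench schema described above.
# All values are invented placeholders for illustration only.
record = {
    "id": "REG_001",
    "task_type": "regulatory_interpretation",
    "difficulty": "easy",
    "source": "SEBI",
    "context": "Placeholder regulatory passage of 80-500 words.",
    "question": "Placeholder question answerable from the context.",
    "reference_answer": "Placeholder answer.",
    "source_document": "placeholder_circular.pdf",
}

EXPECTED_FIELDS = {
    "id", "task_type", "difficulty", "source",
    "context", "question", "reference_answer", "source_document",
}

def validate(rec: dict) -> bool:
    """Check that a record has exactly the expected fields, all strings."""
    return set(rec) == EXPECTED_FIELDS and all(
        isinstance(v, str) for v in rec.values()
    )

print(validate(record))  # → True
```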

## Annotation and Validation

All 406 QA pairs were authored by a domain expert in Indian financial regulation. Every item was individually reviewed to ensure:

- The answer is unambiguously derivable from the provided context
- The question has exactly one correct answer
- The context is sufficient without external knowledge

Model-based secondary validation (LLaMA-3.3-70B, 150-item subset): 90.7% agreement, κ = 0.918 on contradiction detection.

Human inter-annotator agreement (second human annotator, 60-item sample): 76.7% overall agreement, κ = 0.611 for contradiction detection (substantial agreement per Landis & Koch 1977).
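The κ values reported above are Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch of the computation, using invented counts rather than the study's data:

```python
def cohens_kappa(confusion):
    """Cohen's kappa for two raters from a square confusion matrix
    (rows: rater 1 labels, columns: rater 2 labels)."""
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    # Observed agreement: proportion of items on the diagonal.
    po = sum(confusion[i][i] for i in range(n)) / total
    # Chance agreement: product of each rater's marginal label frequencies.
    pe = sum(
        (sum(confusion[i]) / total) * (sum(row[i] for row in confusion) / total)
        for i in range(n)
    )
    return (po - pe) / (1 - pe)

# Illustrative 2x2 counts for a yes/no task (invented, not the paper's data):
toy = [[20, 5],
       [5, 30]]
print(round(cohens_kappa(toy), 3))  # → 0.657
```

Here observed agreement is 50/60 ≈ 0.833, chance agreement ≈ 0.514, giving κ = 23/35 ≈ 0.657.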


## Evaluation Protocol

Models are evaluated under zero-shot, context-only conditions. The scoring pipeline applies four stages in sequence:

1. Exact match after case-normalisation and punctuation stripping
2. Fuzzy token match using RapidFuzz `token_set_ratio` ≥ 0.72
3. Numerical extraction match for items where extracted number sets agree
4. Yes/No match for contradiction detection (leading-word comparison)
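The four-stage cascade can be sketched as follows. This is an illustrative approximation, not the released scorer: the standard library's `difflib` stands in for RapidFuzz's `token_set_ratio`, so stage 2 behaviour differs from the actual pipeline.

```python
import difflib
import re
import string

def normalise(text: str) -> str:
    """Lower-case and strip punctuation (stage 1 normalisation)."""
    return text.lower().translate(
        str.maketrans("", "", string.punctuation)
    ).strip()

def numbers(text: str) -> set:
    """Extract all numeric tokens for the stage 3 comparison."""
    return set(re.findall(r"\d+(?:\.\d+)?", text))

def score(prediction: str, reference: str, fuzzy_threshold: float = 0.72) -> bool:
    """Approximate sketch of the four-stage scoring cascade."""
    pred, ref = normalise(prediction), normalise(reference)
    if pred == ref:                                       # 1. exact match
        return True
    if difflib.SequenceMatcher(None, pred, ref).ratio() >= fuzzy_threshold:
        return True                                       # 2. fuzzy (stand-in)
    if numbers(pred) and numbers(pred) == numbers(ref):   # 3. number sets agree
        return True
    # 4. yes/no leading-word comparison (contradiction detection)
    return pred.split()[:1] == ref.split()[:1] and pred[:1] in ("y", "n")
```

For example, `score("The limit is 25 per cent", "25%")` passes via stage 3, and `score("Yes, the passages contradict.", "Yes")` passes via stage 4.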

Full evaluation code and all model predictions are available at: https://github.com/rajveerpall/IndiaFinBench


## Limitations

- All evaluation is zero-shot; few-shot or chain-of-thought prompting may improve performance
- The benchmark does not currently cover Hindi–English code-switched regulatory text
- Coverage is limited to SEBI and RBI; extension to IRDAI, PFRDA, and commodity regulation is planned
- The benchmark evaluates short extractive responses, not longer-form regulatory reasoning or document summarisation
- The dataset is a snapshot of documents as of early 2026; regulatory frameworks evolve continuously

## Citation

If you use IndiaFinBench in your research, please cite:

```bibtex
@article{pall2025indiafinbench,
  title={IndiaFinBench: An Evaluation Benchmark for Large Language Model Performance on Indian Financial Regulatory Text},
  author={Pall, Rajveer Singh},
  journal={arXiv preprint},
  year={2025},
  url={https://github.com/rajveerpall/IndiaFinBench}
}
```

## License

This dataset is released under CC BY 4.0. All source documents are publicly available from sebi.gov.in and rbi.org.in and carry no copyright restrictions on research use.


## Contact

Rajveer Singh Pall — rajveer.singhpall.cb23@ggits.net
Gyan Ganga Institute of Technology and Sciences, Jabalpur, India