---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: task_type
    dtype: string
  - name: difficulty
    dtype: string
  - name: source
    dtype: string
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: reference_answer
    dtype: string
  - name: source_document
    dtype: string
  splits:
  - name: test
    num_bytes: 296135
    num_examples: 324
  - name: dev
    num_bytes: 73010
    num_examples: 82
  download_size: 165242
  dataset_size: 369145
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
  - split: dev
    path: data/dev-*
license: cc-by-4.0
task_categories:
- question-answering
- text-classification
language:
- en
tags:
- finance
- legal
- regulatory
- india
- benchmark
- llm-evaluation
- sebi
- rbi
pretty_name: IndiaFinBench
size_categories:
- n<1K
---

# IndiaFinBench

**An Evaluation Benchmark for Large Language Model Performance on Indian Financial Regulatory Text**

> Rajveer Singh Pall — Gyan Ganga Institute of Technology and Sciences, Jabalpur, India

[![GitHub](https://img.shields.io/badge/GitHub-IndiaFinBench-blue)](https://github.com/rajveerpall/IndiaFinBench)
[![Paper](https://img.shields.io/badge/arXiv-IndiaFinBench-red)](https://arxiv.org/abs/2604.19298)
[![License: CC BY 4.0](https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by/4.0/)

---

## Dataset Summary

IndiaFinBench is, to our knowledge, the first publicly available evaluation benchmark for assessing large language model (LLM) performance on **Indian financial regulatory text**. Existing financial NLP benchmarks draw exclusively from Western corpora — SEC filings, US earnings reports, and English-language financial news — leaving a significant gap in coverage of non-Western regulatory frameworks.

IndiaFinBench addresses this gap with **406 expert-annotated question-answer pairs** drawn from **192 regulatory documents** sourced directly from the Securities and Exchange Board of India (SEBI) and the Reserve Bank of India (RBI), spanning documents from 1992 to 2026.

The benchmark covers four task types that probe distinct reasoning capabilities required for Indian regulatory text:

| Task Type | Code | Items | Description |
|-----------|------|-------|-------------|
| Regulatory Interpretation | REG | 174 | Identify correct rules, thresholds, or scope from regulatory passages |
| Numerical Reasoning | NUM | 92 | Perform arithmetic over figures embedded in regulatory text |
| Contradiction Detection | CON | 62 | Determine whether two regulatory passages contradict each other |
| Temporal Reasoning | TMP | 78 | Order regulatory events, identify which circular was operative at a given time |
| **Total** | | **406** | |

---

## Key Results

Twelve models evaluated under zero-shot conditions on the full 406-item benchmark:

| Model | REG | NUM | CON | TMP | Overall |
|-------|-----|-----|-----|-----|---------|
| Gemini 2.5 Flash | 93.1% | 84.8% | 88.7% | 88.5% | **89.7%** |
| Qwen3-32B | 85.1% | 77.2% | 90.3% | 92.3% | 85.5% |
| LLaMA-3.3-70B | 86.2% | 75.0% | 95.2% | 79.5% | 83.7% |
| Llama 4 Scout 17B | 86.2% | 66.3% | 98.4% | 84.6% | 83.3% |
| Kimi K2 | 89.1% | 65.2% | 91.9% | 75.6% | 81.5% |
| LLaMA-3-8B | 79.9% | 64.1% | 93.5% | 78.2% | 78.1% |
| GPT-OSS 120B | 79.9% | 59.8% | 95.2% | 76.9% | 77.1% |
| GPT-OSS 20B | 79.9% | 58.7% | 95.2% | 76.9% | 76.8% |
| Gemini 2.5 Pro | 89.7% | 48.9% | 93.5% | 64.1% | 76.1% |
| Mistral-7B | 79.9% | 66.3% | 80.6% | 74.4% | 75.9% |
| DeepSeek R1 70B | 72.4% | 69.6% | 96.8% | 70.5% | 75.1% |
| Gemma 4 E4B | 83.9% | 50.0% | 72.6% | 62.8% | 70.4% |
| **Human Baseline (non-specialist)** | 55.6% | 44.4% | 83.3% | 66.7% | **60.0%** |

All models substantially outperform the non-specialist human baseline. Numerical reasoning is the most discriminative task (35.9 percentage-point spread across models).
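
The Overall column is consistent with an item-weighted average of the per-task accuracies (our reading; the card does not state the aggregation explicitly). A quick check against the top row:

```python
# Item counts per task (from the task-type table) and Gemini 2.5 Flash
# per-task scores. Assumption: "Overall" is the item-weighted mean.
counts = {"REG": 174, "NUM": 92, "CON": 62, "TMP": 78}
scores = {"REG": 0.931, "NUM": 0.848, "CON": 0.887, "TMP": 0.885}

overall = sum(counts[t] * scores[t] for t in counts) / sum(counts.values())
print(f"{overall:.1%}")  # 89.7%
```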

---

## Dataset Details

### Source Documents

| Source | Documents | Types |
|--------|-----------|-------|
| SEBI (sebi.gov.in) | 92 | Circulars, master circulars, regulations, orders |
| RBI (rbi.org.in) | 100 | Circulars, monetary policy statements, master directions |
| **Total** | **192** | |

### Difficulty Distribution

| Difficulty | Items | Description |
|------------|-------|-------------|
| Easy | 160 (39.4%) | Single-step extraction from context |
| Medium | 182 (44.8%) | Multi-clause reasoning or calculation |
| Hard | 64 (15.8%) | Multi-instrument tracking or complex arithmetic |

### Splits

The dataset is split into **test** (324 items, 79.8%) and **dev** (82 items, 20.2%).

| Split | Items |
|-------|-------|
| test | 324 |
| dev | 82 |
| **Total** | **406** |

---

## Data Fields

| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Unique item identifier (e.g., `REG_001`, `NUM_042`) |
| `task_type` | string | One of: `regulatory_interpretation`, `numerical_reasoning`, `contradiction_detection`, `temporal_reasoning` |
| `difficulty` | string | One of: `easy`, `medium`, `hard` |
| `source` | string | Regulatory body: `SEBI` or `RBI` |
| `context` | string | Regulatory passage(s) provided to the model (80–500 words). For contradiction detection items, contains Passage A and Passage B separated by a delimiter |
| `question` | string | The question to be answered from the context |
| `reference_answer` | string | Gold-standard reference answer |
| `source_document` | string | Filename of the source regulatory document |
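
A minimal sketch of what items in this schema look like, using invented values (these are illustrative placeholders, not actual benchmark items; the real data loads via the Hugging Face `datasets` library from this repository):

```python
from collections import defaultdict

# Invented example rows mirroring the Data Fields table above;
# all values are illustrative, not actual benchmark items.
items = [
    {"id": "NUM_001", "task_type": "numerical_reasoning", "difficulty": "easy",
     "source": "RBI", "context": "...", "question": "...",
     "reference_answer": "25%", "source_document": "rbi_master_direction.pdf"},
    {"id": "CON_001", "task_type": "contradiction_detection", "difficulty": "medium",
     "source": "SEBI", "context": "Passage A: ... Passage B: ...",
     "question": "Do the two passages contradict each other?",
     "reference_answer": "Yes", "source_document": "sebi_circular.pdf"},
]

# Group by task type, e.g. for per-task accuracy reporting.
by_task = defaultdict(list)
for item in items:
    by_task[item["task_type"]].append(item)
```

With the `datasets` library, the same grouping can be expressed as a `filter` over the `task_type` field on each split.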

---

## Annotation and Validation

All 406 QA pairs were authored by a domain expert in Indian financial regulation. Every item was individually reviewed to ensure:
- The answer is unambiguously derivable from the provided context
- The question has exactly one correct answer
- The context is sufficient without external knowledge

**Model-based secondary validation** (LLaMA-3.3-70B, 150-item subset): 90.7% agreement, κ = 0.918 on contradiction detection.

**Human inter-annotator agreement** (second human annotator, 60-item sample): 76.7% overall agreement, κ = 0.611 for contradiction detection (substantial agreement per Landis & Koch 1977).

---

## Evaluation Protocol

Models are evaluated under **zero-shot, context-only** conditions. The scoring pipeline applies four stages in sequence:

1. **Exact match** after case-normalisation and punctuation stripping
2. **Fuzzy token match** using RapidFuzz `token_set_ratio ≥ 0.72`
3. **Numerical extraction match** for items where extracted number sets agree
4. **Yes/No match** for contradiction detection (leading word comparison)
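
The cascade above can be sketched as follows. This is our reconstruction, not the repository's actual code; RapidFuzz scores on a 0–100 scale, so the card's 0.72 threshold is interpreted here as 72.

```python
import re
import string

def normalise(text):
    # Stage-1 normalisation: lowercase, trim, strip punctuation.
    text = text.lower().strip()
    return text.translate(str.maketrans("", "", string.punctuation))

def extract_numbers(text):
    # Pull out all numeric tokens (commas removed so "1,00,000" parses).
    return set(re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", "")))

def score(prediction, reference, threshold=72):
    # Stage 1: exact match after normalisation.
    if normalise(prediction) == normalise(reference):
        return True
    # Stage 2: fuzzy token match (RapidFuzz, if available).
    try:
        from rapidfuzz import fuzz
        if fuzz.token_set_ratio(prediction, reference) >= threshold:
            return True
    except ImportError:
        pass
    # Stage 3: numerical extraction match on the extracted number sets.
    pred_nums, ref_nums = extract_numbers(prediction), extract_numbers(reference)
    if ref_nums and pred_nums == ref_nums:
        return True
    # Stage 4: yes/no leading-word comparison (contradiction detection).
    pred_word = normalise(prediction).split()[:1]
    ref_word = normalise(reference).split()[:1]
    if ref_word and ref_word[0] in ("yes", "no") and pred_word == ref_word:
        return True
    return False
```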

Full evaluation code and all model predictions are available at: [https://github.com/rajveerpall/IndiaFinBench](https://github.com/rajveerpall/IndiaFinBench)

---

## Limitations

- All evaluation is zero-shot; few-shot or chain-of-thought prompting may improve performance
- The benchmark does not currently cover Hindi–English code-switched regulatory text
- Coverage is limited to SEBI and RBI; extension to IRDAI, PFRDA, and commodity regulation is planned
- The benchmark evaluates short extractive responses, not longer-form regulatory reasoning or document summarisation
- The dataset is a snapshot of documents as of early 2026; regulatory frameworks evolve continuously

---

## Citation

If you use IndiaFinBench in your research, please cite:

```bibtex
@article{pall2025indiafinbench,
  title={IndiaFinBench: An Evaluation Benchmark for Large Language Model Performance on Indian Financial Regulatory Text},
  author={Pall, Rajveer Singh},
  journal={arXiv preprint},
  year={2025},
  url={https://github.com/rajveerpall/IndiaFinBench}
}
```

---

## License

This dataset is released under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). All source documents are publicly available from sebi.gov.in and rbi.org.in and carry no copyright restrictions on research use.

---

## Contact

Rajveer Singh Pall — rajveer.singhpall.cb23@ggits.net  
Gyan Ganga Institute of Technology and Sciences, Jabalpur, India