| --- |
| license: apache-2.0 |
| language: |
| - zh |
| tags: |
| - medical |
| - ecom |
| - video |
| - legal |
| size_categories: |
| - 10K<n<100K |
| configs: |
| - config_name: ecom |
| data_files: |
| - split: train |
| path: qrelData/ecom_dev.tsv |
| - config_name: medical |
| data_files: |
| - split: train |
| path: qrelData/medical_dev.tsv |
| - config_name: video |
| data_files: |
| - split: train |
| path: qrelData/video_dev.tsv |
| - config_name: law |
| data_files: |
| - split: train |
| path: qrelData/law_dev.tsv |
| --- |
| |
| # Dataset Card for Agentic Retrieval Benchmark |
|
|
In recent years, text retrieval has attracted a large body of work. However, existing studies are scattered across different datasets, and their comparisons are partial and fragmented; the field lacks a comprehensive benchmark for evaluating retrieval performance.
|
|
| This dataset is provided as part of the Agentic Retrieval Benchmark. |
|
|
| The project aims to establish a reproducible benchmark for LLM-augmented text retrieval. Under a unified dataset and pipeline, it compares the performance of traditional BM25 and vector-based retrieval methods, and evaluates the contribution of each component in the “query rewriting + vectorization” process. |
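The BM25 side of this comparison can be sketched in a few lines. The following is a generic textbook BM25 scorer, not the project's exact implementation; the `k1` and `b` defaults are common choices, not values prescribed by the benchmark.

```python
import math
from collections import Counter

def bm25_rank(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Rank documents against a query using standard BM25 scoring.

    query_tokens: list of query terms.
    docs_tokens:  list of documents, each a list of tokens.
    Returns document indices sorted from most to least relevant.
    """
    n_docs = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / n_docs
    # Document frequency: in how many documents each term appears.
    df = Counter()
    for doc in docs_tokens:
        for term in set(doc):
            df[term] += 1
    scores = []
    for doc in docs_tokens:
        tf = Counter(doc)
        score = 0.0
        for term in query_tokens:
            if term not in tf:
                continue
            idf = math.log(1 + (n_docs - df[term] + 0.5) / (df[term] + 0.5))
            norm = tf[term] + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scores.append(score)
    return sorted(range(n_docs), key=lambda i: -scores[i])
```

A vector-based retriever would instead embed queries and passages and rank by similarity; the benchmark evaluates both under the same qrels.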
|
|
| ## Dataset Description |
|
|
| The dataset is constructed from two open-source datasets: |
|
|
| * **Multi-CPR**: Covers three application scenarios (medical, e-commerce, and video), formatted as single-turn question-answer pairs. |
| * **LexRAG**: Focuses on Chinese legal consultation scenarios, formatted as multi-turn dialogues. |
|
|
From the Multi-CPR dataset, we sample 1,000 queries and approximately 10,000 corpus passages for each of the three scenarios, using the corresponding query–passage relevance labels as ground truth.
|
|
| For the LexRAG dataset, we adopt a dialogue-history-plus-current-question setup. That is, each query consists of the full conversation history up to the current turn, combined with the latest user question. |
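For illustration, the history-plus-question concatenation can be sketched as follows. The `Q:`/`A:` turn separators here are an assumption for readability; the released query files already contain the final preprocessed strings.

```python
def build_query(history, current_question):
    """Form one retrieval query from a multi-turn dialogue.

    history: list of (question, answer) pairs from all earlier turns.
    current_question: the latest user question.
    The "Q:"/"A:" prefixes are illustrative, not the dataset's exact format.
    """
    parts = []
    for question, answer in history:
        parts.append(f"Q: {question}")
        parts.append(f"A: {answer}")
    parts.append(f"Q: {current_question}")
    return "\n".join(parts)
```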
|
|
| The data has been cleaned and preprocessed, and can be directly used as input for query rewriting and evaluation scripts. |
|
|
* Query data is stored in `./data/rawData/xxx_query.txt`
* Passage data is stored in `./data/rawData/xxx_subset.tsv`
* Ground-truth labels/indices are stored in `./data/qrelData/xxx_dev.tsv`
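The qrel files can be read with the standard library alone. A minimal sketch, assuming the TSV carries the query id and passage id in its first two columns; verify the column order against the actual files before use.

```python
import csv

def load_qrels(path):
    """Map each query id to the set of relevant passage ids.

    Assumes rows of the form: query_id <TAB> passage_id [<TAB> ...].
    Adjust the column indices if the real layout differs.
    """
    qrels = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            if not row:
                continue
            qid, pid = row[0], row[1]
            qrels.setdefault(qid, set()).add(pid)
    return qrels
```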
|
|
|
|
|
|