---
license: odc-by
---

# OpenData-Benchmark-ITA

## Overview
**OpenData-Benchmark-ITA** is a multiple-choice benchmark dataset designed to evaluate the capability of Large Language Models (LLMs) to understand, retrieve, and reason over public Open Data published by European government portals.

The current release focuses exclusively on Italian Open Data and is based on datasets published on the official Italian government portal, **data.gov.it**. Future releases will extend the benchmark to include harmonized governmental Open Data from additional European countries, starting with France, Spain, and Germany.

The dataset is released under the **ODC-BY (Open Data Commons Attribution)** license, enabling broad reuse for research, evaluation, and benchmarking purposes beyond its original project scope.
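If the benchmark is published on the Hugging Face Hub, it should be loadable with the `datasets` library. The sketch below is illustrative only: the repository id and split name are placeholders, since neither is specified in this card.

```python
from datasets import load_dataset

# Placeholder repository id and split -- substitute the benchmark's
# actual Hub path and split name once published.
bench = load_dataset("<org>/OpenData-Benchmark-ITA", split="test")

# Inspect a few items.
for item in bench.select(range(3)):
    print(item)
```
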
---

## Benchmark Objective
The primary objective of OpenData-Benchmark-ITA is to assess the *effective knowledge and practical usability* of Italian governmental Open Data by LLMs developed within the **Villanova project**.

Rather than testing general language understanding, the benchmark focuses on:
- Familiarity with real-world Open Data resources
- Ability to interpret dataset metadata
- Capability to answer content-based questions grounded in actual public datasets

This makes the benchmark particularly suitable for evaluating domain adaptation, retrieval-augmented generation pipelines, and public-sector-oriented AI systems.
---

## Dataset Composition
The benchmark is structured as a **multiple-choice question-answering task**. Each question is grounded in the content or metadata of a specific Open Data resource.

- **Number of datasets sampled:** 500
- **Source portal:** data.gov.it
- **Total datasets available on portal:** ~65,000
- **Data formats:** Primarily CSV for data files, paired with JSON metadata
Each benchmark item is derived from a *pair* consisting of:
1. A structured data file (mainly CSV)
2. The corresponding official metadata in JSON format
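To make this structure concrete, a single benchmark item might look like the sketch below. Every field name and value is hypothetical; this card does not document the release's actual schema.

```python
# Hypothetical shape of one benchmark item; all field names and values
# are illustrative, not the release's documented schema.
example_item = {
    "dataset_id": "qualita-aria-milano",        # made-up portal identifier
    "source_portal": "data.gov.it",
    "question": "How many monitoring stations report PM10 values?",
    "choices": ["4", "7", "12", "15"],
    "answer_index": 1,                          # index into "choices"
    "grounding": {
        "data_file": "qualita_aria.csv",        # the structured data file
        "metadata_file": "qualita_aria.json",   # the official JSON metadata
    },
}
```
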
---

## Data Origin and Curation Process
The dataset is **manually curated** following a structured and quality-driven workflow.

The process includes:
- Systematic sampling from the Italian Open Data portal
- Manual verification of dataset relevance and accessibility
- Careful inspection and cleaning of metadata
- Manual design and validation of multiple-choice questions to ensure clarity, correctness, and grounding in the source data
This approach ensures that the benchmark reflects realistic usage scenarios of public Open Data and avoids synthetic or purely artificial artifacts.
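As an illustration of what systematic sampling could look like in practice, the sketch below draws a reproducible random sample from a CKAN-compatible catalogue (the software behind many governmental Open Data portals). The base URL, the CSV filter query, and the sample size are all assumptions, not a description of the project's actual curation tooling.

```python
import random

import requests

# Assumed CKAN API root -- replace with the portal's real endpoint.
BASE = "https://www.dati.gov.it/opendata/api/3/action"

def sample_csv_datasets(n_samples: int = 500, seed: int = 0) -> list[dict]:
    """Draw a reproducible random sample of datasets exposing CSV resources."""
    # rows=0 returns only the total count of matching datasets.
    query = {"fq": "res_format:CSV", "rows": 0}
    resp = requests.get(f"{BASE}/package_search", params=query, timeout=30)
    count = resp.json()["result"]["count"]

    # Fetch one dataset per randomly chosen offset.
    rng = random.Random(seed)
    sampled = []
    for start in rng.sample(range(count), n_samples):
        page = requests.get(
            f"{BASE}/package_search",
            params={"fq": "res_format:CSV", "rows": 1, "start": start},
            timeout=30,
        )
        sampled.append(page.json()["result"]["results"][0])
    return sampled
```
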
---

## Selection Criteria
The selection of datasets followed clear, content-oriented criteria:

- Alignment with the objectives of the Villanova project
- Preference for datasets enabling automated processing and analysis
- Priority given to machine-readable formats, particularly CSV
- Availability of complete and well-structured metadata
The final sample consists of **500 dataset–metadata pairs**, each suitable for downstream benchmarking and evaluation tasks.
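A first-pass filter along these lines could be expressed as a simple predicate over CKAN-style package records, before the manual review described above. The required-field set here is illustrative, not the project's documented checklist.

```python
# Illustrative required-metadata fields (standard CKAN package keys).
REQUIRED_FIELDS = ("title", "notes", "organization", "resources")

def is_candidate(package: dict) -> bool:
    """Keep packages with at least one CSV resource and complete core metadata."""
    # Machine-readable data: at least one resource declared as CSV.
    has_csv = any(
        (res.get("format") or "").strip().upper() == "CSV"
        for res in package.get("resources", [])
    )
    # Complete metadata: all required fields present and non-empty.
    has_metadata = all(package.get(field) for field in REQUIRED_FIELDS)
    return has_csv and has_metadata
```
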
---

## Collection Period
No strict temporal constraints were applied during dataset selection.

However, preference was given to **"live" datasets**, identified by:
- A recent or regularly updated modification date
- Ongoing relevance in terms of data production and maintenance

This choice increases the realism of the benchmark when used to evaluate models intended for interaction with up-to-date public data sources.
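A "live" check of this kind can be approximated from a catalogue record's last-modified timestamp. The one-year window below is an illustrative threshold; this card does not state a documented cut-off.

```python
from datetime import datetime, timedelta, timezone

def is_live(package: dict, max_age_days: int = 365) -> bool:
    """Treat a dataset as live if its metadata changed within the window."""
    # CKAN reports the last change as an ISO-8601 timestamp (UTC, no offset).
    modified = datetime.fromisoformat(package["metadata_modified"])
    modified = modified.replace(tzinfo=timezone.utc)
    return datetime.now(timezone.utc) - modified <= timedelta(days=max_age_days)
```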