---
dataset_info:
  features:
  - name: file_name
    dtype: string
  - name: source_file
    dtype: string
  - name: question
    dtype: string
  - name: question_type
    dtype: string
  - name: question_id
    dtype: int32
  - name: answer
    dtype: string
  - name: answer_choices
    list: string
  - name: correct_choice_idx
    dtype: int32
  - name: image
    dtype: image
  - name: video
    dtype: video
  - name: media_type
    dtype: string
  splits:
  - name: test
    num_examples: 1120
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
license: mit
task_categories:
- visual-question-answering
language:
- en
size_categories:
- 1K<n<10K
tags:
- engineering
- simulation
- stratified-subset
---
# OpenSeeSimE-Structural-Mini
A **stratified 1% subset** of [`cmudrc/OpenSeeSimE-Structural`](https://huggingface.co/datasets/cmudrc/OpenSeeSimE-Structural) for evaluating vision-language models at a reduced compute footprint while preserving the joint distribution of simulation type, question type, media type, and question id.
## Subset Provenance
- **Parent dataset**: [`cmudrc/OpenSeeSimE-Structural`](https://huggingface.co/datasets/cmudrc/OpenSeeSimE-Structural) (102,678 rows total)
- **Rows in this subset**: **1,120** (1.09% of parent)
- **Source classes**: `Beams`, `Dog Bone`, `Hip Implant`, `Pressure Vessel`, `Wall Bracket`
- **Parquet shards**: 1 | **Storage**: ~1.73 GB
- **Sampling**: per-stratum shuffle with `numpy.random.default_rng(42)`, then take `ceil(n * fraction)` from each stratum. Any non-empty stratum contributes at least 1 row.
- **Strata**: `(source_file, question_type, media_type, question_id)` — all four jointly.
- **Nesting**: the 1% subset is a literal subset of the 10% subset (same shuffled prefix is taken for every fraction).
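The sampling procedure above can be sketched as follows. This is an illustrative reconstruction, not the authors' script; the function name `stratified_subset` and the row-dict format are assumptions. Because each run re-seeds the generator and visits strata in a deterministic order, the per-stratum shuffle is identical for every fraction, which yields the prefix-nesting property described above.

```python
import math
import numpy as np

def stratified_subset(rows, fraction, seed=42):
    """Take ceil(n * fraction) rows from each stratum after a seeded shuffle.

    `rows` is a list of dicts. Strata are keyed on
    (source_file, question_type, media_type, question_id).
    Any non-empty stratum contributes at least one row, and a smaller
    fraction's pick is a prefix of a larger fraction's pick.
    """
    rng = np.random.default_rng(seed)
    strata = {}
    for row in rows:
        key = (row["source_file"], row["question_type"],
               row["media_type"], row["question_id"])
        strata.setdefault(key, []).append(row)
    subset = []
    for key in sorted(strata):  # deterministic stratum order
        group = strata[key]
        order = rng.permutation(len(group))         # seeded shuffle
        k = math.ceil(len(group) * fraction)        # always >= 1
        subset.extend(group[i] for i in order[:k])  # shuffled prefix
    return subset
```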
## Composition
### By `source_file`
| source_file | rows | pct |
|:----------------|-------:|------:|
| Beams | 240 | 21.43 |
| Dog Bone | 220 | 19.64 |
| Hip Implant | 220 | 19.64 |
| Pressure Vessel | 220 | 19.64 |
| Wall Bracket | 220 | 19.64 |
### By `media_type`
| media_type | rows |
|:-------------|-------:|
| image | 560 |
| video | 560 |
### By `(source_file, question_type)`
| source_file | Binary | Multiple Choice | Spatial | Total |
|:----------------|---------:|------------------:|----------:|--------:|
| Beams | 72 | 120 | 48 | 240 |
| Dog Bone | 66 | 110 | 44 | 220 |
| Hip Implant | 66 | 110 | 44 | 220 |
| Pressure Vessel | 66 | 110 | 44 | 220 |
| Wall Bracket | 66 | 110 | 44 | 220 |
## Feature Schema
Identical to the parent dataset. See [`cmudrc/OpenSeeSimE-Structural`](https://huggingface.co/datasets/cmudrc/OpenSeeSimE-Structural) for full documentation of simulation generation, ground-truth extraction, preprocessing, limitations, and intended use.
```python
{
'file_name': str, # Unique identifier
'source_file': str, # Base simulation model
'question': str, # Question text
'question_type': str, # 'Binary', 'Multiple Choice', 'Spatial'
'question_id': int, # Question identifier (1-20)
'answer': str, # Ground truth answer
'answer_choices': list[str], # Options
'correct_choice_idx': int, # Index of correct answer
    'image': Image,               # PIL Image (1920x1440) or None for video rows
    'video': Video,               # Video bytes or None for image rows
'media_type': str, # 'image' or 'video'
}
```
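Since exactly one of `image` / `video` is populated per row, code consuming the dataset typically dispatches on `media_type`. A minimal sketch (the helper name `active_medium` is an assumption; the commented repo id is inferred from the card title):

```python
# Hypothetical loading call; repo id assumed from the card title:
# ds = load_dataset("cmudrc/OpenSeeSimE-Structural-Mini", split="test")

def active_medium(row):
    """Return the populated media field for a row, as flagged by media_type."""
    if row["media_type"] == "image":
        return row["image"]
    if row["media_type"] == "video":
        return row["video"]
    raise ValueError(f"unexpected media_type: {row['media_type']!r}")
```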
## Intended Use
- Benchmark evaluation of vision-language models on engineering simulation question answering at reduced compute cost
- Smoke-testing of evaluation pipelines before running the full benchmark
- Comparative studies where storage or bandwidth constraints matter
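For smoke-testing an evaluation pipeline, a minimal scorer over the multiple-choice rows might look like the following. `choice_accuracy` and the `file_name -> predicted index` mapping are illustrative assumptions, not part of any official benchmark tooling:

```python
def choice_accuracy(rows, predictions):
    """Fraction of scored rows where the predicted index matches
    correct_choice_idx. Rows without answer_choices, or without a
    prediction, are skipped."""
    scored = correct = 0
    for row in rows:
        if not row.get("answer_choices"):
            continue
        pred = predictions.get(row["file_name"])
        if pred is None:
            continue
        scored += 1
        correct += int(pred == row["correct_choice_idx"])
    return correct / scored if scored else 0.0
```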
## License
MIT — same as parent. Free for academic and commercial use with attribution.
## Citation
```bibtex
@article{ezemba2025opensesime,
title={OpenSeeSimE: A Large-Scale Benchmark to Assess Vision-Language Model Question Answering Capabilities in Engineering Simulations},
author={Ezemba, Jessica and Pohl, Jason and Tucker, Conrad and McComb, Christopher},
year={2025}
}
```
## Contact
**Jessica Ezemba** — jezemba@andrew.cmu.edu
Department of Mechanical Engineering, Carnegie Mellon University