## Dataset Preview

The full dataset viewer is not available; only a preview of the rows is shown below.

**Error code:** `DatasetGenerationCastError`

Dataset generation failed because of a cast error: every data file in a single configuration must share the same columns, but `llm-features-2026.csv` introduces 15 columns absent from `llm-benchmarks-2026.csv` (`vision`, `function_calling`, `json_mode`, `streaming`, `batch_api`, `fine_tuning`, `system_prompt`, `tool_use`, `image_generation`, `code_execution`, `web_search`, `file_upload`, `embedding`, `multilingual`, `safety_filters`) and lacks the 8 benchmark columns (`mmlu_score`, `humaneval_score`, `math_score`, `arena_elo`, `coding_rank`, `reasoning_rank`, `multilingual_rank`, `overall_tier`). Either edit the data files to have matching columns, or separate them into different configurations (see https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).
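The configuration route is a one-time edit to the dataset card's YAML header. A minimal sketch, assuming one config per CSV (the config names here are illustrative; syntax per the manual-configuration docs linked above):

```yaml
configs:
- config_name: benchmarks
  data_files: llm-benchmarks-2026.csv
- config_name: features
  data_files: llm-features-2026.csv
- config_name: rate-limits
  data_files: llm-rate-limits-2026.csv
```

With separate configs, each file keeps its own schema and the viewer can render all three.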
Preview rows (from `llm-benchmarks-2026.csv`):

| provider | model | mmlu_score | humaneval_score | math_score | arena_elo | coding_rank | reasoning_rank | multilingual_rank | overall_tier |
|---|---|---|---|---|---|---|---|---|---|
OpenAI | gpt-4o | 88.7 | 90.2 | 76.6 | 1,287 | 2 | 3 | 2 | S |
OpenAI | gpt-4o-mini | 82 | 87 | 70.2 | 1,140 | 5 | 8 | 5 | A |
OpenAI | o1 | 92.3 | 92.4 | 96.4 | 1,350 | 1 | 1 | 3 | S+ |
OpenAI | o3-mini | 86.9 | 89.5 | 94.8 | 1,310 | 3 | 2 | 6 | S |
Anthropic | claude-sonnet-4 | 88.5 | 93.7 | 78.3 | 1,295 | 1 | 3 | 2 | S |
Anthropic | claude-haiku-3.5 | 79.8 | 88.1 | 69.5 | 1,160 | 6 | 7 | 4 | A |
Anthropic | claude-opus-4 | 90.2 | 95.1 | 82.7 | 1,330 | 1 | 2 | 1 | S+ |
Google | gemini-2.5-pro | 90.8 | 89.8 | 86.2 | 1,340 | 2 | 1 | 1 | S+ |
Google | gemini-2.5-flash | 85.1 | 86.4 | 78 | 1,250 | 4 | 4 | 3 | S |
Google | gemini-2.0-flash | 82.5 | 84.2 | 72.1 | 1,190 | 7 | 6 | 4 | A |
Meta | llama-3.3-70b | 82 | 81.7 | 68 | 1,180 | 8 | 8 | 7 | A |
Meta | llama-3.1-405b | 86.1 | 84.3 | 73.8 | 1,230 | 5 | 5 | 5 | A+ |
Meta | llama-4-maverick | 87.5 | 86 | 77.2 | 1,260 | 4 | 4 | 4 | S |
Mistral | mistral-large | 84 | 82.5 | 72 | 1,200 | 6 | 6 | 3 | A+ |
Mistral | mistral-small | 78.5 | 79 | 65.3 | 1,120 | 9 | 9 | 6 | B+ |
Mistral | codestral | 75.2 | 90.8 | 58.4 | 1,170 | 2 | 10 | 8 | A |
DeepSeek | deepseek-v3 | 87.1 | 82.6 | 84 | 1,280 | 4 | 3 | 5 | S |
DeepSeek | deepseek-r1 | 89.5 | 85.2 | 94.3 | 1,320 | 3 | 1 | 6 | S+ |
Cohere | command-r-plus | 80.2 | 72.5 | 62.1 | 1,130 | 10 | 9 | 2 | B+ |
Cohere | command-r | 75.8 | 68.3 | 55.7 | 1,050 | 12 | 11 | 4 | B |
xAI | grok-2 | 85.5 | 83 | 71.5 | 1,220 | 5 | 5 | 6 | A+ |
xAI | grok-3-mini | 81 | 80.5 | 78.9 | 1,200 | 7 | 4 | 7 | A |
# LLM Benchmark & Feature Matrix 2026

Which LLM is best at what? This dataset maps the capabilities, benchmark performance, and rate limits of 22 major models. Unlike pricing datasets, it focuses on what models can do — not just what they cost.
## Files

| File | Description |
|---|---|
| `llm-benchmarks-2026.csv` | MMLU, HumanEval, MATH, Arena ELO, coding/reasoning/multilingual rankings, overall tier (S+ to B) |
| `llm-features-2026.csv` | 15 binary capabilities: vision, function calling, JSON mode, fine-tuning, tool use, web search, embeddings... |
| `llm-rate-limits-2026.csv` | Free tier availability, RPM/TPM limits, batch discounts, cached input discounts |
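Each file can be read directly. A minimal loading sketch, assuming pandas plus the `huggingface_hub` package (which registers the `hf://` filesystem):

```python
import pandas as pd

# Read each CSV on its own; the files have different schemas by design.
REPO = "hf://datasets/ComparEdge/llm-api-benchmark-matrix-2026"

benchmarks = pd.read_csv(f"{REPO}/llm-benchmarks-2026.csv")
features = pd.read_csv(f"{REPO}/llm-features-2026.csv")
rate_limits = pd.read_csv(f"{REPO}/llm-rate-limits-2026.csv")

print(benchmarks.head())
```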
## Models Covered

22 models from OpenAI, Anthropic, Google, Meta, Mistral, DeepSeek, xAI, and Cohere.
## Use Cases

- Model selection — find models that support your required features (e.g., vision + function calling + fine-tuning); see the query sketch after this list
- Performance comparison — which model scores highest on coding vs. reasoning vs. multilingual benchmarks?
- Rate limit planning — can you stay within the free tier, and what are the paid RPM limits?
- Tier analysis — are S+ tier models worth the premium over A tier?
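A query sketch covering model selection, performance comparison, and tier analysis. The boolean column names are taken from the features schema in the cast error above, and the score columns from the preview table:

```python
import pandas as pd

REPO = "hf://datasets/ComparEdge/llm-api-benchmark-matrix-2026"
benchmarks = pd.read_csv(f"{REPO}/llm-benchmarks-2026.csv")
features = pd.read_csv(f"{REPO}/llm-features-2026.csv")

# Model selection: models supporting vision + function calling + fine-tuning.
required = ["vision", "function_calling", "fine_tuning"]
capable = features[features[required].all(axis=1)]

# Performance comparison: rank the capable models by coding score.
ranked = capable.merge(benchmarks, on=["provider", "model"])
ranked = ranked.sort_values("humaneval_score", ascending=False)
print(ranked[["provider", "model", "humaneval_score", "overall_tier"]])

# Tier analysis: mean benchmark scores per tier (S+ down to B).
print(benchmarks.groupby("overall_tier")[
    ["mmlu_score", "humaneval_score", "math_score", "arena_elo"]
].mean())
```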
## Key Insights

- Only Google Gemini supports all 15 features (vision, search, embeddings, fine-tuning, code execution)
- DeepSeek offers a 90% cached input discount — massive savings for repetitive workloads (see the arithmetic sketch after this list)
- Groq has the highest free-tier RPM (30) with the lowest latency
- S+ tier models (o1, Claude Opus 4, Gemini 2.5 Pro, DeepSeek R1) all score above 89 on MMLU
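How much a cached-input discount saves depends on the cache hit rate. A small sketch of the blended-price arithmetic; the 90% discount comes from the insight above, while the base price and hit rate are hypothetical:

```python
def effective_input_price(base: float, discount: float, hit_rate: float) -> float:
    """Blended per-token input price when a fraction `hit_rate` of input
    tokens is served from the cache at `discount` off the base price."""
    return base * (1 - discount * hit_rate)

# Hypothetical numbers: $1.00 per 1M input tokens, 90% cached-input
# discount, 80% of input tokens hitting the cache.
print(effective_input_price(1.00, 0.90, 0.80))  # 0.28 -> a 72% saving
```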
## Related

- LLM Rankings — live leaderboard
- AI Tools Pricing Dataset — 104 AI tools
- Kaggle: Full Pricing Data
- GitHub Open Data
## License

CC BY 4.0 — ComparEdge