# Dataset Card for AI Helps Finding Best Merging LLMs

## Dataset Summary
AI Helps Finding Best Merging LLMs is a prompt-response comparison dataset created by manually submitting the same user-written evaluation template to multiple LLM applications and collecting their responses.
The creator and founder of WithIn Us Ai (Guy Edward DuGan II), known as gss1147, wrote a structured ranking template and fed it to each LLM individually in its own app environment. The template asks models to identify top open-source, fine-tunable LLMs across several categories, including:
- top LLMs by parameter class
- top models trained or fine-tuned on highly respected datasets
- top models trained on many datasets
- best models for Mixture-of-Experts merges
- “best of the best” model recommendations
This makes the dataset useful for studying how different LLMs interpret the same research prompt, what models they recommend, how consistent their rankings are, and how much overlap or disagreement exists across systems.
## Dataset Creation

### Curation Rationale
This dataset was created to compare how multiple LLMs respond to the same detailed research prompt about:
- open-source LLM quality
- context-window requirements
- fine-tunability and trainability
- benchmark strength
- candidate models for merging and MoE workflows
The core design principle is same prompt, different model/app, allowing side-by-side review of model recommendations and reasoning styles.
### Source Data
The source prompt template was written by the dataset creator. The uploaded template includes requirements such as:
- models must be open source and free to use
- models must have context windows from 128k to unlimited
- models must be fine-tunable/trainable
- models must have high benchmarks compared to closed models
It also defines the ranking buckets and comparison sections used across all collected outputs.
### Data Collection Process
The collection process is:
- A single template was written by the dataset creator.
- That same template was entered manually into different LLM apps.
- Each app/model produced its own answer.
- Those answers were saved as documents and gathered into this dataset.
Because each response comes from a different app or model environment, the dataset is best understood as a comparative output corpus rather than a normalized benchmark table.
### Who Curated the Dataset
Curated by gss1147.
### What the Template Asked the Models
The source template asks models to rank LLMs in three size classes:
- 1 Billion & Under
- 3 Billion to 5 Billion
- 7 Billion to 10 Billion
It also asks for:
- 20 LLMs fine-tuned or pre-trained on the most respected datasets
- 20 LLMs fine-tuned or pre-trained on the most datasets
- Top 5 LLMs best suited for Mixture-of-Experts merges
- Top 5 “best of the best” LLMs
These instructions are explicitly present in the creator’s template.
## Supported Tasks and Use Cases
This dataset is useful for:
- LLM output comparison
- Prompt consistency studies
- Model recommendation analysis
- Research on ranking agreement/disagreement across LLMs
- LLM self-reported knowledge comparison
- Model-merging research support
- Extracting candidate open-source models for follow-up validation
Possible downstream uses:
- compare which models are repeatedly recommended across assistants
- measure ranking stability across apps
- identify hallucinated versus plausible model suggestions
- build a retrieval layer over multi-LLM research responses
- convert outputs into a structured comparison table
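As a sketch of the first two downstream uses, the collected answers can be tallied to see which models are recommended by more than one assistant. The app and model names below are illustrative placeholders, not values from the dataset:

```python
from collections import Counter

# Hypothetical recommendations extracted from three apps' responses.
# Both the app keys and the model names are made up for illustration.
responses = {
    "app_a": ["Mistral-7B", "Phi-3-mini", "Llama-3-8B"],
    "app_b": ["Mistral-7B", "Qwen2-7B", "Llama-3-8B"],
    "app_c": ["Mistral-7B", "Gemma-7B"],
}

# Count how many apps recommend each model.
counts = Counter(model for models in responses.values() for model in models)

# Models recommended by at least two apps are stronger consensus candidates.
consensus = sorted(m for m, c in counts.items() if c >= 2)
print(consensus)  # ['Llama-3-8B', 'Mistral-7B']
```

The same tally, run per ranking bucket, gives a rough measure of ranking agreement across systems.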
## Dataset Structure

### Data Instances
Each instance is best thought of as one collected answer from one LLM/app in response to the same template.
Example conceptual structure:
```json
{
  "prompt_template": "List your top LLMs for each of the 3 weight classes...",
  "source_app": "name of LLM app or platform",
  "source_model": "name of responding model if known",
  "response_text": "full model answer",
  "response_format": "pdf or extracted text",
  "topic": "open-source LLM ranking and merge candidate selection"
}
```
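Since the structure above is conceptual rather than a guaranteed schema, a minimal validation sketch like the following can flag records with missing or empty fields before analysis. The field names are taken from the example; nothing else is assumed about the data:

```python
# Required fields mirror the conceptual instance structure above;
# they are an assumption, not an enforced dataset schema.
REQUIRED_FIELDS = {
    "prompt_template", "source_app", "source_model",
    "response_text", "response_format", "topic",
}

def validate_instance(record: dict) -> list[str]:
    """Return the sorted names of required fields that are missing or empty."""
    return sorted(field for field in REQUIRED_FIELDS if not record.get(field))

# Illustrative record filled with placeholder values.
example = {
    "prompt_template": "List your top LLMs for each of the 3 weight classes...",
    "source_app": "example-app",
    "source_model": "unknown",
    "response_text": "full model answer",
    "response_format": "pdf or extracted text",
    "topic": "open-source LLM ranking and merge candidate selection",
}
print(validate_instance(example))  # [] when every field is present
```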