andylizf committed · verified
Commit b54f18b · 1 Parent(s): 67f752c

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +20 -151
README.md CHANGED
@@ -2,174 +2,43 @@
  license: apache-2.0
  task_categories:
  - text-generation
- - question-answering
  language:
  - en
  tags:
  - code
- - benchmark
- - evaluation
  - algorithms
- - systems
- - machine-learning
- - security
- - optimization
  size_categories:
- - 100K<n<1M
- pretty_name: Frontier-CS
  ---
 
- <p align="">
- <a href="https://frontier-cs.org">
- <img src="assets/logo.png" alt="Frontier-CS Logo" width="2000"/>
- </a>
- </p>

- <h2 align="center">
- Evolving Challenges for Evolving Intelligence
- </h2>

- <p align="center">
- <a href="https://frontier-cs.org"><img src="https://img.shields.io/badge/Website-frontier--cs.org-orange?logo=googlechrome" alt="Website"></a>
- <a href="https://frontier-cs.org/leaderboard"><img src="https://img.shields.io/badge/Leaderboard-View_Rankings-purple?logo=trophy" alt="Leaderboard"></a>
- <a href="https://discord.gg/k4hd2nU4UE"><img src="https://img.shields.io/badge/Discord-Join_Community-5865F2?logo=discord&logoColor=white" alt="Discord"></a>
- <a href="https://deepwiki.com/FrontierCS/Frontier-CS"><img src="https://img.shields.io/badge/DeepWiki-Documentation-blue?logo=bookstack&logoColor=white" alt="DeepWiki"></a>
- <br>
- <img src="https://img.shields.io/badge/Research_Problems-63-blue" alt="Research Problems">
- <img src="https://img.shields.io/badge/Algorithmic_Problems-118-green" alt="Algorithmic Problems">
- </p>

- ## What is Frontier-CS?

- **Frontier-CS** is an _unsolved_, _open-ended_, _verifiable_, and _diverse_ benchmark for evaluating AI on challenging computer science problems.

- Think of it as an "exam" for AI, but instead of easy textbook questions, we give problems that are genuinely difficult: ones that researchers struggle with, that have no known optimal solutions, or that require deep expertise to even attempt.

- ## Why Frontier-CS?
-
- Current benchmarks are becoming too easy. Models score 90%+ on many existing coding benchmarks, but that doesn't mean they can actually do useful research or solve real-world engineering challenges.
-
- **Frontier-CS is different:**
-
- | | Traditional Benchmarks | Frontier-CS |
- | ---------- | ------------------------------------------ | ------------------------------------------------------- |
- | Difficulty | Often saturated with evolving intelligence | _Unsolved_: no solution has achieved perfect scores |
- | Problems | Textbook-style, known solutions | _Open-ended_ research & optimization challenges |
- | Evaluation | Binary pass-or-fail | _Verifiable_ continuous scoring, always room to improve |
- | Scope | Usually one domain | _Diverse_: systems, ML, algorithms, security, and more |
-
- **[Leaderboard →](https://frontier-cs.org/leaderboard)** | Browse example problems at [frontier-cs.org](https://frontier-cs.org)
-
- ## Getting Started
-
- ### Installation
-
- ```bash
- git clone https://github.com/FrontierCS/Frontier-CS.git
- cd Frontier-CS
-
- # Install dependencies (using uv, recommended)
- uv sync
-
- # Or with pip:
- pip install -e .
- ```
-
- ### Try it yourself
-
- Here's [Algorithmic Problem 0](algorithmic/problems/0/statement.txt) - try to beat GPT-5!
-
- ```bash
- # Start the judge server
- cd algorithmic && docker compose up -d
-
- # Run the example solution (Human Expert Solution)
- frontier-eval --algorithmic 0 problems/0/examples/reference.cpp
-
- # Run the example solution (GPT-5 Thinking Solution)
- frontier-eval --algorithmic 0 problems/0/examples/gpt5.cpp
-
- # Try your own solution!
- frontier-eval --algorithmic 0 <your_solution.cpp>
- ```
-
- <p align="center">
- <img src="assets/teaser.png" alt="Example Problem" width="800"/>
- </p>
-
- ### Research Problems
-
- ```bash
- # List all problems
- frontier-eval --list
-
- # Evaluate a generated solution locally for the flash_attn problem (requires Docker)
- frontier-eval flash_attn <your_solution.py>
-
- # Evaluate on cloud (requires SkyPilot)
- frontier-eval flash_attn <your_solution.py> --skypilot
-
- ```
-
- See [research/README.md](research/README.md) for full documentation.
- ### Algorithmic Problems
-
- ```bash
- # Start the judge server
- cd algorithmic && docker compose up -d
-
- # Evaluate a solution
- frontier-eval --algorithmic 1 <your_solution.cpp>
- ```
- #### Raw Score
- Frontier-CS supports unbounded scoring for algorithmic problems, enabling open-ended evaluation compatible with algorithm evolution frameworks such as OpenEvolve.
-
- ```bash
- # Get unbounded score (without clipping to 100)
- frontier-eval --algorithmic --unbounded 1 <your_solution.cpp>
- ```
-
- #### Note
- 1. We currently support C++17 only for algorithmic problem solutions.
- 2. Reference solutions and hidden tests are withheld; full evaluation and leaderboard inclusion require submission.
-
- See [algorithmic/README.md](algorithmic/README.md) for full documentation.
-
- ### Python API
 
  ```python
- from frontier_cs import FrontierCSEvaluator

- evaluator = FrontierCSEvaluator()
-
- # Evaluate a research problem
- result = evaluator.evaluate("research", problem_id="flash_attn", code=my_code)
- print(f"Score: {result.score}")
-
- # Evaluate an algorithmic problem
- result = evaluator.evaluate("algorithmic", problem_id=1, code=cpp_code)
- print(f"Score: {result.score}")
-
- # Get unbounded score for algorithmic problems
- result = evaluator.evaluate("algorithmic", problem_id=1, code=cpp_code, unbounded=True)
- print(f"Score (bounded): {result.score}")
- print(f"Score (unbounded): {result.score_unbounded}")
  ```

- ## Submitting Results
-
- We release partial test cases so you can develop and debug locally. For full evaluation and leaderboard inclusion, submit your solutions to qmang@berkeley.edu, wenhao.chai@princeton.edu, or zhifei.li@berkeley.edu, following the instructions in [SUBMIT.md](SUBMIT.md).
-
- Questions? Join our [Discord](https://discord.gg/k4hd2nU4UE).
-
- ## Acknowledgments
-
- Some problems are adapted from [ALE-bench](https://github.com/SakanaAI/ALE-Bench) and [AI-Driven Research for Systems (ADRS)](https://ucbskyadrs.github.io/).
-
- ## Citing Us
-
- If you use Frontier-CS in your research, please cite:
-
- ```bibtex
-
- ```
 
  license: apache-2.0
  task_categories:
  - text-generation
  language:
  - en
  tags:
  - code
  - algorithms
+ - competitive-programming
+ - research
  size_categories:
+ - n<1K
  ---

+ # Frontier-CS Dataset

+ A benchmark dataset for evaluating AI systems on challenging computer science problems.

+ ## Dataset Description

+ This dataset contains 192 problems across two categories:
+ - **Algorithmic**: 129 competitive programming problems with automated judging
+ - **Research**: 63 open-ended research problems

+ ## Dataset Structure

+ Each problem has the following fields:
+ - `problem_id`: Unique identifier for the problem
+ - `category`: Either "algorithmic" or "research"
+ - `statement`: The problem statement text
+ - `config`: YAML configuration for evaluation

+ ## Usage

  ```python
+ from datasets import load_dataset

+ dataset = load_dataset("FrontierCS/Frontier-CS")
  ```
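To inspect individual problems after loading, a minimal sketch like the one below works under the assumption that the data ships as a single `train` split exposing the fields listed under Dataset Structure (`problem_id`, `category`, `statement`, `config`); the split name is an assumption, so check the dataset viewer for the actual split and column names.

```python
from datasets import load_dataset

# Assumption: a single "train" split; adjust if the Hub viewer shows otherwise.
ds = load_dataset("FrontierCS/Frontier-CS", split="train")

# Keep only the competitive-programming problems and peek at one of them.
algorithmic = ds.filter(lambda row: row["category"] == "algorithmic")
example = algorithmic[0]
print(example["problem_id"])
print(example["statement"][:300])  # first few hundred characters of the statement
```

Filtering on `category` is just one way to separate the 129 algorithmic problems from the 63 research problems before passing them to an evaluation harness.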
+ ## License

+ Apache 2.0