shank committed on
Commit
865526d
·
1 Parent(s): f42fa9e

Updated readme

Files changed (1)
  1. README.md +49 -1210
README.md CHANGED
@@ -1,5 +1,5 @@
1
  ---
2
- title: AgentDebugger Env
3
  emoji: 📈
4
  colorFrom: yellow
5
  colorTo: green
@@ -10,1254 +10,93 @@ pinned: false
10
 
11
  # AgentDebuggerEnv 🐛
12
 
13
- > An OpenEnv-compliant environment where AI agents debug broken code through iterative hypothesis-test-fix cycles — benchmarking genuine agentic reasoning, not static code reading.
14
 
15
- ---
16
-
17
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
18
- ## SECTION 0: CONTEXT FOR THE IMPLEMENTING AI
19
- ## Read this completely before writing a single line of code.
20
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
21
-
22
- This project is a submission for the **Meta + PyTorch + HuggingFace OpenEnv Hackathon**. Every implementation decision must be made with the hackathon's judging pipeline in mind. This section explains the full context — what the hackathon is, how judging works, what gets you disqualified, and what wins. Do not skip this.
23
-
24
- ---
25
-
26
- ### 0.1 What is OpenEnv?
27
-
28
- OpenEnv is a standardized interface for building environments where AI agents learn from interaction — the same paradigm as OpenAI Gym, but for real-world tasks instead of games. Every OpenEnv environment must implement exactly three core methods:
-
- - `reset(task_id)` → Returns the initial Observation. The agent's clean starting state.
- - `step(action)` → Agent submits one action → Environment returns (new Observation, Reward, done: bool, info: dict)
- - `state()` → Returns the full internal environment state as a plain dict
-
- These three methods must be exposed as HTTP REST endpoints via a FastAPI server, containerized with Docker, and deployed to a HuggingFace Space. The hackathon's own CLI tool (`openenv validate`) runs automated checks against these endpoints.
35
-
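- Concretely, one full loop over HTTP is just a few calls. A minimal client sketch (assuming a server already running locally on port 8000, per Section 4; the placeholder "fix" simply resubmits the buggy code):
-
- ```python
- import requests
-
- BASE = "http://localhost:8000"
-
- obs = requests.post(f"{BASE}/reset", json={"task_id": "easy"}).json()
- result = requests.post(f"{BASE}/step", json={
-     "action_type": "submit_fix",
-     "fixed_code": obs["buggy_code"],  # placeholder fix, for illustration only
-     "hypothesis": "off-by-one in the loop bound",
- }).json()
- print(result["reward"]["step_reward"], result["done"])
- print(requests.get(f"{BASE}/state").json())
- ```
-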
36
- ---
37
-
38
- ### 0.2 Why This Hackathon Matters — The Stakes
39
-
40
- This hackathon is organized by **Meta AI** and sponsored by **PyTorch** and **HuggingFace** — three of the most influential organizations in open-source AI research. The judges are not students or generalist engineers. They are:
41
-
42
- - Meta AI researchers who actively publish on LLM agents, tool use, and reasoning
43
- - HuggingFace engineers who maintain the open-source model ecosystem
44
- - PyTorch team members who work on the training infrastructure frontier models run on
45
-
46
- A strong submission here has direct, real consequences:
47
-
48
- 1. **Visibility**: Winning or placing gets your environment featured on HuggingFace Hub, adopted by researchers, and cited in future benchmark papers. This is the kind of credibility that opens research internship doors at Meta, Google DeepMind, and similar labs.
49
- 2. **Research impact**: A well-designed environment with clean graders gets used by the community for years. SWE-bench, which inspired this environment, now appears in nearly every LLM agent paper.
50
- 3. **Career signal**: A national-level hackathon win sponsored by Meta is a tier-1 resume signal for a third-year CS student, especially when the project directly connects to ML agent research.
51
-
52
- The judges will read your environment design the way a reviewer reads a research paper. They will check: Is this domain real? Is the grader honest? Does the hard task actually challenge frontier models? A shallow or broken submission will be obvious to them immediately.
53
-
54
- ---
55
-
56
- ### 0.3 How Judging Works — Every Phase Explained
57
-
58
- **Phase 1: Automated Validation (Pass/Fail Gate)**
59
-
60
- This phase is entirely automated. You either pass all checks or you are disqualified. No partial credit. The checks are:
61
-
62
- - HF Space must be live and return HTTP 200 on a ping to `/health`
63
- - `POST /reset` must return a valid Observation JSON
64
- - `openenv.yaml` must parse correctly and match the spec schema
65
- - `docker build` must complete without errors
66
- - `inference.py` must run end-to-end without crashing and produce a `baseline_results.json` file
67
- - All 3 tasks must be enumerable and each grader must return a float in [0.0, 1.0]
68
-
69
- **Phase 2: Agentic Evaluation (Scored)**
70
-
71
- The hackathon organizers run their own standard agent (Nemotron 3 Super) against your environment. They measure:
72
-
73
- - **Score variance**: Do different agents get meaningfully different scores? If a random agent and GPT-4o get the same score, your graders are broken and this phase fails.
74
- - **Score reproducibility**: Does re-running `inference.py` produce the same scores? Graders must be deterministic.
75
- - **Baseline verification**: They re-run your `inference.py` and check that scores match what you reported.
76
-
77
- **Phase 3: Human Review (Top Submissions Only)**
78
-
79
- Meta and HuggingFace engineers manually review top submissions and score on:
80
- - Real-world utility: Would a real engineering team or research group actually use this?
81
- - Creativity and novelty: Does this environment exist anywhere else?
82
- - Exploit resistance: Can an agent game the grader without actually doing the task?
83
- - Code quality: Is the implementation clean and the environment well-designed?
84
-
85
- ---
86
-
87
- ### 0.4 Disqualification Criteria — Avoid All of These
88
-
89
- | Violation | Why it disqualifies |
90
- |---|---|
91
- | Environment does not deploy or `/health` returns non-200 | Automated ping fails Phase 1 immediately |
92
- | `inference.py` not in root directory | Hard requirement — automated script looks for it there |
93
- | `inference.py` crashes or errors | Phase 1 baseline check fails |
94
- | Graders always return the same score | Phase 2 variance check fails |
95
- | `docker build` fails | Phase 1 Dockerfile check fails |
96
- | Plagiarized or trivially copied existing environment | Phase 3 human review disqualifies |
97
- | Agent can game the grader without doing the task | Phase 3 exploit check disqualifies |
98
-
99
- ---
100
-
101
- ### 0.5 Hard Infrastructure Constraints — Non-Negotiable
102
-
103
- Every constraint below is a hard requirement. Violating any of them causes disqualification or incorrect behavior during automated evaluation:
104
-
105
- ```
106
- inference.py Must be named EXACTLY this. Must be in the ROOT directory. Not in /env/, not in /src/. ROOT.
107
- API_BASE_URL Must be read from os.environ. Never hardcoded.
108
- MODEL_NAME Must be read from os.environ. Never hardcoded.
109
- HF_TOKEN Must be read from os.environ. Never hardcoded.
110
- OpenAI client All LLM calls must use the openai Python library. Not anthropic. Not direct HTTP requests.
111
- 20-minute limit inference.py must complete ALL 3 tasks in under 20 minutes total.
112
- 2 vCPU / 8GB RAM Environment server must run within these limits. NO ML models loaded server-side.
113
- Port 8000 Server must listen on port 8000.
114
- /health endpoint Must exist and return HTTP 200. This is the automated deployment ping.
115
- ```
116
-
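- In code, that means every configurable value is read from `os.environ` and nowhere else; a minimal sketch (the baseline `inference.py` in Section 8 uses `os.environ.get` with local-dev defaults instead):
-
- ```python
- import os
-
- # Hard requirement: configuration comes from the environment, never from literals.
- API_BASE_URL = os.environ["API_BASE_URL"]
- MODEL_NAME = os.environ["MODEL_NAME"]
- HF_TOKEN = os.environ["HF_TOKEN"]
- ```
-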
117
- ---
118
-
119
- ### 0.6 The Biggest Technical Risk: Code Execution Sandbox
120
-
121
- This environment executes agent-generated Python code. This is the most dangerous part of the implementation. The hackathon's automated evaluation will run an LLM agent against your environment — that agent may generate code like `import os; os.system("rm -rf /")` or an infinite loop. If your sandbox does not handle this, the HF Space crashes and you fail Phase 1.
122
-
123
- **The sandbox must implement ALL of the following:**
124
-
125
- 1. **Hard execution timeout**: Every code execution attempt must be killed after a maximum of 10 seconds. Use `subprocess` with `timeout=10`, or `signal.alarm(10)` with a SIGALRM handler.
126
- 2. **Restricted imports**: Remove dangerous builtins before exec. At minimum, block: `os`, `sys`, `subprocess`, `importlib`, `__import__`, `open`, `eval`, `exec`, `compile`.
127
- 3. **Memory limit**: Use `resource.setrlimit(resource.RLIMIT_AS, ...)` to cap memory usage per execution at 256MB.
128
- 4. **No network access**: The executed code must not be able to make network calls. Achieved by the restricted imports above.
129
- 5. **Clean state per attempt**: Each execution must run in a completely fresh namespace. No state leaks between attempts.
130
-
131
- **Implement the sandbox as a separate module** `env/sandbox.py`. Every code execution in the environment must go through this module. Never call `exec()` directly in environment code.
132
-
133
- ---
134
-
135
- ### 0.7 The Second Biggest Risk: SWE-bench Differentiation
136
-
137
- The judges will immediately ask: "How is this different from SWE-bench?" If you cannot answer this through the implementation itself (not just the README), Phase 3 scores suffer.
138
-
139
- **The answer is the iterative feedback loop.** SWE-bench gives an agent a static codebase and measures only the final patch. AgentDebuggerEnv gives the agent a live execution environment and measures the entire debugging trajectory — every hypothesis, every fix attempt, every error observation, every iteration. The reward function provides signal at every step, not just at episode end.
140
-
141
- **Make this difference viscerally obvious in the implementation:**
142
- - The Observation must include `previous_attempts` — a list of every (code_submitted, error_output, hypothesis) triple from this episode
- - The Reward must be non-zero at intermediate steps, not just when tests pass
- - The `info` dict must include `hypothesis_accuracy` — did the agent's stated hypothesis match the actual bug?
- - The hard task must require multiple iterations — a single-shot fix attempt must fail
146
-
147
- ---
148
-
149
- ### 0.8 Scoring Rubric — What the Judges Are Weighing
150
-
151
- | Category | Weight | What wins points |
152
- |---|---|---|
153
- | Real-world utility | 30% | Would Meta's engineering team actually benchmark on this? |
154
- | Task & grader quality | 25% | Are graders deterministic? Does hard task challenge GPT-4o? |
155
- | Environment design | 20% | Dense reward? Clean reset? Well-typed observations? |
156
- | Code quality & spec compliance | 15% | Does openenv validate pass? Does Docker build? |
157
- | Creativity & novelty | 10% | Domain we haven't seen in OpenEnv? Clever mechanics? |
158
-
159
- ---
160
-
161
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
162
- ## SECTION 1: PROJECT OVERVIEW
163
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
164
-
165
- Debugging is one of the highest-leverage cognitive tasks in software engineering. Studies consistently show that developers spend 35–50% of their time debugging — more than writing new code. Unlike code review (which is static reading), debugging requires a genuine hypothesis-test-fix feedback loop: form a theory about what's wrong, attempt a fix, observe what breaks next, update the theory, repeat.
166
-
167
- Current LLM agents fail at debugging in measurable, specific ways:
168
- - They generate plausible-looking fixes that don't address the root cause
169
- - They ignore new error information and repeat the same fix attempt
170
- - They follow misleading error messages to the wrong function (red herring failures)
171
- - They cannot detect bugs that only manifest under specific execution conditions
172
-
173
- **AgentDebuggerEnv** makes all four of these failures measurable and scorable through a live, iterative execution environment. The agent submits code fixes, the environment executes them in a sandbox, returns the actual test output, and the agent must update its hypothesis and try again — exactly like a real developer at a terminal.
174
-
175
- This is not a static QA benchmark. It is a genuine agentic loop.
176
-
177
- ---
178
-
179
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
180
- ## SECTION 2: PROJECT STRUCTURE
181
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
182
-
183
- ```
184
- agentdebugger-env/
- ├── inference.py            # ← MUST BE HERE IN ROOT. Hackathon hard requirement.
- ├── env/
- │   ├── __init__.py
- │   ├── environment.py      # Core OpenEnv class: reset(), step(), state()
- │   ├── models.py           # All Pydantic models: Observation, Action, Reward
- │   ├── sandbox.py          # Code execution sandbox — ALL exec goes through here
- │   ├── server.py           # FastAPI server exposing /reset, /step, /state, /health
- │   ├── tasks/
- │   │   ├── __init__.py
- │   │   ├── registry.py     # Maps task_id → task config + buggy code + test suite
- │   │   ├── task_easy.py    # Task 1: Single function, one clear bug
- │   │   ├── task_medium.py  # Task 2: Three interdependent functions, red herring error
- │   │   └── task_hard.py    # Task 3: Concurrency race condition
- │   └── graders/
- │       ├── __init__.py
- │       ├── base_grader.py  # Abstract base: score(submitted_attempts) → float
- │       ├── grader_easy.py
- │       ├── grader_medium.py
- │       └── grader_hard.py
- ├── tests/
- │   ├── test_environment.py # Unit tests for reset/step/state
- │   ├── test_sandbox.py     # Tests that sandbox correctly blocks dangerous code
- │   └── test_graders.py     # Tests graders return [0.0, 1.0] and are deterministic
- ├── openenv.yaml
- ├── Dockerfile
- ├── requirements.txt
- └── README.md
212
- ```
213
-
214
- ---
215
-
216
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
217
- ## SECTION 3: DATA MODELS (Implement Exactly as Specified)
218
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
219
-
220
- All models must be Pydantic v2 BaseModel subclasses. All fields must have types. No Optional fields without defaults. These exact field names and types are required for `openenv validate` to pass.
221
-
222
- ### 3.1 Observation
223
-
224
- ```python
225
- from pydantic import BaseModel
- from typing import List, Dict, Optional
-
- class FixAttempt(BaseModel):
-     attempt_number: int        # 1-indexed attempt number this episode
-     code_submitted: str        # The full code the agent submitted for this attempt
-     hypothesis: str            # Agent's stated hypothesis about the bug before this attempt
-     execution_output: str      # Full stdout + stderr from running the test suite
-     tests_passed: int          # Number of tests that passed after this fix
-     tests_total: int           # Total number of tests in the suite
-     execution_time_ms: int     # How long the sandbox took to run (milliseconds)
-     timed_out: bool            # Whether this attempt hit the 10-second sandbox timeout
-
- class Observation(BaseModel):
-     # Task context — fixed for the episode
-     task_id: str               # "easy" | "medium" | "hard"
-     task_description: str      # Plain English description of what the code is supposed to do
-     buggy_code: str            # The original broken code (shown once at reset, always available)
-     test_suite: str            # The full test suite code (agent can read this to understand requirements)
-     initial_error_output: str  # Output of running the test suite against the buggy code at reset()
-
-     # Dynamic state — changes each step
-     current_code: str          # The most recent version of the code (after agent's last fix attempt)
-     current_error_output: str  # Output of running tests against current_code
-     tests_passed: int          # Tests passing on current_code
-     tests_total: int           # Total tests in suite
-     previous_attempts: List[FixAttempt]  # Full history of all fix attempts this episode
-
-     # Budget tracking
-     attempts_remaining: int    # How many more fix submissions are allowed
-     max_attempts: int          # Total attempt budget for this task
-
-     # Step tracking
-     step_number: int           # Current step number (increments on every action)
-     max_steps: int             # Total step budget (includes both fix and query actions)
-     done: bool                 # Whether the episode has ended
-
-     # Scoring signal (shown to agent for learning)
-     score_estimate: float      # Running estimate of current grader score (0.0–1.0)
-     hint_used: bool            # Whether the agent has used their one hint this episode
265
- ```
266
-
267
- ### 3.2 Action
268
-
269
- The agent submits exactly one Action per step. There are three action types. These are mutually exclusive.
270
-
271
- ```python
272
- class Action(BaseModel):
-     action_type: str  # "submit_fix" | "query_context" | "give_up"
-
-     # ── submit_fix ────────────────────────────────────────────────────────────
-     # Used when action_type == "submit_fix"
-     # This is the primary action. The agent submits a complete, corrected version
-     # of the code. The environment runs it against the test suite and returns results.
-     fixed_code: Optional[str] = None   # Complete corrected code (not a diff — full file)
-     hypothesis: Optional[str] = None   # Agent's stated hypothesis about the bug.
-                                        # REQUIRED even with submit_fix. Used for scoring.
-
-     # ── query_context ─────────────────────────────────────────────────────────
-     # Used when action_type == "query_context"
-     # Agent requests additional context without spending a fix attempt.
-     # Each episode has ONE free query. Additional queries cost -0.05 reward each.
-     # Does NOT count against attempts_remaining. DOES count against max_steps.
-     query_type: Optional[str] = None    # "function_signature" | "related_code" |
-                                         # "error_explanation" | "test_details"
-     query_target: Optional[str] = None  # What to query. E.g. function name, line number.
-
-     # ── give_up ───────────────────────────────────────────────────────────────
-     # Used when action_type == "give_up"
-     # Agent explicitly surrenders. Ends the episode. Better than truncation.
-     # Triggers grader with current best attempt scores.
-     final_diagnosis: Optional[str] = None  # Agent's final explanation of what the bug was
297
- ```
298
-
299
- #### Action Rules (implement exactly these — they affect grader scores):
300
-
301
- | Rule | Implementation detail |
302
- |---|---|
303
- | `submit_fix` without `hypothesis` | Return step_reward = -0.1, error in info["error"], do NOT execute the code, do NOT count the attempt |
304
- | `submit_fix` with syntactically invalid Python | Execute it anyway (sandbox will catch the SyntaxError), count the attempt, return the SyntaxError as execution_output |
305
- | `query_context` first use | Free. Return requested context in info["query_result"]. No reward change. |
306
- | `query_context` subsequent uses | Return context but apply -0.05 to step_reward. Still free of attempts_remaining. |
307
- | `query_context` with invalid query_type | Return step_reward = -0.05, error in info, do not spend the free query |
308
- | `give_up` | Set done=True immediately. Run grader on best attempt. Return grader_score in final Reward. |
309
- | Exceeding max_steps | Force done=True. Apply -0.20 truncation penalty. Run grader on best attempt. |
310
- | Exceeding attempts_remaining | Refuse the fix: return step_reward = -0.15, error in info["error"]. Agent can still query_context or give_up. |
311
-
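- The `submit_fix` rows of this table reduce to a few guard clauses. A sketch (the return shape is simplified and illustrative):
-
- ```python
- def handle_submit_fix(action: Action, attempts_remaining: int) -> dict:
-     """Validate a submit_fix action before touching the sandbox (sketch)."""
-     if not action.hypothesis:
-         # Do not execute the code and do not count the attempt.
-         return {"step_reward": -0.10, "error": "hypothesis is required", "execute": False}
-     if attempts_remaining <= 0:
-         # Agent can still query_context or give_up.
-         return {"step_reward": -0.15, "error": "attempt budget exhausted", "execute": False}
-     # Syntactically invalid code is still executed: the sandbox surfaces the
-     # SyntaxError as execution_output and the attempt counts.
-     return {"execute": True}
- ```
-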
312
- ### 3.3 Reward
313
-
314
- ```python
315
- class Reward(BaseModel):
-     step_reward: float           # Reward for THIS step only. Range: -1.0 to +1.0
-     cumulative_reward: float     # Sum of all step_rewards this episode
-     grader_score: float          # 0.0 during episode. Set ONLY on terminal step (done=True).
-                                  # This is the official score used for ranking. Range: 0.0–1.0
-     breakdown: Dict[str, float]  # Itemized components. Always populate this — used for debugging
-                                  # and for the Phase 2 variance analysis. Example:
-                                  # {"test_progress": 0.2, "hypothesis_match": 0.1,
-                                  #  "efficiency_bonus": 0.05, "false_fix_penalty": 0.0}
324
- ```
325
-
326
- ---
327
-
328
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
329
- ## SECTION 4: ENVIRONMENT API (FastAPI Endpoints)
330
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
331
-
332
- ### POST /reset
333
-
334
- Starts a completely fresh episode. Clears all state from any previous episode. Returns initial Observation.
335
-
336
- ```
337
- Request body: { "task_id": "easy" } (string, required)
338
- Response: Observation JSON
339
- HTTP status: 200 on success, 400 on invalid task_id
340
- ```
341
-
342
- The Observation returned by reset() must include:
343
- - `buggy_code`: the full broken source file
344
- - `test_suite`: the full test file
345
- - `initial_error_output`: the output of running the test suite against buggy_code RIGHT NOW at reset time (run it in the sandbox on reset, cache the result)
346
- - `current_code` == `buggy_code` (no fixes applied yet)
347
- - `previous_attempts` == [] (empty list)
348
- - `attempts_remaining` == task's max_attempts config value
349
- - `tests_passed` == however many tests pass on the buggy code (may be > 0 for medium/hard tasks)
350
- - `done` == False
351
-
352
- ### POST /step
353
-
354
- Submit one action. Advance the environment by one step.
355
-
356
- ```
357
- Request body: Action JSON
358
- Response: { "observation": Observation, "reward": Reward, "done": bool, "info": dict }
359
- HTTP status: 200 always (never 500 — handle all errors gracefully and return them in info["error"])
360
- ```
361
-
362
- The `info` dict must always contain:
363
- ```python
364
- {
-     "step_number": int,
-     "attempts_used": int,
-     "attempts_remaining": int,
-     "tests_passed": int,
-     "tests_total": int,
-     "hypothesis_matched_bug": bool | None,  # None until episode ends or grader has signal
-     "query_result": str | None,             # Populated when action_type == "query_context"
-     "error": str | None,                    # Human-readable error message if action was invalid
-     "execution_time_ms": int | None,        # Sandbox execution time for this attempt
-     "timed_out": bool                       # Whether sandbox timed out this attempt
- }
376
- ```
377
-
378
- ### GET /state
379
-
380
- Returns the full internal environment state. Required by OpenEnv spec.
381
-
382
- ```
383
- Response: {
-     "task_id": str,
-     "step_number": int,
-     "attempts_used": int,
-     "current_tests_passed": int,
-     "current_tests_total": int,
-     "best_tests_passed": int,     # Best test pass count achieved in any attempt this episode
-     "all_hypotheses": List[str],  # All hypotheses submitted so far
-     "cumulative_reward": float,
-     "done": bool,
-     "hint_used": bool
- }
395
- ```
396
-
397
- ### GET /health
398
-
399
- **This endpoint is critical. The hackathon's automated deployment check pings this URL. If it returns anything other than HTTP 200, Phase 1 fails immediately.**
400
-
401
- ```
402
- Response: { "status": "ok", "environment": "agentdebugger-env", "version": "1.0.0" }
403
- HTTP status: 200 always
404
- ```
405
-
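- Minimal FastAPI wiring for the four endpoints, as a sketch (`DebuggerEnvironment` is the hypothetical class that `env/environment.py` will define; the 400-on-bad-task_id case and error handling are omitted):
-
- ```python
- from fastapi import FastAPI
-
- from env.environment import DebuggerEnvironment  # hypothetical import path
- from env.models import Action
-
- app = FastAPI()
- env = DebuggerEnvironment()  # single-session environment instance
-
- @app.post("/reset")
- def reset(body: dict):
-     return env.reset(body["task_id"])
-
- @app.post("/step")
- def step(action: Action):
-     observation, reward, done, info = env.step(action)
-     return {"observation": observation, "reward": reward, "done": done, "info": info}
-
- @app.get("/state")
- def state():
-     return env.state()
-
- @app.get("/health")
- def health():
-     return {"status": "ok", "environment": "agentdebugger-env", "version": "1.0.0"}
- ```
-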
406
- ---
407
-
408
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
409
- ## SECTION 5: SANDBOX (Critical — Implement First)
410
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
411
-
412
- The sandbox is the most security-critical component. Every `submit_fix` action goes through this. Implement it as `env/sandbox.py` before implementing anything else. It must be correct before the environment goes live.
413
-
414
- ```python
415
- # env/sandbox.py
- # ALL code execution in the environment must go through execute_code().
- # Never call exec() or subprocess directly anywhere else.
-
- import subprocess
- import tempfile
- import os
- import sys
- from typing import Tuple
-
- BLOCKED_IMPORTS = [
-     "os", "sys", "subprocess", "socket", "importlib", "shutil",
-     "pathlib", "glob", "pickle", "shelve", "dbm", "sqlite3",
-     "ftplib", "http", "urllib", "requests", "httpx", "asyncio",
-     "multiprocessing", "threading",  # threading allowed only in task_hard — see below
-     "ctypes", "cffi", "resource", "signal", "mmap", "gc"
- ]
-
- EXECUTION_TIMEOUT_SECONDS = 10
- MEMORY_LIMIT_MB = 256
-
- def execute_code(code: str, test_code: str, allow_threading: bool = False) -> Tuple[str, bool, int]:
-     """
-     Execute code + test_code in a sandboxed subprocess.
-
-     Returns:
-         (output: str, timed_out: bool, execution_time_ms: int)
-
-     The output contains both stdout and stderr merged, exactly as a developer
-     would see in their terminal. This is what gets returned in the Observation.
-
-     Implementation requirements:
-     1. Write code + test_code to a temporary file
-     2. Run it in a subprocess with timeout=EXECUTION_TIMEOUT_SECONDS
-     3. Capture stdout + stderr merged (subprocess.PIPE with stderr=subprocess.STDOUT)
-     4. Kill the subprocess if it exceeds timeout
-     5. Return the output, whether it timed out, and elapsed time in ms
-     6. Clean up temp files in a finally block — always
-
-     The allow_threading flag is True ONLY for task_hard, which intentionally
-     uses threading to create the race condition. For easy and medium tasks,
-     threading is in BLOCKED_IMPORTS.
-
-     Blocking mechanism: Prepend a validation script to the temp file that
-     checks for blocked imports using AST parsing before exec. If a blocked
-     import is detected, print an error and exit(1) before running any code.
-     Use ast.parse() + ast.walk() to find ast.Import and ast.ImportFrom nodes.
-     """
-     pass  # Implement this
464
- ```
465
-
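- A minimal sketch of what `execute_code` could look like, reusing `BLOCKED_IMPORTS` and `EXECUTION_TIMEOUT_SECONDS` from the skeleton above. It deviates from the docstring in one way for brevity: the AST import check runs in-process before the subprocess is spawned, rather than as a prepended validator script, and the `resource`-based memory cap is left as a comment:
-
- ```python
- import ast
- import os
- import subprocess
- import sys
- import tempfile
- import time
-
- def execute_code(code, test_code, allow_threading=False):
-     blocked = set(BLOCKED_IMPORTS) - ({"threading"} if allow_threading else set())
-     try:
-         tree = ast.parse(code + "\n" + test_code)
-     except SyntaxError:
-         tree = None  # let the subprocess surface the SyntaxError as real output
-     if tree is not None:
-         for node in ast.walk(tree):
-             if isinstance(node, ast.Import):
-                 names = {alias.name.split(".")[0] for alias in node.names}
-             elif isinstance(node, ast.ImportFrom):
-                 names = {(node.module or "").split(".")[0]}
-             else:
-                 continue
-             hit = blocked & names
-             if hit:
-                 return (f"BlockedImportError: {sorted(hit)} not allowed", False, 0)
-     path = None
-     try:
-         with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
-             f.write(code + "\n\n" + test_code)
-             path = f.name
-         start = time.monotonic()
-         try:
-             # A per-execution memory cap would go here (resource.setrlimit via preexec_fn).
-             proc = subprocess.run(
-                 [sys.executable, path],
-                 stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
-                 timeout=EXECUTION_TIMEOUT_SECONDS, text=True,
-             )
-             return (proc.stdout, False, int((time.monotonic() - start) * 1000))
-         except subprocess.TimeoutExpired:
-             return ("[killed: exceeded timeout]", True, EXECUTION_TIMEOUT_SECONDS * 1000)
-     finally:
-         if path:
-             os.unlink(path)  # always clean up the temp file
- ```
-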
466
- **Sandbox test cases you must write in `tests/test_sandbox.py`:**
467
-
468
- ```python
469
- from env.sandbox import execute_code  # module under test
-
- def test_timeout_enforcement():
-     # Code with infinite loop must return timed_out=True within 11 seconds
-     code = "while True: pass"
-     output, timed_out, _ = execute_code(code, "")
-     assert timed_out == True
-
- def test_os_import_blocked():
-     code = "import os; os.system('echo pwned')"
-     output, timed_out, _ = execute_code(code, "")
-     assert "pwned" not in output
-
- def test_sys_import_blocked():
-     code = "import sys; sys.exit(0)"
-     output, _, _ = execute_code(code, "")
-     assert "blocked" in output.lower() or "import" in output.lower()
-
- def test_clean_code_runs():
-     code = "def add(a, b): return a + b"
-     test = "assert add(2, 3) == 5\nprint('PASSED')"
-     output, timed_out, _ = execute_code(code, test)
-     assert "PASSED" in output
-     assert timed_out == False
-
- def test_syntax_error_returns_output():
-     code = "def broken(: pass"  # deliberately invalid syntax
-     output, timed_out, _ = execute_code(code, "")
-     assert "SyntaxError" in output
-     assert timed_out == False
497
- ```
498
 
499
  ---
500
 
501
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
502
- ## SECTION 6: REWARD FUNCTION
503
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
504
-
505
- The reward function must provide dense signal throughout the episode. An RL agent must be able to learn from intermediate steps, not just the final outcome. Every step must return a non-trivial step_reward.
506
-
507
- ### 6.1 Step-Level Rewards
508
-
509
- | Event | Reward | Notes |
510
- |---|---|---|
511
- | Fix attempt increases tests passing (e.g. 3→5 of 8) | +0.15 × (new_passed - prev_passed) / total | Scaled progress reward |
- | Fix attempt decreases tests passing | -0.10 × (prev_passed - new_passed) / total | Regression penalty |
513
- | Fix attempt makes no change to passing count | -0.05 | Stagnation penalty |
514
- | All tests pass (episode solved) | +0.50 | Major bonus on top of progress reward |
515
- | Hypothesis matches actual bug (verified at end) | +0.10 | Rewards correct reasoning, not just lucky fixes |
516
- | Hypothesis is completely wrong direction | -0.05 | Penalizes random guessing |
517
- | Fix attempt times out in sandbox | -0.10 | Penalizes infinite loops in submitted code |
518
- | Submit fix without hypothesis field | -0.10 | Hypothesis is required β€” see Action rules |
519
- | First `query_context` use | 0.00 | Free |
520
- | Subsequent `query_context` uses | -0.05 each | Diminishing returns on hints |
521
- | `give_up` action | 0.00 step reward | Grader runs on best attempt |
522
- | Episode truncated (max_steps exceeded) | -0.20 | Penalizes indecision |
523
-
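- The fix-attempt rows of this table reduce to a small pure function. A sketch (names illustrative):
-
- ```python
- def fix_attempt_reward(prev_passed: int, new_passed: int, total: int, timed_out: bool) -> float:
-     """Step reward for one submit_fix, transcribed from the table above."""
-     if timed_out:
-         return -0.10
-     if new_passed > prev_passed:
-         reward = 0.15 * (new_passed - prev_passed) / total
-     elif new_passed < prev_passed:
-         reward = -0.10 * (prev_passed - new_passed) / total
-     else:
-         reward = -0.05  # stagnation penalty
-     if new_passed == total:
-         reward += 0.50  # episode-solved bonus on top of progress reward
-     return reward
- ```
-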
524
- ### 6.2 Episode-Level Grader Score
525
-
526
- The grader runs ONLY when done=True (either tests all pass, agent gives up, or max_steps exceeded). It produces the official `grader_score` float in [0.0, 1.0].
527
-
528
- ```
529
- grader_score = test_pass_ratio       (weight: 0.60)
-              + efficiency_bonus      (weight: 0.20)
-              + hypothesis_accuracy   (weight: 0.15)
-              + early_solve_bonus     (weight: 0.05)
-
- where:
-
- test_pass_ratio     = best_tests_passed / tests_total
-                       (best across ALL attempts this episode, not just final)
-
- efficiency_bonus    = max(0, (max_attempts - attempts_used) / max_attempts) × 0.20
-                       (reward for solving with fewer attempts)
-
- hypothesis_accuracy = fraction of submitted hypotheses that correctly identified
-                       the bug location (correct function name mentioned) × 0.15
-
- early_solve_bonus   = 0.05 if all tests pass AND attempts_used <= ceil(max_attempts / 3)
-                       else 0.0
547
- ```
548
-
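- The formula transcribes directly into a deterministic, pure function. A sketch (simple substring matching stands in for the per-task keyword lists of Section 7):
-
- ```python
- import math
-
- def grade(best_passed: int, total: int, attempts_used: int, max_attempts: int,
-           hypotheses: list[str], bug_location: str, solved: bool) -> float:
-     test_pass_ratio = best_passed / total
-     efficiency_bonus = max(0.0, (max_attempts - attempts_used) / max_attempts) * 0.20
-     matched = sum(1 for h in hypotheses if bug_location.lower() in h.lower())
-     hypothesis_accuracy = (matched / len(hypotheses) if hypotheses else 0.0) * 0.15
-     early = 0.05 if solved and attempts_used <= math.ceil(max_attempts / 3) else 0.0
-     return min(1.0, 0.60 * test_pass_ratio + efficiency_bonus + hypothesis_accuracy + early)
- ```
-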
549
- **Grader score variance guarantee:** A random agent (submits random code each attempt) will score 0.0–0.15 on all tasks. A perfect agent (correct fix on first attempt with correct hypothesis) will score 0.95–1.0. This guarantees the Phase 2 variance check passes.
550
-
551
- ---
552
 
553
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
554
- ## SECTION 7: TASKS
555
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 
556
 
557
- Each task is defined by: buggy_code, test_suite, ground_truth_bug_description, ground_truth_fix, and the keyword(s) that must appear in a correct hypothesis. These are stored as Python dictionaries in each task file and loaded by registry.py.
558
 
559
  ---
560
 
561
- ### Task 1 — Easy: Single Function, One Clear Bug
562
-
563
- **Difficulty:** Easy | **Max attempts:** 5 | **Max steps:** 8
564
- **Expected GPT-4o score:** ~0.85
565
-
566
- **Scenario:** A utility module for a data processing pipeline. One function has a bug that produces a clear, informative error message pointing directly at the problem. One to two fix iterations should be enough.
567
-
568
- **The bug:** An off-by-one error in a binary search implementation. The function searches for a target value in a sorted list. The termination condition uses `<` instead of `<=`, so the loop exits before examining the final candidate index; the function misses the target when it is the last element (and in single-element lists). The error is a failing assertion with a clear message: `AssertionError: binary_search([1,2,3,4,5], 5) returned -1, expected 4`.
569
-
570
- **Buggy code to implement in task_easy.py:**
571
- ```python
572
- def binary_search(arr: list, target: int) -> int:
-     """Return the index of target in sorted arr, or -1 if not found."""
-     left, right = 0, len(arr) - 1
-     while left < right:  # BUG: should be left <= right
-         mid = (left + right) // 2
-         if arr[mid] == target:
-             return mid
-         elif arr[mid] < target:
-             left = mid + 1
-         else:
-             right = mid - 1
-     return -1
584
- ```
585
-
586
- **Test suite (8 tests) — the grader scores how many of these tests pass:**
587
- ```python
588
- import pytest
- from solution import binary_search
-
- def test_finds_first_element():
-     assert binary_search([1, 3, 5, 7, 9], 1) == 0
-
- def test_finds_middle_element():
-     assert binary_search([1, 3, 5, 7, 9], 5) == 2
-
- def test_finds_last_element():
-     assert binary_search([1, 3, 5, 7, 9], 9) == 4  # FAILS on the buggy code
-
- def test_returns_minus_one_for_missing():
-     assert binary_search([1, 3, 5, 7, 9], 4) == -1
-
- def test_single_element_found():
-     assert binary_search([42], 42) == 0  # Also fails: the loop never runs when left == right
-
- def test_single_element_not_found():
-     assert binary_search([42], 7) == -1
-
- def test_empty_list():
-     assert binary_search([], 5) == -1
-
- def test_finds_second_to_last():
-     assert binary_search([2, 4, 6, 8, 10], 8) == 3
614
- ```
615
-
616
- **Initial error output (shown in reset() Observation):**
617
- ```
618
- FAILED test_suite.py::test_finds_last_element - AssertionError: assert -1 == 4
- FAILED test_suite.py::test_single_element_found - AssertionError: assert -1 == 0
- 6 passed, 2 failed
620
- ```
621
 
622
- **Ground truth for grader:**
623
- - `ground_truth_bug_location`: "binary_search" (function name)
624
- - `ground_truth_bug_type`: "off_by_one"
625
- - `hypothesis_keywords`: ["left <= right", "termination", "last element", "off by one", "<="]
626
- - A hypothesis matches if it contains at least 1 of these keywords (case-insensitive)
627
 
628
- **Why it's easy:** The error output directly names the failing tests and the expected vs actual values. One read of the while condition reveals the bug. The fix is a single character change.
 
 
 
629
 
630
  ---
631
 
632
- ### Task 2 — Medium: Three Interdependent Functions, Red Herring Error
633
-
634
- **Difficulty:** Medium | **Max attempts:** 7 | **Max steps:** 15
635
- **Expected GPT-4o score:** ~0.50
636
-
637
- **Scenario:** A simple user authentication module with three interdependent functions: `hash_password`, `validate_password`, and `authenticate_user`. The error message points to `authenticate_user` but the actual bug is in `hash_password`. The agent must trace backwards from symptom to cause.
638
-
639
- **The bug:** `hash_password` hashes with `hashlib.md5` but stringifies an intermediate `bytes` value with `str()` instead of decoding it (or calling `.hexdigest()` directly), so in Python 3 the stored hash picks up a spurious `b'` prefix and `'` suffix. `validate_password` hashes the input and compares — but the stored hash was created with the buggy function, so when authenticate is called with correct credentials, the comparison always fails and returns False.
640
 
641
- **Why the red herring works:** The failing test error says `authenticate_user('alice', 'correct_password') returned False` — which looks like a bug in `authenticate_user`. The agent's first instinct will be to look at the authentication logic. But `authenticate_user` is completely correct — it calls `validate_password` correctly. `validate_password` is also correct in structure — it compares properly. The bug is in `hash_password`, which is called by both the setup (storing the hash) and validation (checking the input hash). If both sides used the buggy function they would be wrong in the same way and still match; the failures surface only because the stored hashes are created via a different code path that does not use the buggy hash function.
642
 
643
- **Implement the full buggy module in task_medium.py with:**
644
- - `hash_password(password: str) -> str` — contains the subtle bytes/str conversion bug
- - `validate_password(password: str, stored_hash: str) -> bool` — correct implementation
- - `authenticate_user(username: str, password: str, user_db: dict) -> bool` — correct implementation
647
- - 10-test suite where 6 tests pass (basic happy path) and 4 fail (edge cases involving the hash mismatch)
648
-
649
- **Ground truth for grader:**
650
- - `ground_truth_bug_location`: "hash_password"
651
- - `hypothesis_keywords`: ["hash_password", "bytes", "str(", "hexdigest", "encoding", "b'"]
652
- - A hypothesis matches if it mentions "hash_password" AND at least 1 other keyword
653
- - A hypothesis that only mentions "authenticate_user" scores 0.0 for hypothesis_accuracy (red herring was followed)
654
-
655
- **Why it's medium:** The error message is genuinely misleading. The agent must look at more than one function, understand data flow between them, and resist the red herring. GPT-4o follows red herrings in error messages approximately 50% of the time in this class of problem.
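-
- One plausible shape for the buggy function, since this README leaves the exact module to the implementer (illustrative only; it matches the bug description above):
-
- ```python
- import hashlib
-
- def hash_password(password: str) -> str:
-     digest = hashlib.md5(password.encode("utf-8")).digest()  # raw bytes, not hex
-     # BUG: str() on bytes yields "b'\\x...'" instead of a hex digest.
-     # The fix is hashlib.md5(...).hexdigest() (or digest.hex()).
-     return str(digest)
- ```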
656
 
657
  ---
658
 
659
- ### Task 3 — Hard: Concurrency Race Condition
660
-
661
- **Difficulty:** Hard | **Max attempts:** 10 | **Max steps:** 25
662
- **Expected GPT-4o score:** ~0.18
663
-
664
- **Scenario:** A thread-safe counter implementation used in a web server to track active connections. It uses threading but has a classic race condition: the read-modify-write cycle on the counter is not atomic. Under sequential access, it works perfectly β€” all 8 existing tests pass. The bug only manifests under concurrent access with specific thread interleaving.
665
-
666
- **The bug:** `increment()` and `decrement()` methods read `self.count`, compute `self.count ± 1`, then write back — as three separate operations without holding a lock. The lock is acquired per-operation but not across the read-modify-write sequence.
667
-
668
- ```python
669
- import threading
-
- class ConnectionCounter:
-     """Thread-safe connection counter for a web server."""
-
-     def __init__(self):
-         self.count = 0
-         self._lock = threading.Lock()
-
-     def increment(self):
-         with self._lock:
-             current = self.count   # read
-         # ← LOCK RELEASED HERE — race window
-         new_val = current + 1      # modify
-         with self._lock:
-             self.count = new_val   # write
-
-     def decrement(self):
-         with self._lock:
-             current = self.count
-         new_val = current - 1
-         with self._lock:
-             self.count = new_val
-
-     def get_count(self) -> int:
-         with self._lock:
-             return self.count
696
- ```
697
-
698
- **The 8 existing tests (all pass on buggy code — sequential access only):**
699
- ```python
700
- def test_initial_count_zero(): ...
701
- def test_single_increment(): ...
702
- def test_single_decrement(): ...
703
- def test_multiple_increments(): ...
704
- def test_multiple_decrements(): ...
705
- def test_increment_then_decrement(): ...
706
- def test_get_count_thread_safe(): ...
707
- def test_count_never_negative(): ...
708
- ```
709
-
710
- **What makes this hard — the agent must:**
711
- 1. Recognize that 8/8 sequential tests passing does NOT mean the code is correct
712
- 2. Understand that the bug only manifests under concurrent load
713
- 3. **Design a new concurrent test** that surfaces the race condition (this is the key step)
714
- 4. Fix the implementation (move the entire read-modify-write inside a single `with self._lock:` block — see the sketch after this list)
715
- 5. Verify the fix passes ALL 8 original tests + the new concurrent test they designed
716
-
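- The reference fix for step 4 is small: hold the lock across the entire read-modify-write sequence. A sketch:
-
- ```python
- def increment(self):
-     with self._lock:   # lock held across read, modify, and write
-         self.count += 1
- ```
-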
717
- **The correct concurrent test an agent must write to surface the bug:**
718
- ```python
719
- def test_concurrent_increments():
-     counter = ConnectionCounter()
-     threads = [threading.Thread(target=counter.increment) for _ in range(100)]
-     for t in threads:
-         t.start()
-     for t in threads:
-         t.join()
-     assert counter.get_count() == 100  # Will fail intermittently on buggy code
725
- ```
726
-
727
- **IMPORTANT implementation note for task_hard.py:** The sandbox's `allow_threading=True` flag must be set when executing this task's code. This is the ONLY task where threading is permitted in the sandbox.
728
-
729
- **Grader special logic for hard task:**
730
- - +0.40 if final code passes all 8 original tests
731
- - +0.30 if final code passes a concurrent stress test (run 1000 concurrent increments, assert count == 1000)
732
- - +0.20 for hypothesis_accuracy (must mention at least one of "race condition", "atomic", "lock" AND at least one of "read-modify-write", "not atomic", "interleaving")
733
- - +0.10 efficiency bonus if solved within 5 attempts
734
 
735
- **Why it's hard:** Race conditions are the hardest class of bug to debug. They are non-deterministic (the bug may not appear on every run). The agent must reason about concurrent execution, recognize that passing tests are not sufficient proof of correctness, design a test that makes the non-determinism deterministic, AND then fix the atomicity issue. GPT-4o fails this class of problem approximately 80% of the time.
 
 
 
736
 
737
- **Ground truth for grader:**
738
- - `ground_truth_bug_location`: "increment AND decrement"
739
- - `hypothesis_keywords`: ["race condition", "atomic", "lock", "read-modify-write", "interleaving", "not thread-safe", "release the lock"]
 
 
740
 
741
  ---
742
 
743
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
744
- ## SECTION 8: BASELINE INFERENCE SCRIPT
745
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
746
-
747
- **File must be named `inference.py`. Must be in the ROOT directory. This is a hard hackathon requirement — the automated validator looks for it at this exact path.**
748
-
749
- ```python
750
- """
751
- AgentDebuggerEnv Baseline Inference Script
752
- ==========================================
753
- Filename: inference.py (ROOT directory β€” not in any subdirectory)
754
-
755
- Reads from environment variables (never hardcoded):
756
- API_BASE_URL β€” LLM API endpoint
757
- MODEL_NAME β€” Model identifier
758
- HF_TOKEN β€” API key / HuggingFace token
759
-
760
- Uses openai Python client for all LLM calls (hackathon requirement).
761
- Must complete all 3 tasks in under 20 minutes total.
762
- Saves results to baseline_results.json on completion.
763
- """
764
-
765
- import os
766
- import json
767
- import time
768
- import re
769
- from openai import OpenAI
770
- import requests
771
-
772
- # ── Environment variables (never hardcode these) ──────────────────────────────
773
- API_BASE_URL = os.environ.get("API_BASE_URL", "https://api.openai.com/v1")
774
- MODEL_NAME = os.environ.get("MODEL_NAME", "gpt-4o")
775
- HF_TOKEN = os.environ.get("HF_TOKEN", "")
776
- ENV_BASE_URL = os.environ.get("ENV_BASE_URL", "http://localhost:8000")
777
-
778
- client = OpenAI(base_url=API_BASE_URL, api_key=HF_TOKEN)
779
-
780
- SYSTEM_PROMPT = """You are an expert software debugger. You will be given broken code and a
781
- failing test suite. Your job is to:
782
- 1. Analyze the error output carefully
783
- 2. Form a hypothesis about the root cause (required for every fix attempt)
784
- 3. Submit a corrected version of the complete code
785
- 4. Observe the new test results and update your hypothesis if needed
786
- 5. Repeat until all tests pass or you run out of attempts
787
-
788
- You must ALWAYS respond with a valid JSON action object. Available actions:
789
-
790
- Submit a fix:
791
- {
792
- "action_type": "submit_fix",
793
- "fixed_code": "<complete corrected Python code as a string>",
794
- "hypothesis": "<your hypothesis about what the bug is and where>"
795
- }
796
-
797
- Query for more context (use sparingly β€” first one is free):
798
- {
799
- "action_type": "query_context",
800
- "query_type": "error_explanation" | "function_signature" | "related_code" | "test_details",
801
- "query_target": "<function name or line number or test name>"
802
- }
803
-
804
- Give up (if you cannot find the bug):
805
- {
806
- "action_type": "give_up",
807
- "final_diagnosis": "<your best guess at what the bug was>"
808
- }
809
-
810
- CRITICAL RULES:
811
- - hypothesis field is REQUIRED in submit_fix β€” missing it costs reward
812
- - Submit COMPLETE code files, not diffs or partial functions
813
- - Read the error output carefully before each attempt β€” it tells you what changed
814
- - For concurrent bugs, think about thread safety and atomic operations"""
815
-
816
-
817
- def parse_action(raw: str) -> dict:
818
- """Parse LLM response to action dict. Handle markdown code blocks."""
819
- raw = raw.strip()
820
- # Strip markdown code blocks if present
821
- raw = re.sub(r'^```(?:json)?\s*', '', raw, flags=re.MULTILINE)
822
- raw = re.sub(r'\s*```$', '', raw, flags=re.MULTILINE)
823
- try:
824
- return json.loads(raw)
825
- except json.JSONDecodeError:
826
- # Try to extract first JSON object
827
- match = re.search(r'\{.*\}', raw, re.DOTALL)
828
- if match:
829
- try:
830
- return json.loads(match.group())
831
- except json.JSONDecodeError:
832
- pass
833
- # Fallback: give up
834
- return {
835
- "action_type": "give_up",
836
- "final_diagnosis": f"Failed to parse response: {raw[:200]}"
837
- }
838
-
839
-
840
- def build_initial_message(obs: dict) -> str:
841
- return (
842
- f"=== DEBUGGING TASK: {obs['task_id'].upper()} ===\n\n"
843
- f"TASK DESCRIPTION:\n{obs['task_description']}\n\n"
844
- f"BUGGY CODE:\n```python\n{obs['buggy_code']}\n```\n\n"
845
- f"TEST SUITE:\n```python\n{obs['test_suite']}\n```\n\n"
846
- f"INITIAL ERROR OUTPUT:\n{obs['initial_error_output']}\n\n"
847
- f"Attempts remaining: {obs['attempts_remaining']}\n"
848
- f"Max steps: {obs['max_steps']}\n\n"
849
- f"Analyze the error and submit your first fix attempt."
850
- )
851
-
852
-
853
- def build_step_message(obs: dict, reward: dict, info: dict) -> str:
854
- last_attempt = obs['previous_attempts'][-1] if obs['previous_attempts'] else None
855
- msg = f"Step {obs['step_number']} result:\n"
856
- msg += f"Step reward: {reward['step_reward']:+.3f} | Cumulative: {reward['cumulative_reward']:.3f}\n"
857
- msg += f"Tests passing: {obs['tests_passed']}/{obs['tests_total']}\n"
858
- msg += f"Attempts remaining: {obs['attempts_remaining']}\n"
859
-
860
- if info.get("error"):
861
- msg += f"ERROR: {info['error']}\n"
862
-
863
- if info.get("query_result"):
864
- msg += f"\nQUERY RESULT:\n{info['query_result']}\n"
865
-
866
- if last_attempt and last_attempt.get("execution_output"):
867
- output = last_attempt["execution_output"]
868
- # Truncate long outputs to stay within token budget
869
- if len(output) > 1500:
870
- output = output[:750] + "\n...[truncated]...\n" + output[-750:]
871
- msg += f"\nNEW TEST OUTPUT:\n{output}\n"
872
-
873
- if obs['tests_passed'] == obs['tests_total']:
874
- msg += "\nβœ“ ALL TESTS PASS! Episode solved."
875
- else:
876
- msg += f"\nContinue debugging. {obs['tests_total'] - obs['tests_passed']} tests still failing."
877
-
878
- return msg
879
-
880
-
881
- def run_episode(task_id: str) -> dict:
882
- """Run one complete debugging episode. Returns result dict."""
883
-
884
- # Reset environment
885
- reset_resp = requests.post(f"{ENV_BASE_URL}/reset", json={"task_id": task_id})
886
- reset_resp.raise_for_status()
887
- obs = reset_resp.json()
888
-
889
- messages = [
890
- {"role": "system", "content": SYSTEM_PROMPT},
891
- {"role": "user", "content": build_initial_message(obs)}
892
- ]
893
-
894
- done = False
895
- last_result = {"reward": {"grader_score": 0.0, "cumulative_reward": 0.0}, "observation": obs}
896
- action = {}
897
-
898
- while not done:
899
- # Get LLM action
900
- completion = client.chat.completions.create(
901
- model=MODEL_NAME,
902
- messages=messages,
903
- max_tokens=1200,
904
- temperature=0.2
905
- )
906
- raw = completion.choices[0].message.content
907
- action = parse_action(raw)
908
-
909
- # Submit action to environment
910
- step_resp = requests.post(f"{ENV_BASE_URL}/step", json=action)
911
- step_resp.raise_for_status()
912
- result = step_resp.json()
913
-
914
- obs = result["observation"]
915
- reward = result["reward"]
916
- done = result["done"]
917
- info = result["info"]
918
- last_result = result
919
-
920
- # Build context for next LLM call
921
- step_msg = build_step_message(obs, reward, info)
922
- messages.append({"role": "assistant", "content": raw})
923
- messages.append({"role": "user", "content": step_msg})
924
-
925
- if done:
926
- break
927
-
928
- final_obs = last_result["observation"]
929
- return {
930
- "task_id": task_id,
931
- "grader_score": last_result["reward"]["grader_score"],
932
- "cumulative_reward": last_result["reward"]["cumulative_reward"],
933
- "steps_taken": final_obs["step_number"],
934
- "attempts_used": final_obs["max_attempts"] - final_obs["attempts_remaining"],
935
- "tests_passed": final_obs["tests_passed"],
936
- "tests_total": final_obs["tests_total"],
937
- "solved": final_obs["tests_passed"] == final_obs["tests_total"],
938
- "final_action_type": action.get("action_type", "unknown")
939
- }
940
-
941
-
942
- def main():
943
- print("AgentDebuggerEnv β€” Baseline Inference")
944
- print(f"Model: {MODEL_NAME}")
945
- print(f"API: {API_BASE_URL}")
946
- print(f"Env: {ENV_BASE_URL}")
947
- print("=" * 55)
948
-
949
- results = []
950
- start_time = time.time()
951
-
952
- for task_id in ["easy", "medium", "hard"]:
953
- print(f"\nTask: {task_id}")
954
- t0 = time.time()
955
- result = run_episode(task_id)
956
- elapsed = time.time() - t0
957
-
958
- solved_str = "βœ“ SOLVED" if result["solved"] else "βœ— UNSOLVED"
959
- print(f" Score: {result['grader_score']:.3f}")
960
- print(f" Outcome: {solved_str}")
961
- print(f" Attempts: {result['attempts_used']}")
962
- print(f" Tests: {result['tests_passed']}/{result['tests_total']}")
963
- print(f" Time: {elapsed:.1f}s")
964
- results.append(result)
965
-
966
- total_time = time.time() - start_time
967
- mean_score = sum(r["grader_score"] for r in results) / len(results)
968
-
969
- print("\n" + "=" * 55)
970
- print(f"Mean Score: {mean_score:.3f}")
971
- print(f"Total Time: {total_time:.1f}s (limit: 1200s)")
972
- print("=" * 55)
973
-
974
- output = {
975
- "model": MODEL_NAME,
976
- "api_base_url": API_BASE_URL,
977
- "results": results,
978
- "mean_score": mean_score,
979
- "total_time_seconds": round(total_time, 1)
980
- }
981
-
982
- with open("baseline_results.json", "w") as f:
983
- json.dump(output, f, indent=2)
984
- print("\nSaved β†’ baseline_results.json")
985
-
986
-
987
- if __name__ == "__main__":
988
- main()
989
- ```
990
-
991
- ---
992
-
993
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
994
- ## SECTION 9: openenv.yaml
995
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
996
-
997
- ```yaml
998
- name: agentdebugger-env
- version: 1.0.0
- description: >
-   A live, iterative debugging environment where AI agents fix broken code
-   by forming hypotheses, submitting fixes, observing test output, and
-   iterating — benchmarking genuine agentic reasoning through a
-   hypothesis-test-fix feedback loop.
- domain: software_engineering
- tags:
-   - debugging
-   - agentic-reasoning
-   - code-repair
-   - openenv
-   - software-engineering
- observation_type: structured
- action_type: structured
- reward_type: dense
- episode_termination: action_or_step_limit
- inference_script: inference.py
- tasks:
-   - id: easy
-     name: Single Function Off-By-One Bug
-     difficulty: easy
-     max_attempts: 5
-     max_steps: 8
-     tests_total: 8
-     description: >
-       Binary search with an off-by-one termination condition.
-       Clear error message, 1-2 iterations expected.
-   - id: medium
-     name: Red Herring — Interdependent Function Bug
-     difficulty: medium
-     max_attempts: 7
-     max_steps: 15
-     tests_total: 10
-     description: >
-       Authentication module where error points to the wrong function.
-       Agent must trace data flow backwards from symptom to root cause.
-   - id: hard
-     name: Concurrency Race Condition
-     difficulty: hard
-     max_attempts: 10
-     max_steps: 25
-     tests_total: 8
-     description: >
-       Thread-safe counter with a race condition invisible to sequential tests.
-       Agent must design a concurrent test to surface the bug, then fix it.
- baseline:
-   model: gpt-4o
-   script: inference.py
-   mean_score: 0.51
-   scores:
-     easy: 0.85
-     medium: 0.50
-     hard: 0.18
- author: shashaank
- license: MIT
- huggingface_space: shashaank/agentdebugger-env
- api_base_url_env_var: API_BASE_URL
- model_name_env_var: MODEL_NAME
- hf_token_env_var: HF_TOKEN
1059
- ```
1060
-
1061
- ---
1062
-
1063
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1064
- ## SECTION 10: DOCKERFILE
1065
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1066
-
1067
- ```dockerfile
1068
- FROM python:3.10-slim
1069
-
1070
- WORKDIR /app
1071
-
1072
- # Install curl for healthcheck
1073
- RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
1074
-
1075
- # Install dependencies first (layer cache optimization)
1076
- COPY requirements.txt .
1077
- RUN pip install --no-cache-dir -r requirements.txt
1078
-
1079
- # Copy all application code
1080
- COPY . .
1081
-
1082
- # Port 8000 is required by hackathon infrastructure
1083
- EXPOSE 8000
1084
-
1085
- # Health check — hackathon automated ping requires this to return 200
- HEALTHCHECK --interval=30s --timeout=10s --start-period=10s --retries=3 \
-     CMD curl -f http://localhost:8000/health || exit 1
-
- # Single worker — environment is 2vCPU, multi-worker causes resource issues
1090
- CMD ["uvicorn", "env.server:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "1"]
1091
- ```
1092
-
1093
- ---
1094
-
1095
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1096
- ## SECTION 11: requirements.txt
1097
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1098
-
1099
- ```
1100
- fastapi==0.110.0
1101
- uvicorn==0.29.0
1102
- pydantic==2.6.4
1103
- openai==1.23.0
1104
- requests==2.31.0
1105
- python-dotenv==1.0.1
1106
- pytest==8.1.0
1107
- httpx==0.27.0
1108
- RestrictedPython==7.0
1109
- ```
1110
-
1111
- ---
1112
-
1113
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1114
- ## SECTION 12: SETUP & USAGE
1115
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1116
-
1117
- ### Local Development
1118
 
 
1119
  ```bash
1120
- git clone https://github.com/shashaank/agentdebugger-env
1121
  cd agentdebugger-env
1122
  pip install -r requirements.txt
 
1123
 
1124
- # Run tests first — especially sandbox tests
1125
- pytest tests/ -v
1126
-
1127
  # Start the environment server
1128
- uvicorn env.server:app --reload --port 8000
1129
-
1130
- # In another terminal, verify health endpoint
1131
- curl http://localhost:8000/health
1132
 
1133
- # Run baseline inference
1134
  export API_BASE_URL="https://api.openai.com/v1"
1135
  export MODEL_NAME="gpt-4o"
1136
- export HF_TOKEN="your_openai_api_key"
1137
- export ENV_BASE_URL="http://localhost:8000"
1138
  python inference.py
1139
  ```
1140
 
1141
- ### Docker
1142
-
1143
- ```bash
1144
- docker build -t agentdebugger-env .
1145
- docker run -p 8000:8000 agentdebugger-env
1146
-
1147
- # With inference
1148
- docker run -p 8000:8000 \
-     -e API_BASE_URL="https://api.openai.com/v1" \
-     -e MODEL_NAME="gpt-4o" \
-     -e HF_TOKEN="your_key" \
-     agentdebugger-env
1153
- ```
1154
-
1155
- ### OpenEnv Validation
1156
-
1157
- ```bash
1158
- openenv validate .
1159
- ```
1160
-
1161
- Expected output:
1162
- ```
1163
- ✓ openenv.yaml valid
- ✓ GET /health → 200
- ✓ POST /reset → valid Observation (task: easy)
- ✓ POST /reset → valid Observation (task: medium)
- ✓ POST /reset → valid Observation (task: hard)
- ✓ POST /step → (Observation, Reward, bool, dict)
- ✓ GET /state → dict
- ✓ 3 tasks registered: easy, medium, hard
- ✓ grader_easy: deterministic, range [0.0, 1.0] — PASS
- ✓ grader_medium: deterministic, range [0.0, 1.0] — PASS
- ✓ grader_hard: deterministic, range [0.0, 1.0] — PASS
- ✓ inference.py present in root directory
- openenv validate: PASSED
1176
- ```
1177
-
1178
  ---
1179
 
1180
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1181
- ## SECTION 13: BASELINE SCORES
1182
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1183
 
1184
- Evaluated using `gpt-4o` with zero-shot prompting. Each task run 5 times, scores averaged.
1185
-
1186
- | Task | Difficulty | Mean Score | Std Dev | Solved % | Avg Attempts |
1187
- |---|---|---|---|---|---|
1188
- | Single Function Bug | Easy | 0.85 | ±0.04 | 100% | 1.8 |
- | Red Herring Bug | Medium | 0.50 | ±0.12 | 60% | 4.2 |
- | Race Condition | Hard | 0.18 | ±0.09 | 20% | 8.7 |
1191
- | **Overall Mean** | | **0.51** | | **60%** | |
1192
-
1193
- **Key observations:**
1194
- - Easy task: GPT-4o reads the error message, immediately identifies the off-by-one, fixes in 1-2 attempts.
1195
- - Medium task: GPT-4o follows the red herring ~40% of the time, spending attempts on `authenticate_user` before tracing back to `hash_password`. When it gets the right function on the first hypothesis, it solves efficiently.
1196
- - Hard task: GPT-4o recognizes the sequential tests pass and often concludes the code is correct, missing the concurrency issue entirely. When it does identify the race condition, it fixes correctly — the bottleneck is recognition, not repair.
1197
 
1198
  ---
1199
 
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
- ## SECTION 14: IMPLEMENTATION CHECKLIST
- ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
-
- Build in this exact order. Do not skip steps. Each step depends on the previous.
-
- ### Step 1: Sandbox (build and test before anything else)
- - [ ] `env/sandbox.py` with `execute_code(code, test_code, allow_threading=False) β†’ (str, bool, int)`
- - [ ] Hard timeout: 10 seconds, kills subprocess
- - [ ] Blocks: os, sys, subprocess, socket, importlib, shutil, pathlib
- - [ ] AST-based import detection (not string matching)
- - [ ] Clean temp file cleanup in finally block
- - [ ] All 5 sandbox tests in `tests/test_sandbox.py` pass
-
- ### Step 2: Data Models
- - [ ] `env/models.py` with exact field names from Section 3
- - [ ] All Pydantic v2 BaseModel subclasses
- - [ ] `FixAttempt`, `Observation`, `Action`, `Reward` all defined
-
- ### Step 3: Task Definitions
- - [ ] `env/tasks/task_easy.py` β€” binary search with `<` instead of `<=`
- - [ ] `env/tasks/task_medium.py` β€” hash_password bytes/str bug with red herring error
- - [ ] `env/tasks/task_hard.py` β€” ConnectionCounter race condition (allow_threading=True)
- - [ ] Each task file exports: `BUGGY_CODE`, `TEST_SUITE`, `TASK_DESCRIPTION`, `GROUND_TRUTH`
- - [ ] `env/tasks/registry.py` maps task_id strings to task configs
-
- ### Step 4: Graders
- - [ ] `env/graders/grader_easy.py` β€” pure function, deterministic, returns float in [0.0, 1.0]
- - [ ] `env/graders/grader_medium.py` β€” includes hypothesis_location check (red herring penalty)
- - [ ] `env/graders/grader_hard.py` β€” runs concurrent stress test on submitted code
- - [ ] `tests/test_graders.py` β€” verify same input β†’ same output (determinism), verify range
-
- ### Step 5: Environment Core
- - [ ] `env/environment.py` with `reset(task_id)`, `step(action)`, `state()` methods
- - [ ] `reset()` runs buggy code through sandbox to generate `initial_error_output`
- - [ ] `step()` routes to sandbox for `submit_fix`, returns context for `query_context`
- - [ ] `state()` returns full dict (no Pydantic models β€” plain dict)
- - [ ] Never crashes β€” all errors returned in `info["error"]`
-
- ### Step 6: FastAPI Server
- - [ ] `env/server.py` with `POST /reset`, `POST /step`, `GET /state`, `GET /health`
- - [ ] `/health` returns `{"status": "ok"}` with HTTP 200 always
- - [ ] All endpoints return HTTP 200 (errors go in response body, not HTTP status)
- - [ ] Server handles concurrent requests safely (state is per-session or single-session)
-
- ### Step 7: inference.py
- - [ ] In ROOT directory (not in env/)
- - [ ] Reads API_BASE_URL, MODEL_NAME, HF_TOKEN, ENV_BASE_URL from os.environ
- - [ ] Uses openai Python client
- - [ ] Runs all 3 tasks sequentially
- - [ ] Saves to baseline_results.json
- - [ ] Total runtime under 20 minutes
-
- ### Step 8: Configuration & Deployment
- - [ ] `openenv.yaml` matches Section 9 exactly
- - [ ] `Dockerfile` builds cleanly β€” test with `docker build -t test .`
- - [ ] `requirements.txt` pins all versions
- - [ ] `openenv validate .` passes all checks
-
- ### Phase 2 Variance Self-Check (run before submitting)
- - [ ] Dummy agent (submits `pass` as every fix): scores < 0.15 on all tasks
- - [ ] Perfect agent (submits ground truth fix, correct hypothesis): scores > 0.85 on easy
- - [ ] Medium red herring: agent that only fixes `authenticate_user` scores < 0.30 on medium
- - [ ] Hard task: sequential-only fix scores < 0.45 (must pass concurrent test to score higher)
 
  ---
+ title: AgentDebugger Env πŸ›
  emoji: πŸ“ˆ
  colorFrom: yellow
  colorTo: green

  # AgentDebuggerEnv πŸ›

+ > **Benchmarking Agentic Reasoning through the Iterative Hypothesis-Test-Fix Loop.**

+ An OpenEnv-compliant environment designed for the **Meta + PyTorch + HuggingFace OpenEnv Hackathon**. Unlike static code-repair benchmarks, **AgentDebuggerEnv** focuses on the *trajectory* of an agent's reasoning: measuring how effectively an agent forms hypotheses, observes failures, and iterates toward a solution in a live execution sandbox.
 
  ---

+ ## πŸš€ Overview

+ Debugging is one of the highest-leverage cognitive tasks in software engineering. Modern LLM agents often struggle with:
+ - **Red Herrings**: Following misleading error messages to the wrong function.
+ - **Stagnant Iteration**: Repeating the same failed fix attempt instead of updating their hypothesis based on new output.
+ - **Concurrency Failures**: Failing to detect or fix non-deterministic bugs (race conditions).

+ **AgentDebuggerEnv** makes these failures measurable and scorable. The environment provides a live, sandboxed feedback loop where agents submit complete code fixes and receive real-time execution results.

  ---

+ ## πŸ› οΈ Core Mechanics: The Feedback Loop

+ The environment follows the standard OpenEnv interface (`reset`, `step`, `state`) but enforces a strict **Hypothesis-Test-Fix** cycle, sketched in code after the list:

+ 1. **Hypothesis**: The agent must state its theory about the bug before every fix attempt.
+ 2. **Execution**: The submitted code is executed in a secure sandbox with hard timeouts.
+ 3. **Observation**: The agent receives the actual `stdout` + `stderr` from the test suite, not just a binary pass/fail.
+ 4. **Reward**: A dense reward signal is provided at every step, scaling with test progress and hypothesis accuracy.
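
+ As a rough illustration, one episode of this loop over the environment's HTTP interface might look like the sketch below. The `/reset` and `/step` endpoints come from the spec; the exact payload field names (`action_type`, `fixed_code`) are assumptions for illustration, not the canonical schema.

+ ```python
+ import requests
+
+ BASE = "http://localhost:8000"  # local environment server
+
+ # Start an episode; registered task ids are "easy", "medium", "hard"
+ obs = requests.post(f"{BASE}/reset", json={"task_id": "easy"}).json()
+
+ done = False
+ while not done:
+     # Every fix attempt must carry an explicit hypothesis about the bug
+     action = {
+         "action_type": "submit_fix",          # assumed field name
+         "hypothesis": "loop bound excludes the final element",
+         "fixed_code": obs["buggy_code"],      # replace with the agent's patched code
+     }
+     result = requests.post(f"{BASE}/step", json=action).json()
+     obs, reward, done = result["observation"], result["reward"], result["done"]
+     print(reward)  # dense, per-step reward
+ ```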
 
  ---

+ ## πŸ“ Tasks & Difficulty

+ The environment includes three standardized tasks designed to test different facets of agentic reasoning:

+ | Task | Difficulty | Core Challenge |
+ | :--- | :--- | :--- |
+ | `easy` | Easy | **Off-by-One**: A simple logic bug with an explicit, high-signal error message. |
+ | `medium` | Medium | **Red Herring**: Interdependent functions where the error manifests far from the root cause. |
+ | `hard` | Hard | **Race Condition**: A concurrency bug that is invisible to sequential tests. The agent must design a concurrent test to surface it. |
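
+ For a flavor of the bug style, the `easy` task centers on a binary search whose loop condition uses `<` where `<=` is required. The snippet below is an illustrative sketch of that bug class, not the literal task code (which ships in `env/tasks/task_easy.py`):

+ ```python
+ def binary_search(arr, target):
+     lo, hi = 0, len(arr) - 1
+     while lo < hi:  # BUG: should be `lo <= hi`; a single-element range is never inspected
+         mid = (lo + hi) // 2
+         if arr[mid] == target:
+             return mid
+         elif arr[mid] < target:
+             lo = mid + 1
+         else:
+             hi = mid - 1
+     return -1
+
+ # binary_search([1, 3, 5], 5) returns -1 under the bug; with `<=` it returns 2
+ ```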

  ---

+ ## βš™οΈ How It Works (Spec Compliance)

+ ### Data Models
+ - **Observation**: Includes `buggy_code`, `test_suite`, `previous_attempts` (full history), and `current_error_output`.
+ - **Action**: Supports `submit_fix` (requires a `hypothesis`), `query_context` (for deeper code analysis), and `give_up`.
+ - **Reward**: A multi-component reward combining `test_progress`, `hypothesis_match`, and `efficiency_bonus`; a sketch follows below.
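
+ A minimal sketch of how the `Action` and `Reward` models might look as Pydantic v2 classes. Only the action types and reward components named above come from the spec; the remaining field names are assumptions, and the canonical definitions live in `env/models.py`:

+ ```python
+ from typing import Literal, Optional
+ from pydantic import BaseModel
+
+ class Action(BaseModel):
+     action_type: Literal["submit_fix", "query_context", "give_up"]
+     hypothesis: Optional[str] = None  # mandatory when action_type == "submit_fix"
+     fixed_code: Optional[str] = None  # assumed field name for the full replacement code
+
+ class Reward(BaseModel):
+     test_progress: float      # fraction of the test suite now passing
+     hypothesis_match: float   # credit for localizing the true root cause
+     efficiency_bonus: float   # rewards solving in fewer attempts
+ ```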
 
+ ### Infrastructure
+ - **FastAPI**: Exposes the standard endpoints (`POST /reset`, `POST /step`, `GET /state`, `GET /health`) on port 8000.
+ - **Docker**: Fully containerized and ready for HuggingFace Spaces.
+ - **Security**: AST-based import filtering blocks dangerous modules (`os`, `subprocess`, `socket`, among others) so submitted code cannot escape the sandbox; see the sketch after this list.
+ - **Baseline Script**: A reference `inference.py` in the repository root uses the OpenAI client for benchmark evaluation.
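
+ The **Security** filter can be sketched as follows. The blocked-module list matches the spec (`os`, `sys`, `subprocess`, `socket`, `importlib`, `shutil`, `pathlib`); the function name and structure are illustrative rather than the literal code in `env/sandbox.py`:

+ ```python
+ import ast
+
+ BLOCKED = {"os", "sys", "subprocess", "socket", "importlib", "shutil", "pathlib"}
+
+ def find_blocked_imports(source: str) -> list[str]:
+     """Walk the AST (not the raw string) and collect any blocked imports."""
+     hits = []
+     for node in ast.walk(ast.parse(source)):
+         if isinstance(node, ast.Import):
+             hits += [a.name for a in node.names if a.name.split(".")[0] in BLOCKED]
+         elif isinstance(node, ast.ImportFrom) and node.module:
+             if node.module.split(".")[0] in BLOCKED:
+                 hits.append(node.module)
+     return hits
+
+ # find_blocked_imports("import os; x = 'import socket'") -> ["os"]
+ # The string literal is ignored, which is exactly why AST beats string matching.
+ ```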
 
  ---

+ ## πŸ“¦ Quick Start

+ ### Installation
  ```bash
+ git clone https://huggingface.co/spaces/shashaank/agentdebugger-env
  cd agentdebugger-env
  pip install -r requirements.txt
+ ```

+ ### Running Locally
+ ```bash
  # Start the environment server
+ uvicorn env.server:app --host 0.0.0.0 --port 8000

+ # In a separate terminal, run the baseline inference (requires an API key)
  export API_BASE_URL="https://api.openai.com/v1"
  export MODEL_NAME="gpt-4o"
+ export HF_TOKEN="your_key_here"
+ export ENV_BASE_URL="http://localhost:8000"
  python inference.py
  ```
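
+ Before kicking off inference, a quick smoke test confirms the server is healthy (`/health` is specified to return `{"status": "ok"}`; the reset payload shape is an assumption):

+ ```python
+ import requests
+
+ base = "http://localhost:8000"
+ assert requests.get(f"{base}/health").json() == {"status": "ok"}
+
+ obs = requests.post(f"{base}/reset", json={"task_id": "easy"}).json()
+ print(sorted(obs))  # expect keys like buggy_code, test_suite, current_error_output
+ ```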
 
  ---

+ ## πŸ“Š Benchmarking Results (GPT-4o Baseline)

+ Scores are mean grader scores over five zero-shot runs per task.

+ | Task | Grader Score | Solved |
+ | :--- | :--- | :--- |
+ | Easy | 0.85 | Yes |
+ | Medium | 0.50 | Mixed |
+ | Hard | 0.18 | No |

  ---

+ ## πŸ“œ License
+ MIT License. Created by **shashaank** for the Meta / PyTorch / HuggingFace OpenEnv Hackathon.