1
# AgentDebuggerEnv 🐛
2
+
3
+ > An OpenEnv-compliant environment where AI agents debug broken code through iterative hypothesis-test-fix cycles β€” benchmarking genuine agentic reasoning, not static code reading.
4
+
5
+ ---
6
+
7
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
8
+ ## SECTION 0: CONTEXT FOR THE IMPLEMENTING AI
9
+ ## Read this completely before writing a single line of code.
10
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
11
+
12
+ This project is a submission for the **Meta + PyTorch + HuggingFace OpenEnv Hackathon**. Every implementation decision must be made with the hackathon's judging pipeline in mind. This section explains the full context β€” what the hackathon is, how judging works, what gets you disqualified, and what wins. Do not skip this.
13
+
14
+ ---
15
+
16
+ ### 0.1 What is OpenEnv?
17
+
18
+ OpenEnv is a standardized interface for building environments where AI agents learn from interaction β€” the same paradigm as OpenAI Gym, but for real-world tasks instead of games. Every OpenEnv environment must implement exactly three core methods:
19
+
20
+ - `reset(task_id)` β†’ Returns the initial Observation. The agent's clean starting state.
21
+ - `step(action)` β†’ Agent submits one action β†’ Environment returns (new Observation, Reward, done: bool, info: dict)
22
+ - `state()` β†’ Returns the full internal environment state as a plain dict
23
+
24
+ These three methods must be exposed as HTTP REST endpoints via a FastAPI server, containerized with Docker, and deployed to a HuggingFace Space. The hackathon's own CLI tool (`openenv validate`) runs automated checks against these endpoints.
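
For orientation, here is a minimal client-side sketch (not part of the spec) of how a caller drives that loop over the three endpoints, assuming the environment server is already running locally on port 8000:

```python
# Minimal sketch of the OpenEnv loop over the three HTTP endpoints.
# The placeholder give_up action stands in for a real agent's decision.
import requests

BASE = "http://localhost:8000"

obs = requests.post(f"{BASE}/reset", json={"task_id": "easy"}).json()

done = False
while not done:
    action = {"action_type": "give_up", "final_diagnosis": "placeholder"}  # a real agent chooses here
    result = requests.post(f"{BASE}/step", json=action).json()
    obs, reward, done = result["observation"], result["reward"], result["done"]

state = requests.get(f"{BASE}/state").json()  # full internal state as a plain dict
```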
25
+
26
+ ---
27
+
28
+ ### 0.2 Why This Hackathon Matters β€” The Stakes
29
+
30
+ This hackathon is organized by **Meta AI** and sponsored by **PyTorch** and **HuggingFace** β€” three of the most influential organizations in open-source AI research. The judges are not students or generalist engineers. They are:
31
+
32
+ - Meta AI researchers who actively publish on LLM agents, tool use, and reasoning
33
+ - HuggingFace engineers who maintain the open-source model ecosystem
34
+ - PyTorch team members who work on the training infrastructure frontier models run on
35
+
36
+ A strong submission here has direct, real consequences:
37
+
38
+ 1. **Visibility**: Winning or placing gets your environment featured on HuggingFace Hub, adopted by researchers, and cited in future benchmark papers. This is the kind of credibility that opens research internship doors at Meta, Google DeepMind, and similar labs.
39
+ 2. **Research impact**: A well-designed environment with clean graders gets used by the community for years. SWE-bench, which inspired this environment, now appears in nearly every LLM agent paper.
40
+ 3. **Career signal**: A national-level hackathon win sponsored by Meta is a tier-1 resume signal for a third-year CS student, especially when the project directly connects to ML agent research.
41
+
42
+ The judges will read your environment design the way a reviewer reads a research paper. They will check: Is this domain real? Is the grader honest? Does the hard task actually challenge frontier models? A shallow or broken submission will be obvious to them immediately.
43
+
44
+ ---
45
+
46
+ ### 0.3 How Judging Works β€” Every Phase Explained
47
+
48
+ **Phase 1: Automated Validation (Pass/Fail Gate)**
49
+
50
+ This phase is entirely automated. You either pass all checks or you are disqualified. No partial credit. The checks are:
51
+
52
+ - HF Space must be live and return HTTP 200 on a ping to `/health`
53
+ - `POST /reset` must return a valid Observation JSON
54
+ - `openenv.yaml` must parse correctly and match the spec schema
55
+ - `docker build` must complete without errors
56
+ - `inference.py` must run end-to-end without crashing and produce a `baseline_results.json` file
57
+ - All 3 tasks must be enumerable and each grader must return a float in [0.0, 1.0]
58
+
59
+ **Phase 2: Agentic Evaluation (Scored)**
60
+
61
The hackathon organizers run their own standard agent (Nemotron 3 Super) against your environment. They measure:
62
+
63
+ - **Score variance**: Do different agents get meaningfully different scores? If a random agent and GPT-4o get the same score, your graders are broken and this phase fails.
64
+ - **Score reproducibility**: Does re-running `inference.py` produce the same scores? Graders must be deterministic.
65
+ - **Baseline verification**: They re-run your `inference.py` and check that scores match what you reported.
66
+
67
+ **Phase 3: Human Review (Top Submissions Only)**
68
+
69
+ Meta and HuggingFace engineers manually review top submissions and score on:
70
+ - Real-world utility: Would a real engineering team or research group actually use this?
71
+ - Creativity and novelty: Does this environment exist anywhere else?
72
+ - Exploit resistance: Can an agent game the grader without actually doing the task?
73
+ - Code quality: Is the implementation clean and the environment well-designed?
74
+
75
+ ---
76
+
77
+ ### 0.4 Disqualification Criteria β€” Avoid All of These
78
+
79
+ | Violation | Why it disqualifies |
80
+ |---|---|
81
+ | Environment does not deploy or `/health` returns non-200 | Automated ping fails Phase 1 immediately |
82
+ | `inference.py` not in root directory | Hard requirement β€” automated script looks for it there |
83
+ | `inference.py` crashes or errors | Phase 1 baseline check fails |
84
+ | Graders always return the same score | Phase 2 variance check fails |
85
+ | `docker build` fails | Phase 1 Dockerfile check fails |
86
+ | Plagiarized or trivially copied existing environment | Phase 3 human review disqualifies |
87
+ | Agent can game the grader without doing the task | Phase 3 exploit check disqualifies |
88
+
89
+ ---
90
+
91
+ ### 0.5 Hard Infrastructure Constraints β€” Non-Negotiable
92
+
93
+ Every constraint below is a hard requirement. Violating any of them causes disqualification or incorrect behavior during automated evaluation:
94
+
95
```
inference.py        Must be named EXACTLY this. Must be in the ROOT directory. Not in /env/, not in /src/. ROOT.
API_BASE_URL        Must be read from os.environ. Never hardcoded.
MODEL_NAME          Must be read from os.environ. Never hardcoded.
HF_TOKEN            Must be read from os.environ. Never hardcoded.
OpenAI client       All LLM calls must use the openai Python library. Not anthropic. Not direct HTTP requests.
20-minute limit     inference.py must complete ALL 3 tasks in under 20 minutes total.
2 vCPU / 8GB RAM    Environment server must run within these limits. NO ML models loaded server-side.
Port 8000           Server must listen on port 8000.
/health endpoint    Must exist and return HTTP 200. This is the automated deployment ping.
```
106
+
107
+ ---
108
+
109
+ ### 0.6 The Biggest Technical Risk: Code Execution Sandbox
110
+
111
+ This environment executes agent-generated Python code. This is the most dangerous part of the implementation. The hackathon's automated evaluation will run an LLM agent against your environment β€” that agent may generate code like `import os; os.system("rm -rf /")` or an infinite loop. If your sandbox does not handle this, the HF Space crashes and you fail Phase 1.
112
+
113
+ **The sandbox must implement ALL of the following:**
114
+
115
+ 1. **Hard execution timeout**: Every code execution attempt must be killed after a maximum of 10 seconds. Use `subprocess` with `timeout=10`, or `signal.alarm(10)` with a SIGALRM handler.
116
+ 2. **Restricted imports**: Remove dangerous builtins before exec. At minimum, block: `os`, `sys`, `subprocess`, `importlib`, `__import__`, `open`, `eval`, `exec`, `compile`.
117
+ 3. **Memory limit**: Use `resource.setrlimit(resource.RLIMIT_AS, ...)` to cap memory usage per execution at 256MB.
118
+ 4. **No network access**: The executed code must not be able to make network calls. Achieved by the restricted imports above.
119
+ 5. **Clean state per attempt**: Each execution must run in a completely fresh namespace. No state leaks between attempts.
120
+
121
+ **Implement the sandbox as a separate module** `env/sandbox.py`. Every code execution in the environment must go through this module. Never call `exec()` directly in environment code.
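
As a concrete illustration of requirements 1 and 3, a minimal sketch of how the hard timeout and the 256MB memory cap can be wired together with `subprocess`. The helper names are illustrative, not part of the spec, and `preexec_fn` is POSIX-only:

```python
# Sketch: per-execution timeout and memory cap for sandboxed runs (POSIX only).
import resource
import subprocess
import sys

MEMORY_LIMIT_BYTES = 256 * 1024 * 1024  # 256MB cap from requirement 3

def _apply_limits():
    # Runs in the child process just before it starts executing the script.
    resource.setrlimit(resource.RLIMIT_AS, (MEMORY_LIMIT_BYTES, MEMORY_LIMIT_BYTES))

def run_limited(path_to_script: str) -> tuple[str, bool]:
    """Return (merged output, timed_out)."""
    try:
        proc = subprocess.run(
            [sys.executable, path_to_script],
            capture_output=True, text=True,
            timeout=10,                # requirement 1: hard 10-second timeout
            preexec_fn=_apply_limits,  # requirement 3: memory limit applied in the child
        )
        return proc.stdout + proc.stderr, False
    except subprocess.TimeoutExpired:
        return "Execution timed out after 10 seconds", True
```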
122
+
123
+ ---
124
+
125
+ ### 0.7 The Second Biggest Risk: SWE-bench Differentiation
126
+
127
+ The judges will immediately ask: "How is this different from SWE-bench?" If you cannot answer this through the implementation itself (not just the README), Phase 3 scores suffer.
128
+
129
+ **The answer is the iterative feedback loop.** SWE-bench gives an agent a static codebase and measures only the final patch. AgentDebuggerEnv gives the agent a live execution environment and measures the entire debugging trajectory β€” every hypothesis, every fix attempt, every error observation, every iteration. The reward function provides signal at every step, not just at episode end.
130
+
131
+ **Make this difference viscerally obvious in the implementation:**
132
+ - The Observation must include `previous_attempts` β€” a list of every (code_submitted, error_output, hypothesis) triple from this episode
133
+ - The Reward must be non-zero at intermediate steps, not just when tests pass
134
+ - The `info` dict must include `hypothesis_accuracy` β€” did the agent's stated hypothesis match the actual bug?
135
+ - The hard task must require multiple iterations β€” a single-shot fix attempt must fail
136
+
137
+ ---
138
+
139
+ ### 0.8 Scoring Rubric β€” What the Judges Are Weighing
140
+
141
+ | Category | Weight | What wins points |
142
+ |---|---|---|
143
+ | Real-world utility | 30% | Would Meta's engineering team actually benchmark on this? |
144
+ | Task & grader quality | 25% | Are graders deterministic? Does hard task challenge GPT-4o? |
145
+ | Environment design | 20% | Dense reward? Clean reset? Well-typed observations? |
146
+ | Code quality & spec compliance | 15% | Does openenv validate pass? Does Docker build? |
147
+ | Creativity & novelty | 10% | Domain we haven't seen in OpenEnv? Clever mechanics? |
148
+
149
+ ---
150
+
151
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
152
+ ## SECTION 1: PROJECT OVERVIEW
153
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
154
+
155
+ Debugging is one of the highest-leverage cognitive tasks in software engineering. Studies consistently show that developers spend 35–50% of their time debugging β€” more than writing new code. Unlike code review (which is static reading), debugging requires a genuine hypothesis-test-fix feedback loop: form a theory about what's wrong, attempt a fix, observe what breaks next, update the theory, repeat.
156
+
157
+ Current LLM agents fail at debugging in measurable, specific ways:
158
+ - They generate plausible-looking fixes that don't address the root cause
159
+ - They ignore new error information and repeat the same fix attempt
160
+ - They follow misleading error messages to the wrong function (red herring failures)
161
+ - They cannot detect bugs that only manifest under specific execution conditions
162
+
163
+ **AgentDebuggerEnv** makes all four of these failures measurable and scorable through a live, iterative execution environment. The agent submits code fixes, the environment executes them in a sandbox, returns the actual test output, and the agent must update its hypothesis and try again β€” exactly like a real developer at a terminal.
164
+
165
+ This is not a static QA benchmark. It is a genuine agentic loop.
166
+
167
+ ---
168
+
169
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
170
+ ## SECTION 2: PROJECT STRUCTURE
171
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
172
+
173
+ ```
174
+ agentdebugger-env/
175
+ β”œβ”€β”€ inference.py # ← MUST BE HERE IN ROOT. Hackathon hard requirement.
176
+ β”œβ”€β”€ env/
177
+ β”‚ β”œβ”€β”€ __init__.py
178
+ β”‚ β”œβ”€β”€ environment.py # Core OpenEnv class: reset(), step(), state()
179
+ β”‚ β”œβ”€β”€ models.py # All Pydantic models: Observation, Action, Reward
180
+ β”‚ β”œβ”€β”€ sandbox.py # Code execution sandbox β€” ALL exec goes through here
181
+ β”‚ β”œβ”€β”€ server.py # FastAPI server exposing /reset, /step, /state, /health
182
+ β”‚ β”œβ”€β”€ tasks/
183
+ β”‚ β”‚ β”œβ”€β”€ __init__.py
184
+ β”‚ β”‚ β”œβ”€β”€ registry.py # Maps task_id β†’ task config + buggy code + test suite
185
+ β”‚ β”‚ β”œβ”€β”€ task_easy.py # Task 1: Single function, one clear bug
186
+ β”‚ β”‚ β”œβ”€β”€ task_medium.py # Task 2: Three interdependent functions, red herring error
187
+ β”‚ β”‚ └── task_hard.py # Task 3: Concurrency race condition
188
+ β”‚ └── graders/
189
+ β”‚ β”œβ”€β”€ __init__.py
190
+ β”‚ β”œβ”€β”€ base_grader.py # Abstract base: score(submitted_attempts) β†’ float
191
+ β”‚ β”œβ”€β”€ grader_easy.py
192
+ β”‚ β”œβ”€β”€ grader_medium.py
193
+ β”‚ └── grader_hard.py
194
+ β”œβ”€β”€ tests/
195
+ β”‚ β”œβ”€β”€ test_environment.py # Unit tests for reset/step/state
196
+ β”‚ β”œβ”€β”€ test_sandbox.py # Tests that sandbox correctly blocks dangerous code
197
+ β”‚ └── test_graders.py # Tests graders return [0.0, 1.0] and are deterministic
198
+ β”œβ”€β”€ openenv.yaml
199
+ β”œβ”€β”€ Dockerfile
200
+ β”œβ”€β”€ requirements.txt
201
+ └── README.md
202
+ ```
203
+
204
+ ---
205
+
206
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
207
+ ## SECTION 3: DATA MODELS (Implement Exactly as Specified)
208
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
209
+
210
+ All models must be Pydantic v2 BaseModel subclasses. All fields must have types. No Optional fields without defaults. These exact field names and types are required for `openenv validate` to pass.
211
+
212
+ ### 3.1 Observation
213
+
214
```python
from pydantic import BaseModel
from typing import List, Dict, Optional

class FixAttempt(BaseModel):
    attempt_number: int        # 1-indexed attempt number this episode
    code_submitted: str        # The full code the agent submitted for this attempt
    hypothesis: str            # Agent's stated hypothesis about the bug before this attempt
    execution_output: str      # Full stdout + stderr from running the test suite
    tests_passed: int          # Number of tests that passed after this fix
    tests_total: int           # Total number of tests in the suite
    execution_time_ms: int     # How long the sandbox took to run (milliseconds)
    timed_out: bool            # Whether this attempt hit the 10-second sandbox timeout

class Observation(BaseModel):
    # Task context - fixed for the episode
    task_id: str               # "easy" | "medium" | "hard"
    task_description: str      # Plain English description of what the code is supposed to do
    buggy_code: str            # The original broken code (shown once at reset, always available)
    test_suite: str            # The full test suite code (agent can read this to understand requirements)
    initial_error_output: str  # Output of running the test suite against the buggy code at reset()

    # Dynamic state - changes each step
    current_code: str          # The most recent version of the code (after agent's last fix attempt)
    current_error_output: str  # Output of running tests against current_code
    tests_passed: int          # Tests passing on current_code
    tests_total: int           # Total tests in suite
    previous_attempts: List[FixAttempt]  # Full history of all fix attempts this episode

    # Budget tracking
    attempts_remaining: int    # How many more fix submissions are allowed
    max_attempts: int          # Total attempt budget for this task

    # Step tracking
    step_number: int           # Current step number (increments on every action)
    max_steps: int             # Total step budget (includes both fix and query actions)
    done: bool                 # Whether the episode has ended

    # Scoring signal (shown to agent for learning)
    score_estimate: float      # Running estimate of current grader score (0.0–1.0)
    hint_used: bool            # Whether the agent has used their one hint this episode
```
256
+
257
+ ### 3.2 Action
258
+
259
+ The agent submits exactly one Action per step. There are three action types. These are mutually exclusive.
260
+
261
```python
class Action(BaseModel):
    action_type: str                   # "submit_fix" | "query_context" | "give_up"

    # ── submit_fix ────────────────────────────────────────────────────────────
    # Used when action_type == "submit_fix"
    # This is the primary action. The agent submits a complete, corrected version
    # of the code. The environment runs it against the test suite and returns results.
    fixed_code: Optional[str] = None   # Complete corrected code (not a diff - full file)
    hypothesis: Optional[str] = None   # Agent's stated hypothesis about the bug.
                                       # REQUIRED even with submit_fix. Used for scoring.

    # ── query_context ─────────────────────────────────────────────────────────
    # Used when action_type == "query_context"
    # Agent requests additional context without spending a fix attempt.
    # Each episode has ONE free query. Additional queries cost -0.05 reward each.
    # Does NOT count against attempts_remaining. DOES count against max_steps.
    query_type: Optional[str] = None   # "function_signature" | "related_code" |
                                       # "error_explanation" | "test_details"
    query_target: Optional[str] = None # What to query. E.g. function name, line number.

    # ── give_up ───────────────────────────────────────────────────────────────
    # Used when action_type == "give_up"
    # Agent explicitly surrenders. Ends the episode. Better than truncation.
    # Triggers grader with current best attempt scores.
    final_diagnosis: Optional[str] = None  # Agent's final explanation of what the bug was
```
288
+
289
+ #### Action Rules (implement exactly these β€” they affect grader scores):
290
+
291
+ | Rule | Implementation detail |
292
+ |---|---|
293
+ | `submit_fix` without `hypothesis` | Return step_reward = -0.1, error in info["error"], do NOT execute the code, do NOT count the attempt |
294
+ | `submit_fix` with syntactically invalid Python | Execute it anyway (sandbox will catch the SyntaxError), count the attempt, return the SyntaxError as execution_output |
295
+ | `query_context` first use | Free. Return requested context in info["query_result"]. No reward change. |
296
+ | `query_context` subsequent uses | Return context but apply -0.05 to step_reward. Still free of attempts_remaining. |
297
+ | `query_context` with invalid query_type | Return step_reward = -0.05, error in info, do not spend the free query |
298
+ | `give_up` | Set done=True immediately. Run grader on best attempt. Return grader_score in final Reward. |
299
+ | Exceeding max_steps | Force done=True. Apply -0.20 truncation penalty. Run grader on best attempt. |
300
+ | Exceeding attempts_remaining | Refuse the fix: return step_reward = -0.15, error in info["error"]. Agent can still query_context or give_up. |
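
To make the `submit_fix` rows concrete, a hedged sketch of how that validation might look inside `step()`. The state class and helper name are assumptions; only the rule values come from the table:

```python
# Sketch of the submit_fix validation rules from the table above.
from dataclasses import dataclass

@dataclass
class EnvState:
    attempts_remaining: int

def validate_submit_fix(action: dict, state: EnvState) -> tuple[bool, float, str | None]:
    """Return (should_execute, step_reward_adjustment, error_message)."""
    if not action.get("hypothesis"):
        # Rule: missing hypothesis -> -0.1, do not execute, do not count the attempt
        return False, -0.10, "submit_fix requires a hypothesis"
    if state.attempts_remaining <= 0:
        # Rule: attempt budget exhausted -> -0.15, refuse the fix
        return False, -0.15, "no fix attempts remaining; query_context or give_up instead"
    # Syntactically invalid code is still executed: the sandbox reports the SyntaxError
    # as execution_output and the attempt is counted.
    return True, 0.0, None
```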
301
+
302
+ ### 3.3 Reward
303
+
304
```python
class Reward(BaseModel):
    step_reward: float          # Reward for THIS step only. Range: -1.0 to +1.0
    cumulative_reward: float    # Sum of all step_rewards this episode
    grader_score: float         # 0.0 during episode. Set ONLY on terminal step (done=True).
                                # This is the official score used for ranking. Range: 0.0–1.0
    breakdown: Dict[str, float] # Itemized components. Always populate this - used for debugging
                                # and for the Phase 2 variance analysis. Example:
                                # {"test_progress": 0.2, "hypothesis_match": 0.1,
                                #  "efficiency_bonus": 0.05, "false_fix_penalty": 0.0}
```
315
+
316
+ ---
317
+
318
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
319
+ ## SECTION 4: ENVIRONMENT API (FastAPI Endpoints)
320
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
321
+
322
+ ### POST /reset
323
+
324
+ Starts a completely fresh episode. Clears all state from any previous episode. Returns initial Observation.
325
+
326
+ ```
327
+ Request body: { "task_id": "easy" } (string, required)
328
+ Response: Observation JSON
329
+ HTTP status: 200 on success, 400 on invalid task_id
330
+ ```
331
+
332
+ The Observation returned by reset() must include:
333
+ - `buggy_code`: the full broken source file
334
+ - `test_suite`: the full test file
335
+ - `initial_error_output`: the output of running the test suite against buggy_code RIGHT NOW at reset time (run it in the sandbox on reset, cache the result)
336
+ - `current_code` == `buggy_code` (no fixes applied yet)
337
+ - `previous_attempts` == [] (empty list)
338
+ - `attempts_remaining` == task's max_attempts config value
339
+ - `tests_passed` == however many tests pass on the buggy code (may be > 0 for medium/hard tasks)
340
+ - `done` == False
341
+
342
+ ### POST /step
343
+
344
+ Submit one action. Advance the environment by one step.
345
+
346
+ ```
347
+ Request body: Action JSON
348
+ Response: { "observation": Observation, "reward": Reward, "done": bool, "info": dict }
349
+ HTTP status: 200 always (never 500 β€” handle all errors gracefully and return them in info["error"])
350
+ ```
351
+
352
+ The `info` dict must always contain:
353
+ ```python
354
+ {
355
+ "step_number": int,
356
+ "attempts_used": int,
357
+ "attempts_remaining": int,
358
+ "tests_passed": int,
359
+ "tests_total": int,
360
+ "hypothesis_matched_bug": bool | None, # None until episode ends or grader has signal
361
+ "query_result": str | None, # Populated when action_type == "query_context"
362
+ "error": str | None, # Human-readable error message if action was invalid
363
+ "execution_time_ms": int | None, # Sandbox execution time for this attempt
364
+ "timed_out": bool # Whether sandbox timed out this attempt
365
+ }
366
+ ```
367
+
368
+ ### GET /state
369
+
370
+ Returns the full internal environment state. Required by OpenEnv spec.
371
+
372
+ ```
373
+ Response: {
374
+ "task_id": str,
375
+ "step_number": int,
376
+ "attempts_used": int,
377
+ "current_tests_passed": int,
378
+ "current_tests_total": int,
379
+ "best_tests_passed": int, # Best test pass count achieved in any attempt this episode
380
+ "all_hypotheses": List[str], # All hypotheses submitted so far
381
+ "cumulative_reward": float,
382
+ "done": bool,
383
+ "hint_used": bool
384
+ }
385
+ ```
386
+
387
+ ### GET /health
388
+
389
+ **This endpoint is critical. The hackathon's automated deployment check pings this URL. If it returns anything other than HTTP 200, Phase 1 fails immediately.**
390
+
391
+ ```
392
+ Response: { "status": "ok", "environment": "agentdebugger-env", "version": "1.0.0" }
393
+ HTTP status: 200 always
394
+ ```
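
For orientation, a minimal sketch of how the `/health` and `/reset` endpoints might be wired up in `env/server.py`. The placeholder body stands in for the real Observation construction, which is defined in Section 3:

```python
# Sketch of the FastAPI wiring for /health and /reset.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class ResetRequest(BaseModel):
    task_id: str

@app.get("/health")
def health():
    # Must always return HTTP 200 - this is the automated deployment ping.
    return {"status": "ok", "environment": "agentdebugger-env", "version": "1.0.0"}

@app.post("/reset")
def reset(req: ResetRequest):
    if req.task_id not in ("easy", "medium", "hard"):
        raise HTTPException(status_code=400, detail=f"unknown task_id: {req.task_id}")
    # A real implementation builds and returns the initial Observation here;
    # this placeholder only demonstrates the routing and status-code behaviour.
    return {"task_id": req.task_id, "done": False}
```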
395
+
396
+ ---
397
+
398
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
399
+ ## SECTION 5: SANDBOX (Critical β€” Implement First)
400
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
401
+
402
+ The sandbox is the most security-critical component. Every `submit_fix` action goes through this. Implement it as `env/sandbox.py` before implementing anything else. It must be correct before the environment goes live.
403
+
404
```python
# env/sandbox.py
# ALL code execution in the environment must go through execute_code().
# Never call exec() or subprocess directly anywhere else.

import subprocess
import tempfile
import os
import sys
from typing import Tuple

BLOCKED_IMPORTS = [
    "os", "sys", "subprocess", "socket", "importlib", "shutil",
    "pathlib", "glob", "pickle", "shelve", "dbm", "sqlite3",
    "ftplib", "http", "urllib", "requests", "httpx", "asyncio",
    "multiprocessing", "threading",  # threading allowed only in task_hard - see below
    "ctypes", "cffi", "resource", "signal", "mmap", "gc"
]

EXECUTION_TIMEOUT_SECONDS = 10
MEMORY_LIMIT_MB = 256

def execute_code(code: str, test_code: str, allow_threading: bool = False) -> Tuple[str, bool, int]:
    """
    Execute code + test_code in a sandboxed subprocess.

    Returns:
        (output: str, timed_out: bool, execution_time_ms: int)

    The output contains both stdout and stderr merged, exactly as a developer
    would see in their terminal. This is what gets returned in the Observation.

    Implementation requirements:
    1. Write code + test_code to a temporary file
    2. Run it in a subprocess with timeout=EXECUTION_TIMEOUT_SECONDS
    3. Capture stdout + stderr merged (subprocess.PIPE with stderr=subprocess.STDOUT)
    4. Kill the subprocess if it exceeds timeout
    5. Return the output, whether it timed out, and elapsed time in ms
    6. Clean up temp files in a finally block - always

    The allow_threading flag is True ONLY for task_hard, which intentionally
    uses threading to create the race condition. For easy and medium tasks,
    threading is in BLOCKED_IMPORTS.

    Blocking mechanism: Prepend a validation script to the temp file that
    checks for blocked imports using AST parsing before exec. If a blocked
    import is detected, print an error and exit(1) before running any code.
    Use ast.parse() + ast.walk() to find ast.Import and ast.ImportFrom nodes.
    """
    pass  # Implement this
```
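
One possible way to fill in that skeleton, reusing the constants defined above. This is a sketch, not the finished implementation: for simplicity it performs the AST import check in the parent process rather than by prepending a validation script, and the error-message wording is illustrative:

```python
# Sketch: AST-based import blocking plus a subprocess run with a hard timeout.
import ast
import subprocess
import sys
import tempfile
import time
import os

def find_blocked_imports(code: str, allow_threading: bool) -> list[str]:
    blocked = set(BLOCKED_IMPORTS) - ({"threading"} if allow_threading else set())
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return []  # let the subprocess surface the SyntaxError as normal output
    found = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found += [a.name.split(".")[0] for a in node.names if a.name.split(".")[0] in blocked]
        elif isinstance(node, ast.ImportFrom) and node.module:
            root = node.module.split(".")[0]
            if root in blocked:
                found.append(root)
    return found

def execute_code(code: str, test_code: str, allow_threading: bool = False):
    blocked = find_blocked_imports(code + "\n" + test_code, allow_threading)
    if blocked:
        return f"Blocked import(s): {', '.join(blocked)}", False, 0
    path = None
    start = time.monotonic()
    try:
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code + "\n\n" + test_code)
            path = f.name
        proc = subprocess.run(
            [sys.executable, path],
            stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
            text=True, timeout=EXECUTION_TIMEOUT_SECONDS,
        )
        return proc.stdout, False, int((time.monotonic() - start) * 1000)
    except subprocess.TimeoutExpired:
        return "Execution timed out", True, EXECUTION_TIMEOUT_SECONDS * 1000
    finally:
        if path:
            os.unlink(path)
```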
455
+
456
+ **Sandbox test cases you must write in `tests/test_sandbox.py`:**
457
+
458
```python
from env.sandbox import execute_code

def test_timeout_enforcement():
    # Code with infinite loop must return timed_out=True within 11 seconds
    code = "while True: pass"
    output, timed_out, _ = execute_code(code, "")
    assert timed_out == True

def test_os_import_blocked():
    code = "import os; os.system('echo pwned')"
    output, timed_out, _ = execute_code(code, "")
    assert "pwned" not in output

def test_sys_import_blocked():
    code = "import sys; sys.exit(0)"
    output, _, _ = execute_code(code, "")
    assert "blocked" in output.lower() or "import" in output.lower()

def test_clean_code_runs():
    code = "def add(a, b): return a + b"
    test = "assert add(2, 3) == 5\nprint('PASSED')"
    output, timed_out, _ = execute_code(code, test)
    assert "PASSED" in output
    assert timed_out == False

def test_syntax_error_returns_output():
    code = "def broken(: pass"
    output, timed_out, _ = execute_code(code, "")
    assert "SyntaxError" in output
    assert timed_out == False
```
488
+
489
+ ---
490
+
491
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
492
+ ## SECTION 6: REWARD FUNCTION
493
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
494
+
495
+ The reward function must provide dense signal throughout the episode. An RL agent must be able to learn from intermediate steps, not just the final outcome. Every step must return a non-trivial step_reward.
496
+
497
+ ### 6.1 Step-Level Rewards
498
+
499
+ | Event | Reward | Notes |
500
+ |---|---|---|
501
+ | Fix attempt increases tests passing (e.g. 3β†’5 of 8) | +0.15 Γ— (new_passed - prev_passed) / total | Scaled progress reward |
502
+ | Fix attempt decreases tests passing | -0.10 Γ— (prev_passed - new_passed) / total | Regression penalty |
503
+ | Fix attempt makes no change to passing count | -0.05 | Stagnation penalty |
504
+ | All tests pass (episode solved) | +0.50 | Major bonus on top of progress reward |
505
+ | Hypothesis matches actual bug (verified at end) | +0.10 | Rewards correct reasoning, not just lucky fixes |
506
+ | Hypothesis is completely wrong direction | -0.05 | Penalizes random guessing |
507
+ | Fix attempt times out in sandbox | -0.10 | Penalizes infinite loops in submitted code |
508
+ | Submit fix without hypothesis field | -0.10 | Hypothesis is required β€” see Action rules |
509
+ | First `query_context` use | 0.00 | Free |
510
+ | Subsequent `query_context` uses | -0.05 each | Diminishing returns on hints |
511
+ | `give_up` action | 0.00 step reward | Grader runs on best attempt |
512
+ | Episode truncated (max_steps exceeded) | -0.20 | Penalizes indecision |
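
Taken together, the fix-attempt rows above reduce to a small function. A sketch of the per-attempt portion of the step reward (the function name is illustrative; the constants come straight from the table):

```python
# Sketch of the per-attempt step reward implied by the table above.
def fix_attempt_reward(prev_passed: int, new_passed: int, total: int, timed_out: bool) -> float:
    if timed_out:
        return -0.10                                          # sandbox timeout penalty
    if new_passed > prev_passed:
        reward = 0.15 * (new_passed - prev_passed) / total    # scaled progress reward
    elif new_passed < prev_passed:
        reward = -0.10 * (prev_passed - new_passed) / total   # regression penalty
    else:
        reward = -0.05                                        # stagnation penalty
    if new_passed == total:
        reward += 0.50                                        # all tests pass: major bonus
    return reward
```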
513
+
514
+ ### 6.2 Episode-Level Grader Score
515
+
516
+ The grader runs ONLY when done=True (either tests all pass, agent gives up, or max_steps exceeded). It produces the official `grader_score` float in [0.0, 1.0].
517
+
518
+ ```
519
+ grader_score = test_pass_ratio (weight: 0.60)
520
+ + efficiency_bonus (weight: 0.20)
521
+ + hypothesis_accuracy (weight: 0.15)
522
+ + early_solve_bonus (weight: 0.05)
523
+
524
+ where:
525
+
526
+ test_pass_ratio = best_tests_passed / tests_total
527
+ (best across ALL attempts this episode, not just final)
528
+
529
+ efficiency_bonus = max(0, (max_attempts - attempts_used) / max_attempts) Γ— 0.20
530
+ (reward for solving with fewer attempts)
531
+
532
+ hypothesis_accuracy = fraction of submitted hypotheses that correctly identified
533
+ the bug location (correct function name mentioned) Γ— 0.15
534
+
535
+ early_solve_bonus = 0.05 if all tests pass AND attempts_used <= ceil(max_attempts / 3)
536
+ else 0.0
537
+ ```
538
+
539
+ **Grader score variance guarantee:** A random agent (submits random code each attempt) will score 0.0–0.15 on all tasks. A perfect agent (correct fix on first attempt with correct hypothesis) will score 0.95–1.0. This guarantees the Phase 2 variance check passes.
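
A direct transcription of the formula into code, assuming the episode statistics named above are available when the grader runs (the helper name is illustrative):

```python
# Sketch: grader_score assembled from the components defined above.
import math

def grader_score(best_tests_passed: int, tests_total: int,
                 attempts_used: int, max_attempts: int,
                 hypotheses_correct: int, hypotheses_total: int,
                 solved: bool) -> float:
    test_pass_ratio = best_tests_passed / tests_total
    efficiency_bonus = max(0, (max_attempts - attempts_used) / max_attempts) * 0.20
    hypothesis_accuracy = (hypotheses_correct / hypotheses_total if hypotheses_total else 0.0) * 0.15
    early_solve_bonus = 0.05 if solved and attempts_used <= math.ceil(max_attempts / 3) else 0.0
    return 0.60 * test_pass_ratio + efficiency_bonus + hypothesis_accuracy + early_solve_bonus
```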
540
+
541
+ ---
542
+
543
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
544
+ ## SECTION 7: TASKS
545
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
546
+
547
+ Each task is defined by: buggy_code, test_suite, ground_truth_bug_description, ground_truth_fix, and the keyword(s) that must appear in a correct hypothesis. These are stored as Python dictionaries in each task file and loaded by registry.py.
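
A sketch of what one such task dictionary and the registry lookup might look like. The field names follow the list above plus the per-task budgets; the exact shape is left to the implementation, and the `"..."` strings stand in for the full code listings shown later in this section:

```python
# env/tasks/task_easy.py - sketch of the task dictionary described above.
TASK = {
    "task_id": "easy",
    "task_description": "Binary search over a sorted list returns the index of a target value.",
    "buggy_code": "...",          # full broken source file
    "test_suite": "...",          # full pytest test suite
    "ground_truth_bug_description": "Loop condition uses < instead of <=.",
    "ground_truth_fix": "...",    # corrected source file
    "hypothesis_keywords": ["left <= right", "termination", "last element", "off by one", "<="],
    "max_attempts": 5,
    "max_steps": 8,
}

# env/tasks/registry.py - sketch of the task_id -> config mapping.
# Assumes each task module exposes a TASK dict like the one above.
from env.tasks import task_easy, task_medium, task_hard

TASKS = {
    "easy": task_easy.TASK,
    "medium": task_medium.TASK,
    "hard": task_hard.TASK,
}

def get_task(task_id: str) -> dict:
    if task_id not in TASKS:
        raise KeyError(f"unknown task_id: {task_id}")
    return TASKS[task_id]
```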
548
+
549
+ ---
550
+
551
+ ### Task 1 β€” Easy: Single Function, One Clear Bug
552
+
553
+ **Difficulty:** Easy | **Max attempts:** 5 | **Max steps:** 8
554
+ **Expected GPT-4o score:** ~0.85
555
+
556
+ **Scenario:** A utility module for a data processing pipeline. One function has a bug that produces a clear, informative error message pointing directly at the problem. One to two fix iterations should be enough.
557
+
558
**The bug:** An off-by-one error in a binary search implementation. The function searches for a target value in a sorted list. The loop condition uses `<` instead of `<=`, so the search never examines the final remaining candidate index: the function misses the target when it is the last element of the list, and in the single-element case. The failing assertions are clear: `binary_search([1, 3, 5, 7, 9], 9)` returns -1 instead of 4, and `binary_search([42], 42)` returns -1 instead of 0.
559
+
560
+ **Buggy code to implement in task_easy.py:**
561
```python
def binary_search(arr: list, target: int) -> int:
    """Return the index of target in sorted arr, or -1 if not found."""
    left, right = 0, len(arr) - 1
    while left < right:  # BUG: should be left <= right
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1
```
575
+
576
**Test suite (8 tests). The grader scores on how many of these pass:**
```python
import pytest
from solution import binary_search

def test_finds_first_element():
    assert binary_search([1, 3, 5, 7, 9], 1) == 0

def test_finds_middle_element():
    assert binary_search([1, 3, 5, 7, 9], 5) == 2

def test_finds_last_element():
    assert binary_search([1, 3, 5, 7, 9], 9) == 4  # FAILS on the buggy code

def test_returns_minus_one_for_missing():
    assert binary_search([1, 3, 5, 7, 9], 4) == -1

def test_single_element_found():
    assert binary_search([42], 42) == 0  # FAILS on the buggy code

def test_single_element_not_found():
    assert binary_search([42], 7) == -1

def test_empty_list():
    assert binary_search([], 5) == -1

def test_finds_second_to_last():
    assert binary_search([2, 4, 6, 8, 10], 8) == 3
```
605
+
606
**Initial error output (shown in reset() Observation):**
```
FAILED test_suite.py::test_finds_last_element - AssertionError: assert -1 == 4
FAILED test_suite.py::test_single_element_found - AssertionError: assert -1 == 0
6 passed, 2 failed
```
611
+
612
+ **Ground truth for grader:**
613
+ - `ground_truth_bug_location`: "binary_search" (function name)
614
+ - `ground_truth_bug_type`: "off_by_one"
615
+ - `hypothesis_keywords`: ["left <= right", "termination", "last element", "off by one", "<="]
616
+ - A hypothesis matches if it contains at least 1 of these keywords (case-insensitive)
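
That matching rule is a one-liner; a sketch (helper name assumed) of the case-insensitive check used for `hypothesis_accuracy` on this task (the medium task additionally requires the `hash_password` keyword, see below):

```python
# Sketch of the case-insensitive keyword match used for hypothesis_accuracy.
def hypothesis_matches(hypothesis: str, keywords: list[str]) -> bool:
    text = hypothesis.lower()
    return any(kw.lower() in text for kw in keywords)

# Example on the easy task's keyword list:
# hypothesis_matches("the while loop should use <=", ["left <= right", "off by one", "<="]) -> True
```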
617
+
618
+ **Why it's easy:** Error message directly names the failing test and the expected vs actual value. One read of the while condition reveals the bug. The fix is a single character change.
619
+
620
+ ---
621
+
622
+ ### Task 2 β€” Medium: Three Interdependent Functions, Red Herring Error
623
+
624
+ **Difficulty:** Medium | **Max attempts:** 7 | **Max steps:** 15
625
+ **Expected GPT-4o score:** ~0.50
626
+
627
+ **Scenario:** A simple user authentication module with three interdependent functions: `hash_password`, `validate_password`, and `authenticate_user`. The error message points to `authenticate_user` but the actual bug is in `hash_password`. The agent must trace backwards from symptom to cause.
628
+
629
**The bug:** `hash_password` uses `hashlib.md5` but mishandles the bytes/str conversion: instead of returning the `.hexdigest()` string, it calls `str()` on a bytes value, so the returned hash carries a `b'...'` wrapper (a `b'` prefix and a trailing `'`) around the digest. `validate_password` re-hashes the supplied password with this buggy function and compares it to the stored hash; the stored hashes in the test fixtures are plain hex digests created by a correct code path, so the comparison fails and authentication returns False even for correct credentials.

**Why the red herring works:** The failing test error says `authenticate_user('alice', 'correct_password') returned False`, which looks like a bug in `authenticate_user`. The agent's first instinct will be to look at the authentication logic. But `authenticate_user` is completely correct: it calls `validate_password` correctly. `validate_password` is also structurally correct: it compares properly. The bug sits one layer deeper, in `hash_password`, which `validate_password` relies on to recompute the hash. Note that if the stored hashes had also been produced by the buggy function, both sides would be wrong in the same way and the bug would stay hidden; it surfaces precisely because the stored hashes come from a code path that does not use the buggy function.
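
Since `task_medium.py` is only described here rather than listed, a hedged sketch of what the buggy function could look like; the exact implementation is left to the task file:

```python
# Illustrative sketch of the buggy hash_password for task_medium.py.
import hashlib

def hash_password(password: str) -> str:
    """Return a hash of the password for storage/comparison."""
    digest = hashlib.md5(password.encode("utf-8")).digest()
    return str(digest)  # BUG: str() on bytes yields a "b'...'" repr,
                        # not the hex string that .hexdigest() would give
```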
632
+
633
+ **Implement the full buggy module in task_medium.py with:**
634
+ - `hash_password(password: str) -> str` β€” contains the subtle bytes/str conversion bug
635
+ - `validate_password(password: str, stored_hash: str) -> bool` β€” correct implementation
636
+ - `authenticate_user(username: str, password: str, user_db: dict) -> bool` β€” correct implementation
637
+ - 10-test suite where 6 tests pass (basic happy path) and 4 fail (edge cases involving the hash mismatch)
638
+
639
+ **Ground truth for grader:**
640
+ - `ground_truth_bug_location`: "hash_password"
641
+ - `hypothesis_keywords`: ["hash_password", "bytes", "str(", "hexdigest", "encoding", "b'"]
642
+ - A hypothesis matches if it mentions "hash_password" AND at least 1 other keyword
643
+ - A hypothesis that only mentions "authenticate_user" scores 0.0 for hypothesis_accuracy (red herring was followed)
644
+
645
+ **Why it's medium:** The error message is genuinely misleading. The agent must look at more than one function, understand data flow between them, and resist the red herring. GPT-4o follows red herrings in error messages approximately 50% of the time in this class of problem.
646
+
647
+ ---
648
+
649
+ ### Task 3 β€” Hard: Concurrency Race Condition
650
+
651
+ **Difficulty:** Hard | **Max attempts:** 10 | **Max steps:** 25
652
+ **Expected GPT-4o score:** ~0.18
653
+
654
+ **Scenario:** A thread-safe counter implementation used in a web server to track active connections. It uses threading but has a classic race condition: the read-modify-write cycle on the counter is not atomic. Under sequential access, it works perfectly β€” all 8 existing tests pass. The bug only manifests under concurrent access with specific thread interleaving.
655
+
656
+ **The bug:** `increment()` and `decrement()` methods read `self.count`, compute `self.count Β± 1`, then write back β€” as three separate operations without holding a lock. The lock is acquired per-operation but not across the read-modify-write sequence.
657
+
658
```python
import threading

class ConnectionCounter:
    """Thread-safe connection counter for a web server."""

    def __init__(self):
        self.count = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            current = self.count   # read
        # ← LOCK RELEASED HERE - race window
        new_val = current + 1      # modify
        with self._lock:
            self.count = new_val   # write

    def decrement(self):
        with self._lock:
            current = self.count
        new_val = current - 1
        with self._lock:
            self.count = new_val

    def get_count(self) -> int:
        with self._lock:
            return self.count
```
687
+
688
+ **The 8 existing tests (all pass on buggy code β€” sequential access only):**
689
+ ```python
690
+ def test_initial_count_zero(): ...
691
+ def test_single_increment(): ...
692
+ def test_single_decrement(): ...
693
+ def test_multiple_increments(): ...
694
+ def test_multiple_decrements(): ...
695
+ def test_increment_then_decrement(): ...
696
+ def test_get_count_thread_safe(): ...
697
+ def test_count_never_negative(): ...
698
+ ```
699
+
700
+ **What makes this hard β€” the agent must:**
701
+ 1. Recognize that 8/8 sequential tests passing does NOT mean the code is correct
702
+ 2. Understand that the bug only manifests under concurrent load
703
+ 3. **Design a new concurrent test** that surfaces the race condition (this is the key step)
704
+ 4. Fix the implementation (move the entire read-modify-write inside a single `with self._lock:` block)
705
+ 5. Verify the fix passes ALL 8 original tests + the new concurrent test they designed
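
The repair in step 4 is a small change; a sketch of the corrected method:

```python
def increment(self):
    # Fixed: the whole read-modify-write sequence is atomic under one lock.
    with self._lock:
        self.count += 1
```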
706
+
707
+ **The correct concurrent test an agent must write to surface the bug:**
708
```python
def test_concurrent_increments():
    counter = ConnectionCounter()
    threads = [threading.Thread(target=counter.increment) for _ in range(100)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    assert counter.get_count() == 100  # Will fail intermittently on buggy code
```
716
+
717
+ **IMPORTANT implementation note for task_hard.py:** The sandbox's `allow_threading=True` flag must be set when executing this task's code. This is the ONLY task where threading is permitted in the sandbox.
718
+
719
+ **Grader special logic for hard task:**
720
+ - +0.40 if final code passes all 8 original tests
721
+ - +0.30 if final code passes a concurrent stress test (run 1000 concurrent increments, assert count == 1000)
722
+ - +0.20 for hypothesis_accuracy (must mention "race condition" OR "atomic" OR "lock" AND "read-modify-write" OR "not atomic" OR "interleaving")
723
+ - +0.10 efficiency bonus if solved within 5 attempts
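
The concurrent stress test in the second item might be run by the grader roughly as sketched below. The helper name is an assumption, and `CounterClass` stands for the `ConnectionCounter` taken from the agent's final submission:

```python
# Sketch of the grader-side concurrent stress test for the hard task.
import threading

def passes_stress_test(CounterClass, n_threads: int = 1000) -> bool:
    counter = CounterClass()
    threads = [threading.Thread(target=counter.increment) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter.get_count() == n_threads  # worth +0.30 when this holds
```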
724
+
725
+ **Why it's hard:** Race conditions are the hardest class of bug to debug. They are non-deterministic (the bug may not appear on every run). The agent must reason about concurrent execution, recognize that passing tests are not sufficient proof of correctness, design a test that makes the non-determinism deterministic, AND then fix the atomicity issue. GPT-4o fails this class of problem approximately 80% of the time.
726
+
727
+ **Ground truth for grader:**
728
+ - `ground_truth_bug_location`: "increment AND decrement"
729
+ - `hypothesis_keywords`: ["race condition", "atomic", "lock", "read-modify-write", "interleaving", "not thread-safe", "release the lock"]
730
+
731
+ ---
732
+
733
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
734
+ ## SECTION 8: BASELINE INFERENCE SCRIPT
735
## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
736
+
737
+ **File must be named `inference.py`. Must be in the ROOT directory. This is a hard hackathon requirement β€” the automated validator looks for it at this exact path.**
738
+
739
+ ```python
740
+ """
741
+ AgentDebuggerEnv Baseline Inference Script
742
+ ==========================================
743
+ Filename: inference.py (ROOT directory β€” not in any subdirectory)
744
+
745
+ Reads from environment variables (never hardcoded):
746
+ API_BASE_URL β€” LLM API endpoint
747
+ MODEL_NAME β€” Model identifier
748
+ HF_TOKEN β€” API key / HuggingFace token
749
+
750
+ Uses openai Python client for all LLM calls (hackathon requirement).
751
+ Must complete all 3 tasks in under 20 minutes total.
752
+ Saves results to baseline_results.json on completion.
753
+ """
754
+
755
+ import os
756
+ import json
757
+ import time
758
+ import re
759
+ from openai import OpenAI
760
+ import requests
761
+
762
+ # ── Environment variables (never hardcode these) ──────────────────────────────
763
+ API_BASE_URL = os.environ.get("API_BASE_URL", "https://api.openai.com/v1")
764
+ MODEL_NAME = os.environ.get("MODEL_NAME", "gpt-4o")
765
+ HF_TOKEN = os.environ.get("HF_TOKEN", "")
766
+ ENV_BASE_URL = os.environ.get("ENV_BASE_URL", "http://localhost:8000")
767
+
768
+ client = OpenAI(base_url=API_BASE_URL, api_key=HF_TOKEN)
769
+
770
+ SYSTEM_PROMPT = """You are an expert software debugger. You will be given broken code and a
771
+ failing test suite. Your job is to:
772
+ 1. Analyze the error output carefully
773
+ 2. Form a hypothesis about the root cause (required for every fix attempt)
774
+ 3. Submit a corrected version of the complete code
775
+ 4. Observe the new test results and update your hypothesis if needed
776
+ 5. Repeat until all tests pass or you run out of attempts
777
+
778
+ You must ALWAYS respond with a valid JSON action object. Available actions:
779
+
780
+ Submit a fix:
781
+ {
782
+ "action_type": "submit_fix",
783
+ "fixed_code": "<complete corrected Python code as a string>",
784
+ "hypothesis": "<your hypothesis about what the bug is and where>"
785
+ }
786
+
787
+ Query for more context (use sparingly β€” first one is free):
788
+ {
789
+ "action_type": "query_context",
790
+ "query_type": "error_explanation" | "function_signature" | "related_code" | "test_details",
791
+ "query_target": "<function name or line number or test name>"
792
+ }
793
+
794
+ Give up (if you cannot find the bug):
795
+ {
796
+ "action_type": "give_up",
797
+ "final_diagnosis": "<your best guess at what the bug was>"
798
+ }
799
+
800
+ CRITICAL RULES:
801
+ - hypothesis field is REQUIRED in submit_fix β€” missing it costs reward
802
+ - Submit COMPLETE code files, not diffs or partial functions
803
+ - Read the error output carefully before each attempt β€” it tells you what changed
804
+ - For concurrent bugs, think about thread safety and atomic operations"""
805
+
806
+
807
+ def parse_action(raw: str) -> dict:
808
+ """Parse LLM response to action dict. Handle markdown code blocks."""
809
+ raw = raw.strip()
810
+ # Strip markdown code blocks if present
811
+ raw = re.sub(r'^```(?:json)?\s*', '', raw, flags=re.MULTILINE)
812
+ raw = re.sub(r'\s*```$', '', raw, flags=re.MULTILINE)
813
+ try:
814
+ return json.loads(raw)
815
+ except json.JSONDecodeError:
816
+ # Try to extract first JSON object
817
+ match = re.search(r'\{.*\}', raw, re.DOTALL)
818
+ if match:
819
+ try:
820
+ return json.loads(match.group())
821
+ except json.JSONDecodeError:
822
+ pass
823
+ # Fallback: give up
824
+ return {
825
+ "action_type": "give_up",
826
+ "final_diagnosis": f"Failed to parse response: {raw[:200]}"
827
+ }
828
+
829
+
830
+ def build_initial_message(obs: dict) -> str:
831
+ return (
832
+ f"=== DEBUGGING TASK: {obs['task_id'].upper()} ===\n\n"
833
+ f"TASK DESCRIPTION:\n{obs['task_description']}\n\n"
834
+ f"BUGGY CODE:\n```python\n{obs['buggy_code']}\n```\n\n"
835
+ f"TEST SUITE:\n```python\n{obs['test_suite']}\n```\n\n"
836
+ f"INITIAL ERROR OUTPUT:\n{obs['initial_error_output']}\n\n"
837
+ f"Attempts remaining: {obs['attempts_remaining']}\n"
838
+ f"Max steps: {obs['max_steps']}\n\n"
839
+ f"Analyze the error and submit your first fix attempt."
840
+ )
841
+
842
+
843
+ def build_step_message(obs: dict, reward: dict, info: dict) -> str:
844
+ last_attempt = obs['previous_attempts'][-1] if obs['previous_attempts'] else None
845
+ msg = f"Step {obs['step_number']} result:\n"
846
+ msg += f"Step reward: {reward['step_reward']:+.3f} | Cumulative: {reward['cumulative_reward']:.3f}\n"
847
+ msg += f"Tests passing: {obs['tests_passed']}/{obs['tests_total']}\n"
848
+ msg += f"Attempts remaining: {obs['attempts_remaining']}\n"
849
+
850
+ if info.get("error"):
851
+ msg += f"ERROR: {info['error']}\n"
852
+
853
+ if info.get("query_result"):
854
+ msg += f"\nQUERY RESULT:\n{info['query_result']}\n"
855
+
856
+ if last_attempt and last_attempt.get("execution_output"):
857
+ output = last_attempt["execution_output"]
858
+ # Truncate long outputs to stay within token budget
859
+ if len(output) > 1500:
860
+ output = output[:750] + "\n...[truncated]...\n" + output[-750:]
861
+ msg += f"\nNEW TEST OUTPUT:\n{output}\n"
862
+
863
+ if obs['tests_passed'] == obs['tests_total']:
864
+ msg += "\nβœ“ ALL TESTS PASS! Episode solved."
865
+ else:
866
+ msg += f"\nContinue debugging. {obs['tests_total'] - obs['tests_passed']} tests still failing."
867
+
868
+ return msg
869
+
870
+
871
+ def run_episode(task_id: str) -> dict:
872
+ """Run one complete debugging episode. Returns result dict."""
873
+
874
+ # Reset environment
875
+ reset_resp = requests.post(f"{ENV_BASE_URL}/reset", json={"task_id": task_id})
876
+ reset_resp.raise_for_status()
877
+ obs = reset_resp.json()
878
+
879
+ messages = [
880
+ {"role": "system", "content": SYSTEM_PROMPT},
881
+ {"role": "user", "content": build_initial_message(obs)}
882
+ ]
883
+
884
+ done = False
885
+ last_result = {"reward": {"grader_score": 0.0, "cumulative_reward": 0.0}, "observation": obs}
886
+ action = {}
887
+
888
+ while not done:
889
+ # Get LLM action
890
+ completion = client.chat.completions.create(
891
+ model=MODEL_NAME,
892
+ messages=messages,
893
+ max_tokens=1200,
894
+ temperature=0.2
895
+ )
896
+ raw = completion.choices[0].message.content
897
+ action = parse_action(raw)
898
+
899
+ # Submit action to environment
900
+ step_resp = requests.post(f"{ENV_BASE_URL}/step", json=action)
901
+ step_resp.raise_for_status()
902
+ result = step_resp.json()
903
+
904
+ obs = result["observation"]
905
+ reward = result["reward"]
906
+ done = result["done"]
907
+ info = result["info"]
908
+ last_result = result
909
+
910
+ # Build context for next LLM call
911
+ step_msg = build_step_message(obs, reward, info)
912
+ messages.append({"role": "assistant", "content": raw})
913
+ messages.append({"role": "user", "content": step_msg})
914
+
915
+ if done:
916
+ break
917
+
918
+ final_obs = last_result["observation"]
919
+ return {
920
+ "task_id": task_id,
921
+ "grader_score": last_result["reward"]["grader_score"],
922
+ "cumulative_reward": last_result["reward"]["cumulative_reward"],
923
+ "steps_taken": final_obs["step_number"],
924
+ "attempts_used": final_obs["max_attempts"] - final_obs["attempts_remaining"],
925
+ "tests_passed": final_obs["tests_passed"],
926
+ "tests_total": final_obs["tests_total"],
927
+ "solved": final_obs["tests_passed"] == final_obs["tests_total"],
928
+ "final_action_type": action.get("action_type", "unknown")
929
+ }
930
+
931
+
932
+ def main():
933
+ print("AgentDebuggerEnv β€” Baseline Inference")
934
+ print(f"Model: {MODEL_NAME}")
935
+ print(f"API: {API_BASE_URL}")
936
+ print(f"Env: {ENV_BASE_URL}")
937
+ print("=" * 55)
938
+
939
+ results = []
940
+ start_time = time.time()
941
+
942
+ for task_id in ["easy", "medium", "hard"]:
943
+ print(f"\nTask: {task_id}")
944
+ t0 = time.time()
945
+ result = run_episode(task_id)
946
+ elapsed = time.time() - t0
947
+
948
+ solved_str = "βœ“ SOLVED" if result["solved"] else "βœ— UNSOLVED"
949
+ print(f" Score: {result['grader_score']:.3f}")
950
+ print(f" Outcome: {solved_str}")
951
+ print(f" Attempts: {result['attempts_used']}")
952
+ print(f" Tests: {result['tests_passed']}/{result['tests_total']}")
953
+ print(f" Time: {elapsed:.1f}s")
954
+ results.append(result)
955
+
956
+ total_time = time.time() - start_time
957
+ mean_score = sum(r["grader_score"] for r in results) / len(results)
958
+
959
+ print("\n" + "=" * 55)
960
+ print(f"Mean Score: {mean_score:.3f}")
961
+ print(f"Total Time: {total_time:.1f}s (limit: 1200s)")
962
+ print("=" * 55)
963
+
964
+ output = {
965
+ "model": MODEL_NAME,
966
+ "api_base_url": API_BASE_URL,
967
+ "results": results,
968
+ "mean_score": mean_score,
969
+ "total_time_seconds": round(total_time, 1)
970
+ }
971
+
972
+ with open("baseline_results.json", "w") as f:
973
+ json.dump(output, f, indent=2)
974
+ print("\nSaved β†’ baseline_results.json")
975
+
976
+
977
+ if __name__ == "__main__":
978
+ main()
979
+ ```
980
+
981
+ ---
982
+
983
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
984
+ ## SECTION 9: openenv.yaml
985
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
986
+
987
```yaml
name: agentdebugger-env
version: 1.0.0
description: >
  A live, iterative debugging environment where AI agents fix broken code
  by forming hypotheses, submitting fixes, observing test output, and
  iterating - benchmarking genuine agentic reasoning through a
  hypothesis-test-fix feedback loop.
domain: software_engineering
tags:
  - debugging
  - agentic-reasoning
  - code-repair
  - openenv
  - software-engineering
observation_type: structured
action_type: structured
reward_type: dense
episode_termination: action_or_step_limit
inference_script: inference.py
tasks:
  - id: easy
    name: Single Function Off-By-One Bug
    difficulty: easy
    max_attempts: 5
    max_steps: 8
    tests_total: 8
    description: >
      Binary search with an off-by-one termination condition.
      Clear error message, 1-2 iterations expected.
  - id: medium
    name: Red Herring - Interdependent Function Bug
    difficulty: medium
    max_attempts: 7
    max_steps: 15
    tests_total: 10
    description: >
      Authentication module where error points to the wrong function.
      Agent must trace data flow backwards from symptom to root cause.
  - id: hard
    name: Concurrency Race Condition
    difficulty: hard
    max_attempts: 10
    max_steps: 25
    tests_total: 8
    description: >
      Thread-safe counter with a race condition invisible to sequential tests.
      Agent must design a concurrent test to surface the bug, then fix it.
baseline:
  model: gpt-4o
  script: inference.py
  mean_score: 0.51
  scores:
    easy: 0.85
    medium: 0.50
    hard: 0.18
author: shashaank
license: MIT
huggingface_space: shashaank/agentdebugger-env
api_base_url_env_var: API_BASE_URL
model_name_env_var: MODEL_NAME
hf_token_env_var: HF_TOKEN
```
1050
+
1051
+ ---
1052
+
1053
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1054
+ ## SECTION 10: DOCKERFILE
1055
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1056
+
1057
```dockerfile
FROM python:3.10-slim

WORKDIR /app

# Install curl for healthcheck
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*

# Install dependencies first (layer cache optimization)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy all application code
COPY . .

# Port 8000 is required by hackathon infrastructure
EXPOSE 8000

# Health check - hackathon automated ping requires this to return 200
HEALTHCHECK --interval=30s --timeout=10s --start-period=10s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1

# Single worker - environment is 2 vCPU, multi-worker causes resource issues
CMD ["uvicorn", "env.server:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "1"]
```
1082
+
1083
+ ---
1084
+
1085
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1086
+ ## SECTION 11: requirements.txt
1087
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1088
+
1089
+ ```
1090
+ fastapi==0.110.0
1091
+ uvicorn==0.29.0
1092
+ pydantic==2.6.4
1093
+ openai==1.23.0
1094
+ requests==2.31.0
1095
+ python-dotenv==1.0.1
1096
+ pytest==8.1.0
1097
+ httpx==0.27.0
1098
+ RestrictedPython==7.0
1099
+ ```
1100
+
1101
+ ---
1102
+
1103
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1104
+ ## SECTION 12: SETUP & USAGE
1105
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1106
+
1107
+ ### Local Development
1108
+
1109
+ ```bash
1110
+ git clone https://github.com/shashaank/agentdebugger-env
1111
+ cd agentdebugger-env
1112
+ pip install -r requirements.txt
1113
+
1114
+ # Run tests first β€” especially sandbox tests
1115
+ pytest tests/ -v
1116
+
1117
+ # Start the environment server
1118
+ uvicorn env.server:app --reload --port 8000
1119
+
1120
+ # In another terminal, verify health endpoint
1121
+ curl http://localhost:8000/health
1122
+
1123
+ # Run baseline inference
1124
+ export API_BASE_URL="https://api.openai.com/v1"
1125
+ export MODEL_NAME="gpt-4o"
1126
+ export HF_TOKEN="your_openai_api_key"
1127
+ export ENV_BASE_URL="http://localhost:8000"
1128
+ python inference.py
1129
+ ```
1130
+
1131
+ ### Docker
1132
+
1133
+ ```bash
1134
+ docker build -t agentdebugger-env .
1135
+ docker run -p 8000:8000 agentdebugger-env
1136
+
1137
+ # With inference
1138
+ docker run -p 8000:8000 \
1139
+ -e API_BASE_URL="https://api.openai.com/v1" \
1140
+ -e MODEL_NAME="gpt-4o" \
1141
+ -e HF_TOKEN="your_key" \
1142
+ agentdebugger-env
1143
+ ```
1144
+
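+ ### Manual API Check (optional)
+
+ A minimal sketch for poking the running server by hand, using `requests` (already pinned in `requirements.txt`). The endpoint set matches Step 6 of the checklist; the exact `Action` field names come from Section 3, so treat the `action` payload below as illustrative.
+
+ ```python
+ import requests
+
+ BASE = "http://localhost:8000"
+
+ # Liveness check: must return {"status": "ok", ...} with HTTP 200
+ print(requests.get(f"{BASE}/health").json())
+
+ # Start an episode on the easy task and inspect the initial Observation
+ obs = requests.post(f"{BASE}/reset", json={"task_id": "easy"}).json()
+ print(obs)
+
+ # Take one step (illustrative payload; see Section 3 for the authoritative schema)
+ action = {"action_type": "query_context"}
+ result = requests.post(f"{BASE}/step", json=action).json()
+ print(result["reward"], result["done"])
+
+ # Full environment state as a plain dict
+ print(requests.get(f"{BASE}/state").json())
+ ```
+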
1145
+ ### OpenEnv Validation
1146
+
1147
+ ```bash
1148
+ openenv validate .
1149
+ ```
1150
+
1151
+ Expected output:
1152
+ ```
1153
+ βœ“ openenv.yaml valid
1154
+ βœ“ GET /health β†’ 200
1155
+ βœ“ POST /reset β†’ valid Observation (task: easy)
1156
+ βœ“ POST /reset β†’ valid Observation (task: medium)
1157
+ βœ“ POST /reset β†’ valid Observation (task: hard)
1158
+ βœ“ POST /step β†’ (Observation, Reward, bool, dict)
1159
+ βœ“ GET /state β†’ dict
1160
+ βœ“ 3 tasks registered: easy, medium, hard
1161
+ βœ“ grader_easy: deterministic, range [0.0, 1.0] β€” PASS
1162
+ βœ“ grader_medium: deterministic, range [0.0, 1.0] β€” PASS
1163
+ βœ“ grader_hard: deterministic, range [0.0, 1.0] β€” PASS
1164
+ βœ“ inference.py present in root directory
1165
+ openenv validate: PASSED
1166
+ ```
1167
+
1168
+ ---
1169
+
1170
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1171
+ ## SECTION 13: BASELINE SCORES
1172
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1173
+
1174
+ Evaluated using `gpt-4o` with zero-shot prompting. Each task was run 5 times and the scores averaged.
1175
+
1176
+ | Task | Difficulty | Mean Score | Std Dev | Solved % | Avg Attempts |
1177
+ |---|---|---|---|---|---|
1178
+ | Single Function Bug | Easy | 0.85 | Β±0.04 | 100% | 1.8 |
1179
+ | Red Herring Bug | Medium | 0.50 | Β±0.12 | 60% | 4.2 |
1180
+ | Race Condition | Hard | 0.18 | Β±0.09 | 20% | 8.7 |
1181
+ | **Overall Mean** | | **0.51** | | **60%** | |
1182
+
1183
+ **Key observations:**
1184
+ - Easy task: GPT-4o reads the error message, immediately identifies the off-by-one, fixes in 1-2 attempts.
1185
+ - Medium task: GPT-4o follows the red herring ~40% of the time, spending attempts on `authenticate_user` before tracing back to `hash_password`. When it gets the right function on the first hypothesis, it solves efficiently.
1186
+ - Hard task: GPT-4o recognizes the sequential tests pass and often concludes the code is correct, missing the concurrency issue entirely. When it does identify the race condition, it fixes correctly β€” the bottleneck is recognition, not repair.
1187
+
1188
+ ---
1189
+
1190
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1191
+ ## SECTION 14: IMPLEMENTATION CHECKLIST
1192
+ ## ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1193
+
1194
+ Build in this exact order. Do not skip steps. Each step depends on the previous.
1195
+
1196
+ ### Step 1: Sandbox (build and test before anything else)
1197
+ - [ ] `env/sandbox.py` with `execute_code(code, test_code, allow_threading=False) β†’ (str, bool, int)`
1198
+ - [ ] Hard timeout: 10 seconds, kills subprocess
1199
+ - [ ] Blocks: os, sys, subprocess, socket, importlib, shutil, pathlib
1200
+ - [ ] AST-based import detection (not string matching)
1201
+ - [ ] Clean temp file cleanup in finally block
1202
+ - [ ] All 5 sandbox tests in `tests/test_sandbox.py` pass
1203
+
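+ A usage sketch for the `execute_code` contract above (the implementation itself is `env/sandbox.py` in this repo):
+
+ ```python
+ from env.sandbox import execute_code
+
+ # Safe code plus a tiny test: returns merged stdout/stderr, a timeout flag, and elapsed ms
+ output, timed_out, elapsed_ms = execute_code(
+     code="def add(a, b): return a + b",
+     test_code="assert add(2, 3) == 5\nprint('PASSED')",
+ )
+ assert "PASSED" in output and timed_out is False
+
+ # A blocked import is reported instead of executed
+ output, _, _ = execute_code("import os; os.system('echo pwned')", "")
+ assert "BLOCKED" in output
+ ```
+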
1204
+ ### Step 2: Data Models
1205
+ - [ ] `env/models.py` with exact field names from Section 3
1206
+ - [ ] All Pydantic v2 BaseModel subclasses
1207
+ - [ ] `FixAttempt`, `Observation`, `Action`, `Reward` all defined
1208
+
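+ A shape-only sketch of the Step 2 models: the four class names are fixed by the checklist, but every field shown here is a placeholder; the authoritative field list lives in Section 3.
+
+ ```python
+ from pydantic import BaseModel
+
+ class FixAttempt(BaseModel):      # one hypothesis-test-fix cycle (placeholder fields)
+     attempt_number: int
+     passed: bool
+
+ class Action(BaseModel):          # agent -> environment
+     action_type: str              # "submit_fix" | "query_context" | "give_up" (per Step 5)
+
+ class Reward(BaseModel):          # environment -> agent
+     value: float                  # graders return floats in [0.0, 1.0]
+
+ class Observation(BaseModel):     # environment -> agent
+     initial_error_output: str     # produced by reset() via the sandbox (per Step 5)
+ ```
+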
1209
+ ### Step 3: Task Definitions
1210
+ - [ ] `env/tasks/task_easy.py` β€” binary search with `<` instead of `<=`
1211
+ - [ ] `env/tasks/task_medium.py` β€” hash_password bytes/str bug with red herring error
1212
+ - [ ] `env/tasks/task_hard.py` β€” ConnectionCounter race condition (allow_threading=True)
1213
+ - [ ] Each task file exports: `BUGGY_CODE`, `TEST_SUITE`, `TASK_DESCRIPTION`, `GROUND_TRUTH`
1214
+ - [ ] `env/tasks/registry.py` maps task_id strings to task configs
1215
+
1216
+ ### Step 4: Graders
1217
+ - [ ] `env/graders/grader_easy.py` β€” pure function, deterministic, returns float in [0.0, 1.0]
1218
+ - [ ] `env/graders/grader_medium.py` β€” includes hypothesis_location check (red herring penalty)
1219
+ - [ ] `env/graders/grader_hard.py` β€” runs concurrent stress test on submitted code
1220
+ - [ ] `tests/test_graders.py` β€” verify same input β†’ same output (determinism), verify range
1221
+
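+ The determinism and range checks called for above can be as small as this pytest sketch (the grader import and the `EPISODE` record are illustrative; adapt them to whatever signature the graders actually expose):
+
+ ```python
+ from env.graders.grader_easy import score  # illustrative import path
+
+ EPISODE = {"tests_passed": 8, "tests_total": 8, "attempts_used": 2}  # placeholder episode record
+
+ def test_grader_easy_deterministic():
+     # Same input graded twice must return the identical float
+     assert score(EPISODE) == score(EPISODE)
+
+ def test_grader_easy_range():
+     # Every score must fall inside [0.0, 1.0]
+     assert 0.0 <= score(EPISODE) <= 1.0
+ ```
+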
1222
+ ### Step 5: Environment Core
1223
+ - [ ] `env/environment.py` with `reset(task_id)`, `step(action)`, `state()` methods
1224
+ - [ ] `reset()` runs buggy code through sandbox to generate `initial_error_output`
1225
+ - [ ] `step()` routes to sandbox for `submit_fix`, returns context for `query_context`
1226
+ - [ ] `state()` returns full dict (no Pydantic models β€” plain dict)
1227
+ - [ ] Never crashes β€” all errors returned in `info["error"]`
1228
+
1229
+ ### Step 6: FastAPI Server
1230
+ - [ ] `env/server.py` with `POST /reset`, `POST /step`, `GET /state`, `GET /health`
1231
+ - [ ] `/health` returns `{"status": "ok"}` with HTTP 200 always
1232
+ - [ ] All endpoints return HTTP 200 (errors go in response body, not HTTP status)
1233
+ - [ ] Server handles concurrent requests safely (state is per-session or single-session)
1234
+
1235
+ ### Step 7: inference.py
1236
+ - [ ] In ROOT directory (not in env/)
1237
+ - [ ] Reads API_BASE_URL, MODEL_NAME, HF_TOKEN, ENV_BASE_URL from os.environ
1238
+ - [ ] Uses openai Python client
1239
+ - [ ] Runs all 3 tasks sequentially
1240
+ - [ ] Saves to baseline_results.json
1241
+ - [ ] Total runtime under 20 minutes
1242
+
1243
+ ### Step 8: Configuration & Deployment
1244
+ - [ ] `openenv.yaml` matches Section 9 exactly
1245
+ - [ ] `Dockerfile` builds cleanly β€” test with `docker build -t test .`
1246
+ - [ ] `requirements.txt` pins all versions
1247
+ - [ ] `openenv validate .` passes all checks
1248
+
1249
+ ### Phase 2 Variance Self-Check (run before submitting)
1250
+ - [ ] Dummy agent (submits `pass` as every fix): scores < 0.15 on all tasks
1251
+ - [ ] Perfect agent (submits ground truth fix, correct hypothesis): scores > 0.85 on easy
1252
+ - [ ] Medium red herring: agent that only fixes `authenticate_user` scores < 0.30 on medium
1253
+ - [ ] Hard task: sequential-only fix scores < 0.45 (must pass concurrent test to score higher)
env/__init__.py ADDED
@@ -0,0 +1 @@
 
 
1
+ # AgentDebuggerEnv - Core environment package
env/graders/__init__.py ADDED
@@ -0,0 +1 @@
 
 
1
+ # AgentDebuggerEnv - Grader definitions package
env/sandbox.py ADDED
@@ -0,0 +1,145 @@
1
+ """
2
+ AgentDebuggerEnv β€” Sandboxed Code Execution
3
+ ============================================
4
+ ALL code execution in the environment must go through execute_code().
5
+ Never call exec() or subprocess directly anywhere else.
6
+
7
+ Security measures:
8
+ 1. Hard execution timeout (10 seconds)
9
+ 2. AST-based import blocking (not string matching)
10
+ 3. Subprocess isolation
11
+ 4. Clean temp file cleanup in finally block
12
+ 5. Fresh namespace per attempt (no state leaks)
13
+ """
14
+
15
+ import subprocess
16
+ import tempfile
17
+ import os
18
+ import time
19
+ import ast
20
+ from typing import Tuple
21
+
22
+ BLOCKED_IMPORTS = [
23
+ "os", "sys", "subprocess", "socket", "importlib", "shutil",
24
+ "pathlib", "glob", "pickle", "shelve", "dbm", "sqlite3",
25
+ "ftplib", "http", "urllib", "requests", "httpx", "asyncio",
26
+ "multiprocessing", "threading",
27
+ "ctypes", "cffi", "resource", "signal", "mmap", "gc"
28
+ ]
29
+
30
+ EXECUTION_TIMEOUT_SECONDS = 10
31
+ MEMORY_LIMIT_MB = 256  # documented target; not currently enforced by the subprocess runner
32
+
33
+
34
+ def _build_import_checker(blocked: list[str]) -> str:
35
+ """Build a Python script snippet that checks for blocked imports using AST parsing."""
36
+ blocked_repr = repr(blocked)
37
+ return f'''
38
+ import ast as _ast
39
+ import sys as _sys
40
+
41
+ _BLOCKED = {blocked_repr}
42
+ _source_to_check = open(__file__).read()
43
+
44
+ # Find the marker line and only check code after it
45
+ _marker = "# --- USER CODE " + "START ---"  # split so this literal never matches itself when the checker scans __file__
46
+ _marker_pos = _source_to_check.find(_marker)
47
+ if _marker_pos != -1:
48
+ _source_to_check = _source_to_check[_marker_pos + len(_marker):]
49
+
50
+ try:
51
+ _tree = _ast.parse(_source_to_check)
52
+ except SyntaxError:  # ast.parse raises the builtin SyntaxError; the ast module has no SyntaxError attribute
53
+ pass # Let the actual execution catch syntax errors
54
+ else:
55
+ for _node in _ast.walk(_tree):
56
+ if isinstance(_node, _ast.Import):
57
+ for _alias in _node.names:
58
+ _top = _alias.name.split(".")[0]
59
+ if _top in _BLOCKED:
60
+ print(f"BLOCKED IMPORT: '{{_alias.name}}' is not allowed in the sandbox.")
61
+ _sys.exit(1)
62
+ elif isinstance(_node, _ast.ImportFrom):
63
+ if _node.module:
64
+ _top = _node.module.split(".")[0]
65
+ if _top in _BLOCKED:
66
+ print(f"BLOCKED IMPORT: '{{_node.module}}' is not allowed in the sandbox.")
67
+ _sys.exit(1)
68
+
69
+ # Also block dangerous builtins
70
+ import builtins as _builtins
71
+ _original_import = _builtins.__import__
72
+
73
+ def _restricted_import(name, *args, **kwargs):
74
+ _top = name.split(".")[0]
75
+ if _top in _BLOCKED:
76
+ raise ImportError(f"BLOCKED IMPORT: '{{name}}' is not allowed in the sandbox.")
77
+ return _original_import(name, *args, **kwargs)
78
+
79
+ _builtins.__import__ = _restricted_import
80
+ '''
81
+
82
+
83
+ def execute_code(code: str, test_code: str, allow_threading: bool = False) -> Tuple[str, bool, int]:
84
+ """
85
+ Execute code + test_code in a sandboxed subprocess.
86
+
87
+ Returns:
88
+ (output: str, timed_out: bool, execution_time_ms: int)
89
+
90
+ The output contains both stdout and stderr merged, exactly as a developer
91
+ would see in their terminal.
92
+ """
93
+ # Build the blocked imports list, optionally allowing threading
94
+ blocked = [b for b in BLOCKED_IMPORTS if not (b == "threading" and allow_threading)]
95
+
96
+ # Build the full script: import checker + user code + test code
97
+ import_checker = _build_import_checker(blocked)
98
+ full_script = import_checker + "\n# --- USER CODE START ---\n" + code + "\n" + test_code
99
+
100
+ tmp_path = None
101
+ try:
102
+ # Write to a temporary file
103
+ with tempfile.NamedTemporaryFile(
104
+ mode='w', suffix='.py', prefix='sandbox_',
105
+ delete=False, dir=tempfile.gettempdir()
106
+ ) as tmp:
107
+ tmp.write(full_script)
108
+ tmp_path = tmp.name
109
+
110
+ # Run in subprocess with timeout
111
+ start_time = time.time()
112
+ try:
113
+ result = subprocess.run(
114
+ ["python3", tmp_path],
115
+ capture_output=True,
116
+ text=True,
117
+ timeout=EXECUTION_TIMEOUT_SECONDS,
118
+ env={
119
+ "PATH": os.environ.get("PATH", "/usr/bin:/usr/local/bin"),
120
+ "HOME": os.environ.get("HOME", "/tmp"),
121
+ "PYTHONDONTWRITEBYTECODE": "1",
122
+ }
123
+ )
124
+ elapsed_ms = int((time.time() - start_time) * 1000)
125
+ output = result.stdout + result.stderr
126
+ return (output.strip(), False, elapsed_ms)
127
+
128
+ except subprocess.TimeoutExpired:
129
+ elapsed_ms = int((time.time() - start_time) * 1000)
130
+ return (
131
+ f"TIMEOUT: Code execution exceeded {EXECUTION_TIMEOUT_SECONDS} second limit and was killed.",
132
+ True,
133
+ elapsed_ms
134
+ )
135
+
136
+ except Exception as e:
137
+ return (f"SANDBOX ERROR: {str(e)}", False, 0)
138
+
139
+ finally:
140
+ # Always clean up temp files
141
+ if tmp_path and os.path.exists(tmp_path):
142
+ try:
143
+ os.unlink(tmp_path)
144
+ except OSError:
145
+ pass
env/tasks/__init__.py ADDED
@@ -0,0 +1 @@
 
 
1
+ # AgentDebuggerEnv - Task definitions package
implementation_plan.md ADDED
@@ -0,0 +1,187 @@
1
+ # AgentDebuggerEnv β€” Implementation Plan
2
+
3
+ An OpenEnv-compliant debugging environment where AI agents fix broken code through iterative hypothesis-test-fix cycles. Submission for the **Meta + PyTorch + HuggingFace OpenEnv Hackathon**.
4
+
5
+ ## User Review Required
6
+
7
+ > [!IMPORTANT]
8
+ > This is a large project with **15+ files** to create. The entire codebase needs to be built from scratch (only the README exists currently). Please confirm you'd like me to proceed with the full implementation.
9
+
10
+ > [!WARNING]
11
+ > The README specifies `huggingface_space: shashaank/agentdebugger-env`. You'll need to create this HuggingFace Space and deploy the Docker container there for the hackathon submission. I'll build everything locally; deployment is a manual step.
12
+
13
+ ## Proposed Changes
14
+
15
+ The implementation follows the exact order from the README's Section 14 checklist. Each step depends on the previous.
16
+
17
+ ---
18
+
19
+ ### Step 1: Sandbox (`env/sandbox.py`) β€” Build & Test First
20
+
21
+ This is the most security-critical component. Every code execution goes through here.
22
+
23
+ #### [NEW] [sandbox.py](file:///Users/shashaankjain/Desktop/meta_hackathon/env/sandbox.py)
24
+
25
+ - `execute_code(code, test_code, allow_threading=False) β†’ (str, bool, int)`
26
+ - AST-based import detection (not string matching) to block dangerous imports
27
+ - `BLOCKED_IMPORTS` list: os, sys, subprocess, socket, importlib, shutil, pathlib, glob, pickle, shelve, dbm, sqlite3, ftplib, http, urllib, requests, httpx, asyncio, multiprocessing, threading (unless `allow_threading=True`), ctypes, cffi, resource, signal, mmap, gc
28
+ - Write code + test_code to a temp file, run in subprocess with `timeout=10`
29
+ - Capture merged stdout+stderr
30
+ - Clean up temp files in `finally` block
31
+
32
+ #### [NEW] [test_sandbox.py](file:///Users/shashaankjain/Desktop/meta_hackathon/tests/test_sandbox.py)
33
+
34
+ - 5 required tests: timeout, os blocked, sys blocked, clean code runs, syntax error returns output
35
+
36
+ ---
37
+
38
+ ### Step 2: Data Models
39
+
40
+ #### [NEW] [models.py](file:///Users/shashaankjain/Desktop/meta_hackathon/env/models.py)
41
+
42
+ - `FixAttempt`, `Observation`, `Action`, `Reward` β€” all Pydantic v2 BaseModel subclasses
43
+ - Exact field names and types from README Section 3
44
+
45
+ ---
46
+
47
+ ### Step 3: Task Definitions
48
+
49
+ #### [NEW] [task_easy.py](file:///Users/shashaankjain/Desktop/meta_hackathon/env/tasks/task_easy.py)
50
+
51
+ - Binary search with `<` instead of `<=` bug
52
+ - 8-test suite, 7 pass initially, 1 fails (last element)
53
+ - Ground truth: `hypothesis_keywords`: ["left <= right", "termination", "last element", "off by one", "<="]
54
+
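+ For orientation, the off-by-one looks roughly like this (illustrative sketch; the actual `BUGGY_CODE` string is defined in `env/tasks/task_easy.py`):
+
+ ```python
+ def binary_search(arr, target):
+     left, right = 0, len(arr) - 1
+     while left < right:              # BUG: drops the final candidate; should be `left <= right`
+         mid = (left + right) // 2
+         if arr[mid] == target:
+             return mid
+         elif arr[mid] < target:
+             left = mid + 1
+         else:
+             right = mid - 1
+     return -1                        # searching for the last element falls through to here
+ ```
+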
55
+ #### [NEW] [task_medium.py](file:///Users/shashaankjain/Desktop/meta_hackathon/env/tasks/task_medium.py)
56
+
57
+ - `hash_password`, `validate_password`, `authenticate_user` β€” bug is in `hash_password`
58
+ - 10-test suite, 6 pass, 4 fail (edge cases with hash mismatch)
59
+ - Red herring: error points to `authenticate_user` but bug is in `hash_password`
60
+ - Hypothesis must mention "hash_password" AND at least 1 other keyword
61
+
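+ The described bytes/str bug, in miniature (illustrative; the real code, `user_db`, and test suite live in `env/tasks/task_medium.py`):
+
+ ```python
+ import hashlib
+
+ def hash_password_buggy(password: str) -> str:
+     # BUG: str() on bytes yields a repr like "b'\\x8f...'", not a hex digest
+     return str(hashlib.sha256(password.encode()).digest())
+
+ def hash_password_fixed(password: str) -> str:
+     return hashlib.sha256(password.encode()).hexdigest()
+
+ # Stored hashes never match the buggy form, so login fails downstream in
+ # authenticate_user, which is where the misleading error points.
+ ```
+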
62
+ #### [NEW] [task_hard.py](file:///Users/shashaankjain/Desktop/meta_hackathon/env/tasks/task_hard.py)
63
+
64
+ - `ConnectionCounter` with race condition in `increment()`/`decrement()`
65
+ - 8 sequential tests all pass on buggy code
66
+ - Bug only surfaces under concurrent access
67
+ - `allow_threading=True` for this task
68
+
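+ The race, in miniature (illustrative; the real `ConnectionCounter` and its ground-truth fix live in `env/tasks/task_hard.py`):
+
+ ```python
+ import threading
+
+ class BuggyCounter:
+     def __init__(self):
+         self.count = 0
+
+     def increment(self):
+         self.count = self.count + 1      # read-modify-write is not atomic: updates get lost under threads
+
+ class SafeCounter:
+     def __init__(self):
+         self.count = 0
+         self._lock = threading.Lock()
+
+     def increment(self):
+         with self._lock:                 # serialize the read-modify-write
+             self.count += 1
+ ```
+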
69
+ #### [NEW] [registry.py](file:///Users/shashaankjain/Desktop/meta_hackathon/env/tasks/registry.py)
70
+
71
+ - Maps `"easy"` / `"medium"` / `"hard"` β†’ task config dict (buggy_code, test_suite, description, ground_truth, max_attempts, max_steps)
72
+
73
+ #### [NEW] [`__init__.py` files](file:///Users/shashaankjain/Desktop/meta_hackathon/env/__init__.py)
74
+
75
+ - `env/__init__.py` and `env/tasks/__init__.py`
76
+
77
+ ---
78
+
79
+ ### Step 4: Graders
80
+
81
+ #### [NEW] [base_grader.py](file:///Users/shashaankjain/Desktop/meta_hackathon/env/graders/base_grader.py)
82
+
83
+ - Abstract base class with `score()` method
84
+
85
+ #### [NEW] [grader_easy.py](file:///Users/shashaankjain/Desktop/meta_hackathon/env/graders/grader_easy.py)
86
+
87
+ - Standard formula: 0.60 test_pass_ratio + 0.20 efficiency + 0.15 hypothesis + 0.05 early_solve
88
+
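+ As a formula sketch, that weighting amounts to the following (the clamp is added here only to make the [0.0, 1.0] contract explicit):
+
+ ```python
+ def score_easy(test_pass_ratio, efficiency, hypothesis, early_solve):
+     # All four components are assumed to be pre-normalized to [0.0, 1.0]
+     raw = 0.60 * test_pass_ratio + 0.20 * efficiency + 0.15 * hypothesis + 0.05 * early_solve
+     return max(0.0, min(1.0, raw))
+ ```
+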
89
+ #### [NEW] [grader_medium.py](file:///Users/shashaankjain/Desktop/meta_hackathon/env/graders/grader_medium.py)
90
+
91
+ - Same formula but with red herring detection: hypothesis mentioning only "authenticate_user" scores 0.0
92
+
93
+ #### [NEW] [grader_hard.py](file:///Users/shashaankjain/Desktop/meta_hackathon/env/graders/grader_hard.py)
94
+
95
+ - Custom weights: 0.40 original tests + 0.30 concurrent stress test + 0.20 hypothesis + 0.10 efficiency
96
+ - Runs a 1000-thread concurrent stress test against submitted code
97
+
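+ A sketch of the kind of stress test grader_hard runs (thread and iteration counts here are illustrative; the real harness targets the submitted `ConnectionCounter`):
+
+ ```python
+ import threading
+
+ def stress_test(counter_cls, n_threads=1000, increments_per_thread=10):
+     counter = counter_cls()
+
+     def worker():
+         for _ in range(increments_per_thread):
+             counter.increment()
+
+     threads = [threading.Thread(target=worker) for _ in range(n_threads)]
+     for t in threads:
+         t.start()
+     for t in threads:
+         t.join()
+
+     # A thread-safe counter ends at exactly n_threads * increments_per_thread
+     return counter.count == n_threads * increments_per_thread
+ ```
+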
98
+ #### [NEW] [test_graders.py](file:///Users/shashaankjain/Desktop/meta_hackathon/tests/test_graders.py)
99
+
100
+ - Determinism tests (same input β†’ same output)
101
+ - Range tests (output always in [0.0, 1.0])
102
+
103
+ ---
104
+
105
+ ### Step 5: Environment Core
106
+
107
+ #### [NEW] [environment.py](file:///Users/shashaankjain/Desktop/meta_hackathon/env/environment.py)
108
+
109
+ - `DebuggerEnvironment` class with `reset(task_id)`, `step(action)`, `state()` methods
110
+ - `reset()`: loads task, runs buggy code through sandbox to get initial error output
111
+ - `step()`: routes by `action_type` β€” submit_fix β†’ sandbox, query_context β†’ return info, give_up β†’ run grader
112
+ - All action rules from Section 3.2 implemented exactly
113
+ - Step-level reward calculation per Section 6.1
114
+ - Episode-level grader invocation on `done=True`
115
+ - Never crashes β€” all errors returned in `info["error"]`
116
+
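+ A routing skeleton for the class above (bodies elided; anything beyond the reset/step/state names is a placeholder):
+
+ ```python
+ class DebuggerEnvironment:
+     def reset(self, task_id: str) -> dict:
+         # Load the task, run the buggy code through the sandbox,
+         # and return the initial Observation including the error output.
+         ...
+
+     def step(self, action: dict) -> tuple:
+         # Route by action_type: submit_fix -> sandbox, query_context -> context info,
+         # give_up -> grader. Errors go into info["error"], never raised.
+         ...
+
+     def state(self) -> dict:
+         # Plain-dict snapshot of the full episode state (no Pydantic models).
+         ...
+ ```
+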
117
+ #### [NEW] [test_environment.py](file:///Users/shashaankjain/Desktop/meta_hackathon/tests/test_environment.py)
118
+
119
+ - Unit tests for reset/step/state
120
+
121
+ ---
122
+
123
+ ### Step 6: FastAPI Server
124
+
125
+ #### [NEW] [server.py](file:///Users/shashaankjain/Desktop/meta_hackathon/env/server.py)
126
+
127
+ - `POST /reset` β€” body: `{"task_id": "easy"}`, returns Observation JSON
128
+ - `POST /step` β€” body: Action JSON, returns `{"observation", "reward", "done", "info"}`
129
+ - `GET /state` β€” returns full state dict
130
+ - `GET /health` β€” returns `{"status": "ok", "environment": "agentdebugger-env", "version": "1.0.0"}` with HTTP 200
131
+
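+ A minimal FastAPI sketch of that endpoint surface (handler bodies abbreviated; the real server delegates to `DebuggerEnvironment`):
+
+ ```python
+ from fastapi import FastAPI
+
+ app = FastAPI()
+
+ @app.get("/health")
+ def health():
+     # Always HTTP 200; the hackathon's automated ping depends on it
+     return {"status": "ok", "environment": "agentdebugger-env", "version": "1.0.0"}
+
+ @app.post("/reset")
+ def reset(body: dict):
+     # body: {"task_id": "easy" | "medium" | "hard"} -> Observation JSON
+     ...
+
+ @app.post("/step")
+ def step(action: dict):
+     # Action JSON -> {"observation": ..., "reward": ..., "done": ..., "info": ...}
+     ...
+
+ @app.get("/state")
+ def state():
+     # Full environment state as a plain dict
+     ...
+ ```
+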
132
+ ---
133
+
134
+ ### Step 7: Inference Script
135
+
136
+ #### [NEW] [inference.py](file:///Users/shashaankjain/Desktop/meta_hackathon/inference.py)
137
+
138
+ - Exact code from README Section 8 β€” already fully specified
139
+ - Root directory placement (not in `env/`)
140
+ - Reads env vars: `API_BASE_URL`, `MODEL_NAME`, `HF_TOKEN`, `ENV_BASE_URL`
141
+ - Uses `openai` Python client
142
+ - Saves `baseline_results.json`
143
+
144
+ ---
145
+
146
+ ### Step 8: Configuration & Deployment
147
+
148
+ #### [NEW] [openenv.yaml](file:///Users/shashaankjain/Desktop/meta_hackathon/openenv.yaml)
149
+
150
+ - Exact content from README Section 9
151
+
152
+ #### [NEW] [Dockerfile](file:///Users/shashaankjain/Desktop/meta_hackathon/Dockerfile)
153
+
154
+ - Exact content from README Section 10
155
+
156
+ #### [NEW] [requirements.txt](file:///Users/shashaankjain/Desktop/meta_hackathon/requirements.txt)
157
+
158
+ - Exact content from README Section 11
159
+
160
+ ---
161
+
162
+ ## Open Questions
163
+
164
+ > [!IMPORTANT]
165
+ > **Task Medium β€” The Hash Bug:** The README describes a bytes/str conversion bug in `hash_password` where `str()` wrapping adds `"b'"` prefix. I need to carefully design the `user_db` and test setup so that 6 tests pass and exactly 4 fail. The README leaves the exact test suite design for medium to the implementer. I'll design it to match the described behavior. Any preferences?
166
+
167
+ > [!IMPORTANT]
168
+ > **Hard Task Test Count:** The README says `tests_total: 8` for hard in `openenv.yaml`, but the hard task has 8 sequential tests (all pass) and the agent needs to design a concurrent test. The grader independently runs its own 1000-thread stress test. I'll keep `tests_total: 8` as the initial suite and the grader adds its own concurrent verification separately. Correct?
169
+
170
+ ## Verification Plan
171
+
172
+ ### Automated Tests
173
+ 1. `pytest tests/test_sandbox.py -v` β€” All 5 sandbox tests pass
174
+ 2. `pytest tests/test_graders.py -v` β€” Determinism and range tests pass
175
+ 3. `pytest tests/test_environment.py -v` β€” Reset/step/state tests pass
176
+ 4. Start server with `uvicorn env.server:app --port 8000`, then:
177
+ - `curl http://localhost:8000/health` β†’ 200 with correct JSON
178
+ - POST `/reset` for each task β†’ valid Observation
179
+ - POST `/step` with various actions β†’ correct responses
180
+ 5. Variance self-check:
181
+ - Dummy agent (submits `pass`) β†’ scores < 0.15
182
+ - Perfect agent (ground truth fix + correct hypothesis) β†’ scores > 0.85 on easy
183
+
184
+ ### Manual Verification
185
+ - Docker build: `docker build -t agentdebugger-env .`
186
+ - Docker run and health check
187
+ - User deploys to HuggingFace Space and runs `openenv validate .`
requirements.txt ADDED
@@ -0,0 +1,9 @@
1
+ fastapi==0.110.0
2
+ uvicorn==0.29.0
3
+ pydantic==2.6.4
4
+ openai==1.23.0
5
+ requests==2.31.0
6
+ python-dotenv==1.0.1
7
+ pytest==8.1.0
8
+ httpx==0.27.0
9
+ RestrictedPython==7.0
tests/__init__.py ADDED
@@ -0,0 +1 @@
 
 
1
+ # AgentDebuggerEnv - Test suite
tests/test_sandbox.py ADDED
@@ -0,0 +1,94 @@
1
+ """
2
+ Tests for the code execution sandbox.
3
+ The first 5 tests are required by the hackathon spec; the rest are additional robustness checks.
4
+ """
5
+
6
+ import pytest
7
+ from env.sandbox import execute_code
8
+
9
+
10
+ def test_timeout_enforcement():
11
+ """Code with infinite loop must return timed_out=True within ~11 seconds."""
12
+ code = "while True: pass"
13
+ output, timed_out, elapsed_ms = execute_code(code, "")
14
+ assert timed_out is True
15
+ assert "TIMEOUT" in output or "timeout" in output.lower()
16
+
17
+
18
+ def test_os_import_blocked():
19
+ """os module must be blocked β€” cannot execute system commands."""
20
+ code = "import os; os.system('echo pwned')"
21
+ output, timed_out, _ = execute_code(code, "")
22
+ assert "pwned" not in output
23
+ assert "BLOCKED" in output or "blocked" in output.lower()
24
+
25
+
26
+ def test_sys_import_blocked():
27
+ """sys module must be blocked."""
28
+ code = "import sys; sys.exit(0)"
29
+ output, _, _ = execute_code(code, "")
30
+ assert "blocked" in output.lower() or "import" in output.lower()
31
+
32
+
33
+ def test_clean_code_runs():
34
+ """Clean, safe code with tests must execute correctly."""
35
+ code = "def add(a, b): return a + b"
36
+ test = "assert add(2, 3) == 5\nprint('PASSED')"
37
+ output, timed_out, _ = execute_code(code, test)
38
+ assert "PASSED" in output
39
+ assert timed_out is False
40
+
41
+
42
+ def test_syntax_error_returns_output():
43
+ """Code with syntax errors should return the SyntaxError, not crash."""
44
+ code = "def broken(: pass"
45
+ output, timed_out, _ = execute_code(code, "")
46
+ assert "SyntaxError" in output
47
+ assert timed_out is False
48
+
49
+
50
+ # ── Additional robustness tests ──────────────────────────────────────────────
51
+
52
+ def test_subprocess_import_blocked():
53
+ """subprocess module must be blocked."""
54
+ code = "import subprocess; subprocess.run(['echo', 'pwned'])"
55
+ output, _, _ = execute_code(code, "")
56
+ assert "pwned" not in output
57
+ assert "BLOCKED" in output or "blocked" in output.lower()
58
+
59
+
60
+ def test_threading_blocked_by_default():
61
+ """threading must be blocked unless allow_threading=True."""
62
+ code = "import threading; print('thread imported')"
63
+ output, _, _ = execute_code(code, "")
64
+ assert "thread imported" not in output
65
+ assert "BLOCKED" in output or "blocked" in output.lower()
66
+
67
+
68
+ def test_threading_allowed_when_flagged():
69
+ """threading must be allowed when allow_threading=True."""
70
+ code = "import threading; print('thread imported')"
71
+ output, _, _ = execute_code(code, "", allow_threading=True)
72
+ assert "thread imported" in output
73
+
74
+
75
+ def test_from_import_blocked():
76
+ """'from os import path' style imports must also be blocked."""
77
+ code = "from os import path; print('pwned')"
78
+ output, _, _ = execute_code(code, "")
79
+ assert "pwned" not in output
80
+ assert "BLOCKED" in output or "blocked" in output.lower()
81
+
82
+
83
+ def test_no_state_leak_between_executions():
84
+ """Each execution must be completely isolated β€” no shared state."""
85
+ code1 = "shared_var = 42"
86
+ output1, _, _ = execute_code(code1, "print('set')")
87
+ assert "set" in output1
88
+
89
+ code2 = ""
90
+ test2 = "try:\\n print(shared_var)\\nexcept NameError:\\n print('ISOLATED')"
91
+ # Fix: use actual newlines
92
+ code2_test = "try:\n print(shared_var)\nexcept NameError:\n print('ISOLATED')"
93
+ output2, _, _ = execute_code("", code2_test)
94
+ assert "ISOLATED" in output2