AgPerry committed
Commit ee747e2 · verified · 1 Parent(s): dfd621f

V2: Hermes-only — drop openclaw + sort by raw Intercepted DESC + footnote refresh

Files changed (1)
  1. README.md +5 -6
README.md CHANGED

```diff
@@ -60,15 +60,14 @@ Live results — pulled from [`leaderboard/results.csv`](https://huggingface.co/datasets/TIGER-Lab/ClawBench/blob/main/leaderboard/results.csv)
 
 | Rank | Model | Harness | Intercepted | Reward | Pass / Total |
 |------|-------|---------|-------------|--------|--------------|
-| 1 | `glm-5.1` | hermes | **48.5%** | 18.5% | 24 / 130 |
-| 2 | `deepseek-v4-pro` | hermes | **43.8%** | 10.0% | 13 / 130 |
-| 3 | `claude-opus-4-7` *(partial)* | hermes | **54.7%** | 13.3% | 10 / 75 |
-| 4 | `gpt-5.5` *(partial)* | hermes | **48.1%** | 11.1% | 9 / 81 |
+| 1 | `claude-opus-4-7` *(partial)* | hermes | **54.7%** | 13.3% | 10 / 75 |
+| 2 | `glm-5.1` | hermes | **48.5%** | 18.5% | 24 / 130 |
+| 3 | `gpt-5.5` *(partial)* | hermes | **48.1%** | 11.1% | 9 / 81 |
+| 4 | `deepseek-v4-pro` | hermes | **43.8%** | 10.0% | 13 / 130 |
 | 5 | `openrouter/owl-alpha` | hermes | **14.6%** | 4.6% | 6 / 130 |
 | 6 | `deepseek-v4-flash` | hermes | **3.1%** | 1.5% | 2 / 130 |
-| 7 | `glm-5.1` | openclaw | **0.0%** | 0.0% | 0 / 130 |
 
-**Intercepted** (sort key) = fraction whose final HTTP request matched the per-task URL/method schema — Stage 1, deterministic, no judge. **Reward** = additionally requires an LLM judge (default `deepseek/deepseek-v4-pro`) to confirm the intercepted payload fulfilled the instruction — Stage 2. Rows ranked by Intercepted with Reward as tiebreak. *Partial* = batch attempted < 130 V2 tasks; the displayed % is over attempted, but **ranking treats unattempted as failures** — so a partial 54.7% Intercepted (10/75) ranks below a complete 48.5% (63/130). Companion traces in [`TIGER-Lab/ClawBenchV2Trace`](https://huggingface.co/datasets/TIGER-Lab/ClawBenchV2Trace). See [scoring.md](https://github.com/reacher-z/ClawBench/blob/main/eval/scoring.md), [live leaderboard Space](https://huggingface.co/spaces/TIGER-Lab/ClawBench).
+**Intercepted** (sort key) = fraction whose final HTTP request matched the per-task URL/method schema — Stage 1, deterministic, no judge. **Reward** = additionally requires an LLM judge (default `deepseek/deepseek-v4-pro`) — Stage 2. Rows ranked by Intercepted DESC, Reward as tiebreak. **V2 is Hermes-only**; alternative harnesses are evaluated separately. *Partial* = batch attempted < 130 V2 tasks; rates are over attempted, not over 130. Companion traces in [`TIGER-Lab/ClawBenchV2Trace`](https://huggingface.co/datasets/TIGER-Lab/ClawBenchV2Trace). See [scoring.md](https://github.com/reacher-z/ClawBench/blob/main/eval/scoring.md), [live leaderboard Space](https://huggingface.co/spaces/TIGER-Lab/ClawBench).
 
 **Submit a result** → run [`clawbench-eval`](https://pypi.org/project/clawbench-eval/) on your model and open a PR to [`leaderboard/results.csv`](https://huggingface.co/datasets/TIGER-Lab/ClawBench/blob/main/leaderboard/results.csv) — one row per (model × harness × corpus).
 
```
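The ranking rule in the refreshed footnote is compact enough to restate as code. Below is a minimal sketch of the V2 ordering (raw Intercepted DESC, Reward as tiebreak, Hermes rows only, partials not renormalized); the column names `model`, `harness`, `intercepted`, and `reward` are assumptions for illustration, not the dataset's documented schema, so check the real header in `results.csv`.

```python
import csv

def rank_rows(path: str) -> list[dict]:
    """Sketch of the V2 ordering: raw Intercepted DESC, Reward tiebreak.

    Column names are hypothetical; mirror the real results.csv header.
    """
    with open(path, newline="") as f:
        # V2 is Hermes-only: rows from other harnesses are evaluated separately.
        rows = [r for r in csv.DictReader(f) if r.get("harness") == "hermes"]
    # Partial batches are NOT renormalized to 130 tasks: the sort key is the
    # raw Intercepted rate over attempted tasks, exactly as displayed.
    rows.sort(key=lambda r: (-float(r["intercepted"]), -float(r["reward"])))
    for rank, r in enumerate(rows, start=1):
        r["rank"] = str(rank)
    return rows

if __name__ == "__main__":
    for r in rank_rows("leaderboard/results.csv"):
        print(r["rank"], r["model"], r["intercepted"], r["reward"])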
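For the submission step, here is a sketch of what "one row per (model × harness × corpus)" could look like when appended locally before opening the PR. The field names are guesses, not the documented schema; copy the header of the existing `results.csv` rather than this example, and consult the `clawbench-eval` docs for how the tool emits its output.

```python
import csv

# Hypothetical field names: mirror the header of the real results.csv.
row = {
    "model": "my-org/my-model",
    "harness": "hermes",      # V2 leaderboard is Hermes-only
    "corpus": "v2",
    "intercepted": "0.312",   # Stage 1: deterministic URL/method match rate
    "reward": "0.108",        # Stage 2: LLM-judge-confirmed rate
    "pass": "14",
    "total": "130",
}

with open("leaderboard/results.csv", "a", newline="") as f:
    csv.DictWriter(f, fieldnames=list(row)).writerow(row)
```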