# NeuroGolf Solver v3
Builds minimal ONNX networks for ARC-AGI tasks. Currently solves 306/400 on CPU (15s budget per task).
## Results
| Version | Solved | Key Changes |
|---|---|---|
| v1 | 128/400 | Conv solver only |
| v2 | 294/400 | Added spatial_gather, variable-shape conv, diff-shape conv |
| v3 | 306/400 | Fixed opset 10 compat (Gather vs GatherElements), added concat_enhanced, varshape_spatial_gather, conv_var_diff |
## v3 Solver Breakdown (306/400)
| Solver | Count | Description |
|---|---|---|
| conv_var | 125 | Variable-shape conv on full 30×30 grid |
| conv_fixed | 106 | Fixed-shape conv (Slice→Conv→Pad) |
| conv_diff | 39 | Diff-shape conv (output smaller than input) |
| spatial_gather | 16 | Fixed-shape pixel remapping |
| concat | 5 | Tiled concat with flips |
| color_map | 4 | 1×1 color remapping conv |
| concat_enhanced | 4 | Tiled concat with all 8 dihedral transforms |
| rotate | 3 | 90°/180°/270° rotation |
| transpose | 2 | Matrix transpose |
| varshape_spatial_gather | 1 | Variable-shape pixel remapping |
| upscale | 1 | Nearest-neighbor upscale |
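The `color_map` solver above is just a permutation over the one-hot color channels, which a 1×1 conv expresses exactly. A minimal NumPy sketch of the idea (illustrative only, not the repo's code; the specific swap mapping is made up):

```python
import numpy as np

# Hypothetical color remap {1 -> 2, 2 -> 1} as a 1x1 conv on one-hot channels.
# W[out_color, in_color] is a permutation matrix.
num_colors = 10
W = np.eye(num_colors)
W[[1, 2]] = W[[2, 1]]  # swap colors 1 and 2

grid = np.array([[1, 2], [0, 1]])  # tiny 2x2 ARC-style grid
onehot = (grid[..., None] == np.arange(num_colors)).astype(np.float32)  # HxWx10
remapped = onehot @ W.T            # 1x1 conv == per-pixel matmul over channels
out = remapped.argmax(-1)
print(out)  # [[2 1], [0 2]]
```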
## Quick Start
```bash
# 1. Clone
git clone https://huggingface.co/rogermt/neurogolf-solver
cd neurogolf-solver

# 2. Install deps
pip install numpy onnx onnxruntime

# 3. Get ARC data
git clone --depth 1 https://github.com/fchollet/ARC-AGI.git

# 4. Run
python neurogolf_solver.py --data_dir ARC-AGI/data/training/ --output_dir submission --conv_budget 15

# 5. Count results
ls submission/*.onnx | wc -l
```
## Create submission.zip for Kaggle
```python
import zipfile, os

with zipfile.ZipFile('submission.zip', 'w', zipfile.ZIP_DEFLATED) as zf:
    for f in sorted(os.listdir('submission')):
        if f.endswith('.onnx'):
            zf.write(os.path.join('submission', f), f)
print(f"Created submission.zip: {os.path.getsize('submission.zip')/1024:.0f} KB")
```
## Key Parameters
| Flag | Default | Description |
|---|---|---|
| `--conv_budget` | `30` | Seconds per task for the conv solver; a larger budget solves more tasks |
| `--data_dir` | `ARC-AGI/data/training/` | Path to task JSONs |
| `--output_dir` | `submission` | Where to save `.onnx` files |
| `--kaggle` | off | Use Kaggle task format (`task001.json`) |
| `--tasks` | all | Comma-separated task numbers (e.g., `1,2,3`) |
| `--use_wandb` | off | Enable W&B logging |
## How It Works

Format: input/output tensors are `[1, 10, 30, 30]` one-hot float32. ONNX opset 10, IR version 10.
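Concretely, the stated format can be produced like this (a sketch; zero-padding the grid into the 30×30 canvas is an assumption, and `encode`/`decode` are hypothetical helpers):

```python
import numpy as np

def encode(grid):
    """Pad an HxW integer grid into a 30x30 canvas, one-hot the 10 colors,
    and return NCHW float32 of shape [1, 10, 30, 30]."""
    canvas = np.zeros((30, 30), dtype=np.int64)  # assumed padding value: 0
    h, w = grid.shape
    canvas[:h, :w] = grid
    onehot = (canvas[None] == np.arange(10)[:, None, None]).astype(np.float32)
    return onehot[None]

def decode(tensor, h, w):
    """Recover the HxW grid: argmax over channels, then crop the canvas."""
    return tensor[0].argmax(axis=0)[:h, :w]

g = np.array([[3, 0], [0, 7]])
x = encode(g)
print(x.shape)  # (1, 10, 30, 30)
```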
Solver pipeline:

- Analytical solvers (instant, zero-cost): identity, constant, color_map, transpose, flip, rotate, tile, upscale, kronecker, concat, concat_enhanced, diagonal_tile, spatial_gather, varshape_spatial_gather
- Conv solvers (learned via least-squares):
  - Fixed-shape: `Slice → Conv → ArgMax → Equal+Cast → Pad`
  - Variable-shape: `Conv(30×30) → ArgMax → Equal+Cast → Mul(mask)`
  - Diff-shape: `Slice → Conv → Slice(crop) → ArgMax → Equal+Cast → Pad`
  - Variable diff-shape: `Conv(30×30) → ArgMax → Equal+Cast → Mul(input_mask)`
Key design decisions:

- Uses `Gather` instead of `GatherElements` for opset 10 compatibility
- Uses `Equal` + `Cast` instead of `OneHot` (avoids CUDA kernel issues)
- CPU-only inference (GPU has no benefit for tiny 30×30 grids)
- Least-squares fitting finds optimal conv weights analytically (no gradient descent)
## What's NOT solved yet (94 tasks)
- Tasks with input-driven output structure (output layout depends on input content)
- Tasks requiring multi-step reasoning (flood fill, gravity, counting)
- Variable diff-shape tasks where output extends beyond input bounds
- Tasks needing very large conv kernels (>29×29)
## Scoring

NeuroGolf scoring: `Score = MACs + memory_bytes + params`
- Analytical solvers → near-zero cost
- Conv solvers → cost proportional to kernel size
- Lower score = better
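For intuition, a rough back-of-the-envelope score for a single 3×3 conv over the full 30×30 one-hot grid (the exact accounting NeuroGolf uses may differ; float32 weight storage is assumed):

```python
# Hypothetical cost of one 3x3 Conv, 10 channels in and out, on a 30x30 grid
k, c_in, c_out, h, w = 3, 10, 10, 30, 30
params = c_out * c_in * k * k     # 900 weights
macs = params * h * w             # one MAC per weight per output pixel
memory_bytes = params * 4         # float32 storage (assumed)
score = macs + memory_bytes + params
print(score)  # 814500
```

This is why the analytical solvers, which carry no conv weights at all, score near zero.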