---
license: cc-by-4.0
task_categories:
- text-generation
- text-classification
- reinforcement-learning
language:
- en
- code
tags:
- synthetic
- coding-agent
- mcts
- reasoning-traces
- process-reward-model
- rlhf
- dpo
- agentic-ai
- tool-use
- code-generation
- llm-training
- ucb
- reward-modeling
pretty_name: Coding Agent MCTS Reasoning Trace Pack
size_categories:
- 10K<n<100K
---

# Coding Agent MCTS Reasoning Trace Pack (Sample)

## Schema

| Field | Type | Contents |
|---|---|---|
| `agent_reasoning` | list[struct] | Ordered reasoning steps: `action` (`analyze_context`, `write_draft`, `run_tests`, `lethe_prune`, `prometheus_anchor`), `depth`, `ucb_score` (null at root / terminal), `reward` (populated on terminal actions only), `thought` (natural-language rationale) |
| `correlated_telemetry` | struct | `linter_warnings_initial`, `linter_warnings_final`, `test_runtime_ms`, `ci_status` |
| `execution_summary` | struct | `files_changed`, `lines_added`, `lines_removed`, `time_to_resolution_sec` |
| `genetic_optimizer_feedback` | struct | `final_reward`, `lethe_prunes_triggered`, `nodes_expanded`, `phenotype_used` |

See [SCHEMA.md](./SCHEMA.md) for the full nested field breakdown.

## Why this dataset is useful

Most public coding datasets (HumanEval, SWE-bench, MBPP) only give you the *final answer* and the task description. They don't capture the reasoning tree the agent walked through — the wrong paths, the prunes, the anchor points. This pack is shaped around what modern agent-training pipelines actually need:

- **Explicit exploration vs. exploitation.** Traces include both successful and pruned branches — `lethe_prune` events with negative reward, `prometheus_anchor` events with positive reward. Roughly 30% of traces carry a failed exploration branch before reaching the golden timeline.
- **Reward signals embedded at every step.** UCB scores at each non-terminal step, explicit rewards at terminal actions — directly usable for RL, DPO, and process-reward-model training (see the sketch after this list).
- **Phenotype labels on every trace.** Train a `SECURITY_FIRST` coder specifically, run phenotype-transfer studies, or build strategy-aware evaluation harnesses.
- **Correlated telemetry.** Linter-warning deltas, test runtime, and CI status are correlated with the reasoning outcome, grounding each trace in observable signals.
- **Compact.** The Parquet file is 340 KB and the JSONL 12.5 MB — you can pull this into a notebook in seconds and iterate.
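Because every step carries either an explicit reward or a UCB score, a trace can be flattened into per-step supervision targets in a few lines. The following is a minimal sketch, assuming the JSONL field names from the schema table above; the `step_target` fallback rule and the `prm_examples` helper are illustrative choices of this sketch, not part of the dataset.

```python
import json
import math

def step_target(step, final_reward):
    # `reward` is populated on terminal actions; `ucb_score` may be null,
    # a finite float, or infinity at the root (see "Notes and limitations").
    if step["reward"] is not None:
        return step["reward"]
    ucb = step["ucb_score"]
    if isinstance(ucb, (int, float)) and math.isfinite(ucb):
        return ucb
    return final_reward  # assumed fallback for root / null-UCB steps

def prm_examples(trace):
    # One candidate training example per reasoning step: the action prefix
    # walked so far, the step's natural-language thought, a scalar target.
    steps = trace["agent_reasoning"]
    final_reward = trace["genetic_optimizer_feedback"]["final_reward"]
    return [
        {
            "prefix": " -> ".join(s["action"] for s in steps[: i + 1]),
            "thought": step["thought"],
            "target": step_target(step, final_reward),
        }
        for i, step in enumerate(steps)
    ]

with open("coding_intel_sample.jsonl") as f:
    trace = json.loads(f.readline())

for ex in prm_examples(trace):
    print(f"{ex['target']}\t{ex['prefix']}")
```

Adapt the prefix and target construction to whatever format your PRM or DPO trainer expects; the point is that every step already carries a usable scalar.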
## Typical use cases

- MCTS-based coding agent architecture training
- Process reward model (PRM) training
- Reasoning-chain evaluation benchmarks
- Agent self-improvement via trace replay
- Strategy-conditional code-generation research
- Curriculum learning with task-difficulty ladders
- LLM fine-tuning on structured reasoning narratives
- Benchmarking UCB-based exploration policies

## Quick start

```python
import pandas as pd
import pyarrow.parquet as pq

df = pq.read_table("coding_intel_sample.parquet").to_pandas()

# Phenotype distribution (stratified balanced)
print(df["genetic_optimizer_feedback"].apply(lambda g: g["phenotype_used"]).value_counts())

# Average final reward by phenotype
df["pheno"] = df["genetic_optimizer_feedback"].apply(lambda g: g["phenotype_used"])
df["reward"] = df["genetic_optimizer_feedback"].apply(lambda g: g["final_reward"])
print(df.groupby("pheno")["reward"].mean().round(2))

# Prune rate by task type
df["task"] = df["event"].apply(lambda e: e["task_type"])
df["prunes"] = df["genetic_optimizer_feedback"].apply(lambda g: g["lethe_prunes_triggered"])
print(df.groupby("task")["prunes"].mean().round(2))

# Pull one full reasoning chain
row = df.iloc[0]
for step in row["agent_reasoning"]:
    print(f"  d={step['depth']:<2} {step['action']:<20} ucb={step['ucb_score']} reward={step['reward']}: {step['thought']}")
```

Streaming form:

```python
import json

with open("coding_intel_sample.jsonl") as f:
    for line in f:
        trace = json.loads(line)  # one MCTS reasoning trace per line
```

## Notes and limitations

- **Reasoning traces use canned action templates rather than live-executed code.** This pack is designed for agent-architecture training, not end-to-end SWE-bench-style evaluation.
- **`ci_status` is `SUCCESS` for every row in this sample** — the production pack includes `FAILURE` / `FLAKY` / `TIMEOUT` variants; this free sample is restricted to golden-timeline-anchored traces to keep a clean reward surface.
- **UCB scores at root nodes use positive infinity** (serialized as `"Infinity"` in JSONL), following the standard MCTS convention.
- Phenotype distribution is uniform; production licensing supports custom phenotype mixes.

## Responsible use

This dataset is intended for **agent-training, process-reward-model, and MCTS research**. It contains synthesized reasoning narratives and action templates — it does **not** contain real code, real commit history, or proprietary repository content. Models trained on this data will learn reasoning structure and phenotype-conditional behavior; downstream code-generation quality still depends on training with real-code supervision from appropriately licensed corpora.

## License

Released under **CC BY 4.0**. Use freely for research, agent prototyping, education, and commercial development with attribution.

## Get the full pack

This Hugging Face repo is a **10K-trace sample**. The production pack scales to 2.5M+ traces with a wider CI-outcome distribution (FAILURE / FLAKY / TIMEOUT), additional languages (C++, Java, Kotlin, Swift, C#), AST-diff variants, tool-call graph traces, multi-turn user-interaction sequences, custom phenotype mixes, and buyer-specific variants.

**Self-serve (Stripe checkout):**

- [**Sample Scale tier — $5,000**](https://buy.stripe.com/7sY5kD2j85QTfSb5lfeEo03) — ~25K records, one subject, 72-hour delivery.
**Full pack + enterprise scope:**

- [www.solsticestudio.ai/datasets](https://www.solsticestudio.ai/datasets) — per-SKU pricing across Starter / Professional / Enterprise tiers, plus commercial licensing, custom generation, and buyer-specific variants.

**Procurement catalog:**

- [SolsticeAI Data Storefront](https://solsticeai.mydatastorefront.com) — available via Datarade / Monda.

## Citation

```bibtex
@dataset{solstice_coding_intel_pack_2026,
  title     = {Coding Agent MCTS Reasoning Trace Pack (Sample)},
  author    = {SolsticeAI},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/solsticestudioai/coding-intel-pack}
}
```