---
license: cc-by-4.0
task_categories:
- tabular-classification
- tabular-regression
- reinforcement-learning
language:
- en
tags:
- synthetic
- defi
- decentralized-finance
- trading
- risk-detection
- on-chain
- mev
- liquidations
- autonomous-agents
- mcts
- reinforcement-learning
- blockchain
- agentic-ai
pretty_name: ARC-T DeFi Decision Telemetry Pack
size_categories:
- 10K<n<100K
configs:
- config_name: default
  data_files:
  - split: train
    path: arc_t_defi_sample.parquet
---

# ARC-T DeFi Decision Telemetry Pack (Sample)

**A synthetic DeFi risk-to-execution decision-telemetry dataset for autonomous-trading research, on-chain anomaly detection, and reinforcement-learning environments.** Each row is a complete risk-to-execution lifecycle: a triggering market anomaly, a market-state snapshot, adversarial-planner reasoning (MCTS branch count plus winning strategy), a step-by-step execution chain with per-step gas, slippage, and latency telemetry, an execution summary, and a final decision outcome with its PnL delta.

Built by [SolsticeAI](https://www.solsticestudio.ai/datasets) as a free sample of a larger commercial pack. 100% synthetic: no real trades, wallets, addresses, or on-chain identities. Safe for model training, competitive evaluation, and public benchmark work.
## What is included

| File | Rows | Format | Purpose |
|---|---:|---|---|
| `arc_t_defi_sample.parquet` | 10,000 | Parquet | Columnar, typed, best for analytics |
| `arc_t_defi_sample.jsonl` | 10,000 | JSON Lines | Streaming / LLM-training friendly |

- **Source pack:** 100K-lifecycle corpus (the production pack scales to 2.5M+)
- **This sample:** 10,000 lifecycles, stratified at 2,500 per decision outcome
- **Outcome classes:** `alpha_captured`, `risk_mitigated`, `slippage_loss`, `execution_failed`
- **Chains covered:** `ethereum`, `arbitrum`, `solana`, `optimism`
- **Protocols covered:** Aave-V3, Uniswap-V3, Curve-Fi, GMX, MakerDAO, Drift, Lido, Kamino, Jupiter, Orca, Marinade, Synthetix, Meteora
- **Risk triggers:** oracle staleness, TVL crash, whale dump, governance attack, bridge congestion, funding/borrow-rate dislocation, liquidity fragmentation, redemption run, liquid-staking dislocation

## Record structure

Each record is one risk-to-execution lifecycle with 7 top-level fields:

| Field | Type | Contents |
|---|---|---|
| `schema_version` | string | Pack schema version (`1.0.0-arc-t-sample`) |
| `event` | struct | `id`, `trace_id`, `timestamp`, `decision_outcome`, `pnl_delta_usd` |
| `risk_context` | struct | `trigger`, `protocol`, `chain`, `impacted_asset`, `anomaly_signature`, plus a nested `market_state` (severity, volatility regime, liquidity band, oracle age, venue divergence, price impact, notional, 1h vol, etc.) |
| `agent_reasoning` | struct | `engine`, `winning_strategy`, `confidence_score`, `mcts_branches` |
| `correlated_telemetry` | list<struct> | Ordered component chain (`ARES`, `FRACTAL`, `ARGUS`, `SENTINEL`, …) with per-step latency, slippage, gas price, priority fee, route hops, and node provider |
| `execution_summary` | struct | `strategy`, total execution time, gas cost, average slippage |
| `genetic_optimizer_feedback` | struct | `fitness_score_update`, `parameter_drift` |

See [SCHEMA.md](./SCHEMA.md) for the full nested field breakdown.
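
To make the nesting concrete, here is a minimal sketch of one lifecycle record as a plain Python dict. The top-level and struct field names follow the table above; every value, the sub-fields inside `market_state`, and the per-step keys in `correlated_telemetry` are illustrative assumptions, not real rows from the pack — consult SCHEMA.md for the authoritative layout.

```python
# Illustrative shape of a single lifecycle record. All values are made up;
# market_state sub-fields and telemetry step keys are assumptions.
record = {
    "schema_version": "1.0.0-arc-t-sample",
    "event": {
        "id": "evt-000001",
        "trace_id": "trc-000001",
        "timestamp": "2026-01-01T00:00:00Z",
        "decision_outcome": "alpha_captured",
        "pnl_delta_usd": 1250.75,
    },
    "risk_context": {
        "trigger": "oracle_staleness",
        "protocol": "Aave-V3",
        "chain": "ethereum",
        "impacted_asset": "WETH",
        "anomaly_signature": "sig-example",
        "market_state": {"severity": 0.8, "volatility_regime": "high"},
    },
    "agent_reasoning": {
        "engine": "mcts",
        "winning_strategy": "hedge_and_unwind",
        "confidence_score": 0.91,
        "mcts_branches": 128,
    },
    "correlated_telemetry": [
        {"component": "ARES", "latency_ms": 12},
        {"component": "SENTINEL", "latency_ms": 8},
    ],
    "execution_summary": {"strategy": "hedge_and_unwind"},
    "genetic_optimizer_feedback": {"fitness_score_update": 0.02,
                                   "parameter_drift": 0.001},
}

# Navigating the nesting mirrors how parquet/JSONL rows are read back.
outcome = record["event"]["decision_outcome"]
strategy = record["agent_reasoning"]["winning_strategy"]
```

The same dotted paths (`event.decision_outcome`, `agent_reasoning.winning_strategy`) apply whether you read the Parquet structs or the JSONL objects.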

## Why this dataset is useful

Most public on-chain datasets are either raw block-level trace data or narrow single-protocol slices. This pack is shaped around what autonomous-trading and on-chain-risk teams actually need to train decision models:

- Complete risk-to-execution lifecycles rather than isolated transactions
- Balanced outcome classes across winning and losing execution states
- Adversarial reasoning trace (strategy + MCTS branch count + confidence) alongside the telemetry
- Per-step gas, slippage, and latency signals to train execution-aware policies
- Multi-chain, multi-protocol coverage for generalization across venues
- PnL-linked outcomes for reward shaping in RL environments
- Stable schema suitable for backtesting, RL gym integration, and dashboarding
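
On the reward-shaping point, a minimal sketch of turning a lifecycle's outcome and PnL into a scalar RL reward. The outcome bonuses, the `pnl_scale` normalizer, and the clipping are illustrative assumptions, not values shipped with the dataset; only the four outcome-class names come from the pack.

```python
# Minimal reward-shaping sketch for an RL environment built on this pack.
# Bonus values, scaling, and clipping below are illustrative assumptions.
OUTCOME_BONUS = {
    "alpha_captured": 1.0,
    "risk_mitigated": 0.5,
    "slippage_loss": -0.5,
    "execution_failed": -1.0,
}

def shaped_reward(decision_outcome: str, pnl_delta_usd: float,
                  pnl_scale: float = 10_000.0) -> float:
    """Combine the categorical outcome with a clipped, normalized PnL term."""
    pnl_term = max(-1.0, min(1.0, pnl_delta_usd / pnl_scale))
    return OUTCOME_BONUS[decision_outcome] + pnl_term

print(shaped_reward("alpha_captured", 2_500.0))      # 1.25
print(shaped_reward("execution_failed", -50_000.0))  # -2.0 (PnL clipped at -1)
```

Because the four outcome classes are balanced in the sample, a shaping like this keeps reward symmetric across winning and losing states rather than letting extreme PnL outliers dominate.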

## Typical use cases

- Autonomous DeFi agent training (RL and supervised)
- On-chain anomaly detection and risk-trigger classification
- Execution-strategy optimization (gas / slippage / route)
- Arbitrage and cross-venue routing model development
- Market-microstructure and liquidity-state modeling
- LLM fine-tuning on execution narratives and decision rationale
- Benchmarking MCTS strategy selection under uncertainty
- Gym environment authoring for multi-chain trading
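
For the risk-trigger classification use case, a sketch of flattening one record into a (features, label) pair. The `market_state` sub-field names used here (`severity`, `oracle_age`, `venue_divergence`) are assumptions drawn from the field summary above, and the record is hand-built for illustration; verify names against SCHEMA.md before training.

```python
# Sketch: flatten a lifecycle record into features + a risk-trigger label.
# market_state sub-field names are assumptions; check SCHEMA.md.
def to_training_example(record: dict) -> tuple[dict, str]:
    rc = record["risk_context"]
    ms = rc["market_state"]
    features = {
        "chain": rc["chain"],
        "protocol": rc["protocol"],
        "severity": ms.get("severity"),
        "oracle_age": ms.get("oracle_age"),
        "venue_divergence": ms.get("venue_divergence"),
    }
    return features, rc["trigger"]

record = {  # illustrative record, not a real row from the pack
    "risk_context": {
        "trigger": "whale_dump",
        "chain": "arbitrum",
        "protocol": "GMX",
        "market_state": {"severity": 0.9, "oracle_age": 4,
                         "venue_divergence": 0.02},
    }
}
features, label = to_training_example(record)
```

Mapped over the full sample, this yields a balanced multi-class dataset over the nine risk triggers listed above.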

## Quick start

```python
import pandas as pd
import pyarrow.parquet as pq

df = pq.read_table("arc_t_defi_sample.parquet").to_pandas()

# Outcome distribution (stratified balanced)
print(df["event"].apply(lambda e: e["decision_outcome"]).value_counts())

# Average PnL by strategy
df["strategy"] = df["agent_reasoning"].apply(lambda r: r["winning_strategy"])
df["pnl"] = df["event"].apply(lambda e: e["pnl_delta_usd"])
print(df.groupby("strategy")["pnl"].mean().round(2))

# Chain vs outcome cross-tab
df["chain"] = df["risk_context"].apply(lambda r: r["chain"])
df["outcome"] = df["event"].apply(lambda e: e["decision_outcome"])
print(pd.crosstab(df["chain"], df["outcome"]))
```
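
The per-step execution signals live in the nested `correlated_telemetry` list. A sketch of aggregating them for one lifecycle, on a hand-built step list — the step key names (`component`, `latency_ms`, `slippage_bps`, `gas_price_gwei`) are assumptions about the nested schema, so confirm them against SCHEMA.md:

```python
# Aggregate per-step execution telemetry from a correlated_telemetry list.
# Step key names below are illustrative assumptions, not the verified schema.
steps = [
    {"component": "ARES",     "latency_ms": 12, "slippage_bps": 3.0, "gas_price_gwei": 22.0},
    {"component": "FRACTAL",  "latency_ms": 7,  "slippage_bps": 2.0, "gas_price_gwei": 24.0},
    {"component": "SENTINEL", "latency_ms": 5,  "slippage_bps": 1.0, "gas_price_gwei": 23.0},
]

total_latency_ms = sum(s["latency_ms"] for s in steps)                 # 24
avg_slippage_bps = sum(s["slippage_bps"] for s in steps) / len(steps)  # 2.0
peak_gas_gwei = max(s["gas_price_gwei"] for s in steps)                # 24.0
```

Applied per row (e.g. via `df["correlated_telemetry"].apply(...)`), these aggregates make handy features for execution-aware policies and can be cross-checked against the pre-computed `execution_summary` struct.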

Streaming form:

```python
import json

with open("arc_t_defi_sample.jsonl") as f:
    for line in f:
        lifecycle = json.loads(line)
        # one risk-to-execution lifecycle per line
```

## Responsible use

This dataset is intended for **research, prototyping, and defensive / monitoring** use cases around autonomous DeFi systems: anomaly detection, strategy evaluation, execution-policy training, and RL environments. It contains synthesized market states, execution traces, and decision outcomes; it does **not** contain real wallet addresses, real transaction hashes, real pool reserves, or any live on-chain state. Do not deploy policies trained solely on this synthetic data to real capital without independent validation against live data.

## License

Released under **CC BY 4.0**. Use freely for research, RL experiments, education, and commercial prototyping with attribution.

## Get the full pack

This Hugging Face repo is a **10K-lifecycle sample**. The production pack scales to 2.5M+ lifecycles with expanded chain and protocol coverage, finer-grained market-state regimes, campaign-linked multi-step attack sequences, MEV-aware telemetry, richer step-level component traces, Parquet + JSONL + gym-compatible formats, and buyer-specific variants.

**Self-serve (Stripe checkout):**
- [**Sample Scale tier — $5,000**](https://buy.stripe.com/7sY5kD2j85QTfSb5lfeEo03) — ~25K records, one subject, 72-hour delivery.

**Full pack + enterprise scope:**
- [www.solsticestudio.ai/datasets](https://www.solsticestudio.ai/datasets) — per-SKU pricing across Starter / Professional / Enterprise tiers, plus commercial licensing, custom generation, and buyer-specific variants.

**Procurement catalog:**
- [SolsticeAI Data Storefront](https://solsticeai.mydatastorefront.com) — available via Datarade / Monda.

## Citation

```bibtex
@dataset{solstice_arc_t_defi_pack_2026,
  title     = {ARC-T DeFi Decision Telemetry Pack (Sample)},
  author    = {SolsticeAI},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/solsticestudioai/arc-t-defi-pack}
}
```