---
license: cc-by-4.0
task_categories:
- tabular-classification
- text-generation
- reinforcement-learning
language:
- en
tags:
- synthetic
- agentic-ai
- cross-domain
- autonomous-agents
- reasoning
- decision-making
- multi-domain
- mcts
- orchestration
- agi-adjacent
- strategic-ai
- rl
pretty_name: Atlas Apex Cross-Domain Autonomous Intelligence Pack
size_categories:
- 10K<n<100K
---

# Atlas Apex Cross-Domain Autonomous Intelligence Pack

## Schema

| Field | Type | Description |
| --- | --- | --- |
| `causal_telemetry_stream` | `list<struct>` | Ordered cross-domain events: `timestamp`, `event_name`, `domain`, `data_source`, `value_at_risk_usd`, `fidelity_score`, `latency_ms` |
| `reasoning_trace` | `struct` | `primary_objective`, `decision_depth`, `confidence_threshold`, `branches_evaluated`, `winning_branch_reward`, `counterfactual_considered` |
| `detection_logic` | `struct` | `anomaly_description`, `predictive_fidelity`, `cross_domain_signal_count`, `signal_conflicts[]` |
| `simulation` | `struct` | `synthetic`, `engine`, `cross_domain_sync_mechanism`, `scenario_class`, `intended_use[]` |

See [SCHEMA.md](./SCHEMA.md) for the full nested field breakdown.

## Why this dataset is useful

Most public agent datasets are either single-domain (coding, math, game-play) or single-objective (reward-shaped for one goal). Agentic systems in production actually operate *across* domains — a trading agent watches satellite data, an AI scientist files patents, an orchestrator restores services under load. This pack is built around that reality.

- **Cross-domain causal chains.** Each telemetry stream spans 2–4 domains (e.g., biotech → legal → finance, space → economics → finance).
- **Reasoning DNA.** Each agent carries an explicit reasoning-strategy identifier (`DNA-XXXX-MCTS-EXPLORE-0.65`), so you can train and compare behavior conditional on strategy.
- **Autonomy gradient.** L2 assisted through L5 full-auto — train policies that respect human-approval gates or score automatic escalation behavior.
- **Outcome variance beyond success/failure.** `partial_success`, `rolled_back`, `escalated_to_human`, `executed_with_caveats` — closer to real operational reporting.
- **Reasoning trace metadata.** Decision depth, branches evaluated, winning-branch reward, counterfactual-considered flag — directly usable for process-reward-model training and counterfactual-reasoning research.

## Typical use cases

- Multi-domain AI reasoning model training
- Autonomous agent architecture R&D
- Cross-domain decision-policy benchmarks
- RL / multi-objective optimization research
- Escalation-policy and human-in-the-loop research
- LLM fine-tuning on cross-domain reasoning narratives
- Counterfactual-reasoning model training
- Orchestrator / dispatcher agent prototyping

## Quick start

```python
import pandas as pd
import pyarrow.parquet as pq

df = pq.read_table("atlas_apex_sample.parquet").to_pandas()

# Scenario distribution (stratified balanced)
print(df["simulation"].apply(lambda s: s["scenario_class"]).value_counts())

# Outcome by scenario
df["scenario"] = df["simulation"].apply(lambda s: s["scenario_class"])
df["outcome"] = df["event"].apply(lambda e: e["outcome"])
print(pd.crosstab(df["scenario"], df["outcome"]))

# Distinct domains per record
df["domains_touched"] = df["causal_telemetry_stream"].apply(
    lambda stream: len({step["domain"] for step in stream})
)
print(df.groupby("scenario")["domains_touched"].mean().round(2))

# Reasoning depth vs. winning-branch reward
df["depth"] = df["reasoning_trace"].apply(lambda r: r["decision_depth"])
df["reward"] = df["reasoning_trace"].apply(lambda r: r["winning_branch_reward"])
print(df.groupby(pd.cut(df["depth"], bins=[0, 5, 8, 12, 20]))["reward"].mean().round(2))
```

Streaming form:

```python
import json

with open("atlas_apex_sample.jsonl") as f:
    for line in f:
        cycle = json.loads(line)  # one autonomous decision cycle per line
```

## Responsible use

This dataset is intended for **research, agent prototyping, and educational benchmarking**.
It contains abstract narrative templates — it does **not** contain real scientific discoveries, real trades, real robotic telemetry, real patents, or identifiable actors in any domain. Agents trained on this data will learn cross-domain reasoning *structure*; deployment in any specific domain (finance, healthcare, robotics) requires grounded domain-specific training, validation, and oversight appropriate to that domain's regulatory context.

## License

Released under **CC BY 4.0**. Use freely for research, agent prototyping, education, and commercial development with attribution.

## Get the full pack

This Hugging Face repo is a **10K-cycle sample**. The production pack scales to 100K+ cycles with expanded domain coverage (energy, defense, biosecurity, supply chain, climate), richer agent archetypes (swarm coordinators, red-team agents, digital-twin orchestrators), multi-agent collaboration traces, longer causal chains, adversarial / cooperative variants, parquet + JSONL + gym-compatible delivery, and buyer-specific configurations.

**Self-serve (Stripe checkout):**

- [**Sample Scale tier — $5,000**](https://buy.stripe.com/7sY5kD2j85QTfSb5lfeEo03) — ~25K records, one subject, 72-hour delivery.

**Full pack + enterprise scope:**

- [www.solsticestudio.ai/datasets](https://www.solsticestudio.ai/datasets) — per-SKU pricing across Starter / Professional / Enterprise tiers, plus commercial licensing, custom generation, and buyer-specific variants.

**Procurement catalog:**

- [SolsticeAI Data Storefront](https://solsticeai.mydatastorefront.com) — available via Datarade / Monda.

## Citation

```bibtex
@dataset{solstice_atlas_apex_pack_2026,
  title     = {Atlas Apex Cross-Domain Autonomous Intelligence Pack (Sample)},
  author    = {SolsticeAI},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/solsticestudioai/atlas-apex-pack}
}
```
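
## Appendix: scoring escalation behavior

The escalation-policy use case above can be scored directly from the nested fields. A minimal sketch, using a few made-up stand-in rows instead of the shipped parquet file — the field names (`event.outcome`, `reasoning_trace.decision_depth`) follow the schema, but the values here are invented for illustration:

```python
import pandas as pd

# Hypothetical stand-in records mimicking the nested schema;
# real cycles come from atlas_apex_sample.parquet.
rows = [
    {"event": {"outcome": "escalated_to_human"},
     "reasoning_trace": {"decision_depth": 4, "confidence_threshold": 0.80}},
    {"event": {"outcome": "partial_success"},
     "reasoning_trace": {"decision_depth": 9, "confidence_threshold": 0.65}},
    {"event": {"outcome": "rolled_back"},
     "reasoning_trace": {"decision_depth": 6, "confidence_threshold": 0.70}},
]
df = pd.DataFrame(rows)

# Flag cycles that were handed back to a human operator
df["escalated"] = df["event"].apply(lambda e: e["outcome"] == "escalated_to_human")
df["depth"] = df["reasoning_trace"].apply(lambda r: r["decision_depth"])

# Compare reasoning depth for escalated vs. autonomously completed cycles
print(df.groupby("escalated")["depth"].mean())
```

The same groupby pattern extends to `confidence_threshold` or `branches_evaluated` if you want to test whether low-confidence, shallow-search cycles are the ones that escalate.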