---
license: cc-by-4.0
task_categories:
  - tabular-classification
  - text-generation
  - reinforcement-learning
language:
  - en
tags:
  - synthetic
  - agentic-ai
  - cross-domain
  - autonomous-agents
  - reasoning
  - decision-making
  - multi-domain
  - mcts
  - orchestration
  - agi-adjacent
  - strategic-ai
  - rl
pretty_name: Atlas Apex Cross-Domain Autonomous Intelligence Pack
size_categories:
  - 10K<n<100K
configs:
  - config_name: default
    data_files:
      - split: train
        path: atlas_apex_sample.parquet
---

# Atlas Apex Cross-Domain Autonomous Intelligence Pack (Sample)

A synthetic dataset of cross-domain autonomous decision cycles for agentic-AI research, multi-objective reinforcement learning, and strategic-reasoning model training. Each row is a complete autonomous decision cycle — an agent observes a cross-domain signal, reasons over a branching decision tree, executes actions across domains (biotech → finance, space → finance, robotics → systems), and records the strategic outcome.

Built by SolsticeAI as a free sample of a larger commercial pack. 100% synthetic. No real scientific results, real trades, real robotic systems, or real operational telemetry — all domain content is abstract narrative templates for reasoning-structure training.

## What is included

| File | Rows | Format | Purpose |
|---|---|---|---|
| atlas_apex_sample.parquet | 10,000 | Parquet | Columnar, typed; best for analytics |
| atlas_apex_sample.jsonl | 10,000 | JSON Lines | Streaming / LLM-training friendly |

This sample contains 10,000 autonomous decision cycles, stratified at roughly 3,333 per scenario class.

- **Scenario classes (3):** autonomous_scientific_discovery, ai_driven_economic_decisions, distributed_system_coordination
- **Agent archetypes (3):** AI_Scientist, Trading_Agent, Orchestrator (one per scenario)
- **Autonomy levels:** L2_Assisted, L3_Supervised, L4_Conditional, L5_Full_Auto
- **Strategic-value tiers:** low, medium, high, critical, transformative
- **Outcomes:** objective_achieved, partial_success, rolled_back, escalated_to_human, executed_with_caveats
- **Domains touched per scenario:** biotech / legal / finance / economics / space / robotics / systems / meta
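The autonomy levels and strategic-value tiers above can drive a simple human-approval gate. A minimal sketch (the ranking and gating policy here are illustrative assumptions for demonstration, not rules encoded in the dataset):

```python
# Illustrative approval gate keyed on the pack's autonomy levels.
# The rank ordering and the L4 threshold below are assumptions.
AUTONOMY_RANK = {
    "L2_Assisted": 2,
    "L3_Supervised": 3,
    "L4_Conditional": 4,
    "L5_Full_Auto": 5,
}

def needs_human_approval(identity_context: dict, strategic_value: str) -> bool:
    """Require sign-off when the record flags it, below L4, or for top-tier stakes."""
    if identity_context.get("human_approval_required"):
        return True
    if AUTONOMY_RANK[identity_context["autonomy_level"]] < 4:
        return True
    return strategic_value in {"critical", "transformative"}

example = {"autonomy_level": "L3_Supervised", "human_approval_required": False}
print(needs_human_approval(example, "medium"))  # L3 sits below the L4 gate
```

The same predicate can be applied per record to score whether an agent's recorded escalations match the policy you would have imposed.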

## Record structure

Each record is one autonomous decision cycle with 7 top-level fields:

| Field | Type | Contents |
|---|---|---|
| schema_version | string | Pack schema version (1.0.0-atlas-apex-sample) |
| event | struct | id, trace_id, timestamp, strategic_value, outcome, confidence |
| identity_context | struct | agent_type, reasoning_dna, autonomy_level, human_approval_required, escalation_chain[] |
| causal_telemetry_stream | list | Ordered cross-domain events: timestamp, event_name, domain, data_source, value_at_risk_usd, fidelity_score, latency_ms |
| reasoning_trace | struct | primary_objective, decision_depth, confidence_threshold, branches_evaluated, winning_branch_reward, counterfactual_considered |
| detection_logic | struct | anomaly_description, predictive_fidelity, cross_domain_signal_count, signal_conflicts[] |
| simulation | struct | synthetic, engine, cross_domain_sync_mechanism, scenario_class, intended_use[] |

See SCHEMA.md for the full nested field breakdown.
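For orientation, a single record has roughly the nested shape below. All values are invented placeholders, not rows from the pack; SCHEMA.md remains the authoritative field reference.

```python
# Illustrative record skeleton; every value here is a placeholder.
record = {
    "schema_version": "1.0.0-atlas-apex-sample",
    "event": {
        "id": "evt-0001", "trace_id": "trc-0001",
        "timestamp": "2026-01-01T00:00:00Z",
        "strategic_value": "high", "outcome": "objective_achieved",
        "confidence": 0.87,
    },
    "identity_context": {
        "agent_type": "Trading_Agent",
        "reasoning_dna": "DNA-XXXX-MCTS-EXPLORE-0.65",
        "autonomy_level": "L4_Conditional",
        "human_approval_required": False,
        "escalation_chain": ["risk_officer"],
    },
    "causal_telemetry_stream": [
        {"timestamp": "2026-01-01T00:00:01Z", "event_name": "signal_detected",
         "domain": "space", "data_source": "placeholder_feed",
         "value_at_risk_usd": 0.0, "fidelity_score": 0.9, "latency_ms": 12},
    ],
    "reasoning_trace": {
        "primary_objective": "placeholder_objective", "decision_depth": 7,
        "confidence_threshold": 0.7, "branches_evaluated": 24,
        "winning_branch_reward": 0.81, "counterfactual_considered": True,
    },
    "detection_logic": {
        "anomaly_description": "placeholder_anomaly", "predictive_fidelity": 0.88,
        "cross_domain_signal_count": 3, "signal_conflicts": [],
    },
    "simulation": {
        "synthetic": True, "engine": "placeholder_engine",
        "cross_domain_sync_mechanism": "placeholder_sync",
        "scenario_class": "ai_driven_economic_decisions",
        "intended_use": ["research"],
    },
}
assert len(record) == 7  # seven top-level fields, as documented
```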

## Why this dataset is useful

Most public agent datasets are either single-domain (coding, math, game-play) or single-objective (reward-shaped for one goal). Agentic systems in production operate across domains — a trading agent watches satellite data, an AI scientist files patents, an orchestrator restores services under load. This pack is built around that reality.

- **Cross-domain causal chains.** Each telemetry stream spans 2–4 domains (e.g., biotech → legal → finance, space → economics → finance).
- **Reasoning DNA.** Each agent carries an explicit reasoning-strategy identifier (`DNA-XXXX-MCTS-EXPLORE-0.65`) so you can train and compare behavior conditional on strategy.
- **Autonomy gradient.** L2 assisted through L5 full-auto — train policies that respect human-approval gates or score automatic escalation behavior.
- **Outcome variance beyond success/failure.** partial_success, rolled_back, escalated_to_human, executed_with_caveats — closer to real operational reporting.
- **Reasoning-trace metadata.** Decision depth, branches evaluated, winning-branch reward, counterfactual-considered flag — directly usable for process-reward-model training and counterfactual-reasoning research.
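
As a toy sketch of the process-reward idea, the reasoning-trace fields can be conditioned on directly. The three records below are invented placeholders, not rows from the pack:

```python
# Compare winning-branch reward with vs. without counterfactual reasoning.
# These records are invented placeholders for illustration only.
traces = [
    {"decision_depth": 6, "counterfactual_considered": True,  "winning_branch_reward": 0.82},
    {"decision_depth": 4, "counterfactual_considered": False, "winning_branch_reward": 0.55},
    {"decision_depth": 9, "counterfactual_considered": True,  "winning_branch_reward": 0.91},
]

def mean_reward(records: list, counterfactual: bool) -> float:
    """Mean winning-branch reward for cycles matching the counterfactual flag."""
    vals = [r["winning_branch_reward"] for r in records
            if r["counterfactual_considered"] is counterfactual]
    return sum(vals) / len(vals)

print(round(mean_reward(traces, True), 3))
print(round(mean_reward(traces, False), 3))
```

On the real pack the same comparison runs over the `reasoning_trace` column from the parquet file.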

## Typical use cases

- Multi-domain AI reasoning model training
- Autonomous agent architecture R&D
- Cross-domain decision-policy benchmarks
- RL / multi-objective optimization research
- Escalation-policy and human-in-the-loop research
- LLM fine-tuning on cross-domain reasoning narratives
- Counterfactual-reasoning model training
- Orchestrator / dispatcher agent prototyping

## Quick start

```python
import pandas as pd
import pyarrow.parquet as pq

df = pq.read_table("atlas_apex_sample.parquet").to_pandas()

# Scenario distribution (stratified, balanced)
print(df["simulation"].apply(lambda s: s["scenario_class"]).value_counts())

# Outcome by scenario
df["scenario"] = df["simulation"].apply(lambda s: s["scenario_class"])
df["outcome"] = df["event"].apply(lambda e: e["outcome"])
print(pd.crosstab(df["scenario"], df["outcome"]))

# Distinct domains per record
df["domains_touched"] = df["causal_telemetry_stream"].apply(
    lambda stream: len({step["domain"] for step in stream})
)
print(df.groupby("scenario")["domains_touched"].mean().round(2))

# Reasoning depth vs winning-branch reward
df["depth"] = df["reasoning_trace"].apply(lambda r: r["decision_depth"])
df["reward"] = df["reasoning_trace"].apply(lambda r: r["winning_branch_reward"])
print(df.groupby(pd.cut(df["depth"], bins=[0, 5, 8, 12, 20]))["reward"].mean().round(2))
```

Streaming form:

```python
import json

with open("atlas_apex_sample.jsonl") as f:
    for line in f:
        cycle = json.loads(line)
        # one autonomous decision cycle per line
```
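
Extending that loop, escalated cycles can be collected without loading the whole file into memory. A self-contained sketch (it writes two invented lines to a temporary file so the snippet runs standalone; on the real data, open `atlas_apex_sample.jsonl` directly):

```python
import json
import tempfile

# Two invented lines standing in for atlas_apex_sample.jsonl.
rows = [
    {"event": {"outcome": "escalated_to_human"}, "schema_version": "1.0.0-atlas-apex-sample"},
    {"event": {"outcome": "objective_achieved"}, "schema_version": "1.0.0-atlas-apex-sample"},
]
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    for rec in rows:
        f.write(json.dumps(rec) + "\n")
    path = f.name

# Stream the file and keep only cycles that escalated to a human.
escalated = []
with open(path) as f:
    for line in f:
        cycle = json.loads(line)
        if cycle["event"]["outcome"] == "escalated_to_human":
            escalated.append(cycle)

print(len(escalated))  # 1
```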

## Responsible use

This dataset is intended for research, agent prototyping, and educational benchmarking. It contains abstract narrative templates — it does not contain real scientific discoveries, real trades, real robotic telemetry, real patents, or identifiable actors in any domain. Agents trained on this data will learn cross-domain reasoning structure; deployment in any specific domain (finance, healthcare, robotics) requires grounded domain-specific training, validation, and oversight appropriate to that domain's regulatory context.

## License

Released under CC BY 4.0. Use freely for research, agent prototyping, education, and commercial development with attribution.

## Get the full pack

This Hugging Face repo is a 10K-cycle sample. The production pack scales to 100K+ cycles with expanded domain coverage (energy, defense, biosecurity, supply chain, climate), richer agent archetypes (swarm coordinators, red-team agents, digital-twin orchestrators), multi-agent collaboration traces, longer causal chains, adversarial / cooperative variants, parquet + JSONL + gym-compatible delivery, and buyer-specific configurations.

Self-serve (Stripe checkout):

Full pack + enterprise scope:

- www.solsticestudio.ai/datasets — per-SKU pricing across Starter / Professional / Enterprise tiers, plus commercial licensing, custom generation, and buyer-specific variants.

Procurement catalog:

## Citation

```bibtex
@dataset{solstice_atlas_apex_pack_2026,
  title        = {Atlas Apex Cross-Domain Autonomous Intelligence Pack (Sample)},
  author       = {SolsticeAI},
  year         = {2026},
  publisher    = {Hugging Face},
  url          = {https://huggingface.co/datasets/solsticestudioai/atlas-apex-pack}
}
```