# AURORA-Workflow-1 — Rule-grammar dataset
The primary structured-workflow dataset for Stage-1 of the AURORA research programme. This repository hosts the rule grammars and dataset specification, not the materialised training data — the 60 000 episodes are generated programmatically from the grammars by the AURORA training pipeline.
## What is in this repo

| Path | Role |
|---|---|
| `spec.md` | Dataset specification (v1.0.0): six domains, splits, sample sizes, generation pipeline, manual-audit rules. |
| `domains/<slug>/grammar.md` | Per-domain rule grammar. Six domains: invoice-triage, appointment-scheduling, inventory-reorder, lab-sample-routing, issue-ticket-escalation, household-maintenance-planning. |
| `aurora-federated-1-spec.md` | Sister dataset (federated schema-exchange). Used by H6. |
## Why grammars, not raw events

AURORA-Workflow-1 is generator-defined, not collected. The grammars are seeded simulators; the same git-pinned grammar plus the same seed always emits the same events. This makes the dataset:

- Bit-exact reproducible across replication partners,
- Schema-versioned (a grammar change is a `dataset_version` bump per the spec),
- Storage-cheap (~92 KB instead of ~80 GB).
Any partner who wants to consume the materialised data downloads the grammars from this repo and runs the AURORA generator scripts.
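The reproducibility property above can be sketched in a few lines. This is a hypothetical toy, not the AURORA generator: the function name, rule list, and episode length are all illustrative. The point is only that a fixed grammar plus a fixed seed always yields the same event stream.

```python
import random

def generate_episode(grammar_rules, seed):
    """Emit a deterministic event sequence from a toy rule grammar.

    Illustrative sketch only — the real AURORA pipeline is richer,
    but the same git-pinned grammar + same seed → same events.
    """
    rng = random.Random(seed)  # per-episode RNG, so output is bit-exact
    return [rng.choice(grammar_rules) for _ in range(5)]

rules = ["receive_invoice", "validate_fields", "route_exception", "approve"]
run_a = generate_episode(rules, seed=42)
run_b = generate_episode(rules, seed=42)
assert run_a == run_b  # identical across replication partners
```

Because episodes are a pure function of (grammar, seed), only the grammars and seeds need to be stored and exchanged.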
## How to use

```bash
git clone https://huggingface.co/datasets/Anthril/aurora-workflow-1 aurora-workflow-1

# Then in an AURORA checkout:
python scripts/generate-enriched-corpus.py \
    --grammar-dir aurora-workflow-1/domains/ \
    --output data/baselines/lora-llama-8b/<date>-enriched/
```
## Headline numbers (per `spec.md`)
| Property | Value |
|---|---|
| Domains | 6 |
| Workflows per domain | 50 |
| Episodes per workflow | 200 |
| Total episodes | 60 000 |
| Adversarial-exception rate | 10 % |
| Temporal rule-replacement rate | 20 % |
| Splits | train / validation / calibration (5 %) / test / OOD / compositional-holdout (20 %) |
| Seeds | 100 master; 20 per Stage-1 experiment |
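The headline counts compose multiplicatively. A quick sanity check (the split sizes below assume the quoted percentages are fractions of the total episode count, which the table does not state explicitly):

```python
domains = 6
workflows_per_domain = 50
episodes_per_workflow = 200

total = domains * workflows_per_domain * episodes_per_workflow
assert total == 60_000  # matches the "Total episodes" row

# Assumption: the 5 % and 20 % rates are fractions of the total.
calibration = int(total * 0.05)            # 3 000 episodes
compositional_holdout = int(total * 0.20)  # 12 000 episodes
```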
## Hypothesis coverage
Used by H1 (event-vs-token), H2 (sparse routing), H3 (continual learning), H4 (episodic-semantic memory), and benchmark families LDA, EUT, ESC, HUB.
## Provenance

- Source-of-truth path in repo: `data/aurora-workflow-1/` and `data/aurora-federated-1/spec.md`.
- Spec anchor: `architecture/engineering-spec/training-methodology/training-fairness-controls.md` § "Information equivalence".
- Local SHA-256 manifest: see `hf-publish-manifest.json` in the source tree.
- Dataset version: v1.0.0 (frontmatter of `spec.md`).
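A SHA-256 manifest can be checked with the standard library. This is a generic sketch: the internal layout of `hf-publish-manifest.json` is not documented here, so the flat `{relative_path: hex_digest}` mapping below is an assumption.

```python
import hashlib
import json
from pathlib import Path

def verify_manifest(manifest_path, root="."):
    """Return the list of files whose SHA-256 digest differs from the manifest.

    Assumes the manifest is a flat JSON object mapping relative
    paths to lowercase hex digests — adapt if the real layout differs.
    """
    manifest = json.loads(Path(manifest_path).read_text())
    mismatches = []
    for rel_path, expected in manifest.items():
        digest = hashlib.sha256(Path(root, rel_path).read_bytes()).hexdigest()
        if digest != expected:
            mismatches.append(rel_path)
    return mismatches
```

An empty return value means every listed file matches its recorded digest.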
## License
CC-BY-4.0. The grammars and spec are AURORA-original.