---
license: cc-by-4.0
task_categories:
- tabular-classification
- text-classification
language:
- en
tags:
- synthetic
- cybersecurity
- threat-intelligence
- red-team
- blue-team
- soc
- siem
- edr
- mitre-attack
- detection-engineering
- security-analytics
- adversarial-simulation
- agentic-ai
pretty_name: Nemesis Cyber Threat Simulation Pack
size_categories:
- 10K<n<100K
configs:
- config_name: default
  data_files:
  - split: train
    path: nemesis_cyber_sample.parquet
---

# Nemesis Cyber Threat Simulation Pack (Sample)

**A synthetic adversarial-agent cyber operations dataset for detection-model training, SOC analyst triage research, and blue-team evaluation.** Each row captures a complete simulated attack episode: triggering anomaly, environment context, adversarial planner reasoning, correlated telemetry trace, execution summary, and final decision outcome (detected / blocked / impact achieved / stealth maintained / exfiltration complete).

Built by [SolsticeAI](https://www.solsticestudio.ai/datasets) as a free sample of a larger commercial pack. 100% synthetic: no real incident, victim, or exploit data, and no working offensive code. TTP labels align with MITRE ATT&CK vocabulary, so this sample can be used to train and benchmark defenders.

## What is included

| File | Rows | Format | Purpose |
|---|---:|---|---|
| `nemesis_cyber_sample.parquet` | 10,000 | Parquet | Columnar, typed, best for analytics |
| `nemesis_cyber_sample.jsonl` | 10,000 | JSON Lines | Streaming / LLM-training friendly |

**Source pack:** 2.5M-episode corpus

**This sample:** 10,000 episodes, stratified at 2,000 per outcome class

**Outcome classes:** `detected_by_soc`, `blocked_by_edr`, `stealth_maintained`, `exfiltration_complete`, `impact_achieved`

**Environments covered:** AWS-Cloud, Active-Directory, Kubernetes, Web-App-Gateway

## Record structure

Each record is one simulated attack episode with 8 top-level fields:

| Field | Type | Contents |
|---|---|---|
| `schema_version` | string | Pack schema version (`1.0.0-nemesis-cyber-sample`) |
| `event` | struct | `id`, `timestamp`, `trace_id`, `weighted_score`, `decision_outcome` |
| `risk_context` | struct | `trigger`, `protocol`, `chain`, `impacted_asset`, `anomaly_signature` |
| `agent_reasoning` | struct | `engine`, `winning_strategy`, `confidence_score`, `mcts_branches` |
| `correlated_telemetry` | `list<struct>` | Ordered action chain with per-step telemetry (latency, noise, evasion score, node provider) |
| `execution_summary` | struct | `strategy`, `success_rate`, `total_execution_ms`, `noise_penalty` |
| `genetic_optimizer_feedback` | struct | `fitness_score_update`, `parameter_drift` |
| `decision_outcome` | string | Final label (duplicated from `event.decision_outcome` for convenience) |

See [SCHEMA.md](./SCHEMA.md) for the full nested field breakdown.

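For orientation, a single episode might look like the sketch below. The field names follow the table above, but every value is invented for illustration; consult SCHEMA.md for the authoritative nested layout.

```python
# Hypothetical episode record -- all values below are invented for illustration.
episode = {
    "schema_version": "1.0.0-nemesis-cyber-sample",
    "event": {
        "id": "evt-000001",
        "timestamp": "2026-01-01T00:00:00Z",
        "trace_id": "trc-000001",
        "weighted_score": 0.82,
        "decision_outcome": "detected_by_soc",
    },
    "risk_context": {
        "trigger": "anomalous_login",
        "protocol": "AWS-Cloud",
        "chain": ["initial_access", "privilege_escalation"],
        "impacted_asset": "iam-role/example",
        "anomaly_signature": "sig-example",
    },
    "agent_reasoning": {
        "engine": "planner-x",
        "winning_strategy": "low_and_slow",
        "confidence_score": 0.71,
        "mcts_branches": 128,
    },
    "correlated_telemetry": [
        {"action": "enumerate_roles",
         "telemetry": {"latency_ms": 120, "noise": 0.1,
                       "evasion_score": 0.9, "node_provider": "aws"}},
    ],
    "execution_summary": {"strategy": "low_and_slow", "success_rate": 0.5,
                          "total_execution_ms": 90000, "noise_penalty": 0.2},
    "genetic_optimizer_feedback": {"fitness_score_update": 0.03,
                                   "parameter_drift": 0.01},
    # Duplicated from event.decision_outcome for convenience.
    "decision_outcome": "detected_by_soc",
}
```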
## Why this dataset is useful

Most public cybersecurity datasets are raw packet captures, static CTI feeds, or narrow single-technique labeling sets. This pack is shaped around what detection-engineering and SOC-analytics teams actually need to train modern models:

- Multi-step attack episodes rather than isolated alerts
- Balanced outcome classes across detected, blocked, stealthy, and successful attempts
- Adversarial reasoning traces (strategy, MCTS branch count, and confidence) alongside the telemetry
- Per-step evasion and noise signals for training detection models that weigh stealth-vs-noise trade-offs
- Cross-environment coverage (cloud, identity, container, web)
- A stable schema suitable for dashboard prototyping, triage simulators, and ML pipelines

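As a minimal sketch of the stealth-vs-noise point, assuming each telemetry step carries the `evasion_score` and `noise` fields listed in the record structure (the two tiny episodes below are invented stand-ins for real rows):

```python
# Invented episodes standing in for real rows from the parquet/JSONL files.
episodes = [
    {"decision_outcome": "detected_by_soc",
     "correlated_telemetry": [{"telemetry": {"evasion_score": 0.2, "noise": 0.8}},
                              {"telemetry": {"evasion_score": 0.4, "noise": 0.6}}]},
    {"decision_outcome": "stealth_maintained",
     "correlated_telemetry": [{"telemetry": {"evasion_score": 0.9, "noise": 0.1}}]},
]

def mean_step_metric(ep, key):
    """Average a per-step telemetry metric over the episode's action chain."""
    steps = ep["correlated_telemetry"]
    return sum(s["telemetry"][key] for s in steps) / max(len(steps), 1)

# Group (avg evasion, avg noise) pairs by final outcome label.
by_outcome = {}
for ep in episodes:
    by_outcome.setdefault(ep["decision_outcome"], []).append(
        (mean_step_metric(ep, "evasion_score"), mean_step_metric(ep, "noise")))

for outcome, pairs in by_outcome.items():
    ev = sum(p[0] for p in pairs) / len(pairs)
    nz = sum(p[1] for p in pairs) / len(pairs)
    print(f"{outcome}: evasion={ev:.2f} noise={nz:.2f}")
```

On the full sample the same grouping lets you check whether stealthy outcomes really correlate with high evasion and low noise.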
## Typical use cases

- SOC triage and alert-prioritization model training
- Detection-engineering rule evaluation against balanced positive and negative cases
- Adversarial-AI research on multi-step planner behavior
- Tabletop and red-vs-blue simulator content
- LLM fine-tuning on incident narratives and defender reasoning
- Benchmarking anomaly-scoring and false-positive-reduction pipelines
- Dashboard and BI template development for security analytics

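For the rule-evaluation use case, one way to sketch it: treat `detected_by_soc` and `blocked_by_edr` episodes as positives and score a candidate rule against them. The episodes and the naive noise-threshold rule below are invented for illustration.

```python
# Invented labeled episodes: (mean step noise, decision_outcome).
episodes = [
    (0.8, "detected_by_soc"),
    (0.7, "blocked_by_edr"),
    (0.2, "stealth_maintained"),
    (0.6, "exfiltration_complete"),
    (0.1, "impact_achieved"),
]
POSITIVE = {"detected_by_soc", "blocked_by_edr"}

def noisy_rule(mean_noise, threshold=0.5):
    """Naive candidate rule: flag the episode when average step noise exceeds the threshold."""
    return mean_noise > threshold

tp = sum(1 for n, o in episodes if noisy_rule(n) and o in POSITIVE)
fp = sum(1 for n, o in episodes if noisy_rule(n) and o not in POSITIVE)
fn = sum(1 for n, o in episodes if not noisy_rule(n) and o in POSITIVE)

precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(f"precision={precision:.2f} recall={recall:.2f}")
```

Because the sample is class-balanced, precision and recall computed this way are not skewed by a dominant outcome class.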
## Quick start

```python
import pandas as pd

df = pd.read_parquet("nemesis_cyber_sample.parquet")

# Outcome distribution (stratified, balanced)
print(df["decision_outcome"].value_counts())

# Evasion pressure per environment
df["protocol"] = df["risk_context"].apply(lambda r: r.get("protocol"))
df["avg_evasion"] = df["correlated_telemetry"].apply(
    lambda steps: sum(s["telemetry"]["evasion_score"] for s in steps) / max(len(steps), 1)
)
print(df.groupby("protocol")["avg_evasion"].mean().round(3))

# Detection rate by trigger type
df["trigger"] = df["risk_context"].apply(lambda r: r.get("trigger"))
detection_rate = (
    df["decision_outcome"]
    .isin(["detected_by_soc", "blocked_by_edr"])
    .groupby(df["trigger"])
    .mean()
    .round(3)
)
print(detection_rate)
```

Streaming form:

```python
import json

with open("nemesis_cyber_sample.jsonl") as f:
    for line in f:
        episode = json.loads(line)  # one episode per line
```

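When streaming the JSONL into an ML pipeline, each nested episode can be flattened into one feature row. A sketch assuming the fields listed in the record-structure table (the inline sample episode is invented):

```python
import json

def episode_to_row(episode: dict) -> dict:
    """Flatten one nested episode into a single feature dict for tabular models."""
    steps = episode["correlated_telemetry"]
    return {
        "trigger": episode["risk_context"]["trigger"],
        "protocol": episode["risk_context"]["protocol"],
        "confidence_score": episode["agent_reasoning"]["confidence_score"],
        "num_steps": len(steps),
        "avg_evasion": sum(s["telemetry"]["evasion_score"] for s in steps) / max(len(steps), 1),
        "label": episode["decision_outcome"],
    }

# Invented minimal episode standing in for one line of the JSONL file.
line = json.dumps({
    "risk_context": {"trigger": "anomalous_login", "protocol": "AWS-Cloud"},
    "agent_reasoning": {"confidence_score": 0.71},
    "correlated_telemetry": [{"telemetry": {"evasion_score": 0.9}}],
    "decision_outcome": "stealth_maintained",
})
row = episode_to_row(json.loads(line))
print(row)
```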
## Responsible use

This dataset is intended for **defensive** research: detection modeling, SOC tooling, and adversarial-agent studies. It contains synthesized attack metadata and MITRE-aligned TTP labels; it does **not** contain working offensive payloads, exploit code, shellcode, malware samples, credentials, private vulnerability details, or any real-world victim data. Please use it to improve defenses.

## License

Released under **CC BY 4.0**. Use freely for research, detection engineering, education, and commercial prototyping with attribution.

## Get the full pack

This Hugging Face repo is a **10K-episode sample**. The production pack scales to 2.5M+ episodes, with additional outcome labels, richer per-step telemetry, attacker/defender variant splits, multi-environment campaign chains, Parquet + JSONL + SIEM-import formats, and buyer-specific variants.

**Self-serve (Stripe checkout):**
- [**Sample Scale tier, $5,000**](https://buy.stripe.com/7sY5kD2j85QTfSb5lfeEo03): ~25K records, one subject, 72-hour delivery.

**Full pack + enterprise scope:**
- [www.solsticestudio.ai/datasets](https://www.solsticestudio.ai/datasets): per-SKU pricing across Starter / Professional / Enterprise tiers, plus commercial licensing, custom generation, and buyer-specific variants.

**Procurement catalog:**
- [SolsticeAI Data Storefront](https://solsticeai.mydatastorefront.com): available via Datarade / Monda.

## Citation

```bibtex
@dataset{solstice_nemesis_cyber_pack_2026,
  title     = {Nemesis Cyber Threat Simulation Pack (Sample)},
  author    = {SolsticeAI},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/solsticestudioai/nemesis-cyber-pack}
}
```