solsticestudioai committed
Commit 43c913a · verified · Parent: 1db4576

Point all canonical links at www.solsticestudio.ai/datasets

Files changed (1): README.md (+155 -156)
---
license: cc-by-4.0
task_categories:
- tabular-classification
- text-classification
language:
- en
tags:
- synthetic
- cybersecurity
- threat-intelligence
- red-team
- blue-team
- soc
- siem
- edr
- mitre-attack
- detection-engineering
- security-analytics
- adversarial-simulation
- agentic-ai
pretty_name: Nemesis Cyber Threat Simulation Pack
size_categories:
- 10K<n<100K
configs:
- config_name: default
  data_files:
  - split: train
    path: nemesis_cyber_sample.parquet
---

# Nemesis Cyber Threat Simulation Pack (Sample)

**A synthetic adversarial-agent cyber operations dataset for detection-model training, SOC analyst triage research, and blue-team evaluation.** Each row captures a complete simulated attack episode: triggering anomaly, environment context, adversarial planner reasoning, correlated telemetry trace, execution summary, and final decision outcome (detected / blocked / impact achieved / stealth maintained / exfiltration complete).

Built by [SolsticeAI](https://www.solsticestudio.ai/datasets) as a free sample of a larger commercial pack. 100% synthetic: no real incident, victim, or exploit data, and no working offensive code. TTP labels align with MITRE ATT&CK vocabulary, so this sample can be used to train and benchmark defenders.

## What is included

| File | Rows | Format | Purpose |
|---|---:|---|---|
| `nemesis_cyber_sample.parquet` | 10,000 | Parquet | Columnar, typed; best for analytics |
| `nemesis_cyber_sample.jsonl` | 10,000 | JSON Lines | Streaming / LLM-training friendly |

- **Source pack:** 2.5M-episode corpus
- **This sample:** 10,000 episodes, stratified at 2,000 per outcome class
- **Outcome classes:** `detected_by_soc`, `blocked_by_edr`, `stealth_maintained`, `exfiltration_complete`, `impact_achieved`
- **Environments covered:** AWS-Cloud, Active-Directory, Kubernetes, Web-App-Gateway

## Record structure

Each record is one simulated attack episode with 8 top-level fields:

| Field | Type | Contents |
|---|---|---|
| `schema_version` | string | Pack schema version (`1.0.0-nemesis-cyber-sample`) |
| `event` | struct | `id`, `timestamp`, `trace_id`, `weighted_score`, `decision_outcome` |
| `risk_context` | struct | `trigger`, `protocol`, `chain`, `impacted_asset`, `anomaly_signature` |
| `agent_reasoning` | struct | `engine`, `winning_strategy`, `confidence_score`, `mcts_branches` |
| `correlated_telemetry` | `list<struct>` | Ordered action chain with per-step telemetry (latency, noise, evasion score, node provider) |
| `execution_summary` | struct | `strategy`, `success_rate`, `total_execution_ms`, `noise_penalty` |
| `genetic_optimizer_feedback` | struct | `fitness_score_update`, `parameter_drift` |
| `decision_outcome` | string | Final label (duplicated from `event.decision_outcome` for convenience) |

See [SCHEMA.md](./SCHEMA.md) for the full nested field breakdown.
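
As a quick illustrative sketch of that nesting, one episode can be flattened into model-ready features. Top-level field names follow the table above; the per-step `telemetry` sub-key mirrors the Quick start snippet, and the sample `episode` dict below is hypothetical, not drawn from the data:

```python
def flatten_episode(ep: dict) -> dict:
    """Flatten one nested episode into a flat feature dict.

    Field names follow the schema table; the per-step `telemetry`
    sub-key layout is an assumption based on the Quick start example.
    """
    steps = ep.get("correlated_telemetry", [])
    n = max(len(steps), 1)
    return {
        "weighted_score": ep["event"]["weighted_score"],
        "trigger": ep["risk_context"]["trigger"],
        "protocol": ep["risk_context"]["protocol"],
        "confidence_score": ep["agent_reasoning"]["confidence_score"],
        "mcts_branches": ep["agent_reasoning"]["mcts_branches"],
        "n_steps": len(steps),
        "avg_evasion": sum(s["telemetry"]["evasion_score"] for s in steps) / n,
        "label": ep["decision_outcome"],
    }

# Hypothetical episode showing the expected nesting
episode = {
    "event": {"weighted_score": 0.82},
    "risk_context": {"trigger": "anomalous_login", "protocol": "AWS-Cloud"},
    "agent_reasoning": {"confidence_score": 0.9, "mcts_branches": 14},
    "correlated_telemetry": [
        {"telemetry": {"evasion_score": 0.6}},
        {"telemetry": {"evasion_score": 0.8}},
    ],
    "decision_outcome": "stealth_maintained",
}
print(flatten_episode(episode))
```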

## Why this dataset is useful

Most public cybersecurity datasets are either raw packet captures, static CTI feeds, or narrow single-technique labeling sets. This pack is shaped around what detection-engineering and SOC-analytics teams actually need to train modern models:

- Multi-step attack episodes rather than isolated alerts
- Balanced outcome classes across detected, blocked, stealthy, and successful attempts
- Adversarial reasoning trace (strategy + MCTS branch count + confidence) alongside the telemetry
- Per-step evasion and noise signals to train detection models that weigh stealth vs. noise trade-offs
- Cross-environment coverage (cloud, identity, container, web)
- Stable schema suitable for dashboard prototyping, triage simulators, and ML pipelines

## Typical use cases

- SOC triage and alert-prioritization model training
- Detection-engineering rule evaluation against balanced positive and negative cases
- Adversarial-AI research on multi-step planner behavior
- Tabletop and red-vs-blue simulator content
- LLM fine-tuning on incident narratives and defender reasoning
- Benchmarking anomaly-scoring and false-positive-reduction pipelines
- Dashboard and BI template development for security analytics
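
The first use case, alert prioritization, can be sketched with a toy scoring rule over fields the dataset exposes (`weighted_score`, per-step evasion, `noise_penalty`). The weights below are illustrative assumptions, not part of the dataset:

```python
def triage_priority(weighted_score: float, avg_evasion: float,
                    noise_penalty: float) -> float:
    """Toy triage score: reward high anomaly score and high stealth,
    discount noisy episodes a SIEM would already surface.
    Weights are illustrative, not calibrated."""
    return round(0.6 * weighted_score + 0.3 * avg_evasion - 0.1 * noise_penalty, 3)

# Rank two hypothetical episodes into a triage queue
queue = [
    ("ep-001", triage_priority(0.9, 0.8, 0.2)),
    ("ep-002", triage_priority(0.4, 0.3, 0.6)),
]
queue.sort(key=lambda t: t[1], reverse=True)
print(queue)
```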

## Quick start

```python
import pandas as pd
import pyarrow.parquet as pq

df = pq.read_table("nemesis_cyber_sample.parquet").to_pandas()

# Outcome distribution (stratified, balanced)
print(df["decision_outcome"].value_counts())

# Evasion pressure per environment
df["protocol"] = df["risk_context"].apply(lambda r: r.get("protocol"))
df["avg_evasion"] = df["correlated_telemetry"].apply(
    lambda steps: sum(s["telemetry"]["evasion_score"] for s in steps) / max(len(steps), 1)
)
print(df.groupby("protocol")["avg_evasion"].mean().round(3))

# Detection rate by trigger type
df["trigger"] = df["risk_context"].apply(lambda r: r.get("trigger"))
detection_rate = (
    df["decision_outcome"]
    .isin(["detected_by_soc", "blocked_by_edr"])
    .groupby(df["trigger"])
    .mean()
    .round(3)
)
print(detection_rate)
```
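
Since the sample is balanced at 2,000 episodes per outcome class, a train/test split for model training should preserve that stratification. A minimal plain-Python sketch (equivalent to any ML framework's stratified splitter; the toy labels below stand in for the real `decision_outcome` column):

```python
import random
from collections import defaultdict

def stratified_split(labels, test_frac=0.2, seed=7):
    """Return (train_idx, test_idx) preserving per-label proportions."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for i, y in enumerate(labels):
        by_label[y].append(i)
    train, test = [], []
    for idxs in by_label.values():
        rng.shuffle(idxs)
        cut = int(len(idxs) * test_frac)
        test.extend(idxs[:cut])
        train.extend(idxs[cut:])
    return sorted(train), sorted(test)

# Toy labels mimicking the five balanced outcome classes
labels = (["detected_by_soc"] * 10 + ["blocked_by_edr"] * 10 +
          ["stealth_maintained"] * 10 + ["exfiltration_complete"] * 10 +
          ["impact_achieved"] * 10)
train_idx, test_idx = stratified_split(labels)
print(len(train_idx), len(test_idx))
```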

Streaming form:

```python
import json

with open("nemesis_cyber_sample.jsonl") as f:
    for line in f:
        episode = json.loads(line)
        # one episode per line
```
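
Building on that loop, outcome labels can be tallied in a single streaming pass without holding the file in memory. The demo below writes a tiny synthetic file rather than assuming the real one is present:

```python
import json
import os
import tempfile
from collections import Counter

def outcome_counts(path: str) -> Counter:
    """Stream a JSONL file and tally decision outcomes one line at a time."""
    counts = Counter()
    with open(path) as f:
        for line in f:
            counts[json.loads(line)["decision_outcome"]] += 1
    return counts

# Tiny synthetic stand-in for nemesis_cyber_sample.jsonl
rows = [{"decision_outcome": o} for o in
        ["detected_by_soc", "blocked_by_edr", "detected_by_soc"]]
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as tmp:
    tmp.write("\n".join(json.dumps(r) for r in rows))
    path = tmp.name

counts = outcome_counts(path)
os.unlink(path)
print(counts)
```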

## Responsible use

This dataset is intended for **defensive** research: detection modeling, SOC tooling, and adversarial-agent studies. It contains synthesized attack metadata and MITRE-aligned TTP labels; it does **not** contain working offensive payloads, exploit code, shellcode, malware samples, credentials, private vulnerability details, or any real-world victim data. Please use it to improve defenses.

## License

Released under **CC BY 4.0**. Use freely for research, detection engineering, education, and commercial prototyping, with attribution.

## Get the full pack

This Hugging Face repo is a **10K-episode sample**. The production pack scales to 2.5M+ episodes, with additional outcome labels, richer per-step telemetry, attacker/defender variant splits, multi-environment campaign chains, Parquet + JSONL + SIEM-import formats, and buyer-specific variants.

**Self-serve (Stripe checkout):**
- [**Sample Scale tier — $5,000**](https://buy.stripe.com/7sY5kD2j85QTfSb5lfeEo03) — ~25K records, one subject, 72-hour delivery.

**Full pack + enterprise scope:**
- [www.solsticestudio.ai/datasets](https://www.solsticestudio.ai/datasets) — per-SKU pricing across Starter / Professional / Enterprise tiers, plus commercial licensing, custom generation, and buyer-specific variants.

**Procurement catalog:**
- [SolsticeAI Data Storefront](https://solsticeai.mydatastorefront.com) — available via Datarade / Monda.

## Citation

```bibtex
@dataset{solstice_nemesis_cyber_pack_2026,
  title     = {Nemesis Cyber Threat Simulation Pack (Sample)},
  author    = {SolsticeAI},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/solsticestudioai/nemesis-cyber-pack}
}
```