---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- agent
- coding
- terminal
- shell
- pretrain
- pretraining
- agentic
- llm
- web
- fineweb
- dclm
pretty_name: WebTerminal
size_categories:
- 1M<n<10M
---

# WebTerminal

Terminal/CLI-focused documents mined from FineWeb and DCLM web text, intended for upsampling agentic-adjacent data during pretraining. Two subsets: `clean` (v0.2 filter) and `unfiltered` (v0.1 filter).

## Pipeline

### v0.1 — Unfiltered

1. **Source**: web documents from the FineWeb and DCLM pretraining corpora.
2. **Scoring** (`term_score`): keyword/pattern heuristics summed per document, with per-signal caps (sketched in code below):

| Signal | Patterns | Score | Cap |
|---|---|---|---|
| Shell prompts | Prompt markers: `$`, `>>>`, `root@`, `PS C:\` | 2 per match | 10 |
| CLI commands | Known commands: `sudo`, `apt-get`, `pip install`, `git clone`, `docker run`, `curl`, `ssh`, `gcc`, etc. (30+ patterns) | 1 per unique match | 8 |
| stdout patterns | Output indicators: "successfully installed", "cloning into", `drwx` (ls output), "packets transmitted", "traceback", version strings | 2 per match | 6 |
| Code blocks | Terminal-flavored code blocks (`` ```bash ``, `` ```shell ``), terminal/console div classes | 2 per match | 6 |
| Indented blocks | 3+ consecutive lines indented 4+ spaces (code/output blocks) | 1 per match | 4 |

Documents scoring >=5 are kept.

3. **Dedup**: exact deduplication across both source datasets using xxhash64 on the full text. Removed 1,168 duplicates.
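A minimal sketch of the scoring-plus-dedup pass, with heavily abbreviated pattern lists (the function names and regexes here are illustrative, not the production code):

```python
import re
import xxhash

# Abbreviated stand-ins for the real pattern lists (30+ command patterns,
# more stdout indicators, code-block and indentation signals omitted).
PROMPTS = re.compile(r"\$|>>>|root@|PS C:\\")  # note the bare `$` -- the flaw v0.2 fixes
COMMANDS = ["sudo", "apt-get", "pip install", "git clone",
            "docker run", "curl", "ssh", "gcc"]
STDOUT = re.compile(r"successfully installed|cloning into|drwx|"
                    r"packets transmitted|traceback", re.IGNORECASE)

def term_score(text: str) -> int:
    """v0.1 keyword scoring: per-signal contributions, each capped."""
    lower = text.lower()
    score = min(2 * len(PROMPTS.findall(text)), 10)      # prompts: 2/match, cap 10
    score += min(sum(c in lower for c in COMMANDS), 8)   # commands: 1/unique, cap 8
    score += min(2 * len(STDOUT.findall(text)), 6)       # stdout: 2/match, cap 6
    return score

def kept_docs(docs):
    """Keep docs scoring >=5, then exact-dedup on xxhash64 of the full text."""
    seen = set()
    for text in docs:
        if term_score(text) < 5:
            continue
        h = xxhash.xxh64(text.encode("utf-8")).intdigest()
        if h not in seen:
            seen.add(h)
            yield text
```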

### v0.2 — Clean

At the lower score levels (5-12), which account for 93% of the data, the unfiltered subset is ~84-86% noise. The root cause is v0.1's context-blind keyword matching: CLI command names like `find`, `make`, and `cat` appear in ordinary English prose, a bare `$` matches currency amounts, and indented Python/SQL code gets scored as terminal content.

v0.2 applies a three-stage structural filter over the unfiltered data (sketched in code after the weight table below):

1. **Context-aware gate**: instead of matching a bare `$`, the gate requires `$ sudo`, `$ git`, `$ docker`, etc. (dollar sign + space + known command), eliminating ~87% of documents immediately.
2. **Validation regex**: confirms a genuine structural terminal pattern exists — shell prompts followed by real commands, `user@host:~$` patterns, Python REPL `>>>`, tracebacks, `` ```bash `` code blocks, Unix file permission listings, man page headers, shebangs.
3. **Weighted structural scoring** (`term_score_v2`): each pattern has a weight (1-3) and occurrences are capped. Documents need `term_score_v2 >= 3` to be kept.

| Weight | Signal | Max |
|---|---|---|
| 3 | Command prompts (`$ cmd` at line start) | 9 |
| 3 | SSH prompts (`user@host:~$`) | 9 |
| 2 | Python REPL, file listings, tracebacks, terminal code blocks, git/docker ops, Windows prompts, man pages | 2-6 each |
| 1 | Install output, systemd units, shebangs, sudo commands | 1 each |

No indentation-based scoring. No context-blind command substring matching.
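A sketch of the three stages, with heavily abbreviated pattern sets (the names and exact regexes are assumptions for illustration, not the production filter):

```python
import re

# Stage 1: context-aware gate -- `$` only counts when followed by a known command.
GATE = re.compile(r"\$ (sudo|git|docker|apt-get|pip|curl|ssh)\b")

# Stage 2: validation -- at least one genuine structural terminal pattern.
VALIDATE = re.compile(
    r"^\$ \S+"                               # shell prompt + real command
    r"|\w+@[\w.-]+:~?\$"                     # user@host:~$ prompts
    r"|^>>> "                                # Python REPL
    r"|Traceback \(most recent call last\)"  # tracebacks
    r"|^`{3}(bash|shell)"                    # terminal code fences
    r"|^[d-][rwx-]{9} "                      # Unix permission listings
    r"|^#!/",                                # shebangs
    re.MULTILINE,
)

# Stage 3: weighted structural scoring with per-signal caps.
SIGNALS = [  # (weight, cap, pattern)
    (3, 9, re.compile(r"^\$ \S+", re.MULTILINE)),  # command prompts
    (3, 9, re.compile(r"\w+@[\w.-]+:~?\$")),       # SSH prompts
    (2, 6, re.compile(r"^>>> ", re.MULTILINE)),    # Python REPL
    (1, 1, re.compile(r"^#!/", re.MULTILINE)),     # shebangs
]

def term_score_v2(text: str) -> int:
    return sum(min(w * len(p.findall(text)), cap) for w, cap, p in SIGNALS)

def keep_v2(text: str) -> bool:
    return (GATE.search(text) is not None        # stage 1
            and VALIDATE.search(text) is not None  # stage 2
            and term_score_v2(text) >= 3)          # stage 3
```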

**Result**: 3.8% of the unfiltered data survives, from 61.3M rows down to 2.33M rows, and the share of genuine terminal/CLI content rises from ~15% to ~98%.

## Schema

### Clean subset

| Column | Type | Description |
|---|---|---|
| `text` | string | Document text |
| `term_score` | int32 | Original v0.1 score (5-34) |
| `term_score_v2` | int32 | Structural score from v0.2 filter (3+) |

### Unfiltered subset

| Column | Type | Description |
|---|---|---|
| `text` | string | Document text |
| `term_score` | int32 | Original v0.1 score (5-34) |

## Stats

### Clean (v0.2)

- **2,334,414 rows** | **4.6B tokens** (Llama-3.2-1B tokenizer) | **11 GB**
- 62 parquet files, ~169-185 MB each, snappy compressed
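Token counts in this section were measured with the Llama-3.2-1B tokenizer. A sketch of an equivalent count, assuming the parquet files are downloaded locally (the `clean/*.parquet` path is a stand-in for the actual layout, and the `meta-llama` repo is gated):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
ds = load_dataset("parquet", data_files="clean/*.parquet", split="train")

# Sum token counts over the `text` column in batches
# (slow on the full subset; sample for a quick estimate).
n_tokens = 0
for batch in ds.iter(batch_size=1024):
    n_tokens += sum(len(ids) for ids in tok(batch["text"])["input_ids"])
print(f"{n_tokens:,} tokens")
```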

### Unfiltered (v0.1)

- **61,341,278 rows** | **359B tokens** | **962 GB**
- 4,187 parquet files, ~180-240 MB each, snappy compressed

| v0.1 Score | Count | % |
|---|---|---|
| 5 | 39,025,201 | 63.62% |
| 6 | 10,787,199 | 17.59% |
| 7 | 4,063,886 | 6.63% |
| 8 | 2,911,983 | 4.75% |
| 9-14 | 3,594,547 | 5.86% |
| 15-34 | 958,462 | 1.56% |

## Use case

Upsampling agentic-adjacent data during pretraining. The `clean` subset is recommended for most use cases. The `unfiltered` subset is available for researchers who want to apply their own filtering.
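A minimal loading sketch (the repo id is a placeholder, and the config names are assumed to match the subset names `clean` and `unfiltered` used above):

```python
from datasets import load_dataset

# "ORG" is a placeholder -- substitute the actual repo id.
clean = load_dataset("ORG/WebTerminal", name="clean", split="train")

# Unfiltered subset: stream it (962 GB) and apply your own filtering,
# e.g. keep only the high-precision tail of the v0.1 score.
raw = load_dataset("ORG/WebTerminal", name="unfiltered",
                   split="train", streaming=True)
high = raw.filter(lambda ex: ex["term_score"] >= 15)
```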