---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- agent
- coding
- terminal
- shell
- pretrain
- pretraining
- agentic
- llm
- web
- fineweb
- dclm
pretty_name: WebTerminal
size_categories:
- 10M<n<100M
---
# Terminal/CLI Web Text

v0.1
A filtered extract of terminal and command-line content from two large web-text corpora.
## Sources

- DCLM (`Zyphra/dclm-dedup`)
- FineWeb (`Salesforce/fineweb_deduplicated`)
## How it was built

- Fast filter: skip any document that doesn't contain obvious CLI indicators (`$`, `sudo`, `pip install`, `` ```bash ``, `root@`, etc.).
- Score: remaining docs are scored (0-34) across five signals, each with a per-match point value and a cap:
| Filter | Description | Points | Cap |
|---|---|---|---|
| Prompt patterns | Shell prompts like `$ cmd`, `user@host:~$`, `>>>`, `root@`, `PS C:\` | 2 per match | 10 |
| CLI commands | Known commands: `sudo`, `apt-get`, `pip install`, `git clone`, `docker run`, `curl`, `ssh`, `gcc`, etc. (30+ patterns) | 1 per unique match | 8 |
| stdout patterns | Output indicators: "successfully installed", "cloning into", `drwx` (ls output), "packets transmitted", "traceback", version strings | 2 per match | 6 |
| Code blocks | Terminal-flavored code blocks: `` ```bash ``, `` ```shell ``, `<pre><code>`, terminal/console div classes | 2 per match | 6 |
| Indented blocks | 3+ consecutive lines indented 4+ spaces (code/output blocks) | 1 per match | 4 |
Documents scoring >=5 are kept.
- Dedup: exact deduplication across both datasets using an xxhash64 hash of the full text. A minimal sketch of the whole pipeline follows below.
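
The sketch below illustrates the fast filter, capped scoring, and xxhash64 exact dedup. The pattern lists and regexes are illustrative stand-ins (the exact ones used to build the dataset are not published here); only the point values, caps, and the >=5 threshold come from the table above.

```python
import re
import xxhash  # third-party package: pip install xxhash

# Illustrative CLI indicators for the fast pre-filter.
FAST_INDICATORS = ("$ ", "sudo ", "pip install", "```bash", "root@")

# (regex, points per match, cap) -- regexes are approximations of the signals above.
SIGNALS = [
    (re.compile(r"^\$ \S|^\w+@[\w.-]+:~?\$|^>>> |^root@|^PS C:\\", re.M), 2, 10),  # prompt patterns
    (re.compile(r"successfully installed|cloning into|drwx|packets transmitted|traceback", re.I), 2, 6),  # stdout
    (re.compile(r"```(?:bash|shell)|<pre><code>", re.I), 2, 6),  # terminal-flavored code blocks
]
CLI_COMMANDS = ["sudo", "apt-get", "pip install", "git clone", "docker run", "curl", "ssh", "gcc"]

def indented_blocks(text: str) -> int:
    """Count runs of 3+ consecutive lines indented by 4+ spaces."""
    run, blocks = 0, 0
    for line in text.splitlines():
        if line.startswith("    ") and line.strip():
            run += 1
        else:
            blocks += run >= 3
            run = 0
    return blocks + (run >= 3)

def score(text: str) -> int:
    # Fast filter: bail out if no obvious CLI indicator is present.
    if not any(tok in text for tok in FAST_INDICATORS):
        return 0
    total = 0
    for pattern, points, cap in SIGNALS:
        total += min(len(pattern.findall(text)) * points, cap)
    total += min(sum(cmd in text for cmd in CLI_COMMANDS), 8)  # 1 per unique command, cap 8
    total += min(indented_blocks(text), 4)                     # 1 per block, cap 4
    return total

def keep(doc: dict, seen: set) -> bool:
    """Keep docs scoring >= 5, exact-deduped on an xxhash64 of the full text."""
    if score(doc["text"]) < 5:
        return False
    h = xxhash.xxh64(doc["text"].encode("utf-8")).intdigest()
    if h in seen:
        return False
    seen.add(h)
    return True
```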
## Stats
| Source | Chunks | Size | Rows |
|---|---|---|---|
| DCLM | 13,144 | ~229 GB | ~18.8M |
| FineWeb | 8,800 | ~669 GB | ~47.5M |
Score distribution of the kept documents:

| Score | Count | % | Cumulative |
|---|---|---|---|
| 5 | 39,025,201 | 63.62% | 63.62% |
| 6 | 10,787,199 | 17.59% | 81.21% |
| 7 | 4,063,886 | 6.63% | 87.83% |
| 8 | 2,911,983 | 4.75% | 92.58% |
| 9 | 1,304,162 | 2.13% | 94.70% |
| 10 | 1,022,996 | 1.67% | 96.37% |
| 11-14 | 1,609,090 | 2.62% | 98.99% |
| 15-20 | 536,421 | 0.87% | 99.87% |
| 21-34 | 80,340 | 0.13% | 100.00% |
| Total | 61,341,278 | 100.00% | |
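
If a higher-precision slice is wanted, the long tail above a higher threshold can be selected with a simple filter. This assumes the per-document score is exported as a `score` column, which this card does not confirm, and the repo ID is a placeholder:

```python
from datasets import load_dataset

# Placeholder repo ID; the `score` column is an assumption about the schema.
ds = load_dataset("your-org/WebTerminal", split="train", streaming=True)
high_precision = ds.filter(lambda ex: ex["score"] >= 8)  # ~12% of rows per the table above
```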
## Use case
Primarily intended for upsampling agentic-adjacent data during pretraining, for example along the lines of the sketch below.
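
A minimal mixing sketch with `datasets.interleave_datasets`; the repo IDs, the `text` column name, and the 10% mixing ratio are all placeholders, not recommendations from this card:

```python
from datasets import load_dataset, interleave_datasets

# NOTE: repo IDs are placeholders; substitute the actual dataset paths.
terminal = load_dataset("your-org/WebTerminal", split="train", streaming=True)
general = load_dataset("your-org/general-web-text", split="train", streaming=True)

# Upsample terminal/CLI text to ~10% of the pretraining mixture (illustrative ratio).
mixture = interleave_datasets(
    [general, terminal],
    probabilities=[0.9, 0.1],
    seed=42,
    stopping_strategy="all_exhausted",
)

for example in mixture.take(3):
    print(example["text"][:200])
```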
