---
tags:
- autotrain
- text-classification
base_model: answerdotai/ModernBERT-base
widget:
- text: "I love AutoTrain"
datasets:
- RISys-Lab/cybersec-topic-classification-dataset-filtered
---
# Model Card: Cybersecurity Text Classifier (ModernBERT-base)
<p align="center">
<b> "RedSage: A Cybersecurity Generalist LLM" (ICLR 2026) </b>
<br>
<b>Authors:</b> Naufal Suryanto<sup>1*</sup>, Muzammal Naseer<sup>1</sup>, Pengfei Li<sup>1</sup>, Syed Talal Wasim<sup>2</sup>, Jinhui Yi<sup>2</sup>, Juergen Gall<sup>2</sup>, Paolo Ceravolo<sup>3</sup>, Ernesto Damiani<sup>3</sup>
<br>
<sup>1</sup>Khalifa University, <sup>2</sup>University of Bonn, <sup>3</sup>University of Milan
<br>
<sup>*</sup>Project Lead
<br>
<br>
<a href="https://openreview.net/forum?id=W4FAenIrQ2"><img src="https://img.shields.io/badge/Paper-OpenReview-B31B1B.svg"></a>
<a href="https://huggingface.co/RISys-Lab"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-RISys--Lab-orange"></a>
</p>
---
## Model Details
* **Model Type**: Binary text classification model developed for domain-specific content filtering.
* **Architecture**: Based on **ModernBERT-base**, a bidirectional transformer encoder optimized for efficiency and long-context performance.
* **Domain**: Cybersecurity vs. Non-Cybersecurity.
* **License**: Released as part of the open-source RedSage project resources.
## Intended Use
* **Primary Use Case**: Identifying cybersecurity-relevant documents within large-scale, unstructured web corpora such as FineWeb.
* **Application**: Filtering approximately 17.2 trillion tokens from Common Crawl subsets (2013–2024) to curate the 11.7B-token CyberFineWeb corpus.
* **Intended Users**: Researchers and developers focused on domain continual pretraining for cybersecurity LLMs.
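A minimal inference sketch using the `transformers` pipeline API. The repo id and the label string below are assumptions — substitute the actual checkpoint name and label names published with this card:

```python
MODEL_ID = "RISys-Lab/cybersec-text-classifier"  # hypothetical repo id

def is_cybersecurity(result: dict, threshold: float = 0.5) -> bool:
    """Decide relevance from one pipeline output,
    e.g. {"label": "cybersecurity", "score": 0.97} (label name assumed)."""
    return result["label"] == "cybersecurity" and result["score"] >= threshold

def classify(texts, model_id=MODEL_ID):
    """Run the binary classifier over a batch of documents."""
    from transformers import pipeline  # lazy import; requires `pip install transformers`
    clf = pipeline("text-classification", model=model_id, truncation=True)
    return [is_cybersecurity(r) for r in clf(texts)]
```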
## Training Data
* **Source Dataset**: Cybersecurity Topic Classification dataset.
* **Data Origin**: Labeled samples collected from Reddit, StackExchange, and arXiv, alongside web articles.
* **Dataset Size**:
* **Pre-processing**: 9.27M training samples and 459K validation samples.
* **Post-filtering**: Reduced to 4.62M training samples and 2.46K validation samples after removing very short texts to minimize ambiguity.
* **Labeling Method**: Derived from forum categories, tags, and keyword metadata rather than LLM-generated annotations.
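The short-text filtering step can be sketched as a simple length gate; the word-count cutoff below is an assumption for illustration, not the threshold actually used:

```python
MIN_WORDS = 50  # hypothetical cutoff for "very short" texts

def filter_short_texts(samples, min_words=MIN_WORDS):
    """Keep only samples long enough to classify unambiguously.
    Each sample is a dict with a "text" field."""
    return [s for s in samples if len(s["text"].split()) >= min_words]
```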
## Training Procedure
* **Optimizer**: Adam.
* **Learning Rate**: 2e-5.
* **Schedule**: 10% warmup ratio over 2 training epochs.
* **Implementation**: Fine-tuned the ModernBERT-base encoder with a binary classification head.
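The schedule above implies a fixed number of linear warmup steps once the total step count is known; a small sketch of the step arithmetic (not the paper's actual training script):

```python
# Hyperparameters from this card; everything else (batch size, paths) is omitted.
training_config = {
    "optimizer": "adam",
    "learning_rate": 2e-5,
    "warmup_ratio": 0.10,   # 10% of total steps
    "num_train_epochs": 2,
}

def warmup_steps(total_steps: int, warmup_ratio: float = 0.10) -> int:
    """Number of linear warmup steps implied by the warmup ratio."""
    return int(total_steps * warmup_ratio)
```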
## Evaluation Results
The model was evaluated on a validation set of 2,460 samples derived from web articles, achieving the following metrics:
| Metric | Score |
| :--- | :--- |
| **Accuracy** | 97.3% |
| **Precision** | 92.8% |
| **Recall** | 90.2% |
| **F1 Score** | 91.4% |
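As a sanity check, F1 is the harmonic mean of precision and recall; recomputing it from the rounded table values lands within rounding error of the reported 91.4%:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# From the table: precision 92.8%, recall 90.2%.
# f1_score(0.928, 0.902) agrees with the reported 91.4% up to rounding.
```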
## Limitations & Risks
* **Context Sensitivity**: The training data was filtered to exclude very short texts to avoid context ambiguity, so predictions on very short inputs may be less reliable.
* **Temporal Bias**: The model identifies cybersecurity content based on trends observed in web data up to late 2024; emerging threats post-2024 may not be represented.
* **Dual-Use Concerns**: The classifier is designed to identify offensive security technical content, which carries an inherent risk of misuse if applied outside of defensive or educational research.
---
## Citation
```bibtex
@inproceedings{suryanto2026redsage,
title={RedSage: A Cybersecurity Generalist {LLM}},
author={Naufal Suryanto and Muzammal Naseer and Pengfei Li and Syed Talal Wasim and Jinhui Yi and Juergen Gall and Paolo Ceravolo and Ernesto Damiani},
booktitle={The Fourteenth International Conference on Learning Representations},
year={2026},
url={https://openreview.net/forum?id=W4FAenIrQ2}
}
```