TriCoAlign-0.5B: Stabilizing LLMs for Network Intrusion Detection
📌 Model Overview
TriCoAlign-0.5B is a specialized Large Language Model fine-tuned from Qwen2.5-0.5B for Network Intrusion Detection (NIDS). It implements the proposed TriCoAlign framework.
Standard LLMs often suffer from unstable reasoning behaviors and inconsistent decision outcomes when analyzing network traffic (e.g., producing different labels for the same packet upon repeated inference). TriCoAlign addresses this by jointly aligning three complementary aspects in a cyclic manner:
- Format Alignment: Enforces a structured Question–Reasoning–Answer output to decouple semantic roles.
- Thinking Alignment: Uses reasoning summarization to suppress noisy trajectories and focus on security-critical features.
- Answer Alignment: Constrains the decision space to ensure discriminative and consistent predictions.
This model transforms powerful but unstable LLM reasoning into reliable, deployable intrusion detection decisions.
🚀 Key Features
- 🛡️ Enhanced Stability: Mitigates "over-thinking" and prediction inconsistency common in raw LLMs.
- 🔄 Cyclic Optimization: Trained with a unified loss function to align format, reasoning, and answers simultaneously.
- 📊 SOTA Performance: Significantly outperforms vanilla LLMs (Qwen2.5, GLM-4, ChatGPT-OSS) and traditional ML models on benchmarks like NSL-KDD, CIC-IDS, and UNSW-NB15.
- 🧠 Interpretable Reasoning: Generates concise, security-focused reasoning traces rather than redundant chains of thought.
- ⚡ Lightweight: Based on the efficient 0.5B-parameter architecture, suitable for resource-constrained environments.
📈 Performance Highlights
Evaluated on standard NIDS benchmarks, TriCoAlign demonstrates superior accuracy and stability compared to baselines.
> Note: Baseline numbers reflect the instability of raw LLMs as reported in the TriCoAlign paper. Our 0.5B variant maintains comparable robustness with lower computational cost.
💻 How to Use
Expected Input
question: You are a network security analyst. Here are the parameters of a network connection; please analyze what these parameters indicate. duration is 0, protocol_type is tcp, service is http, flag is SF, src_bytes is 54540, dst_bytes is 8314, land is 0, wrong_fragment is 0, urgent is 0, hot is 2, num_failed_logins is 0, logged_in is 1, num_compromised is 1, root_shell is 0, su_attempted is 0, num_root is 0, num_file_creations is 0, num_shells is 0, num_access_files is 0, num_outbound_cmds is 0, is_host_login is 0, is_guest_login is 0, count is 4, srv_count is 24, serror_rate is 0, srv_serror_rate is 0, rerror_rate is 0, srv_rerror_rate is 0, same_srv_rate is 1, diff_srv_rate is 0, srv_diff_host_rate is 0.08, dst_host_count is 255, dst_host_srv_count is 250, dst_host_same_srv_rate is 0.98, dst_host_diff_srv_rate is 0.01, dst_host_same_src_port_rate is 0, dst_host_srv_diff_host_rate is 0, dst_host_serror_rate is 0, dst_host_srv_serror_rate is 0, dst_host_rerror_rate is 0.06, dst_host_srv_rerror_rate is 0.06
Expected Output
Reasoning: logged_in=1 indicates a successful login, num_compromised=1 shows the account was compromised, and hot=2 together with the low error-rate features supports a back-attack signature.
Answer: back
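The card does not ship an inference snippet, so here is a minimal sketch using the Hugging Face transformers API. The Hub model id below is a guess derived from this card's name, and the answer-parsing helper assumes the Reasoning/Answer output format shown above; verify both against the linked repository.

```python
# Minimal inference sketch for TriCoAlign-0.5B.
# NOTE: MODEL_ID is hypothetical; check the repo for the published checkpoint.
MODEL_ID = "Zaneph1/TriCoAlign-0.5B"

def parse_answer(completion: str) -> str:
    """Extract the label from a 'Reasoning: ... / Answer: <label>' completion."""
    for line in completion.splitlines():
        if line.strip().lower().startswith("answer:"):
            return line.split(":", 1)[1].strip()
    return "unknown"

def classify(question: str, max_new_tokens: int = 256) -> str:
    # Imported lazily so parse_answer stays usable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(question, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    completion = tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return parse_answer(completion)
```

Given the expected output format above, classify(question) would return a label string such as "back" or "normal".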
🔬 Methodology: TriCoAlign Framework
The model is trained using a Cyclic Alignment Strategy that minimizes the joint loss:
- Format Alignment: Penalizes deviations from the Question -> Reasoning -> Answer template.
- Thinking Alignment: Aligns internal reasoning states with high-quality, summarized supervisory signals (generated by stronger teacher models such as GLM-4/Qwen-7B during training) to remove noise.
- Answer Alignment: Ensures the final prediction distribution is sharp and matches ground truth labels.
This approach effectively reduces the covariance between reasoning noise and decision errors, leading to stable inference.
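Schematically, and assuming a simple additive weighting (the exact formulation and weights are defined in the paper, not reproduced here), the joint objective combines the three alignment terms:

$$
\mathcal{L}_{\text{TriCoAlign}} \;=\; \lambda_{f}\,\mathcal{L}_{\text{format}} \;+\; \lambda_{t}\,\mathcal{L}_{\text{think}} \;+\; \lambda_{a}\,\mathcal{L}_{\text{answer}}
$$

where the three terms correspond to the Format, Thinking, and Answer Alignment losses above, and the cyclic strategy alternates emphasis among them during training.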
📚 Datasets & Training
- Base Model: Qwen/Qwen2.5-0.5B
- Training Data: Processed versions of NSL-KDD, CIC-IDS2017, and UNSW-NB15.
- Preprocessing: Raw tabular network flows are converted into semantic natural language prompts via the PCF (Prompt Cast Framework) pipeline before alignment training.
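The casting step above can be sketched as follows. This is a minimal illustration of turning a tabular flow record into the question template used in the Expected Input example; the actual PCF templates and feature handling live in the linked repository, so treat the wording here as an assumption.

```python
# Sketch of casting a tabular NSL-KDD style flow record into a
# natural-language prompt, in the spirit of the PCF preprocessing step.
# The template wording is an assumption, not the repo's exact template.

def cast_flow_to_prompt(flow: dict) -> str:
    body = ", ".join(f"the value of {name} is {value}" for name, value in flow.items())
    return (
        "You are a network security analyst. Here are the parameters of a "
        f"network connection; please analyze what these parameters indicate: {body}"
    )

flow = {"duration": 0, "protocol_type": "tcp", "service": "http", "flag": "SF"}
print(cast_flow_to_prompt(flow))
```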
🔗 Resources
- 💻 Code & Reproduction: Full training scripts, data preprocessing tools, and evaluation benchmarks are available at: 👉 https://github.com/Zaneph1/TriCoAlign
📄 License
This model is released under the MIT License.