Pre-trained models in MiniPLM: Knowledge Distillation for Pre-Training Language Models
AI & ML interests
Training efficient language models (MiniLLM, MiniPLM)
Models (10 of 50 shown)
MiniLLM/MiniLLM-gpt2-340M • Text Generation
MiniLLM/SFT-gpt2-120M • Text Generation • 0.1B
MiniLLM/SFT-gpt2-760M • Text Generation • 0.8B
MiniLLM/MiniPLM-Qwen-500M • Text Generation • 0.5B
MiniLLM/MiniPLM-llama3.1-212M • Text Generation • 0.2B
MiniLLM/MiniPLM-Mamba-130M • Text Generation • 0.1B
MiniLLM/MiniPLM-Qwen-1.2B • Text Generation • 1.2B
MiniLLM/Ref-Pretrain-Qwen-104M • Text Generation • 0.1B
MiniLLM/Pretrain-Qwen-1.2B • Text Generation • 1.2B
MiniLLM/Pretrain-Qwen-500M • Text Generation • 0.5B
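All of the checkpoints above are published in the standard Hugging Face format, so they can be loaded with the usual `transformers` API. A minimal sketch, assuming `transformers` and `torch` are installed (the model ID is taken from the list above; the prompt and generation settings are purely illustrative):

```python
# Minimal sketch: load a MiniPLM checkpoint and generate text.
# The model ID comes from the list above; prompt and sampling
# settings are illustrative, not recommendations.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniLLM/MiniPLM-Qwen-500M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Knowledge distillation is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern should apply to the GPT-2 and LLaMA variants by swapping the model ID; the Mamba checkpoint may need extra dependencies, so check its model card first.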
Datasets (10)
MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5
MiniLLM/pile-tokenized
MiniLLM/roberta-corpus-processed
MiniLLM/openwebtext-processed
MiniLLM/dolly-processed • 110k rows
MiniLLM/sinst • 8.35k rows
MiniLLM/uinst • 64.8k rows
MiniLLM/self-inst • 242 rows
MiniLLM/Vicuna • 80 rows
MiniLLM/dolly • 500 rows
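The datasets can be pulled with the `datasets` library in the usual way. A minimal sketch (the dataset ID is taken from the list above; the `"train"` split name is an assumption, so check each dataset card for the actual splits and fields):

```python
# Minimal sketch: load one of the datasets listed above.
# The "train" split name is an assumption; consult the dataset
# card for the real split names and column layout.
from datasets import load_dataset

ds = load_dataset("MiniLLM/dolly", split="train")
print(len(ds))   # number of rows
print(ds[0])     # first example
```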