anuj0456 committed on
Commit 8c03b67 · verified · 1 Parent(s): bfe4a0f

Update README.md

Files changed (1): README.md (+6, −6)
README.md CHANGED

@@ -11,18 +11,18 @@ tags:
 library_name: transformers
 ---
 
-# KiteFish-A1-1.5B
+# Minnow-Math-1.5B
 
-**KiteFish-A1-1.5B** is a ~1.5B parameter decoder-only transformer trained from scratch on raw arXiv LaTeX sources across mathematics, computer science, and theoretical physics.
+**Minnow-Math-1.5B** is a ~1.5B parameter decoder-only transformer trained from scratch on raw arXiv LaTeX sources across mathematics, computer science, and theoretical physics.
 
 📄 **Paper:** https://arxiv.org/abs/2602.17288
-💻 **Github:** https://github.com/kitefishai/KiteFish-A1-1.5B-Math
+💻 **Github:** https://github.com/kitefishai/Minnow-Math-1.5B
 
 This is a **base scientific language model** (not instruction-tuned).
 
 ## Overview
 
-KiteFish-A1-1.5B explores what it takes to train a domain-specialized scientific language model directly from structured LaTeX archives.
+Minnow-Math-1.5B explores what it takes to train a domain-specialized scientific language model directly from structured LaTeX archives.
 
 **Training Scale**
 - ~52B pretraining tokens
@@ -56,7 +56,7 @@ The focus of this project is *scientific language modeling robustness*, not benc
 
 ## Intended Use
 
-KiteFish-A1-1.5B is suitable for:
+Minnow-Math-1.5B is suitable for:
 
 - Scientific text modeling research
 - Mathematical language modeling experiments
@@ -102,7 +102,7 @@ This release is intended primarily for research and experimentation.
 from transformers import AutoTokenizer, AutoModelForCausalLM
 import torch
 
-model_id = "KiteFishAI/KiteFish-A1-1.5B-Math"
+model_id = "KiteFishAI/Minnow-Math-1.5B"
 
 tokenizer = AutoTokenizer.from_pretrained(model_id)
 model = AutoModelForCausalLM.from_pretrained(model_id)
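The usage snippet touched by the last hunk stops at loading the model. A minimal sketch of how it might be extended into a greedy-decoding helper, assuming only the updated repo id from the diff (the `generate` wrapper, example prompt, and decoding settings are illustrative, not part of the README):

```python
# Repo id for the renamed model, taken from the diff above.
MODEL_ID = "KiteFishAI/Minnow-Math-1.5B"


def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Load the base model from the Hub and greedy-decode a continuation.

    Imports are deferred so the module can be inspected without
    triggering the (large) model download at import time.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    model.eval()

    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    return tokenizer.decode(out[0], skip_special_tokens=True)


if __name__ == "__main__":
    # A LaTeX-style prompt, since the model was pretrained on raw arXiv sources.
    print(generate(r"\begin{theorem} Every bounded monotone sequence"))
```

Since this is a base model with no instruction tuning, raw continuation prompts like the one above are the appropriate interface; chat-style prompts are out of scope.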