agentlans committed (verified) · Commit eb3df18 · 1 parent: 99c22dc

Upload 13 files
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,375 @@
+ ---
+ language:
+ - afr
+ - als
+ - amh
+ - arb
+ - ars
+ - ary
+ - arz
+ - asm
+ - azj
+ - bel
+ - ben
+ - bew
+ - bos
+ - bul
+ - cat
+ - ces
+ - ckb
+ - cmn
+ - cym
+ - dan
+ - deu
+ - div
+ - ekk
+ - ell
+ - eng
+ - epo
+ - eus
+ - fao
+ - fas
+ - fil
+ - fin
+ - fra
+ - fry
+ - gle
+ - glg
+ - guj
+ - hau
+ - heb
+ - hin
+ - hrv
+ - hun
+ - hye
+ - ind
+ - isl
+ - ita
+ - jpn
+ - kan
+ - kat
+ - kaz
+ - khk
+ - khm
+ - kin
+ - kir
+ - kmr
+ - kor
+ - lao
+ - lat
+ - lit
+ - ltz
+ - lvs
+ - mal
+ - mar
+ - mkd
+ - mlt
+ - mya
+ - nld
+ - nno
+ - nob
+ - npi
+ - nrm
+ - ory
+ - pan
+ - pbt
+ - plt
+ - pol
+ - por
+ - ron
+ - rus
+ - sin
+ - slk
+ - slv
+ - snd
+ - som
+ - spa
+ - srp
+ - swe
+ - swh
+ - tam
+ - tel
+ - tgk
+ - tha
+ - tur
+ - ukr
+ - urd
+ - uzn
+ - vie
+ - xho
+ - yue
+ - zsm
+ license: mit
+ base_model:
+ - intfloat/multilingual-e5-small
+ datasets:
+ - agentlans/multilingual-document-classification
+ metrics:
+ - f1
+ - loss
+ model-index:
+ - name: multilingual-e5-small-doc-type-v2-classifier
+   results:
+   - task:
+       type: text-classification
+       name: Text Classification
+     metrics:
+     - type: f1
+       value: 0.809
+       name: Evaluation F1
+     - type: loss
+       value: 0.8624
+       name: Evaluation Loss
+ ---
+ # multilingual-e5-small Document Type V2 Classifier
+
+ A fine-tuned version of **intfloat/multilingual-e5-small** (a **bert**-architecture model loaded as `BertForSequenceClassification`), optimized for the `text-classification` task.
+
+ - **Model type:** bert
+ - **Problem Type:** single_label_classification
+ - **Number of Labels:** 25
+ - **Vocabulary Size:** 250037
+ - **License:** MIT
+
+ ## Use
+
+ To get started with this model in Python using the Hugging Face Transformers library, run the following code:
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSequenceClassification
+ import torch
+
+ model_id = "agentlans/multilingual-e5-small-doc-type-v2-classifier"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForSequenceClassification.from_pretrained(model_id)
+
+ text = "Replace this with your input text."
+ inputs = tokenizer(text, return_tensors="pt")
+
+ with torch.no_grad():
+     logits = model(**inputs).logits
+
+ predicted_class_id = logits.argmax().item()
+ predicted_class_name = model.config.id2label[predicted_class_id]
+
+ print(f"Predicted Class ID: {predicted_class_id}")
+ print(f"Predicted Class Name: {predicted_class_name}")
+ ```
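If you want class probabilities rather than just the argmax, apply a softmax to the logits. A minimal, self-contained sketch of that post-processing step, using a hypothetical three-class logits vector so it runs without downloading the model:

```python
import math

def softmax(logits):
    # Subtract the max before exponentiating for numerical stability.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three of the 25 classes (for illustration only).
logits = [2.0, 0.5, -1.0]
probs = softmax(logits)
best = max(range(len(probs)), key=probs.__getitem__)

print(probs)  # probabilities summing to 1
print(best)   # index of the most likely class (here: 0)
```

In practice you would call `softmax` on `logits[0].tolist()` from the snippet above and look up the top indices in `model.config.id2label`.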
+
+ ## Intended Uses & Limitations
+
+ ### Intended Use
+ This model is designed for single-label document-type classification. The 25 class labels map to IDs as follows:
+
+ | Label ID | Label Name |
+ |---|---|
+ | 0 | About (Org.) |
+ | 1 | About (Personal) |
+ | 2 | Academic Writing |
+ | 3 | Audio Transcript |
+ | 4 | Comment Section |
+ | 5 | Content Listing |
+ | 6 | Creative Writing |
+ | 7 | Customer Support |
+ | 8 | Documentation |
+ | 9 | FAQ |
+ | 10 | Knowledge Article |
+ | 11 | Legal Notices |
+ | 12 | Listicle |
+ | 13 | News (Org.) |
+ | 14 | News Article |
+ | 15 | Nonfiction Writing |
+ | 16 | Other/Unclassified |
+ | 17 | Personal Blog |
+ | 18 | Product Page |
+ | 19 | Q&A Forum |
+ | 20 | Spam / Ads |
+ | 21 | Structured Data |
+ | 22 | Truncated |
+ | 23 | Tutorial |
+ | 24 | User Review |
+
+ ## Training Details
+
+ ### Hyperparameters
+ The following hyperparameters were used during fine-tuning:
+ - **Learning Rate:** 5e-05
+ - **Train Batch Size:** 8
+ - **Eval Batch Size:** 8
+ - **Optimizer:** fused AdamW (`adamw_torch_fused`)
+ - **Number of Epochs:** 3.0
+ - **Mixed Precision:** BF16
+
+ <details>
+ <summary><b>Show Advanced Training Configuration</b></summary>
+
+ #### Optimization & Regularization
+ - **Gradient Accumulation Steps:** 1
+ - **Learning Rate Scheduler:** linear
+ - **Warmup Steps:** 0
+ - **Warmup Ratio:** None
+ - **Weight Decay:** 0.0
+ - **Max Gradient Norm:** 1.0
+
+ #### Hardware & Reproducibility
+ - **Number of GPUs:** 1
+ - **Seed:** 42
+
+ </details>
+
+ ## Training Results & Evaluation
+
+ During fine-tuning, the model achieved the following results on the evaluation set:
+
+ | Metric | Value |
+ |---|---|
+ | **Train Loss** | 0.5709 |
+ | **Validation Loss** | 0.8624 |
+ | **Validation F1 Score** | 0.809 |
+ | **Total FLOPs** | 7.9082e+15 |
+
+ ### Speed Performance
+ - **Training Runtime:** 1693.148 seconds
+ - **Train Samples per Second:** 283.503
+ - **Evaluation Runtime:** 11.4879 seconds
+ - **Eval Samples per Second:** 1741.655
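These throughput numbers are internally consistent with the dataset sizes reported in all_results.json (160004 train samples, 20008 eval samples) and the batch size of 8. A quick sanity check, using only figures that appear elsewhere in this card:

```python
import math

train_samples, eval_samples = 160004, 20008      # from all_results.json
batch_size, epochs = 8, 3                        # from the hyperparameters above
train_runtime, eval_runtime = 1693.148, 11.4879  # seconds, from this section

# One optimizer step per batch: ceil(160004 / 8) = 20001 steps per epoch,
# which matches the per-epoch evaluation rows in the training logs.
steps_per_epoch = math.ceil(train_samples / batch_size)
total_steps = steps_per_epoch * epochs

print(steps_per_epoch)  # 20001
print(total_steps)      # 60003, matching best_global_step in trainer_state.json
print(round(train_samples * epochs / train_runtime, 3))  # 283.503 samples/s
print(round(eval_samples / eval_runtime, 3))  # agrees with 1741.655 up to rounding
```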
+
+
+ <details>
+ <summary><b>Show Detailed Training Logs</b></summary>
+
+ ### Training Logs History
+
+ | Step | Epoch | Learning Rate | Training Loss | Validation Loss | Validation F1 |
+ |---|---|---|---|---|---|
+ | 500 | 0.025 | 4.9584e-05 | 1.8537 | N/A | N/A |
+ | 1000 | 0.05 | 4.9168e-05 | 1.3289 | N/A | N/A |
+ | 1500 | 0.075 | 4.8751e-05 | 1.1698 | N/A | N/A |
+ | 2000 | 0.1 | 4.8334e-05 | 1.0996 | N/A | N/A |
+ | 2500 | 0.125 | 4.7918e-05 | 1.0552 | N/A | N/A |
+ | 3000 | 0.15 | 4.7501e-05 | 1.0462 | N/A | N/A |
+ | 3500 | 0.175 | 4.7084e-05 | 1.0004 | N/A | N/A |
+ | 4000 | 0.2 | 4.6668e-05 | 0.9812 | N/A | N/A |
+ | 4500 | 0.225 | 4.6251e-05 | 0.9245 | N/A | N/A |
+ | 5000 | 0.25 | 4.5834e-05 | 0.9282 | N/A | N/A |
+ | 5500 | 0.275 | 4.5418e-05 | 0.9167 | N/A | N/A |
+ | 6000 | 0.3 | 4.5001e-05 | 0.8886 | N/A | N/A |
+ | 6500 | 0.325 | 4.4584e-05 | 0.8826 | N/A | N/A |
+ | 7000 | 0.35 | 4.4168e-05 | 0.8443 | N/A | N/A |
+ | 7500 | 0.375 | 4.3751e-05 | 0.8374 | N/A | N/A |
+ | 8000 | 0.4 | 4.3334e-05 | 0.8271 | N/A | N/A |
+ | 8500 | 0.425 | 4.2918e-05 | 0.8306 | N/A | N/A |
+ | 9000 | 0.45 | 4.2501e-05 | 0.8561 | N/A | N/A |
+ | 9500 | 0.475 | 4.2085e-05 | 0.7851 | N/A | N/A |
+ | 10000 | 0.5 | 4.1668e-05 | 0.7841 | N/A | N/A |
+ | 10500 | 0.525 | 4.1251e-05 | 0.7678 | N/A | N/A |
+ | 11000 | 0.55 | 4.0835e-05 | 0.7538 | N/A | N/A |
+ | 11500 | 0.575 | 4.0418e-05 | 0.735 | N/A | N/A |
+ | 12000 | 0.6 | 4.0001e-05 | 0.774 | N/A | N/A |
+ | 12500 | 0.625 | 3.9585e-05 | 0.7368 | N/A | N/A |
+ | 13000 | 0.65 | 3.9168e-05 | 0.7435 | N/A | N/A |
+ | 13500 | 0.675 | 3.8751e-05 | 0.7035 | N/A | N/A |
+ | 14000 | 0.7 | 3.8335e-05 | 0.7552 | N/A | N/A |
+ | 14500 | 0.725 | 3.7918e-05 | 0.7443 | N/A | N/A |
+ | 15000 | 0.75 | 3.7501e-05 | 0.7461 | N/A | N/A |
+ | 15500 | 0.775 | 3.7085e-05 | 0.7352 | N/A | N/A |
+ | 16000 | 0.8 | 3.6668e-05 | 0.6946 | N/A | N/A |
+ | 16500 | 0.825 | 3.6252e-05 | 0.6939 | N/A | N/A |
+ | 17000 | 0.85 | 3.5835e-05 | 0.7509 | N/A | N/A |
+ | 17500 | 0.875 | 3.5418e-05 | 0.6992 | N/A | N/A |
+ | 18000 | 0.9 | 3.5002e-05 | 0.7043 | N/A | N/A |
+ | 18500 | 0.925 | 3.4585e-05 | 0.6977 | N/A | N/A |
+ | 19000 | 0.95 | 3.4168e-05 | 0.6952 | N/A | N/A |
+ | 19500 | 0.975 | 3.3752e-05 | 0.708 | N/A | N/A |
+ | 20000 | 1.0 | 3.3335e-05 | 0.6695 | N/A | N/A |
+ | 20001 | 1.0 | N/A | N/A | 0.6958 | 0.7876 |
+ | 20500 | 1.025 | 3.2918e-05 | 0.5363 | N/A | N/A |
+ | 21000 | 1.05 | 3.2502e-05 | 0.547 | N/A | N/A |
+ | 21500 | 1.075 | 3.2085e-05 | 0.5733 | N/A | N/A |
+ | 22000 | 1.1 | 3.1668e-05 | 0.5454 | N/A | N/A |
+ | 22500 | 1.125 | 3.1252e-05 | 0.5235 | N/A | N/A |
+ | 23000 | 1.15 | 3.0835e-05 | 0.5291 | N/A | N/A |
+ | 23500 | 1.175 | 3.0418e-05 | 0.5537 | N/A | N/A |
+ | 24000 | 1.2 | 3.0002e-05 | 0.555 | N/A | N/A |
+ | 24500 | 1.225 | 2.9585e-05 | 0.5338 | N/A | N/A |
+ | 25000 | 1.25 | 2.9169e-05 | 0.5615 | N/A | N/A |
+ | 25500 | 1.275 | 2.8752e-05 | 0.5155 | N/A | N/A |
+ | 26000 | 1.3 | 2.8335e-05 | 0.5353 | N/A | N/A |
+ | 26500 | 1.325 | 2.7919e-05 | 0.5317 | N/A | N/A |
+ | 27000 | 1.35 | 2.7502e-05 | 0.5429 | N/A | N/A |
+ | 27500 | 1.375 | 2.7085e-05 | 0.5311 | N/A | N/A |
+ | 28000 | 1.4 | 2.6669e-05 | 0.5345 | N/A | N/A |
+ | 28500 | 1.425 | 2.6252e-05 | 0.5287 | N/A | N/A |
+ | 29000 | 1.45 | 2.5835e-05 | 0.5204 | N/A | N/A |
+ | 29500 | 1.475 | 2.5419e-05 | 0.5121 | N/A | N/A |
+ | 30000 | 1.5 | 2.5002e-05 | 0.52 | N/A | N/A |
+ | 30500 | 1.525 | 2.4585e-05 | 0.5094 | N/A | N/A |
+ | 31000 | 1.55 | 2.4169e-05 | 0.5169 | N/A | N/A |
+ | 31500 | 1.575 | 2.3752e-05 | 0.5226 | N/A | N/A |
+ | 32000 | 1.6 | 2.3335e-05 | 0.5281 | N/A | N/A |
+ | 32500 | 1.625 | 2.2919e-05 | 0.5246 | N/A | N/A |
+ | 33000 | 1.65 | 2.2502e-05 | 0.532 | N/A | N/A |
+ | 33500 | 1.675 | 2.2086e-05 | 0.5068 | N/A | N/A |
+ | 34000 | 1.7 | 2.1669e-05 | 0.4971 | N/A | N/A |
+ | 34500 | 1.725 | 2.1252e-05 | 0.5122 | N/A | N/A |
+ | 35000 | 1.75 | 2.0836e-05 | 0.489 | N/A | N/A |
+ | 35500 | 1.775 | 2.0419e-05 | 0.479 | N/A | N/A |
+ | 36000 | 1.8 | 2.0002e-05 | 0.4919 | N/A | N/A |
+ | 36500 | 1.825 | 1.9586e-05 | 0.4974 | N/A | N/A |
+ | 37000 | 1.85 | 1.9169e-05 | 0.5045 | N/A | N/A |
+ | 37500 | 1.875 | 1.8752e-05 | 0.525 | N/A | N/A |
+ | 38000 | 1.9 | 1.8336e-05 | 0.4748 | N/A | N/A |
+ | 38500 | 1.925 | 1.7919e-05 | 0.4831 | N/A | N/A |
+ | 39000 | 1.95 | 1.7502e-05 | 0.5091 | N/A | N/A |
+ | 39500 | 1.975 | 1.7086e-05 | 0.4821 | N/A | N/A |
+ | 40000 | 2.0 | 1.6669e-05 | 0.4862 | N/A | N/A |
+ | 40002 | 2.0 | N/A | N/A | 0.7491 | 0.797 |
+ | 40500 | 2.025 | 1.6253e-05 | 0.357 | N/A | N/A |
+ | 41000 | 2.05 | 1.5836e-05 | 0.333 | N/A | N/A |
+ | 41500 | 2.075 | 1.5419e-05 | 0.374 | N/A | N/A |
+ | 42000 | 2.1 | 1.5003e-05 | 0.3698 | N/A | N/A |
+ | 42500 | 2.125 | 1.4586e-05 | 0.3759 | N/A | N/A |
+ | 43000 | 2.15 | 1.4169e-05 | 0.3543 | N/A | N/A |
+ | 43500 | 2.175 | 1.3753e-05 | 0.3695 | N/A | N/A |
+ | 44000 | 2.2 | 1.3336e-05 | 0.3385 | N/A | N/A |
+ | 44500 | 2.225 | 1.2919e-05 | 0.3583 | N/A | N/A |
+ | 45000 | 2.25 | 1.2503e-05 | 0.3445 | N/A | N/A |
+ | 45500 | 2.275 | 1.2086e-05 | 0.3575 | N/A | N/A |
+ | 46000 | 2.3 | 1.1669e-05 | 0.3382 | N/A | N/A |
+ | 46500 | 2.325 | 1.1253e-05 | 0.3732 | N/A | N/A |
+ | 47000 | 2.35 | 1.0836e-05 | 0.3454 | N/A | N/A |
+ | 47500 | 2.375 | 1.0419e-05 | 0.3563 | N/A | N/A |
+ | 48000 | 2.4 | 1.0003e-05 | 0.3302 | N/A | N/A |
+ | 48500 | 2.425 | 9.5862e-06 | 0.3421 | N/A | N/A |
+ | 49000 | 2.45 | 9.1695e-06 | 0.3119 | N/A | N/A |
+ | 49500 | 2.475 | 8.7529e-06 | 0.3578 | N/A | N/A |
+ | 50000 | 2.5 | 8.3362e-06 | 0.3584 | N/A | N/A |
+ | 50500 | 2.525 | 7.9196e-06 | 0.3142 | N/A | N/A |
+ | 51000 | 2.55 | 7.5030e-06 | 0.3124 | N/A | N/A |
+ | 51500 | 2.575 | 7.0863e-06 | 0.3262 | N/A | N/A |
+ | 52000 | 2.6 | 6.6697e-06 | 0.3072 | N/A | N/A |
+ | 52500 | 2.625 | 6.2530e-06 | 0.3274 | N/A | N/A |
+ | 53000 | 2.65 | 5.8364e-06 | 0.3131 | N/A | N/A |
+ | 53500 | 2.675 | 5.4197e-06 | 0.3281 | N/A | N/A |
+ | 54000 | 2.7 | 5.0031e-06 | 0.3108 | N/A | N/A |
+ | 54500 | 2.725 | 4.5864e-06 | 0.3189 | N/A | N/A |
+ | 55000 | 2.75 | 4.1698e-06 | 0.3367 | N/A | N/A |
+ | 55500 | 2.775 | 3.7531e-06 | 0.2969 | N/A | N/A |
+ | 56000 | 2.8 | 3.3365e-06 | 0.3332 | N/A | N/A |
+ | 56500 | 2.825 | 2.9199e-06 | 0.3197 | N/A | N/A |
+ | 57000 | 2.85 | 2.5032e-06 | 0.312 | N/A | N/A |
+ | 57500 | 2.875 | 2.0866e-06 | 0.3275 | N/A | N/A |
+ | 58000 | 2.9 | 1.6699e-06 | 0.2933 | N/A | N/A |
+ | 58500 | 2.925 | 1.2533e-06 | 0.3123 | N/A | N/A |
+ | 59000 | 2.95 | 8.3662e-07 | 0.3045 | N/A | N/A |
+ | 59500 | 2.975 | 4.1998e-07 | 0.2928 | N/A | N/A |
+ | 60000 | 3.0 | 3.3332e-09 | 0.3199 | N/A | N/A |
+ | 60003 | 3.0 | N/A | N/A | 0.8624 | 0.809 |
+
+ </details>
+
+
+ ## Framework Versions
+
+ - **Transformers:** 5.0.0.dev0
+ - **PyTorch:** 2.9.1+cu128
all_results.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "epoch": 3.0,
+   "eval_f1": 0.8089721345660946,
+   "eval_loss": 0.8624263405799866,
+   "eval_runtime": 11.4879,
+   "eval_samples": 20008,
+   "eval_samples_per_second": 1741.655,
+   "eval_steps_per_second": 217.707,
+   "total_flos": 7908189620438016.0,
+   "train_loss": 0.5708606184445773,
+   "train_runtime": 1693.148,
+   "train_samples": 160004,
+   "train_samples_per_second": 283.503,
+   "train_steps_per_second": 35.439
+ }
config.json ADDED
@@ -0,0 +1,83 @@
+ {
+   "architectures": [
+     "BertForSequenceClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "dtype": "float32",
+   "eos_token_id": 2,
+   "finetuning_task": "text-classification",
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 384,
+   "id2label": {
+     "0": "About (Org.)",
+     "1": "About (Personal)",
+     "2": "Academic Writing",
+     "3": "Audio Transcript",
+     "4": "Comment Section",
+     "5": "Content Listing",
+     "6": "Creative Writing",
+     "7": "Customer Support",
+     "8": "Documentation",
+     "9": "FAQ",
+     "10": "Knowledge Article",
+     "11": "Legal Notices",
+     "12": "Listicle",
+     "13": "News (Org.)",
+     "14": "News Article",
+     "15": "Nonfiction Writing",
+     "16": "Other/Unclassified",
+     "17": "Personal Blog",
+     "18": "Product Page",
+     "19": "Q&A Forum",
+     "20": "Spam / Ads",
+     "21": "Structured Data",
+     "22": "Truncated",
+     "23": "Tutorial",
+     "24": "User Review"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 1536,
+   "label2id": {
+     "About (Org.)": 0,
+     "About (Personal)": 1,
+     "Academic Writing": 2,
+     "Audio Transcript": 3,
+     "Comment Section": 4,
+     "Content Listing": 5,
+     "Creative Writing": 6,
+     "Customer Support": 7,
+     "Documentation": 8,
+     "FAQ": 9,
+     "Knowledge Article": 10,
+     "Legal Notices": 11,
+     "Listicle": 12,
+     "News (Org.)": 13,
+     "News Article": 14,
+     "Nonfiction Writing": 15,
+     "Other/Unclassified": 16,
+     "Personal Blog": 17,
+     "Product Page": 18,
+     "Q&A Forum": 19,
+     "Spam / Ads": 20,
+     "Structured Data": 21,
+     "Truncated": 22,
+     "Tutorial": 23,
+     "User Review": 24
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "problem_type": "single_label_classification",
+   "tokenizer_class": "XLMRobertaTokenizer",
+   "transformers_version": "5.0.0.dev0",
+   "type_vocab_size": 2,
+   "use_cache": false,
+   "vocab_size": 250037
+ }
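The `id2label` and `label2id` tables above are exact inverses of each other. A quick round-trip check on a small excerpt of the map (note that the JSON stores the `id2label` keys as strings; Transformers converts them to ints when the config is loaded, and ints are used here for brevity):

```python
# Excerpt of the id2label map from config.json (three of the 25 classes).
id2label = {0: "About (Org.)", 19: "Q&A Forum", 24: "User Review"}

# Rebuild the inverse map, as label2id does for the full config.
label2id = {name: idx for idx, name in id2label.items()}

# Round-trip: every id maps to a name that maps back to the same id.
assert all(label2id[id2label[i]] == i for i in id2label)
print(label2id["Q&A Forum"])  # 19
```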
eval_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "epoch": 3.0,
+   "eval_f1": 0.8089721345660946,
+   "eval_loss": 0.8624263405799866,
+   "eval_runtime": 11.4879,
+   "eval_samples": 20008,
+   "eval_samples_per_second": 1741.655,
+   "eval_steps_per_second": 217.707
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3bf98e043fd76deaa8531a9bb9f75c3be489d82acf28479c8e30af7c5f51cdfa
+ size 470677084
predict_results.txt ADDED
The diff for this file is too large to render. See raw diff
 
sentencepiece.bpe.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cfc8146abe2a0488e9e2a0c56de7952f7c11ab059eca145a0a727afce0db2865
+ size 5069051
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "cls_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:66e2c4647474659095b757711e8aef0583d58dbb50e3349958ebc460a9cf4977
+ size 17083065
tokenizer_config.json ADDED
@@ -0,0 +1,56 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<pad>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "250001": {
+       "content": "<mask>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "extra_special_tokens": {},
+   "mask_token": "<mask>",
+   "model_max_length": 512,
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "sp_model_kwargs": {},
+   "tokenizer_class": "XLMRobertaTokenizer",
+   "unk_token": "<unk>"
+ }
train_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "epoch": 3.0,
+   "total_flos": 7908189620438016.0,
+   "train_loss": 0.5708606184445773,
+   "train_runtime": 1693.148,
+   "train_samples": 160004,
+   "train_samples_per_second": 283.503,
+   "train_steps_per_second": 35.439
+ }
trainer_state.json ADDED
@@ -0,0 +1,910 @@
+ {
+   "best_global_step": 60003,
+   "best_metric": 0.8089721345660946,
+   "best_model_checkpoint": "./doc_type_v2_primary_model_multilingual-e5-small/checkpoint-60003",
+   "epoch": 3.0,
+   "eval_steps": 500,
+   "global_step": 60003,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "epoch": 0.024998750062496876,
+       "grad_norm": 17.081697463989258,
+       "learning_rate": 4.9584187457293806e-05,
+       "loss": 1.8537,
+       "step": 500
+     },
+     {
+       "epoch": 0.04999750012499375,
+       "grad_norm": 16.341184616088867,
+       "learning_rate": 4.9167541622918856e-05,
+       "loss": 1.3289,
+       "step": 1000
+     },
+     {
+       "epoch": 0.07499625018749062,
+       "grad_norm": 12.614828109741211,
+       "learning_rate": 4.875089578854391e-05,
+       "loss": 1.1698,
+       "step": 1500
+     },
+     {
+       "epoch": 0.0999950002499875,
+       "grad_norm": 17.94846534729004,
+       "learning_rate": 4.833424995416896e-05,
+       "loss": 1.0996,
+       "step": 2000
+     },
+     {
+       "epoch": 0.12499375031248437,
+       "grad_norm": 9.764547348022461,
+       "learning_rate": 4.7917604119794014e-05,
+       "loss": 1.0552,
+       "step": 2500
+     },
+     {
+       "epoch": 0.14999250037498124,
+       "grad_norm": 5.973393440246582,
+       "learning_rate": 4.7500958285419064e-05,
+       "loss": 1.0462,
+       "step": 3000
+     },
+     {
+       "epoch": 0.17499125043747812,
+       "grad_norm": 5.258781909942627,
+       "learning_rate": 4.7084312451044115e-05,
+       "loss": 1.0004,
+       "step": 3500
+     },
+     {
+       "epoch": 0.199990000499975,
+       "grad_norm": 5.401681423187256,
+       "learning_rate": 4.666766661666917e-05,
+       "loss": 0.9812,
+       "step": 4000
+     },
+     {
+       "epoch": 0.2249887505624719,
+       "grad_norm": 3.4015467166900635,
+       "learning_rate": 4.625102078229422e-05,
+       "loss": 0.9245,
+       "step": 4500
+     },
+     {
+       "epoch": 0.24998750062496874,
+       "grad_norm": 11.498674392700195,
+       "learning_rate": 4.583437494791927e-05,
+       "loss": 0.9282,
+       "step": 5000
+     },
+     {
+       "epoch": 0.2749862506874656,
+       "grad_norm": 6.841133117675781,
+       "learning_rate": 4.541772911354433e-05,
+       "loss": 0.9167,
+       "step": 5500
+     },
+     {
+       "epoch": 0.2999850007499625,
+       "grad_norm": 5.397707939147949,
+       "learning_rate": 4.500108327916937e-05,
+       "loss": 0.8886,
+       "step": 6000
+     },
+     {
+       "epoch": 0.3249837508124594,
+       "grad_norm": 7.148469924926758,
+       "learning_rate": 4.458443744479443e-05,
+       "loss": 0.8826,
+       "step": 6500
+     },
+     {
+       "epoch": 0.34998250087495625,
+       "grad_norm": 3.2729530334472656,
+       "learning_rate": 4.416779161041948e-05,
+       "loss": 0.8443,
+       "step": 7000
+     },
+     {
+       "epoch": 0.3749812509374531,
+       "grad_norm": 12.553752899169922,
+       "learning_rate": 4.375114577604453e-05,
+       "loss": 0.8374,
+       "step": 7500
+     },
+     {
+       "epoch": 0.39998000099995,
+       "grad_norm": 9.571837425231934,
+       "learning_rate": 4.333449994166959e-05,
+       "loss": 0.8271,
+       "step": 8000
+     },
+     {
+       "epoch": 0.42497875106244687,
+       "grad_norm": 11.265901565551758,
+       "learning_rate": 4.291785410729464e-05,
+       "loss": 0.8306,
+       "step": 8500
+     },
+     {
+       "epoch": 0.4499775011249438,
+       "grad_norm": 18.747684478759766,
+       "learning_rate": 4.250120827291969e-05,
+       "loss": 0.8561,
+       "step": 9000
+     },
+     {
+       "epoch": 0.47497625118744063,
+       "grad_norm": 7.2989726066589355,
+       "learning_rate": 4.208456243854474e-05,
+       "loss": 0.7851,
+       "step": 9500
+     },
+     {
+       "epoch": 0.4999750012499375,
+       "grad_norm": 21.371959686279297,
+       "learning_rate": 4.1667916604169796e-05,
+       "loss": 0.7841,
+       "step": 10000
+     },
+     {
+       "epoch": 0.5249737513124344,
+       "grad_norm": 19.508371353149414,
+       "learning_rate": 4.1251270769794846e-05,
+       "loss": 0.7678,
+       "step": 10500
+     },
+     {
+       "epoch": 0.5499725013749313,
+       "grad_norm": 5.09838342666626,
+       "learning_rate": 4.0834624935419896e-05,
+       "loss": 0.7538,
+       "step": 11000
+     },
+     {
+       "epoch": 0.5749712514374281,
+       "grad_norm": 6.288057804107666,
+       "learning_rate": 4.041797910104495e-05,
+       "loss": 0.735,
+       "step": 11500
+     },
+     {
+       "epoch": 0.599970001499925,
+       "grad_norm": 2.406168222427368,
+       "learning_rate": 4.000133326667e-05,
+       "loss": 0.774,
+       "step": 12000
+     },
+     {
+       "epoch": 0.6249687515624219,
+       "grad_norm": 11.135022163391113,
+       "learning_rate": 3.9584687432295054e-05,
+       "loss": 0.7368,
+       "step": 12500
+     },
+     {
+       "epoch": 0.6499675016249188,
+       "grad_norm": 16.766277313232422,
+       "learning_rate": 3.916804159792011e-05,
+       "loss": 0.7435,
+       "step": 13000
+     },
+     {
+       "epoch": 0.6749662516874156,
+       "grad_norm": 7.3794121742248535,
+       "learning_rate": 3.8751395763545154e-05,
+       "loss": 0.7035,
+       "step": 13500
+     },
+     {
+       "epoch": 0.6999650017499125,
+       "grad_norm": 13.058135032653809,
+       "learning_rate": 3.833474992917021e-05,
+       "loss": 0.7552,
+       "step": 14000
+     },
+     {
+       "epoch": 0.7249637518124094,
+       "grad_norm": 13.570932388305664,
+       "learning_rate": 3.791810409479526e-05,
+       "loss": 0.7443,
+       "step": 14500
+     },
+     {
+       "epoch": 0.7499625018749062,
+       "grad_norm": 16.705114364624023,
+       "learning_rate": 3.750145826042031e-05,
+       "loss": 0.7461,
+       "step": 15000
+     },
+     {
+       "epoch": 0.7749612519374032,
+       "grad_norm": 20.24770164489746,
+       "learning_rate": 3.708481242604537e-05,
+       "loss": 0.7352,
+       "step": 15500
+     },
+     {
+       "epoch": 0.7999600019999,
+       "grad_norm": 10.8892183303833,
+       "learning_rate": 3.666816659167042e-05,
+       "loss": 0.6946,
+       "step": 16000
+     },
+     {
+       "epoch": 0.8249587520623969,
+       "grad_norm": 24.564472198486328,
+       "learning_rate": 3.625152075729547e-05,
+       "loss": 0.6939,
+       "step": 16500
+     },
+     {
+       "epoch": 0.8499575021248937,
+       "grad_norm": 14.484394073486328,
+       "learning_rate": 3.583487492292053e-05,
+       "loss": 0.7509,
+       "step": 17000
+     },
+     {
+       "epoch": 0.8749562521873906,
+       "grad_norm": 11.327393531799316,
+       "learning_rate": 3.541822908854558e-05,
+       "loss": 0.6992,
+       "step": 17500
+     },
+     {
+       "epoch": 0.8999550022498876,
+       "grad_norm": 12.824069023132324,
+       "learning_rate": 3.500158325417063e-05,
+       "loss": 0.7043,
+       "step": 18000
+     },
+     {
+       "epoch": 0.9249537523123844,
+       "grad_norm": 1.3452341556549072,
+       "learning_rate": 3.458493741979568e-05,
+       "loss": 0.6977,
+       "step": 18500
+     },
+     {
+       "epoch": 0.9499525023748813,
+       "grad_norm": 7.985979080200195,
+       "learning_rate": 3.416829158542073e-05,
+       "loss": 0.6952,
+       "step": 19000
+     },
+     {
+       "epoch": 0.9749512524373781,
+       "grad_norm": 6.591372489929199,
+       "learning_rate": 3.3751645751045785e-05,
+       "loss": 0.708,
+       "step": 19500
+     },
+     {
+       "epoch": 0.999950002499875,
+       "grad_norm": 4.785042762756348,
+       "learning_rate": 3.3334999916670835e-05,
+       "loss": 0.6695,
+       "step": 20000
+     },
+     {
+       "epoch": 1.0,
+       "eval_f1": 0.7876339482882986,
+       "eval_loss": 0.6957715749740601,
+       "eval_runtime": 12.0347,
+       "eval_samples_per_second": 1662.524,
+       "eval_steps_per_second": 207.815,
+       "step": 20001
+     },
+     {
+       "epoch": 1.024948752562372,
+       "grad_norm": 15.502031326293945,
+       "learning_rate": 3.2918354082295885e-05,
+       "loss": 0.5363,
+       "step": 20500
+     },
+     {
+       "epoch": 1.0499475026248688,
+       "grad_norm": 0.9488680362701416,
+       "learning_rate": 3.2501708247920936e-05,
+       "loss": 0.547,
+       "step": 21000
+     },
+     {
+       "epoch": 1.0749462526873657,
+       "grad_norm": 4.085986614227295,
+       "learning_rate": 3.208506241354599e-05,
+       "loss": 0.5733,
+       "step": 21500
+     },
+     {
+       "epoch": 1.0999450027498625,
+       "grad_norm": 15.25266170501709,
+       "learning_rate": 3.166841657917104e-05,
+       "loss": 0.5454,
+       "step": 22000
+     },
+     {
+       "epoch": 1.1249437528123594,
+       "grad_norm": 11.815897941589355,
+       "learning_rate": 3.125177074479609e-05,
+       "loss": 0.5235,
+       "step": 22500
+     },
+     {
+       "epoch": 1.1499425028748562,
+       "grad_norm": 17.311704635620117,
+       "learning_rate": 3.083512491042115e-05,
+       "loss": 0.5291,
+       "step": 23000
+     },
+     {
+       "epoch": 1.174941252937353,
+       "grad_norm": 7.48703145980835,
+       "learning_rate": 3.0418479076046197e-05,
+       "loss": 0.5537,
+       "step": 23500
+     },
+     {
+       "epoch": 1.19994000299985,
+       "grad_norm": 0.3721858263015747,
+       "learning_rate": 3.000183324167125e-05,
+       "loss": 0.555,
+       "step": 24000
+     },
+     {
+       "epoch": 1.2249387530623468,
+       "grad_norm": 22.23200035095215,
+       "learning_rate": 2.9585187407296305e-05,
+       "loss": 0.5338,
+       "step": 24500
+     },
+     {
+       "epoch": 1.2499375031248436,
+       "grad_norm": 2.753875255584717,
+       "learning_rate": 2.9168541572921355e-05,
+       "loss": 0.5615,
+       "step": 25000
+     },
+     {
+       "epoch": 1.2749362531873407,
+       "grad_norm": 23.020252227783203,
+       "learning_rate": 2.875189573854641e-05,
+       "loss": 0.5155,
+       "step": 25500
+     },
+     {
+       "epoch": 1.2999350032498376,
+       "grad_norm": 31.79548454284668,
+       "learning_rate": 2.8335249904171456e-05,
+       "loss": 0.5353,
+       "step": 26000
+     },
+     {
+       "epoch": 1.3249337533123344,
+       "grad_norm": 0.2923097312450409,
+       "learning_rate": 2.7918604069796513e-05,
+       "loss": 0.5317,
+       "step": 26500
+     },
+     {
+       "epoch": 1.3499325033748313,
+       "grad_norm": 9.347312927246094,
+       "learning_rate": 2.7501958235421566e-05,
+       "loss": 0.5429,
+       "step": 27000
+     },
+     {
+       "epoch": 1.3749312534373281,
+       "grad_norm": 13.638419151306152,
+       "learning_rate": 2.7085312401046613e-05,
+       "loss": 0.5311,
+       "step": 27500
+     },
+     {
+       "epoch": 1.399930003499825,
+       "grad_norm": 19.09702491760254,
+       "learning_rate": 2.6668666566671667e-05,
+       "loss": 0.5345,
+       "step": 28000
+     },
+     {
+       "epoch": 1.4249287535623218,
+       "grad_norm": 0.6322915554046631,
+       "learning_rate": 2.6252020732296717e-05,
+       "loss": 0.5287,
+       "step": 28500
+     },
+     {
+       "epoch": 1.4499275036248187,
+       "grad_norm": 19.159151077270508,
+       "learning_rate": 2.583537489792177e-05,
+       "loss": 0.5204,
+       "step": 29000
+     },
+     {
+       "epoch": 1.4749262536873156,
+       "grad_norm": 0.7778434753417969,
+       "learning_rate": 2.5418729063546824e-05,
+       "loss": 0.5121,
+       "step": 29500
+     },
+     {
+       "epoch": 1.4999250037498126,
+       "grad_norm": 20.512577056884766,
+       "learning_rate": 2.5002083229171875e-05,
+       "loss": 0.52,
+       "step": 30000
+     },
+     {
+       "epoch": 1.5249237538123093,
+       "grad_norm": 8.87389087677002,
+       "learning_rate": 2.458543739479693e-05,
445
+ "loss": 0.5094,
446
+ "step": 30500
447
+ },
448
+ {
449
+ "epoch": 1.5499225038748063,
450
+ "grad_norm": 21.17337989807129,
451
+ "learning_rate": 2.416879156042198e-05,
452
+ "loss": 0.5169,
453
+ "step": 31000
454
+ },
455
+ {
456
+ "epoch": 1.574921253937303,
457
+ "grad_norm": 8.69658374786377,
458
+ "learning_rate": 2.3752145726047032e-05,
459
+ "loss": 0.5226,
460
+ "step": 31500
461
+ },
462
+ {
463
+ "epoch": 1.5999200039998,
464
+ "grad_norm": 1.2267570495605469,
465
+ "learning_rate": 2.3335499891672083e-05,
466
+ "loss": 0.5281,
467
+ "step": 32000
468
+ },
469
+ {
470
+ "epoch": 1.624918754062297,
471
+ "grad_norm": 14.757322311401367,
472
+ "learning_rate": 2.2918854057297136e-05,
473
+ "loss": 0.5246,
474
+ "step": 32500
475
+ },
476
+ {
477
+ "epoch": 1.6499175041247938,
478
+ "grad_norm": 6.141539096832275,
479
+ "learning_rate": 2.250220822292219e-05,
480
+ "loss": 0.532,
481
+ "step": 33000
482
+ },
483
+ {
484
+ "epoch": 1.6749162541872906,
485
+ "grad_norm": 15.90838623046875,
486
+ "learning_rate": 2.208556238854724e-05,
487
+ "loss": 0.5068,
488
+ "step": 33500
489
+ },
490
+ {
491
+ "epoch": 1.6999150042497875,
492
+ "grad_norm": 3.071305751800537,
493
+ "learning_rate": 2.166891655417229e-05,
494
+ "loss": 0.4971,
495
+ "step": 34000
496
+ },
497
+ {
498
+ "epoch": 1.7249137543122843,
499
+ "grad_norm": 5.962382793426514,
500
+ "learning_rate": 2.1252270719797344e-05,
501
+ "loss": 0.5122,
502
+ "step": 34500
503
+ },
504
+ {
505
+ "epoch": 1.7499125043747812,
506
+ "grad_norm": 5.9214911460876465,
507
+ "learning_rate": 2.0835624885422398e-05,
508
+ "loss": 0.489,
509
+ "step": 35000
510
+ },
511
+ {
512
+ "epoch": 1.7749112544372783,
513
+ "grad_norm": 8.897248268127441,
514
+ "learning_rate": 2.0418979051047448e-05,
515
+ "loss": 0.479,
516
+ "step": 35500
517
+ },
518
+ {
519
+ "epoch": 1.799910004499775,
520
+ "grad_norm": 16.03746223449707,
521
+ "learning_rate": 2.0002333216672502e-05,
522
+ "loss": 0.4919,
523
+ "step": 36000
524
+ },
525
+ {
526
+ "epoch": 1.824908754562272,
527
+ "grad_norm": 21.669597625732422,
528
+ "learning_rate": 1.9585687382297552e-05,
529
+ "loss": 0.4974,
530
+ "step": 36500
531
+ },
532
+ {
533
+ "epoch": 1.8499075046247686,
534
+ "grad_norm": 3.668883800506592,
535
+ "learning_rate": 1.9169041547922606e-05,
536
+ "loss": 0.5045,
537
+ "step": 37000
538
+ },
539
+ {
540
+ "epoch": 1.8749062546872657,
541
+ "grad_norm": 4.8963518142700195,
542
+ "learning_rate": 1.8752395713547656e-05,
543
+ "loss": 0.525,
544
+ "step": 37500
545
+ },
546
+ {
547
+ "epoch": 1.8999050047497625,
548
+ "grad_norm": 19.771133422851562,
549
+ "learning_rate": 1.833574987917271e-05,
550
+ "loss": 0.4748,
551
+ "step": 38000
552
+ },
553
+ {
554
+ "epoch": 1.9249037548122594,
555
+ "grad_norm": 20.69668960571289,
556
+ "learning_rate": 1.791910404479776e-05,
557
+ "loss": 0.4831,
558
+ "step": 38500
559
+ },
560
+ {
561
+ "epoch": 1.9499025048747562,
562
+ "grad_norm": 3.1742944717407227,
563
+ "learning_rate": 1.750245821042281e-05,
564
+ "loss": 0.5091,
565
+ "step": 39000
566
+ },
567
+ {
568
+ "epoch": 1.974901254937253,
569
+ "grad_norm": 0.3630174696445465,
570
+ "learning_rate": 1.7085812376047867e-05,
571
+ "loss": 0.4821,
572
+ "step": 39500
573
+ },
574
+ {
575
+ "epoch": 1.99990000499975,
576
+ "grad_norm": 10.60681438446045,
577
+ "learning_rate": 1.6669166541672918e-05,
578
+ "loss": 0.4862,
579
+ "step": 40000
580
+ },
581
+ {
582
+ "epoch": 2.0,
583
+ "eval_f1": 0.7969511995029,
584
+ "eval_loss": 0.7491226196289062,
585
+ "eval_runtime": 12.147,
586
+ "eval_samples_per_second": 1647.158,
587
+ "eval_steps_per_second": 205.895,
588
+ "step": 40002
589
+ },
590
+ {
+ "epoch": 2.024898755062247,
+ "grad_norm": 18.80621910095215,
+ "learning_rate": 1.6252520707297968e-05,
+ "loss": 0.357,
+ "step": 40500
+ },
+ {
+ "epoch": 2.049897505124744,
+ "grad_norm": 3.8872764110565186,
+ "learning_rate": 1.5835874872923022e-05,
+ "loss": 0.333,
+ "step": 41000
+ },
+ {
+ "epoch": 2.0748962551872405,
+ "grad_norm": 19.08934211730957,
+ "learning_rate": 1.5419229038548072e-05,
+ "loss": 0.374,
+ "step": 41500
+ },
+ {
+ "epoch": 2.0998950052497376,
+ "grad_norm": 10.449114799499512,
+ "learning_rate": 1.5002583204173126e-05,
+ "loss": 0.3698,
+ "step": 42000
+ },
+ {
+ "epoch": 2.1248937553122342,
+ "grad_norm": 6.660628318786621,
+ "learning_rate": 1.4585937369798178e-05,
+ "loss": 0.3759,
+ "step": 42500
+ },
+ {
+ "epoch": 2.1498925053747313,
+ "grad_norm": 9.793807983398438,
+ "learning_rate": 1.416929153542323e-05,
+ "loss": 0.3543,
+ "step": 43000
+ },
+ {
+ "epoch": 2.174891255437228,
+ "grad_norm": 20.215002059936523,
+ "learning_rate": 1.375264570104828e-05,
+ "loss": 0.3695,
+ "step": 43500
+ },
+ {
+ "epoch": 2.199890005499725,
+ "grad_norm": 20.272212982177734,
+ "learning_rate": 1.3335999866673335e-05,
+ "loss": 0.3385,
+ "step": 44000
+ },
+ {
+ "epoch": 2.2248887555622217,
+ "grad_norm": 12.721766471862793,
+ "learning_rate": 1.2919354032298386e-05,
+ "loss": 0.3583,
+ "step": 44500
+ },
+ {
+ "epoch": 2.2498875056247187,
+ "grad_norm": 11.291624069213867,
+ "learning_rate": 1.2502708197923438e-05,
+ "loss": 0.3445,
+ "step": 45000
+ },
+ {
+ "epoch": 2.274886255687216,
+ "grad_norm": 14.476861000061035,
+ "learning_rate": 1.208606236354849e-05,
+ "loss": 0.3575,
+ "step": 45500
+ },
+ {
+ "epoch": 2.2998850057497124,
+ "grad_norm": 8.20272159576416,
+ "learning_rate": 1.1669416529173542e-05,
+ "loss": 0.3382,
+ "step": 46000
+ },
+ {
+ "epoch": 2.3248837558122095,
+ "grad_norm": 2.5329763889312744,
+ "learning_rate": 1.1252770694798594e-05,
+ "loss": 0.3732,
+ "step": 46500
+ },
+ {
+ "epoch": 2.349882505874706,
+ "grad_norm": 0.9955561757087708,
+ "learning_rate": 1.0836124860423647e-05,
+ "loss": 0.3454,
+ "step": 47000
+ },
+ {
+ "epoch": 2.3748812559372032,
+ "grad_norm": 6.986231803894043,
+ "learning_rate": 1.0419479026048697e-05,
+ "loss": 0.3563,
+ "step": 47500
+ },
+ {
+ "epoch": 2.3998800059997,
+ "grad_norm": 21.110620498657227,
+ "learning_rate": 1.000283319167375e-05,
+ "loss": 0.3302,
+ "step": 48000
+ },
+ {
+ "epoch": 2.424878756062197,
+ "grad_norm": 0.08908458799123764,
+ "learning_rate": 9.586187357298801e-06,
+ "loss": 0.3421,
+ "step": 48500
+ },
+ {
+ "epoch": 2.4498775061246936,
+ "grad_norm": 13.181462287902832,
+ "learning_rate": 9.169541522923853e-06,
+ "loss": 0.3119,
+ "step": 49000
+ },
+ {
+ "epoch": 2.4748762561871906,
+ "grad_norm": 12.58914852142334,
+ "learning_rate": 8.752895688548907e-06,
+ "loss": 0.3578,
+ "step": 49500
+ },
+ {
+ "epoch": 2.4998750062496873,
+ "grad_norm": 39.47843551635742,
+ "learning_rate": 8.336249854173957e-06,
+ "loss": 0.3584,
+ "step": 50000
+ },
730
+ {
+ "epoch": 2.5248737563121844,
+ "grad_norm": 4.305168628692627,
+ "learning_rate": 7.919604019799011e-06,
+ "loss": 0.3142,
+ "step": 50500
+ },
+ {
+ "epoch": 2.5498725063746814,
+ "grad_norm": 0.7413849830627441,
+ "learning_rate": 7.502958185424062e-06,
+ "loss": 0.3124,
+ "step": 51000
+ },
+ {
+ "epoch": 2.574871256437178,
+ "grad_norm": 1.338671326637268,
+ "learning_rate": 7.086312351049114e-06,
+ "loss": 0.3262,
+ "step": 51500
+ },
+ {
+ "epoch": 2.599870006499675,
+ "grad_norm": 26.348230361938477,
+ "learning_rate": 6.669666516674167e-06,
+ "loss": 0.3072,
+ "step": 52000
+ },
+ {
+ "epoch": 2.624868756562172,
+ "grad_norm": 38.16984558105469,
+ "learning_rate": 6.253020682299218e-06,
+ "loss": 0.3274,
+ "step": 52500
+ },
+ {
+ "epoch": 2.649867506624669,
+ "grad_norm": 13.00293254852295,
+ "learning_rate": 5.83637484792427e-06,
+ "loss": 0.3131,
+ "step": 53000
+ },
+ {
+ "epoch": 2.6748662566871655,
+ "grad_norm": 3.519160270690918,
+ "learning_rate": 5.419729013549323e-06,
+ "loss": 0.3281,
+ "step": 53500
+ },
+ {
+ "epoch": 2.6998650067496626,
+ "grad_norm": 15.743597984313965,
+ "learning_rate": 5.003083179174375e-06,
+ "loss": 0.3108,
+ "step": 54000
+ },
+ {
+ "epoch": 2.7248637568121596,
+ "grad_norm": 20.438329696655273,
+ "learning_rate": 4.586437344799427e-06,
+ "loss": 0.3189,
+ "step": 54500
+ },
+ {
+ "epoch": 2.7498625068746563,
+ "grad_norm": 45.14103317260742,
+ "learning_rate": 4.169791510424479e-06,
+ "loss": 0.3367,
+ "step": 55000
+ },
+ {
+ "epoch": 2.774861256937153,
+ "grad_norm": 3.860975980758667,
+ "learning_rate": 3.7531456760495313e-06,
+ "loss": 0.2969,
+ "step": 55500
+ },
+ {
+ "epoch": 2.79986000699965,
+ "grad_norm": 0.40173372626304626,
+ "learning_rate": 3.3364998416745833e-06,
+ "loss": 0.3332,
+ "step": 56000
+ },
+ {
+ "epoch": 2.824858757062147,
+ "grad_norm": 2.1133482456207275,
+ "learning_rate": 2.9198540072996353e-06,
+ "loss": 0.3197,
+ "step": 56500
+ },
+ {
+ "epoch": 2.8498575071246437,
+ "grad_norm": 25.709867477416992,
+ "learning_rate": 2.5032081729246873e-06,
+ "loss": 0.312,
+ "step": 57000
+ },
+ {
+ "epoch": 2.8748562571871408,
+ "grad_norm": 22.588973999023438,
+ "learning_rate": 2.0865623385497392e-06,
+ "loss": 0.3275,
+ "step": 57500
+ },
+ {
+ "epoch": 2.8998550072496374,
+ "grad_norm": 2.185502529144287,
+ "learning_rate": 1.6699165041747914e-06,
+ "loss": 0.2933,
+ "step": 58000
+ },
+ {
+ "epoch": 2.9248537573121345,
+ "grad_norm": 12.381799697875977,
+ "learning_rate": 1.2532706697998434e-06,
+ "loss": 0.3123,
+ "step": 58500
+ },
+ {
+ "epoch": 2.949852507374631,
+ "grad_norm": 0.39924994111061096,
+ "learning_rate": 8.366248354248955e-07,
+ "loss": 0.3045,
+ "step": 59000
+ },
+ {
+ "epoch": 2.974851257437128,
+ "grad_norm": 16.00220489501953,
+ "learning_rate": 4.199790010499475e-07,
+ "loss": 0.2928,
+ "step": 59500
+ },
+ {
+ "epoch": 2.9998500074996253,
+ "grad_norm": 2.6532301902770996,
+ "learning_rate": 3.3331666749995837e-09,
+ "loss": 0.3199,
+ "step": 60000
+ },
+ {
+ "epoch": 3.0,
+ "eval_f1": 0.8089721345660946,
+ "eval_loss": 0.8624263405799866,
+ "eval_runtime": 10.7103,
+ "eval_samples_per_second": 1868.104,
+ "eval_steps_per_second": 233.513,
+ "step": 60003
+ },
+ {
+ "epoch": 3.0,
+ "step": 60003,
+ "total_flos": 7908189620438016.0,
+ "train_loss": 0.5708606184445773,
+ "train_runtime": 1693.148,
+ "train_samples_per_second": 283.503,
+ "train_steps_per_second": 35.439
+ }
+ ],
+ "logging_steps": 500,
+ "max_steps": 60003,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 3,
+ "save_steps": 500,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": true
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 7908189620438016.0,
+ "train_batch_size": 8,
+ "trial_name": null,
+ "trial_params": null
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a9dc91e5417def46237bba4ce683908f554f360568df4945209dc9f816a43932
+ size 5201