---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: petals-team/falcon-rw-1b
model-index:
- name: GenAI-task2-ModelB
  results: []
---

# GenAI-task2-ModelB

This model is a fine-tuned version of [petals-team/falcon-rw-1b](https://huggingface.co/petals-team/falcon-rw-1b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0712
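
This repository holds a PEFT adapter rather than full model weights, so it is loaded on top of the base model. Below is a minimal inference sketch; `your-username/GenAI-task2-ModelB` is a hypothetical Hub repo id to replace with the actual one:

```python
# Minimal usage sketch (assumption: the adapter repo id is a placeholder).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "petals-team/falcon-rw-1b"
adapter_id = "your-username/GenAI-task2-ModelB"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(base_model_id)

# Attach the fine-tuned adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

inputs = tokenizer("Hello, my name is", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```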

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the reconstruction sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 2
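
The `trl` and `sft` tags suggest the run used TRL's `SFTTrainer`. The sketch below reconstructs the configuration from the list above; it is not the original script, and the dataset, LoRA configuration, output directory, and `SFTTrainer` keyword arguments (TRL API contemporary with Transformers 4.40) are assumptions:

```python
# Reconstruction sketch: hyperparameters come from the list above;
# the dataset, LoRA config, and output directory are assumed placeholders.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_model_id = "petals-team/falcon-rw-1b"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id)

training_args = TrainingArguments(
    output_dir="GenAI-task2-ModelB",   # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=2,     # 2 x 2 accumulation = total batch size 4
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,
    num_train_epochs=2,
    # Adam betas=(0.9, 0.999) and epsilon=1e-8 are the trainer defaults.
)

# Placeholder dataset: the card does not identify the training data.
train_dataset = load_dataset("json", data_files="train.jsonl")["train"]

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    train_dataset=train_dataset,
    dataset_text_field="text",                      # assumes a "text" column
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # assumed LoRA defaults
)
trainer.train()
```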

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4819 | 0.05 | 20  | 1.5761 |
| 1.6396 | 0.1  | 40  | 1.4181 |
| 1.4715 | 0.15 | 60  | 1.3053 |
| 1.2372 | 0.2  | 80  | 1.2440 |
| 1.3006 | 0.25 | 100 | 1.2091 |
| 1.117  | 0.3  | 120 | 1.1826 |
| 1.1284 | 0.35 | 140 | 1.1691 |
| 1.1199 | 0.4  | 160 | 1.1582 |
| 1.1853 | 0.45 | 180 | 1.1457 |
| 1.1308 | 0.5  | 200 | 1.1411 |
| 1.0031 | 0.55 | 220 | 1.1288 |
| 1.1332 | 0.6  | 240 | 1.1233 |
| 1.1182 | 0.65 | 260 | 1.1185 |
| 1.0737 | 0.7  | 280 | 1.1131 |
| 1.1858 | 0.75 | 300 | 1.1078 |
| 1.0432 | 0.8  | 320 | 1.1026 |
| 1.0895 | 0.85 | 340 | 1.0983 |
| 1.1091 | 0.9  | 360 | 1.0949 |
| 1.0866 | 0.95 | 380 | 1.0927 |
| 1.1613 | 1.0  | 400 | 1.0955 |
| 1.0328 | 1.05 | 420 | 1.0861 |
| 1.0603 | 1.1  | 440 | 1.0842 |
| 1.0627 | 1.15 | 460 | 1.0826 |
| 0.9571 | 1.2  | 480 | 1.0802 |
| 1.0478 | 1.25 | 500 | 1.0808 |
| 1.0482 | 1.3  | 520 | 1.0777 |
| 1.0552 | 1.35 | 540 | 1.0770 |
| 1.0545 | 1.4  | 560 | 1.0778 |
| 0.9966 | 1.45 | 580 | 1.0750 |
| 1.0967 | 1.5  | 600 | 1.0747 |
| 1.0334 | 1.55 | 620 | 1.0736 |
| 1.0981 | 1.6  | 640 | 1.0726 |
| 1.016  | 1.65 | 660 | 1.0726 |
| 1.0358 | 1.7  | 680 | 1.0718 |
| 1.0838 | 1.75 | 700 | 1.0718 |
| 1.0066 | 1.8  | 720 | 1.0715 |
| 1.1167 | 1.85 | 740 | 1.0713 |
| 1.0809 | 1.9  | 760 | 1.0713 |
| 1.0526 | 1.95 | 780 | 1.0712 |
| 1.1084 | 2.0  | 800 | 1.0712 |
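
The validation loss decreases steadily through the first epoch and plateaus at about 1.071 over the course of the second.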

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1