---
library_name: peft
license: bigcode-openrail-m
base_model: bigcode/starcoderbase-1b
tags:
- generated_from_trainer
model-index:
- name: EMS_tb_code_100-starcoder-lora-batch_5_2000
  results: []
---

# EMS_tb_code_100-starcoder-lora-batch_5_2000

This model is a LoRA adapter fine-tuned from [bigcode/starcoderbase-1b](https://huggingface.co/bigcode/starcoderbase-1b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0086

## Model description

More information needed. Per the card metadata, this repository contains a PEFT LoRA adapter for `bigcode/starcoderbase-1b`; a loading sketch follows.
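
The snippet below is a minimal sketch for loading the adapter on top of the base model and generating a completion. The adapter repo id shown is a hypothetical placeholder derived from the model name; substitute the actual repo id or a local path.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "bigcode/starcoderbase-1b"
adapter_id = "EMS_tb_code_100-starcoder-lora-batch_5_2000"  # hypothetical path; substitute your own

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)

# Attach the LoRA adapter to the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For deployment without a runtime PEFT dependency, the adapter weights can be folded into the base model with `model.merge_and_unload()`.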

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure
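
The LoRA settings themselves (rank, alpha, dropout, target modules) are not recorded in this card; the actual values live in the adapter's `adapter_config.json`. The following is only a representative sketch, with assumed values typical for GPTBigCode-family models such as StarCoder:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Every value below is an illustrative assumption, not the recorded configuration.
lora_config = LoraConfig(
    r=16,                                         # assumed rank
    lora_alpha=32,                                # assumed scaling factor
    lora_dropout=0.05,                            # assumed dropout
    target_modules=["c_attn", "c_proj", "c_fc"],  # common GPTBigCode projection names
    task_type="CAUSAL_LM",
)

base_model = AutoModelForCausalLM.from_pretrained("bigcode/starcoderbase-1b")
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```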

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.0005
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 2000
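
These settings map onto `transformers.TrainingArguments` roughly as follows. This is a reconstruction sketch, not the original training script; `output_dir` and the evaluation cadence are assumptions (the cadence is inferred from the 100-step validation intervals in the results table below):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="EMS_tb_code_100-starcoder-lora-batch_5_2000",  # assumed
    learning_rate=5e-4,
    per_device_train_batch_size=5,
    per_device_eval_batch_size=5,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=30,
    max_steps=2000,
    eval_strategy="steps",  # assumed; validation loss is reported every 100 steps
    eval_steps=100,         # assumed
    logging_steps=100,      # assumed
)
```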

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3244 | 0.05 | 100 | 0.1934 |
| 0.0814 | 0.10 | 200 | 0.0410 |
| 0.0424 | 0.15 | 300 | 0.0251 |
| 0.0353 | 0.20 | 400 | 0.0195 |
| 0.0289 | 0.25 | 500 | 0.0172 |
| 0.0249 | 0.30 | 600 | 0.0153 |
| 0.0223 | 0.35 | 700 | 0.0143 |
| 0.0204 | 0.40 | 800 | 0.0134 |
| 0.0187 | 0.45 | 900 | 0.0124 |
| 0.0167 | 0.50 | 1000 | 0.0119 |
| 0.0147 | 0.55 | 1100 | 0.0110 |
| 0.0143 | 0.60 | 1200 | 0.0105 |
| 0.0131 | 0.65 | 1300 | 0.0100 |
| 0.0139 | 0.70 | 1400 | 0.0097 |
| 0.0126 | 0.75 | 1500 | 0.0093 |
| 0.0114 | 0.80 | 1600 | 0.0090 |
| 0.0116 | 0.85 | 1700 | 0.0088 |
| 0.0117 | 0.90 | 1800 | 0.0087 |
| 0.0100 | 0.95 | 1900 | 0.0086 |
| 0.0098 | 1.00 | 2000 | 0.0086 |

### Framework versions

- PEFT 0.14.0
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.20.3