---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-ia-checkpoint
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# bert-ia-checkpoint

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7216
- Accuracy: 0.7229
- F1 Macro: 0.6963
- Precision Macro: 0.7200
- Recall Macro: 0.6916
- AUC: 0.7626
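
For reference, a minimal inference sketch is shown below. It assumes the checkpoint was saved with a sequence-classification head and that `bert-ia-checkpoint` is the repo id or local path; the label names depend on the (unspecified) training data.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical path: replace with the actual repo id or local checkpoint directory.
checkpoint = "bert-ia-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

inputs = tokenizer("Example input sentence to classify.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Predicted class index; label names come from the (unspecified) training data.
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```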

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch_fused with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
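
As a sketch, the listed hyperparameters correspond roughly to the following `TrainingArguments`; the `output_dir` and the evaluation strategy are assumptions not recorded in this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-ia-checkpoint",  # assumption; not recorded in the card
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    eval_strategy="epoch",  # assumption, implied by the per-epoch results below
)
```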

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | AUC    |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------------:|:------------:|:------:|
| No log        | 1.0   | 79   | 0.6736          | 0.7261   | 0.7028   | 0.7210          | 0.6981       | 0.7428 |
| No log        | 2.0   | 158  | 0.8024          | 0.7006   | 0.6975   | 0.6995          | 0.7070       | 0.7566 |
| No log        | 3.0   | 237  | 0.9896          | 0.7389   | 0.7226   | 0.7307          | 0.7189       | 0.7613 |
| No log        | 4.0   | 316  | 1.3463          | 0.7229   | 0.7032   | 0.7145          | 0.6992       | 0.7444 |
| No log        | 5.0   | 395  | 1.4706          | 0.7357   | 0.7246   | 0.7256          | 0.7238       | 0.7536 |
| No log        | 6.0   | 474  | 1.6432          | 0.7420   | 0.7264   | 0.7339          | 0.7228       | 0.7518 |
| 0.176         | 7.0   | 553  | 1.7216          | 0.7229   | 0.6963   | 0.7200          | 0.6916       | 0.7626 |
| 0.176         | 8.0   | 632  | 1.7837          | 0.7357   | 0.7078   | 0.7383          | 0.7023       | 0.7596 |
| 0.176         | 9.0   | 711  | 1.7627          | 0.7325   | 0.7129   | 0.7256          | 0.7085       | 0.7611 |
| 0.176         | 10.0  | 790  | 1.7560          | 0.7357   | 0.7188   | 0.7275          | 0.7149       | 0.7610 |
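
The headline metrics at the top of this card match the epoch 7 row, which suggests the published checkpoint was taken from that point rather than from the final epoch. The per-epoch metrics in the table could be produced by a `compute_metrics` function along the following lines, assuming a multi-class task, scikit-learn with macro averaging, and one-vs-rest AUC; the exact implementation used during training is not recorded here.

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    f1_score,
    precision_score,
    recall_score,
    roc_auc_score,
)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Softmax probabilities for the multi-class AUC (one-vs-rest is an assumption).
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    return {
        "accuracy": accuracy_score(labels, predictions),
        "f1_macro": f1_score(labels, predictions, average="macro"),
        "precision_macro": precision_score(labels, predictions, average="macro"),
        "recall_macro": recall_score(labels, predictions, average="macro"),
        "auc": roc_auc_score(labels, probs, multi_class="ovr", average="macro"),
    }
```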

### Framework versions

- Transformers 4.57.1
- PyTorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.1