---
license: mit
---
## BERT-based Text Classification Model

This model is a fine-tuned version of the bert-base-uncased model, specifically adapted for text classification across a diverse set of categories. The model has been trained on a dataset collected from multiple sources, including the News Category Dataset on Kaggle and various other websites.
The model classifies text into one of the following 12 categories:
* Food
* Videogames & Shows
* Kids and fun
* Homestyle
* Travel
* Health
* Charity
* Electronics & Technology
* Sports
* Cultural & Music
* Education
* Convenience

The model achieves an accuracy of 0.721459, an F1 score of 0.659451, a precision of 0.707620, and a recall of 0.635155.
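Below is a minimal inference sketch using the Hugging Face transformers library. The repository id `your-username/bert-text-classifier` is a placeholder for this model's actual path, and the label mapping is assumed to follow the category list above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repository id; substitute this model's actual path.
MODEL_ID = "your-username/bert-text-classifier"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

text = "The local food bank is collecting canned goods for families in need."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits

# id2label comes from the model config; it is assumed to map indices
# to the 12 category names listed above.
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```

The same result can be obtained in one call with `pipeline("text-classification", model=MODEL_ID)`, which wraps the tokenization and argmax steps shown above.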
## Model Architecture

The model uses the BertForSequenceClassification architecture. It has been fine-tuned on the dataset described above, with the following key configuration parameters:
* Hidden size: 768
* Number of attention heads: 12
* Number of hidden layers: 12
* Max position embeddings: 512
* Type vocab size: 2
* Vocab size: 30522
The model uses the GELU activation function in its hidden layers and applies dropout with a probability of 0.1 to the attention probabilities to prevent overfitting.
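As a sketch, these values match the standard bert-base BertConfig; the snippet below reconstructs an equivalent configuration. The `num_labels=12` setting is an assumption taken from the category list above rather than a documented parameter.

```python
from transformers import BertConfig, BertForSequenceClassification

# Configuration matching the parameters listed above.
config = BertConfig(
    hidden_size=768,
    num_attention_heads=12,
    num_hidden_layers=12,
    max_position_embeddings=512,
    type_vocab_size=2,
    vocab_size=30522,
    hidden_act="gelu",                 # GELU activation in hidden layers
    attention_probs_dropout_prob=0.1,  # dropout on attention probabilities
    num_labels=12,                     # assumed from the 12-category list
)

# Builds a randomly initialized model with this architecture; the
# fine-tuned weights would be loaded from the published checkpoint.
model = BertForSequenceClassification(config)
```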