---
license: mit
language:
- en
pipeline_tag: text-classification
---

# EmoBERTv2 Model
This model card is a work in progress and will be completed in the future (dataset upload pending, etc.).
## Model Description
EmoBERTv2 is an emotion text classification model trained on a large dataset of English social media posts. The model is fine-tuned from [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny). EmoBERTv2 can be used either for further fine-tuning or directly in real-time emotion prediction applications.
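For real-time prediction, the model could be loaded through the `transformers` pipeline API once the checkpoint is published. A minimal sketch; the repository ID `"EmoBERTv2"` below is a placeholder, since this card does not yet state the final Hub path:

```python
def load_emotion_classifier(model_id: str = "EmoBERTv2"):
    """Build a text-classification pipeline for EmoBERTv2.

    The default model_id is a placeholder; substitute the actual Hub
    repository path once the checkpoint is published.
    """
    # Imported lazily so the sketch can be read without downloading weights.
    from transformers import pipeline
    return pipeline("text-classification", model=model_id)

# Usage (requires the published checkpoint and an internet connection):
# classifier = load_emotion_classifier()
# classifier("I can't believe how well this turned out!")
```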
## Datasets
This model was trained on the [Dataset Name] dataset, an aggregation of many datasets produced through relabeling and data subsetting. The dataset has 9 labels: joy, sad, love, anger, disgust, surprise, neutral, fear, and worry.
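In application code, the classifier's 9-way output has to be mapped back to these label names. A minimal post-processing sketch in plain Python; the index order below is an assumption for illustration, not the model's confirmed `id2label` mapping:

```python
import math

# The nine labels listed above; this index order is an assumption for
# illustration, not the model's confirmed id2label mapping.
ID2LABEL = ["joy", "sad", "love", "anger", "disgust",
            "surprise", "neutral", "fear", "worry"]

def softmax(logits):
    """Convert raw classifier logits into probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_label(logits):
    """Map a 9-way logit vector to its most likely emotion label."""
    probs = softmax(logits)
    return ID2LABEL[probs.index(max(probs))]

# Hypothetical logits for one post; index 3 ("anger") has the largest value.
example_logits = [0.1, -1.2, 0.0, 4.5, -0.3, 0.2, 1.1, -0.8, 0.0]
```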
## Training Procedure
EmoBERTv2 was fine-tuned from [Base Model Name] with specific hyperparameters [List Hyperparameters]. Training involved [X] epochs, using a learning rate of [Y].
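Since the exact hyperparameters are still pending above, the following sketch only illustrates the general shape of such a fine-tuning run with the `transformers` Trainer API; every numeric value shown is hypothetical, not the configuration actually used:

```python
def build_trainer(train_dataset, model_id: str = "prajjwal1/bert-tiny"):
    """Assemble a Trainer for 9-way emotion classification.

    All hyperparameter values below are hypothetical placeholders, not
    the settings used to train EmoBERTv2.
    """
    # Imported lazily so the sketch can be read without the heavy dependency.
    from transformers import (
        AutoModelForSequenceClassification,
        Trainer,
        TrainingArguments,
    )

    model = AutoModelForSequenceClassification.from_pretrained(
        model_id, num_labels=9
    )
    args = TrainingArguments(
        output_dir="emobertv2-checkpoints",
        num_train_epochs=3,               # hypothetical
        learning_rate=5e-5,               # hypothetical
        per_device_train_batch_size=32,   # hypothetical
    )
    return Trainer(model=model, args=args, train_dataset=train_dataset)
```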
## Intended Use
This model is intended for emotion classification in [specific domains or general use]. It should be used as a tool for [Specify Applications].
## Performance
EmoBERTv2 achieves an accuracy of 86.17% on the [Test Dataset Name] test set. For detailed performance metrics, refer to [Link to Performance Metrics].
## Bias and Fairness
While efforts have been made to reduce bias, users should be aware of potential biases in the training data. It is advisable to evaluate the model in the specific context in which it will be deployed.
## Licensing and Usage
EmoBERTv2 is released under the MIT License and can be freely used as outlined in the license.
## Other Model Variations
Additional variations of EmoBERTv2 include [List Variations]. These variations offer different trade-offs in terms of size, speed, and performance.