Instructions to use SajilAwale/FunnyModel with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use SajilAwale/FunnyModel with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="SajilAwale/FunnyModel")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("SajilAwale/FunnyModel")
model = AutoModelForSequenceClassification.from_pretrained("SajilAwale/FunnyModel")
```

- Notebooks
- Google Colab
- Kaggle
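When loading the model directly rather than through a pipeline, `AutoModelForSequenceClassification` returns raw logits that still need to be converted into label probabilities. A minimal sketch of that conversion, using made-up logit values for illustration (the model's actual label set and outputs are not shown here):

```python
import math

# Hypothetical raw logits for one input, as they would appear in
# outputs.logits from a sequence-classification model.
# The values below are made up for illustration.
logits = [2.0, -1.0, 0.5]

# Softmax turns logits into a probability distribution over the labels.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# The predicted label is the index with the highest probability;
# model.config.id2label would map it back to a label name.
predicted_class_id = probs.index(max(probs))
```

With a pipeline, this step is handled internally and the result comes back as a label/score pair.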
update readme

README.md CHANGED

```diff
@@ -22,6 +22,7 @@ This model was fine tuned to classify if a joke is humorous, offensive and what
 - 10% sample of r/Jokes dataset from https://github.com/orionw/rJokesData (500k)
 
 ## Dataset
+- Can be found at https://huggingface.co/datasets/SajilAwale/FunnyData/
 - Total Data Size: 573,410
 - Train Data Size: 90% of 10% of total size
 - Validation Data Size: 10% of 10% of total size
```
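The README describes the split in relative terms (a 10% sample, then a 90/10 train/validation split). A sketch of the implied sizes, assuming simple truncating integer division (the exact rounding used in the original data pipeline is not stated):

```python
# Figures from the README; rounding behavior is an assumption.
total = 573_410          # total r/Jokes dataset size

sample = total // 10     # 10% sample used for fine-tuning
train = sample * 9 // 10 # train split: 90% of the sample
val = sample - train     # validation split: the remaining 10%
```

So the model is trained on roughly 51,600 jokes and validated on roughly 5,700, depending on rounding.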