---
license: apache-2.0
language:
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
tags:
- customer-service-tickets
- github-issues
- bart-large-mnli
- zero-shot-classification
- NLP
widget:
- text: "Sign up form is not working"
  example_title: "Example 1"
- text: "json and yaml support"
  example_title: "Example 2"
- text: "fullscreen and tabs media key don't do what they should"
  example_title: "Example 3"
---

# GitHub issues classifier (using zero-shot classification)

Predicts whether a statement is a feature request, an issue/bug, or a question.

This model was trained using the [**Zero-shot classifier distillation**](https://github.com/huggingface/transformers/tree/main/examples/research_projects/zero-shot-distillation) method, with [BART-large-mnli](https://huggingface.co/facebook/bart-large-mnli) as the teacher model, to train a classifier on GitHub issues from the [GitHub Issues Prediction dataset](https://www.kaggle.com/datasets/anmolkumar/github-bugs-prediction).
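Zero-shot classification with an NLI model works by scoring each candidate label's hypothesis for entailment against the input text, then normalizing the entailment scores across labels. A minimal sketch of that final normalization step (the logit values below are made up for illustration):

```python
import numpy as np

def zero_shot_scores(entailment_logits):
    """Softmax over per-label entailment logits, as the single-label
    zero-shot setup does when comparing candidate labels."""
    logits = np.asarray(entailment_logits, dtype=float)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Made-up entailment logits for ["issue", "feature request", "question"]
scores = zero_shot_scores([2.1, 0.3, -0.8])
```

The label with the highest normalized score is taken as the prediction.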

## Labels

As per the dataset's Kaggle competition, the classifier predicts whether an issue is a bug, a feature, or a question. After experimenting with different labels before training, I used a different label mapping that yielded better predictions (see the notebook [here](https://www.kaggle.com/code/antoinemacia/zero-shot-classifier-for-bug-analysis/edit) for details). The labels are:

* issue
* feature request
* question

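The remapping from the dataset's original labels to the set above can be sketched as follows (the dictionary and helper below are hypothetical, not code from this repo):

```python
# Hypothetical mapping from the Kaggle dataset's original labels
# to the label set that yielded better zero-shot predictions.
LABEL_MAP = {
    "bug": "issue",
    "feature": "feature request",
    "question": "question",
}

def remap(original_label: str) -> str:
    """Translate an original dataset label to this model's label set."""
    return LABEL_MAP[original_label.lower()]
```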

## Training data

* 15k GitHub issue titles ("unlabeled_titles_simple.txt")
* Hypothesis template used: "This request is a {}"
* Teacher model used: valhalla/distilbart-mnli-12-1
* Student model used: distilbert-base-uncased
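During distillation, the hypothesis template is filled in once per candidate label to form the NLI hypotheses the teacher scores. A sketch of that step, using the template and labels from this card:

```python
TEMPLATE = "This request is a {}"
LABELS = ["issue", "feature request", "question"]

def build_hypotheses(template, labels):
    """One NLI hypothesis per candidate label."""
    return [template.format(label) for label in labels]

hypotheses = build_hypotheses(TEMPLATE, LABELS)
```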

## Results

Agreement of student and teacher predictions: **94.82%**
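The agreement figure is the fraction of examples where the student's top label matches the teacher's. A sketch of that metric on hypothetical probability outputs (the arrays below are made up):

```python
import numpy as np

def agreement(student_probs, teacher_probs):
    """Fraction of examples where student and teacher argmax agree."""
    s = np.argmax(np.asarray(student_probs), axis=1)
    t = np.argmax(np.asarray(teacher_probs), axis=1)
    return float((s == t).mean())

# Made-up predictions over 3 labels for 4 examples
student = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4], [0.5, 0.4, 0.1]]
teacher = [[0.6, 0.3, 0.1], [0.2, 0.7, 0.1], [0.2, 0.5, 0.3], [0.6, 0.3, 0.1]]
```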

See [this notebook](https://www.kaggle.com/code/antoinemacia/zero-shot-classifier-for-bug-analysis/edit) for more info on the feature engineering choices made.

## How to train using your own dataset

* Download the training dataset from https://www.kaggle.com/datasets/anmolkumar/github-bugs-prediction
* Modify and run convert.py, updating the paths, to convert the dataset to a CSV
* Run distill.py with the CSV file (see [here](https://github.com/huggingface/transformers/tree/main/examples/research_projects/zero-shot-distillation) for more info)
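The exact contents of convert.py aren't shown here; below is a minimal sketch of the kind of conversion it performs, assuming the Kaggle dump is a list of JSON records with a "title" field (the field name is an assumption, not the real schema):

```python
import csv
import io
import json

def titles_to_csv(json_text: str) -> str:
    """Extract issue titles from a JSON dump into a one-column CSV."""
    records = json.loads(json_text)
    out = io.StringIO()
    writer = csv.writer(out)
    for record in records:
        # "title" is an assumed field name; adjust to the actual schema.
        writer.writerow([record["title"]])
    return out.getvalue()

sample = '[{"title": "Sign up form is not working"}, {"title": "json and yaml support"}]'
csv_text = titles_to_csv(sample)
```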
|
| | ## Acknowledgements |
| |
|

* Joe Davison and his article on [Zero-Shot Learning in Modern NLP](https://joeddav.github.io/blog/2020/05/29/ZSL.html)
* Jeremy Howard, fast.ai, and his notebook [Iterate like a grandmaster](https://www.kaggle.com/code/antoinemacia/iterate-like-a-grandmaster)