Instructions to use ModelTC/bart-base-mnli with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use ModelTC/bart-base-mnli with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="ModelTC/bart-base-mnli")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ModelTC/bart-base-mnli")
model = AutoModelForSequenceClassification.from_pretrained("ModelTC/bart-base-mnli")
```

- Notebooks
- Google Colab
- Kaggle
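Since this is an MNLI checkpoint, the sequence-classification head returns three logits per premise/hypothesis pair. The sketch below shows how such logits are typically converted to a predicted label with a softmax. The logit values and the label order are assumptions for illustration: BART MNLI checkpoints commonly use `0 = contradiction, 1 = neutral, 2 = entailment`, but you should confirm with `model.config.id2label` on the real model.

```python
import math

# Assumed label order for MNLI checkpoints; verify via model.config.id2label.
ID2LABEL = {0: "contradiction", 1: "neutral", 2: "entailment"}

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits, standing in for real model output.
logits = [-2.1, 0.3, 3.5]
probs = softmax(logits)
pred = ID2LABEL[probs.index(max(probs))]
print(pred)  # entailment, since the third logit is largest
```

The same mapping is what `pipeline("text-classification", ...)` performs internally before returning its `label`/`score` dictionaries.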
- Xet hash: `69a647999957105b45dbf8c18a3d0eebe4c90ff07968f556a7a6c8fe094fcf6a`
- Size of remote file: 560 MB
- SHA256: `841db4a650f89881d53676f2dbb6b60d5dd535016f5df78750288d592a315d58`
Xet efficiently stores large files inside Git by splitting them into unique chunks, which accelerates uploads and downloads.