How to use with the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="hyperonym/barba")
```
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("hyperonym/barba")
model = AutoModelForSequenceClassification.from_pretrained("hyperonym/barba")
```
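With the tokenizer and model loaded directly, entailment scores can be computed for a premise–hypothesis pair. A minimal sketch, assuming a PyTorch-compatible checkpoint; the premise and hypothesis strings are illustrative, and the label names come from the checkpoint's own config:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("hyperonym/barba")
model = AutoModelForSequenceClassification.from_pretrained("hyperonym/barba")

# Encode the premise and hypothesis as a single sequence pair
premise = "A man is playing a guitar on stage."
hypothesis = "A person is making music."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")

# Forward pass without gradient tracking, then normalize logits to probabilities
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1)[0]

# Map each probability to its label as defined in the model config
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```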
Barba

Barba is a multilingual natural language inference model for textual entailment and zero-shot text classification, available as an end-to-end service through TensorFlow Serving. Based on XLM-RoBERTa, it is trained on selected subsets of the publicly available English (GLUE), Chinese (CLUE), Japanese (JGLUE), and Korean (KLUE) datasets, as well as other private datasets.
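Because the model is an NLI classifier, it can typically be driven through the zero-shot-classification pipeline, which turns each candidate label into a hypothesis and scores its entailment against the input text. A hedged sketch; the example text and labels are illustrative, and compatibility depends on the checkpoint's entailment-label configuration:

```python
from transformers import pipeline

# Assumes the checkpoint's entailment labels work with the
# zero-shot-classification pipeline; labels below are illustrative.
classifier = pipeline("zero-shot-classification", model="hyperonym/barba")
result = classifier(
    "Barba supports textual entailment in several languages.",
    candidate_labels=["technology", "sports", "cooking"],
)

# Labels come back sorted by score, highest first
print(result["labels"][0], round(result["scores"][0], 3))
```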

GitHub: https://github.com/hyperonym/barba

Framework versions

  • Transformers 4.28.1
  • TensorFlow 2.11.1
  • Datasets 2.11.0
  • Tokenizers 0.13.3