Paper: Unsupervised Cross-lingual Representation Learning at Scale (arXiv:1911.02116)
Barba is a multilingual natural language inference (NLI) model for textual entailment and zero-shot text classification, available as an end-to-end service through TensorFlow Serving. Based on XLM-RoBERTa, it is trained on selected subsets of the publicly available English (GLUE), Chinese (CLUE), Japanese (JGLUE), and Korean (KLUE) datasets, as well as other private datasets.

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("hyperonym/barba")
model = AutoModelForSequenceClassification.from_pretrained("hyperonym/barba")
```
GitHub: https://github.com/hyperonym/barba
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="hyperonym/barba")
```
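Because Barba is an NLI model, it can also drive zero-shot text classification: each candidate label is rewritten as a hypothesis (e.g. "This example is about sports.") and the label whose hypothesis is most entailed by the input wins. The sketch below illustrates that scheme with a stand-in keyword-overlap scorer; the template string and `entail_score` function are illustrative assumptions, not part of Barba's API — in practice the entailment probability would come from the model above.

```python
# Sketch of NLI-based zero-shot classification (illustrative only; the
# template and scoring function are assumptions, not Barba's actual API).

def zero_shot_classify(premise, labels, entail_score,
                       template="This example is about {}."):
    """Rank candidate labels by the entailment score of their hypotheses.

    entail_score(premise, hypothesis) -> float stands in for the model's
    entailment probability for the (premise, hypothesis) pair.
    """
    scored = {label: entail_score(premise, template.format(label))
              for label in labels}
    # Pick the label whose hypothesis is most entailed by the premise.
    return max(scored, key=scored.get), scored


def toy_score(premise, hypothesis):
    """Toy stand-in scorer: word overlap instead of a real NLI model."""
    tokens = lambda s: {w.strip(".,!?") for w in s.lower().split()}
    p, h = tokens(premise), tokens(hypothesis)
    return len(p & h) / len(h)


best, scores = zero_shot_classify(
    "This news is about sports and football.",
    ["technology", "sports"],
    toy_score,
)
```

With a real NLI backend, `entail_score` would run the premise/hypothesis pair through the model and return the softmax probability of the entailment class.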