roberta_chinese_large
Overview
- Language model: roberta-large
- Model size: 1.2G
- Language: Chinese
- Training data: CLUECorpusSmall
- Eval data: CLUE dataset
Results
For results on downstream tasks like text classification, please refer to this repository.
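Text classification is one such downstream task. As a minimal, hypothetical sketch (the two-label head and `num_labels` value are illustrative assumptions, not taken from that repository), the checkpoint can be wrapped in a classification model for fine-tuning:

from transformers import BertForSequenceClassification

# Hypothetical setup: attach a randomly initialized 2-way classification
# head to the pre-trained encoder (num_labels=2 is an illustrative choice).
model = BertForSequenceClassification.from_pretrained(
    "clue/roberta_chinese_large", num_labels=2
)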
Usage
NOTE: You must use BertTokenizer instead of RobertaTokenizer!
import torch
from transformers import BertTokenizer, BertModel

# The checkpoint ships a BERT vocabulary and architecture, so it must be
# loaded with the Bert* classes despite the RoBERTa-style pre-training.
tokenizer = BertTokenizer.from_pretrained("clue/roberta_chinese_large")
roberta = BertModel.from_pretrained("clue/roberta_chinese_large")
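As a quick sanity check, the tokenizer and model above can be used together to get contextual embeddings; a minimal sketch, where the input sentence is an arbitrary illustration, not from the model card:

# Encode an example Chinese sentence and inspect the hidden states.
inputs = tokenizer("中文自然语言处理", return_tensors="pt")
with torch.no_grad():
    outputs = roberta(**inputs)
# Hidden size is 1024 for the large model: (batch, seq_len, 1024).
print(outputs.last_hidden_state.shape)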
About CLUE benchmark
CLUE is the Chinese Language Understanding Evaluation benchmark organization, providing tasks & datasets, baselines, pre-trained Chinese models, corpora, and a leaderboard.
Github: https://github.com/CLUEbenchmark
Website: https://www.cluebenchmarks.com/