Instructions for using google-bert/bert-base-uncased with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use google-bert/bert-base-uncased with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="google-bert/bert-base-uncased")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("google-bert/bert-base-uncased")
```

- Inference Providers
- Notebooks
- Google Colab
- Kaggle
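As a quick sanity check of the pipeline snippet above, you can run it on a sentence containing BERT's `[MASK]` token. The input sentence here is illustrative; the exact predictions and scores depend on the model weights, so no specific output is assumed.

```python
from transformers import pipeline

# Build the fill-mask pipeline for the uncased BERT base checkpoint.
pipe = pipeline("fill-mask", model="google-bert/bert-base-uncased")

# The input must contain the tokenizer's mask token ([MASK] for BERT).
predictions = pipe("Paris is the [MASK] of France.")

# Each prediction is a dict with "score", "token", "token_str", and "sequence".
for p in predictions:
    print(f'{p["token_str"]!r}: {p["score"]:.3f}')
```

By default the pipeline returns the top five candidate fills, ranked by score; pass `top_k` to change that.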
Updates the tokenizer configuration file (#62)
by lysandre (HF Staff) · opened
The tokenizer configuration file is missing or incorrect, leading to unforeseen errors after the migration of the canonical models.
Refer to the following issue for more information: transformers#29050
The currently failing code is the following:

```python
>>> from transformers import AutoTokenizer
>>> previous_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> current_tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> print(previous_tokenizer.model_max_length, current_tokenizer.model_max_length)
1000000000000000019884624838656 512
```
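The oversized value is not random: when `tokenizer_config.json` does not specify `model_max_length`, transformers falls back to a sentinel constant, `VERY_LARGE_INTEGER`, defined as `int(1e30)`. Because `1e30` is a binary float, its exact integer value is the number printed above. A minimal check (the constant is reproduced here rather than imported from the library):

```python
# transformers defines VERY_LARGE_INTEGER = int(1e30) and uses it as the
# default model_max_length when the tokenizer config does not set one.
VERY_LARGE_INTEGER = int(1e30)

# int(1e30) is not exactly 10**30, because 1e30 is the nearest binary float;
# its exact integer value matches the failing output above.
print(VERY_LARGE_INTEGER)  # → 1000000000000000019884624838656
```

So the "broken" tokenizer was not truncating at all: it reported an effectively unlimited maximum length instead of BERT's 512-token limit.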
This is the result after the fix:
```python
>>> from transformers import AutoTokenizer
>>> previous_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> current_tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
>>> print(previous_tokenizer.model_max_length, current_tokenizer.model_max_length)
512 512
```
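The fix amounts to declaring an explicit `model_max_length` in the repository's `tokenizer_config.json`. The fragment below is an illustrative sketch, not the full file, assuming the usual keys for this checkpoint:

```json
{
  "do_lower_case": true,
  "model_max_length": 512
}
```

With the key present, `AutoTokenizer.from_pretrained` picks up 512 directly instead of falling back to its very-large-integer default.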
lysandre changed pull request status to open
lysandre changed pull request status to merged