Instructions to use alirezamsh/small100 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use alirezamsh/small100 with Transformers:
```python
# Use a pipeline as a high-level helper
# Warning: Pipeline type "translation" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("translation", model="alirezamsh/small100")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("alirezamsh/small100")
model = AutoModelForSeq2SeqLM.from_pretrained("alirezamsh/small100")
```

- Inference
- Notebooks
- Google Colab
- Kaggle
SMALL100Tokenizer Error
Hi! I used the SMALL100Tokenizer for some experiments a couple of weeks ago and it worked perfectly fine, but when I tried to rerun it recently it raised `AttributeError: 'SMALL100Tokenizer' object has no attribute 'encoder'`. I hit the same error when using the demo linked on the model card. After switching to the M2M100 tokenizer (`M2M100Tokenizer.from_pretrained("alirezamsh/small100")`) it ran smoothly. Does anybody know what the issue could be?
Update: Problem solved!
The issue seems to arise from the transformers v4.34 release, which changed the tokenizer internals. As a result, tokenization_small100.py currently only works with transformers < 4.34.
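Until the custom tokenizer file is updated, a version guard before importing it can fail fast with a clearer message than the `AttributeError` above. A minimal sketch, assuming the < 4.34 threshold reported in this thread (the helper name is ours, not part of the repo):

```python
# Check whether an installed transformers version predates the 4.34
# release that broke the custom SMALL-100 tokenizer (per the thread above).
def tokenizer_is_compatible(version: str) -> bool:
    # Compare only the numeric major.minor components, so suffixes
    # like "4.34.0.dev0" are handled gracefully.
    parts = []
    for piece in version.split(".")[:2]:
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits or 0))
    return tuple(parts) < (4, 34)
```

One could call `tokenizer_is_compatible(transformers.__version__)` before `from tokenization_small100 import SMALL100Tokenizer`, and fall back to `M2M100Tokenizer` (the workaround above) when it returns False.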
Thanks for the message. I will update it soon!