Instructions for using hf-tiny-model-private/tiny-random-RobertaForMaskedLM with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use hf-tiny-model-private/tiny-random-RobertaForMaskedLM with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="hf-tiny-model-private/tiny-random-RobertaForMaskedLM")
```

```python
# Load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("hf-tiny-model-private/tiny-random-RobertaForMaskedLM")
model = AutoModelForMaskedLM.from_pretrained("hf-tiny-model-private/tiny-random-RobertaForMaskedLM")
```
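Once the pipeline is created, you can call it on a sentence containing the tokenizer's mask token. A minimal sketch; the example sentence is arbitrary, and since this checkpoint is randomly initialized its predictions are meaningless — the call is only useful for smoke-testing the inference path:

```python
# Fill the masked position; RoBERTa tokenizers use "<mask>" as the mask token.
results = pipe("The capital of France is <mask>.")

# Each result holds a candidate token and its score; with random weights the
# ranking is arbitrary, but the call exercises the full fill-mask pipeline.
for r in results:
    print(r["token_str"], round(r["score"], 4))
```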
- Notebooks
- Google Colab
- Kaggle
- Xet hash: `1243219de3f0c4bd6fc57bf4d3af84cffa03b169a9c7c3b87e0bd1fae4ccdc83`
- Size of remote file: 678 kB
- SHA256: `6bb8b4424d800339a2cb23c8324ca19a59412007c218a72d1a42729692907ee9`
Xet efficiently stores large files inside Git by splitting them into unique, deduplicated chunks, which accelerates uploads and downloads. More info.
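If you want to confirm that a downloaded file matches the SHA256 listed above, you can hash it locally. A minimal sketch, assuming the `huggingface_hub` package is installed and that the digest belongs to the model weights file; the filename `model.safetensors` is an assumption, not confirmed by this page:

```python
import hashlib

from huggingface_hub import hf_hub_download

# SHA256 digest listed in the file metadata above.
EXPECTED_SHA256 = "6bb8b4424d800339a2cb23c8324ca19a59412007c218a72d1a42729692907ee9"

# Download the file (or reuse the locally cached copy).
path = hf_hub_download(
    repo_id="hf-tiny-model-private/tiny-random-RobertaForMaskedLM",
    filename="model.safetensors",  # assumption: the 678 kB file listed above
)

# Hash in 1 MiB chunks so large files never need to fit in memory at once.
digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

print("match" if digest.hexdigest() == EXPECTED_SHA256 else "mismatch")
```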