
answerdotai/ModernBERT-base

Tags: Fill-Mask · Transformers · PyTorch · ONNX · Safetensors · English · modernbert · masked-lm · long-context

Instructions for using answerdotai/ModernBERT-base with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • Transformers

    How to use answerdotai/ModernBERT-base with Transformers:

    # Use a pipeline as a high-level helper
    from transformers import pipeline
    
    pipe = pipeline("fill-mask", model="answerdotai/ModernBERT-base")

    # Load model directly
    from transformers import AutoTokenizer, AutoModelForMaskedLM
    
    tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
    model = AutoModelForMaskedLM.from_pretrained("answerdotai/ModernBERT-base")
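
    A quick usage sketch (not from the model card itself; the example sentence and top_k value are illustrative): the fill-mask pipeline replaces the model's "[MASK]" token in the input and returns scored candidates.

    # Minimal usage sketch for the pipeline above
    from transformers import pipeline

    pipe = pipeline("fill-mask", model="answerdotai/ModernBERT-base")

    # "[MASK]" is the mask token (see tokenizer_config.json in Files below)
    for result in pipe("The capital of France is [MASK].", top_k=3):
        # Each candidate carries the filled-in sequence and its score
        print(result["sequence"], result["score"])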
  • Notebooks
  • Google Colab
  • Kaggle
ModernBERT-base (3.13 GB)
  • 6 contributors
History: 24 commits
Latest commit: Amyww, "Create ?" (e28be0d, verified, over 1 year ago)
  • onnx
    Upload ONNX weights (#1) over 1 year ago (loading sketch after this list)
  • .gitattributes
    1.52 kB
    initial commit over 1 year ago
  • README.md
    8.51 kB
    Add links to answer.ai & lighton.ai over 1 year ago
  • config.json
    1.19 kB
    Bump `max_position_embeddings` to 8192 over 1 year ago (see the sanity-check sketch after this list)
  • model.safetensors
    599 MB
    Purge duplicate "decoder.weight", rely on tied weights instead over 1 year ago
  • pytorch_model.bin
    599 MB
    Detected Pickle imports (3): "torch._utils._rebuild_tensor_v2", "collections.OrderedDict", "torch.FloatStorage"
    Purge duplicate "decoder.weight", rely on tied weights instead over 1 year ago
  • special_tokens_map.json
    694 Bytes
    Also update tokenizer/special_tokens_map over 1 year ago
  • tokenizer.json
    2.13 MB
    Also update tokenizer/special_tokens_map over 1 year ago
  • tokenizer_config.json
    20.8 kB
    Update tokenizer: Set lstrip=True for [MASK] over 1 year ago
  • ?
    0 Bytes
    Create ? over 1 year ago
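
The onnx folder above ships ONNX weights ("Upload ONNX weights (#1)"). A hedged sketch of loading them with Hugging Face Optimum; the subfolder="onnx" argument is an assumption about where the exported files live in this repo, not something the page confirms:

    # Sketch only: assumes `optimum[onnxruntime]` is installed and the
    # exported model sits in the repo's onnx/ subfolder (assumption)
    from optimum.onnxruntime import ORTModelForMaskedLM
    from transformers import AutoTokenizer, pipeline

    model = ORTModelForMaskedLM.from_pretrained(
        "answerdotai/ModernBERT-base", subfolder="onnx"
    )
    tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
    pipe = pipeline("fill-mask", model=model, tokenizer=tokenizer)
    print(pipe("Paris is the [MASK] of France.")[0]["sequence"])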
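
As a sanity check on two entries above, a minimal sketch, assuming a transformers version with ModernBERT support: config.json bumps max_position_embeddings to 8192 (backing the long-context tag), and tokenizer_config.json sets lstrip=True for [MASK].

    from transformers import AutoConfig, AutoTokenizer

    config = AutoConfig.from_pretrained("answerdotai/ModernBERT-base")
    # Expect 8192, per the "Bump max_position_embeddings to 8192" commit
    print(config.max_position_embeddings)

    tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
    # Expect "[MASK]"; lstrip=True lets the token absorb a leading space
    print(tokenizer.mask_token)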