
mlx-community/gemma-4-e2b-4bit

Tags: Any-to-Any · MLX · Safetensors · gemma4 · 4-bit precision

Instructions for using mlx-community/gemma-4-e2b-4bit with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • MLX

    How to use mlx-community/gemma-4-e2b-4bit with MLX (a Python usage sketch follows this list):

    # Install the Hub client (quote the extra so shells like zsh
    # don't expand the square brackets)
    pip install "huggingface_hub[hf_xet]"

    # Download the model from the Hub into a local directory
    huggingface-cli download --local-dir gemma-4-e2b-4bit mlx-community/gemma-4-e2b-4bit
  • Notebooks
  • Google Colab
  • Kaggle
  • Local Apps
  • LM Studio
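
Once the weights are downloaded, generation with the mlx-lm Python API might look like the sketch below. This is a minimal sketch, not taken from this page: it assumes your installed mlx-lm version supports the gemma4 architecture (Any-to-Any checkpoints sometimes require mlx-vlm instead), and the prompt string is an arbitrary example.

    # Minimal text-generation sketch with mlx-lm (assumes `pip install mlx-lm`
    # and a version that supports this model's architecture).
    from mlx_lm import load, generate

    # Load quantized weights and tokenizer from the Hub, or pass the
    # local directory created by `huggingface-cli download` above.
    model, tokenizer = load("mlx-community/gemma-4-e2b-4bit")

    prompt = "Explain what 4-bit quantization does to a model's weights."

    # Apply the model's chat template when the tokenizer ships one.
    if tokenizer.chat_template is not None:
        messages = [{"role": "user", "content": prompt}]
        prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

    text = generate(model, tokenizer, prompt=prompt, verbose=True)
    print(text)

The command-line equivalent is `python -m mlx_lm.generate --model mlx-community/gemma-4-e2b-4bit --prompt "..."`.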

Community discussion #1 (opened about 1 month ago by Alkd):
⚠️ Existing MLX-quantized Gemma 4 models (mlx-community, unsloth) produce garbage output due to quantizing PLE (Per-Layer Embedding) layers.
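
Not from this page, but a plausible direction for a fix: mlx-lm's Python convert API accepts a quant_predicate callback that can exclude individual modules from quantization. The sketch below is hypothetical on two counts: it assumes PLE modules can be recognized by "per_layer" in their parameter path (the exact naming depends on the checkpoint), and the source repo name google/gemma-4-e2b is a placeholder.

    # Hypothetical re-quantization sketch: quantize everything EXCEPT the
    # Per-Layer Embedding (PLE) modules the discussion says break at 4-bit.
    from mlx_lm import convert

    def skip_ple(path, module, config):
        # Returning False leaves a module unquantized; True quantizes it
        # with the default settings (4-bit here).
        return "per_layer" not in path  # "per_layer" match is an assumption

    convert(
        "google/gemma-4-e2b",            # placeholder source repo
        mlx_path="gemma-4-e2b-4bit-no-ple",
        quantize=True,
        q_bits=4,
        quant_predicate=skip_ple,
    )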