Instructions for using lmbelo/OpenELM-270M-Instruct with libraries, notebooks, and local apps.
- Libraries
  - MLX
How to use lmbelo/OpenELM-270M-Instruct with MLX:
```shell
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir OpenELM-270M-Instruct lmbelo/OpenELM-270M-Instruct
```
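Once the download finishes, you can sanity-check the local copy from the command line. A minimal sketch using mlx-lm's `mlx_lm.generate` entry point, assuming the `--local-dir` chosen above:

```shell
# Install the MLX runtime, then generate a few tokens from the local weights
pip install mlx-lm
python -m mlx_lm.generate --model ./OpenELM-270M-Instruct --prompt "hello" --max-tokens 64
```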
- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - LM Studio
lmbelo/OpenELM-270M-Instruct
The model lmbelo/OpenELM-270M-Instruct was converted to MLX format from mlx-community/OpenELM-270M-Instruct using mlx-lm version 0.13.1.
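For reference, conversions like this are typically produced with mlx-lm's convert utility. A sketch of such a command, not necessarily the exact invocation used for this repo:

```shell
# Convert the source repo into MLX format under ./OpenELM-270M-Instruct
python -m mlx_lm.convert --hf-path mlx-community/OpenELM-270M-Instruct --mlx-path OpenELM-270M-Instruct
```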
Use with mlx
```shell
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("lmbelo/OpenELM-270M-Instruct")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
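Because this is an instruct-tuned model, prompts usually work better when wrapped in the tokenizer's chat template, if one is bundled. A sketch, assuming the tokenizer returned by `load` exposes the underlying Hugging Face `apply_chat_template`:

```python
from mlx_lm import load, generate

model, tokenizer = load("lmbelo/OpenELM-270M-Instruct")

# Format the user message with the chat template when the tokenizer ships one
messages = [{"role": "user", "content": "hello"}]
if getattr(tokenizer, "chat_template", None):
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, tokenize=False
    )
else:
    prompt = "hello"

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```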
Model size: 0.3B params
Tensor type: BF16