Instructions for using mlx-community/CodeLlama-7b-mlx with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- MLX
How to use mlx-community/CodeLlama-7b-mlx with MLX:
```python
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm
# if on a CUDA device, also pip install mlx[cuda]

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/CodeLlama-7b-mlx")
prompt = "Once upon a time in"
text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- MLX LM
How to use mlx-community/CodeLlama-7b-mlx with MLX LM:
Generate or start a chat session
```shell
# Install MLX LM
uv tool install mlx-lm

# Generate some text
mlx_lm.generate --model "mlx-community/CodeLlama-7b-mlx" --prompt "Once upon a time"
```
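The command above covers one-shot generation. For the chat session mentioned above, mlx-lm also provides an interactive entry point; a minimal sketch, assuming the same model id (note that CodeLlama-7b is a base completion model, so chat-style prompting will work less well than with the Instruct variants):

```shell
# Start an interactive chat REPL backed by the MLX model
mlx_lm.chat --model "mlx-community/CodeLlama-7b-mlx"
```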
Update usage instructions (#2)
by pcuenq (HF Staff)

Diff of README.md:
````diff
@@ -27,10 +27,10 @@ git clone https://github.com/ml-explore/mlx-examples.git
 
 # Download model
 export HF_HUB_ENABLE_HF_TRANSFER=1
-huggingface-cli download --local-dir
+huggingface-cli download --local-dir CodeLlama-7b-mlx mlx-llama/CodeLlama-7b-mlx
 
 # Run example
-python mlx-examples/llama/llama.py CodeLlama-7b-mlx CodeLlama-7b-mlx/tokenizer.model "
+python mlx-examples/llama/llama.py CodeLlama-7b-mlx/CodeLlama-7b.npz CodeLlama-7b-mlx/tokenizer.model "def fibonacci("
 ```
 
 Please, refer to the [original model card](https://github.com/facebookresearch/codellama/blob/main/MODEL_CARD.md) for details on CodeLlama.
````