# Codestral-22B-Yui-MLX

This model is CyberYui's MLX-format conversion of Mistral AI's official mistralai/Codestral-22B-v0.1. No modifications, alterations, or fine-tuning of any kind were applied to the original model's weights, architecture, or parameters; this is strictly a format conversion for MLX, optimized exclusively for Apple Silicon (M1/M2/M3/M4) chips.
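For reference, a port of this kind can be reproduced with mlx-lm's converter. The command below is a sketch, not the exact command used for this repository: the output path is illustrative, and the flag names should be checked against your installed mlx-lm version.

```shell
pip install "mlx-lm==0.29.1"

# Fetch the original weights and quantize them to 4-bit for MLX.
# --mlx-path (the output directory) is an illustrative choice.
mlx_lm.convert \
    --hf-path mistralai/Codestral-22B-v0.1 \
    --mlx-path ./Codestral-22B-Yui-MLX \
    -q --q-bits 4
```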
## Model Details

- **Base Model:** mistralai/Codestral-22B-v0.1
- **Conversion Tool:** mlx-lm 0.29.1
- **Quantization:** 4-bit (≈12.5 GB total size)
- **Framework:** MLX (native Apple GPU acceleration)
- **Use Cases:** code completion, code generation, programming assistance, FIM (Fill-In-the-Middle)
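The ≈12.5 GB figure is consistent with simple arithmetic on the parameter count. The sketch below assumes MLX's default quantization group size of 64, with a 16-bit scale and 16-bit bias stored per group; the exact on-disk size also depends on any layers left unquantized, such as embeddings.

```python
params = 22e9            # approximate parameter count of Codestral-22B
bits_per_weight = 4      # 4-bit quantization
group_size = 64          # assumed MLX default quantization group size
overhead_bits = 2 * 16 / group_size  # 16-bit scale + 16-bit bias per group

total_gb = params * (bits_per_weight + overhead_bits) / 8 / 1e9
print(f"{total_gb:.1f} GB")  # ~12.4 GB, close to the reported ~12.5 GB
```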
## How to Use

### 1. Command Line (mlx-lm)

First, install the required package:

```shell
pip install mlx-lm
```

Then run the model directly:

```shell
mlx_lm.generate --model CyberYui/Codestral-22B-Yui-MLX --prompt "def quicksort(arr):"
```
### 2. Python Code

```python
from mlx_lm import load, generate

# Load this model
model, tokenizer = load("CyberYui/Codestral-22B-Yui-MLX")

# Define your prompt
prompt = "Write a Python function for quicksort with comments"

# Apply the chat template if one is available
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False
    )

# Generate a response
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
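Since FIM (Fill-In-the-Middle) is listed among the use cases: Codestral was trained with `[SUFFIX]`/`[PREFIX]` control tokens, and Mistral's `mistral-common` tokenizer renders a FIM request suffix-first. A minimal sketch of building such a prompt by hand (the helper name is ours, and the exact special-token handling should be verified against the tokenizer):

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Render a fill-in-the-middle prompt in Codestral's suffix-first format."""
    return f"[SUFFIX]{suffix}[PREFIX]{prefix}"

# The model is asked to produce the code between these two fragments.
prefix = "def fibonacci(n):\n    if n < 2:\n"
suffix = "\n    return fibonacci(n - 1) + fibonacci(n - 2)"
prompt = build_fim_prompt(prefix, suffix)
```

The resulting string is passed to `generate` as a raw prompt, skipping the chat template.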
### 3. LM Studio

- Open LM Studio and, if needed, log in to your Hugging Face account
- Go to the Discover (model search) tab
- Search for `CyberYui/Codestral-22B-Yui-MLX`
- Download and load the model to enjoy native MLX acceleration!
## License

This conversion follows the original model's license terms. Note that mistralai/Codestral-22B-v0.1 is released under the Mistral AI Non-Production License (MNPL), which restricts commercial use; review the upstream license before deploying this model.