
Codestral-22B-Yui-MLX

This model is CyberYui's MLX-format conversion of Mistral AI's official mistralai/Codestral-22B-v0.1. No modifications, alterations, or fine-tuning of any kind were applied to the original model's weights, architecture, or parameters; this is strictly a format conversion for MLX, optimized exclusively for Apple Silicon (M1/M2/M3/M4) chips.

📌 Model Details

  • Base Model: mistralai/Codestral-22B-v0.1
  • Conversion Tool: mlx-lm 0.29.1
  • Quantization: 4-bit (≈12.5 GB total size)
  • Framework: MLX (native Apple GPU acceleration)
  • Use Cases: Code completion, code generation, programming assistance, FIM (Fill-In-the-Middle)
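The quoted size is consistent with simple arithmetic. A rough sketch follows; the 0.5-bit-per-weight overhead for per-group scales/biases and the group size of 64 are assumptions about mlx-lm's quantization defaults, not figures taken from this card:

```python
def quantized_size_gb(n_params: float, bits: int = 4, group_size: int = 64) -> float:
    """Rough on-disk size, in GB, of a group-quantized model.

    Assumes each group of `group_size` weights carries one fp16 scale and
    one fp16 bias (32 extra bits per group) -- an assumption about
    mlx-lm's defaults, not a figure from this model card.
    """
    bits_per_weight = bits + 32 / group_size  # 4.5 bits/weight with the defaults
    return n_params * bits_per_weight / 8 / 1e9  # bits -> bytes -> GB

print(round(quantized_size_gb(22e9), 1))  # ~12.4 GB, close to the quoted 12.5 GB
```

The small remainder up to the quoted 12.5 GB is plausibly layers kept at higher precision plus tokenizer and config files.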

🚀 How to Use

1. Command Line (mlx-lm)

First, install the required package:

pip install mlx-lm

Then run the model directly:

mlx_lm.generate --model CyberYui/Codestral-22B-Yui-MLX --prompt "def quicksort(arr):"

2. Python Code

from mlx_lm import load, generate

# Load this model
model, tokenizer = load("CyberYui/Codestral-22B-Yui-MLX")

# Define your prompt
prompt = "Write a Python function for quicksort with comments"

# Apply the chat template if the tokenizer provides one
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False
    )

# Generate the response
response = generate(model, tokenizer, prompt=prompt, verbose=True)
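The use cases above list FIM (Fill-In-the-Middle). A minimal sketch of assembling a FIM prompt as a plain string; the [SUFFIX]/[PREFIX] control tokens and their ordering follow Mistral's published Codestral FIM template and are an assumption here, so verify them against this tokenizer's special tokens before relying on the exact format:

```python
def fim_prompt(prefix: str, suffix: str) -> str:
    # Fill-in-the-middle: the model generates the code that belongs
    # between `prefix` and `suffix`. Token names and ordering are
    # assumed from Mistral's Codestral FIM template; check the
    # tokenizer's special tokens before depending on this format.
    return f"[SUFFIX]{suffix}[PREFIX]{prefix}"

prompt = fim_prompt(
    prefix="def quicksort(arr):\n    if len(arr) <= 1:\n        return arr\n    ",
    suffix="\nprint(quicksort([3, 1, 2]))",
)
```

The resulting string can be passed directly as `prompt` to `generate(...)`, bypassing the chat-template path used above.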

3. LM Studio

  1. Open LM Studio and log in to your Hugging Face account
  2. Go to the model search (Discover) tab
  3. Search for this model: CyberYui/Codestral-22B-Yui-MLX
  4. Download and load the model to enjoy native MLX acceleration!

📄 License

This conversion is distributed under the same terms as the original model, the Mistral AI Non-Production License (MNPL-0.1), strictly following the original model's license.


