MiniCPM-V 4.6 Collection
MLX variants of MiniCPM-V 4.6, 1.3B parameters (SigLIP2 400M vision encoder + Qwen3.5-0.8B LLM). Original repo: https://huggingface.co/openbmb/MiniCPM-V-4.6
How to use mlx-community/MiniCPM-V-4.6-8bit with MLX:
# Make sure mlx-vlm is installed
# pip install --upgrade mlx-vlm
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config
# Load the model
model, processor = load("mlx-community/MiniCPM-V-4.6-8bit")
config = load_config("mlx-community/MiniCPM-V-4.6-8bit")
# Prepare input
image = ["http://images.cocodataset.org/val2017/000000039769.jpg"]
prompt = "Describe this image."
# Apply chat template
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=1
)
# Generate output
output = generate(model, processor, formatted_prompt, image)
print(output)

How to use mlx-community/MiniCPM-V-4.6-8bit with Pi:
# Install MLX LM:
uv tool install mlx-lm
# Start a local OpenAI-compatible server:
mlx_lm.server --model "mlx-community/MiniCPM-V-4.6-8bit"
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
# Add to ~/.pi/agent/models.json:
{
"providers": {
"mlx-lm": {
"baseUrl": "http://localhost:8080/v1",
"api": "openai-completions",
"apiKey": "none",
"models": [
{
"id": "mlx-community/MiniCPM-V-4.6-8bit"
}
]
}
}
}

# Start Pi in your project directory:
pi
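The models.json entry above is plain JSON, so it can be generated and sanity-checked before Pi reads it. A minimal sketch using Python's json module (the schema is copied from the snippet above, not taken from Pi's own documentation):

```python
import json
from pathlib import Path

# Provider entry copied from the example above; the field names
# (baseUrl, api, apiKey, models) are assumed from that snippet.
config = {
    "providers": {
        "mlx-lm": {
            "baseUrl": "http://localhost:8080/v1",
            "api": "openai-completions",
            "apiKey": "none",
            "models": [
                {"id": "mlx-community/MiniCPM-V-4.6-8bit"}
            ],
        }
    }
}

# Round-trip through JSON to catch syntax mistakes early.
text = json.dumps(config, indent=2)
parsed = json.loads(text)
assert parsed["providers"]["mlx-lm"]["models"][0]["id"] == "mlx-community/MiniCPM-V-4.6-8bit"

# Write to the location the Pi instructions mention (~/.pi/agent/models.json).
path = Path.home() / ".pi" / "agent" / "models.json"
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(text)
print(path)
```

Editing the file by hand works just as well; the round-trip simply guards against a stray comma or quote that would make Pi reject the config.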
How to use mlx-community/MiniCPM-V-4.6-8bit with Hermes Agent:
# Install MLX LM:
uv tool install mlx-lm
# Start a local OpenAI-compatible server:
mlx_lm.server --model "mlx-community/MiniCPM-V-4.6-8bit"
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup
# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default mlx-community/MiniCPM-V-4.6-8bit
hermes
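Both Pi and Hermes talk to the local mlx_lm.server over the OpenAI chat-completions wire format. A minimal sketch of the request body such an agent sends (field names follow the OpenAI API; this only constructs the JSON payload and does not contact the server, and whether mlx_lm.server handles image content is an assumption to verify separately):

```python
import json

# OpenAI-style chat-completions payload aimed at the local server
# started above; it would be POSTed to
# http://localhost:8080/v1/chat/completions.
payload = {
    "model": "mlx-community/MiniCPM-V-4.6-8bit",
    "messages": [
        {"role": "user", "content": "Describe this image."}
    ],
    "max_tokens": 100,
    "temperature": 0.0,
}

body = json.dumps(payload)
print(body)
```

Any OpenAI-compatible client library can be pointed at the same base URL (http://localhost:8080/v1) with a dummy API key, which is exactly what the Pi and Hermes configurations above do.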
This model was converted to MLX format from openbmb/MiniCPM-V-4.6
using mlx-vlm version 0.5.0.
Refer to the original model card for more details on the model.
pip install -U mlx-vlm
python -m mlx_vlm.generate --model mlx-community/MiniCPM-V-4.6-8bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
Quantization: 8-bit
Base model: openbmb/MiniCPM-V-4.6