# cnfusion/Mellum-4b-sft-python-mlx-8Bit

**Tags:** Text Generation · Transformers · Safetensors · MLX · llama · code · mlx-my-repo · text-generation-inference · 8-bit precision
This model was converted to MLX format from [JetBrains/Mellum-4b-sft-python](https://huggingface.co/JetBrains/Mellum-4b-sft-python) using mlx-lm version 0.22.3.
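For reference, a conversion like this one can be reproduced with mlx-lm's Python `convert` API. The snippet below is a sketch: the parameter names (`mlx_path`, `quantize`, `q_bits`) are assumed from the mlx-lm 0.22.x API, so check your installed version's docstring.

```python
from mlx_lm import convert

# Sketch: quantize the upstream model to 8-bit MLX format
# (parameter names assumed from mlx-lm 0.22.x)
convert(
    "JetBrains/Mellum-4b-sft-python",
    mlx_path="Mellum-4b-sft-python-mlx-8Bit",
    quantize=True,
    q_bits=8,
)
```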
## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download and load the 8-bit model and its tokenizer
model, tokenizer = load("cnfusion/Mellum-4b-sft-python-mlx-8Bit")

prompt = "hello"

# Apply the chat template if the tokenizer ships one
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
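Since Mellum is trained for code completion rather than chat, a plain code prefix is a natural prompt. A minimal sketch, assuming the `max_tokens` keyword of mlx-lm's `generate`; the prompt is illustrative:

```python
# Sketch: complete a Python function; max_tokens caps the output length
code_prompt = "def fibonacci(n: int) -> int:\n"
completion = generate(model, tokenizer, prompt=code_prompt, max_tokens=64, verbose=True)
```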
## Model tree for cnfusion/Mellum-4b-sft-python-mlx-8Bit

- Base model: [JetBrains/Mellum-4b-base](https://huggingface.co/JetBrains/Mellum-4b-base)
- Finetuned from: [JetBrains/Mellum-4b-sft-python](https://huggingface.co/JetBrains/Mellum-4b-sft-python)
## Evaluation results (self-reported)

| Benchmark | Metric | Score |
|---|---|---|
| RepoBench 1.1 (Python) | EM | 0.284 |
| RepoBench 1.1 (Python) | EM ≤ 8k | 0.299 |
| RepoBench 1.1 (Python) | EM | 0.292 |
| RepoBench 1.1 (Python) | EM | 0.306 |
| RepoBench 1.1 (Python) | EM | 0.298 |
| RepoBench 1.1 (Python) | EM | 0.268 |
| RepoBench 1.1 (Python) | EM | 0.254 |
| SAFIM | pass@1 | 0.421 |
| SAFIM | pass@1 | 0.332 |
| SAFIM | pass@1 | 0.361 |
| SAFIM | pass@1 | 0.571 |
| HumanEval Infilling (Single-Line) | pass@1 | 0.804 |
| HumanEval Infilling (Single-Line) | pass@1 | 0.482 |
| HumanEval Infilling (Single-Line) | pass@1 | 0.377 |
## Use with Transformers

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="cnfusion/Mellum-4b-sft-python-mlx-8Bit")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("cnfusion/Mellum-4b-sft-python-mlx-8Bit")
model = AutoModelForCausalLM.from_pretrained("cnfusion/Mellum-4b-sft-python-mlx-8Bit")
```
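These snippets come from the Hub's generic "Use this model" widget; because this checkpoint is stored in MLX's quantized layout, the mlx-lm path above is the intended route. If the pipeline does load in your environment, usage would look like this sketch (`max_new_tokens` is a standard pipeline keyword; the prompt is illustrative):

```python
# Sketch: generate a completion; the pipeline returns a list of dicts
result = pipe("def hello_world():", max_new_tokens=32)
print(result[0]["generated_text"])
```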