# mlx-community/bigcode-starcoder2-15b-fp32

## Use with Transformers
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="mlx-community/bigcode-starcoder2-15b-fp32")
```
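A minimal usage sketch for the pipeline above; the prompt and `max_new_tokens` value are illustrative assumptions, not part of the original card:

```python
# Illustrative prompt: StarCoder2 is a code model, so a code prefix suits it.
result = pipe("def fibonacci(n):", max_new_tokens=64)
print(result[0]["generated_text"])
```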
Or load the model directly:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("mlx-community/bigcode-starcoder2-15b-fp32")
model = AutoModelForCausalLM.from_pretrained("mlx-community/bigcode-starcoder2-15b-fp32")
```
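A sketch of generating from the directly loaded model with the standard `generate` API; the prompt and decoding settings below are assumptions for illustration:

```python
# Tokenize an illustrative prompt, generate a completion, and decode it.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```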
The model [mlx-community/bigcode-starcoder2-15b-fp32](https://huggingface.co/mlx-community/bigcode-starcoder2-15b-fp32) was converted to MLX format from [bigcode/starcoder2-15b](https://huggingface.co/bigcode/starcoder2-15b) using mlx-lm version **0.21.1** by Focused.


## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the model and tokenizer from the Hugging Face Hub
model, tokenizer = load("mlx-community/bigcode-starcoder2-15b-fp32")

prompt = "hello"

# Use the chat template if the tokenizer defines one
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
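For token-by-token output, mlx-lm also provides a streaming generator. A minimal sketch, assuming a recent mlx-lm in which `stream_generate` yields response chunks exposing a `.text` field; the prompt and `max_tokens` value are illustrative:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/bigcode-starcoder2-15b-fp32")

# Print each generated chunk as it arrives.
for chunk in stream_generate(model, tokenizer, prompt="def fibonacci(n):", max_tokens=64):
    print(chunk.text, end="", flush=True)
print()
```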

Focused is a technology company at the forefront of AI-driven development, empowering organizations to unlock the full potential of artificial intelligence. From integrating innovative models into existing systems to building scalable, modern AI infrastructure, we specialize in delivering tailored, incremental solutions that meet you where you are. Curious how we can help with your next AI project? Get in touch.

