🐍 Phi-3.5 Mini Instruct Python Coding Assistant (GGUF, 8-bit)

Python code generation specialist. 66+ downloads. Fully local.



🎯 Python-First Design

Fine-tuned exclusively for Python code generation with:

  • 50,000+ Python scripts from GitHub
  • 200,000 Stack Overflow Q&A pairs
  • 15,000 Jupyter notebooks
  • PEP 8 compliant output
  • Type hints and docstrings
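As an illustration of the targeted output style (this is a hand-written example, not actual model output), generated functions are expected to carry type hints, a docstring, and PEP 8 formatting:

```python
from typing import Optional


def safe_divide(numerator: float, denominator: float) -> Optional[float]:
    """Divide two numbers, returning None on division by zero.

    Args:
        numerator: Value to divide.
        denominator: Value to divide by.

    Returns:
        The quotient, or None if the denominator is zero.
    """
    if denominator == 0:
        return None
    return numerator / denominator
```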

🚀 Quick Start

Via Ollama

ollama run Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_8bit

Via Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_8bit",
    torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant-GGUF_8bit")

prompt = "Write a Python function to parse JSON and validate schema"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
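Phi-3.5 is instruction-tuned, so prompts perform best when wrapped in the model's chat template. In practice, prefer `tokenizer.apply_chat_template`; the helper below is a minimal sketch that hand-builds the Phi-3-family `<|user|>`/`<|assistant|>` layout for illustration (the marker tokens are an assumption based on the Phi-3 family and should be verified against the tokenizer):

```python
from typing import Optional


def build_phi_prompt(user_message: str, system: Optional[str] = None) -> str:
    """Format a message using the Phi-3-style chat markers.

    Assumes the <|system|>/<|user|>/<|assistant|>/<|end|> markers used by
    the Phi-3 family; verify against tokenizer.apply_chat_template.
    """
    parts = []
    if system:
        parts.append(f"<|system|>\n{system}<|end|>")
    parts.append(f"<|user|>\n{user_message}<|end|>")
    # Trailing assistant marker cues the model to begin its reply.
    parts.append("<|assistant|>")
    return "\n".join(parts)
```

The resulting string can replace the raw `prompt` in the Transformers snippet above.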

🔗 P2PCLAW Ecosystem

Component   Purpose             Link
CAJAL-9B    Scientific papers   HF Model
CAJAL-4B    Lightweight papers  HF Model
BenchClaw   Code evaluation     HF Space
P2PCLAW     Research network    Website

👤 Author

Francisco Angulo de Lafuente (Agnuxo1) · ORCID: 0009-0001-1634-7063


Built with 🔥 by the P2PCLAW Collective

📦 Model Details

  • Format: GGUF (8-bit quantization)
  • Model size: 4B params
  • Architecture: llama