Gemma-4-TypeScript-Coder: GGUF

This model is a specialized fine-tune of Gemma 4, engineered for TypeScript-centric web development, strict type safety, and modern full-stack architectures. It was trained using Unsloth Studio for maximum efficiency and precision.

🟦 TypeScript Mastery

This fine-tune specializes in:

  • Strict Type Systems: Expertise in complex generics, utility types, and advanced interfaces.
  • Modern Frameworks: High proficiency in Next.js, React, Vue 3, and Node.js.
  • Visual Logic: Leverages vision-language capabilities to transform UI wireframes or screenshots directly into type-safe components.
  • Best Practices: Focus on clean architecture and idiomatic TypeScript patterns.
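
To give a sense of the territory this covers, here is a small, hand-written sketch of the strictly typed, generics-heavy style the model is tuned for (illustrative only, not model output):

```typescript
// Illustrative sketch of the kind of strict, generic TypeScript this fine-tune targets.

// A discriminated union for API results.
type ApiResult<T> =
  | { status: "ok"; data: T }
  | { status: "error"; message: string };

// Utility type: make selected keys of T required while leaving the rest untouched.
type RequireKeys<T, K extends keyof T> = Omit<T, K> & Required<Pick<T, K>>;

interface User {
  id: string;
  name?: string;
  email?: string;
}

// A User that must carry an email, enforced at compile time.
type UserWithEmail = RequireKeys<User, "email">;

function unwrap<T>(result: ApiResult<T>): T {
  if (result.status === "error") {
    throw new Error(result.message);
  }
  return result.data; // narrowed to the "ok" branch by the check above
}
```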

🤝 Credits & Acknowledgments

A major shout-out to mhhmm for the typescript-instruct-20k dataset. This robust instruction collection helped the model effectively grasp the nuances of the TypeScript ecosystem.

🚀 Usage & Inference

The model is provided in GGUF format, compatible with llama.cpp.

Example usage:

  • Standard Text Chat: `llama-cli -hf MassivDash/Gemma-4-typescript-coder --jinja`
  • Vision/Image Tasks: `llama-mtmd-cli -hf MassivDash/Gemma-4-typescript-coder --jinja`
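
The same GGUF can also be served over HTTP with `llama-server` (llama.cpp's built-in server) and queried from TypeScript. The sketch below is illustrative only: it assumes the server was started locally (for example with `llama-server -hf MassivDash/Gemma-4-typescript-coder --jinja`) and is listening on the default port 8080, using its OpenAI-compatible chat completions endpoint.

```typescript
// Minimal sketch: call a locally running llama-server from TypeScript.
// Assumes the server is already running on the default port 8080, e.g.:
//   llama-server -hf MassivDash/Gemma-4-typescript-coder --jinja

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

async function askModel(prompt: string): Promise<string> {
  const messages: ChatMessage[] = [
    { role: "system", content: "You are a TypeScript coding assistant." },
    { role: "user", content: prompt },
  ];

  const response = await fetch("http://localhost:8080/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages, temperature: 0.2 }),
  });

  if (!response.ok) {
    throw new Error(`Server returned ${response.status}`);
  }

  const data = await response.json();
  // OpenAI-compatible response shape: choices[0].message.content
  return data.choices[0].message.content as string;
}

askModel("Write a type-safe debounce function in TypeScript.").then(console.log);
```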

📂 Available Model Files

  • gemma-4-e2b-it.Q8_0.gguf
  • gemma-4-e2b-it.BF16-mmproj.gguf

⚠️ Ollama Note for Vision Models

Important: Ollama currently requires a unified blob for vision models.

To use this with Ollama:

  1. Ensure your Modelfile is in the same directory as the merged BF16 model (a minimal example is sketched below).
  2. Run: `ollama create model_name -f ./Modelfile`
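
For reference, a minimal Modelfile might look like the sketch below. The filename after `FROM` is a placeholder; point it at whatever name your merged BF16 GGUF actually has.

```
# Minimal Modelfile sketch for Ollama (the filename below is a placeholder).
FROM ./gemma-4-e2b-it.BF16-merged.gguf

# Optional generation settings.
PARAMETER temperature 0.2
```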

🔗 Stay Connected

For more insights on AI development and fine-tuning, visit my blog: 👉 spaceout.pl


This model was trained 2x faster with Unsloth
