# How to use with llama.cpp

## Install with Homebrew

```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Codingstark/gemma3-270m-leetcode-gguf

# Run inference directly in the terminal:
llama-cli -hf Codingstark/gemma3-270m-leetcode-gguf
```
## Install with WinGet (Windows)

```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Codingstark/gemma3-270m-leetcode-gguf

# Run inference directly in the terminal:
llama-cli -hf Codingstark/gemma3-270m-leetcode-gguf
```
## Use a pre-built binary

```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Codingstark/gemma3-270m-leetcode-gguf

# Run inference directly in the terminal:
./llama-cli -hf Codingstark/gemma3-270m-leetcode-gguf
```
## Build from source code

```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Codingstark/gemma3-270m-leetcode-gguf

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Codingstark/gemma3-270m-leetcode-gguf
```
## Use Docker

```sh
docker model run hf.co/Codingstark/gemma3-270m-leetcode-gguf
```
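Once `llama-server` is running, any OpenAI-compatible client can talk to it over HTTP. Below is a minimal stdlib-only Python sketch that assembles a chat-completion request; the base URL assumes llama-server's default port (8080), and the `model` value is illustrative since the server serves whichever model it was started with:

```python
import json
import urllib.request


def build_chat_request(prompt: str, base_url: str = "http://localhost:8080"):
    """Build an OpenAI-compatible chat completion request for a local llama-server."""
    payload = {
        # Illustrative name; llama-server serves the model it was launched with.
        "model": "gemma3-270m-leetcode-gguf",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return req, payload


req, payload = build_chat_request("Write a Python solution for Two Sum.")
# To actually send it (requires a running llama-server):
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

The same request shape works against any of the install methods above, since they all expose the same OpenAI-compatible endpoint.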

# gemma3-270m-leetcode-gguf

- Original model: Codingstark/gemma3-270m-leetcode
- Format: GGUF
- Quantization: bf16

This is a GGUF conversion of the Codingstark/gemma3-270m-leetcode model, ready for use with GGUF-compatible inference engines such as LM Studio, Ollama, and llama.cpp.

## Usage

Load this model in any GGUF-compatible application by referencing the `.gguf` file.
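GGUF-compatible applications recognize the format by its fixed binary header: the ASCII magic `GGUF`, a little-endian version number, and the tensor and metadata-KV counts. As a sanity check before loading, the header can be inspected with a few lines of stdlib Python:

```python
import struct


def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed 24-byte GGUF header: magic, version, tensor count, KV count."""
    if len(data) < 24 or data[:4] != b"GGUF":
        raise ValueError("not a GGUF file")
    # Little-endian: uint32 version, uint64 tensor count, uint64 metadata KV count.
    version, tensor_count, kv_count = struct.unpack("<IQQ", data[4:24])
    return {
        "version": version,
        "tensor_count": tensor_count,
        "metadata_kv_count": kv_count,
    }


# Usage:
# with open("gemma3-270m-leetcode-bf16.gguf", "rb") as f:
#     print(read_gguf_header(f.read(24)))
```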

## Model Details

- Original repository: Codingstark/gemma3-270m-leetcode
- Converted format: GGUF
- Quantization level: bf16
- Compatible with: LM Studio, Ollama, llama.cpp, and other GGUF inference engines

## Conversion Process

This model was converted using the llama.cpp conversion scripts with the following settings:

- Input format: Hugging Face Transformers
- Output format: GGUF
- Quantization: bf16
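A conversion like this is typically driven by llama.cpp's `convert_hf_to_gguf.py` script. The sketch below only assembles the command line; the paths are placeholders and this is an assumed reconstruction, not the exact invocation used for this repo:

```python
# Sketch: build the argv for llama.cpp's HF -> GGUF conversion script.
# All paths below are illustrative placeholders.
def conversion_command(model_dir: str, outfile: str, outtype: str = "bf16") -> list:
    return [
        "python", "convert_hf_to_gguf.py",
        model_dir,              # local checkout of the original HF model
        "--outfile", outfile,   # e.g. gemma3-270m-leetcode-bf16.gguf
        "--outtype", outtype,   # keep weights in bf16, matching this repo
    ]


cmd = conversion_command("./gemma3-270m-leetcode", "gemma3-270m-leetcode-bf16.gguf")
# Run with subprocess.run(cmd, check=True) from inside a llama.cpp checkout.
```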

## License

Please refer to the original model's license terms.

Model size: 0.3B parameters (architecture: gemma3)