
llama.cpp

llama.cpp is a C/C++ inference engine for deploying large language models locally. It's lightweight and doesn't require Python, CUDA, or other heavy server infrastructure. llama.cpp uses the GGUF file format. GGUF supports quantized model weights and memory-mapping, which reduces the memory needed to load and run a model on your device.

Browse the Hub for models already available in GGUF format.
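You can also fetch an existing GGUF file programmatically with huggingface_hub. The sketch below is a minimal example; the repository is the one used later in this page, and the filename is a hypothetical assumption about that repository's layout, so list the repository files first if you are unsure.

# Minimal sketch: download a GGUF file from the Hub with huggingface_hub.
# The filename below is a hypothetical example, not a guaranteed repo file.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "ggml-org/gpt-oss-20b-GGUF"
print(list_repo_files(repo_id))  # check which .gguf files the repo actually contains

path = hf_hub_download(repo_id=repo_id, filename="gpt-oss-20b-mxfp4.gguf")  # hypothetical filename
print(path)  # local cache path, ready to pass to llama-cli or llama-server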

Convert any Transformers model to GGUF format with the convert_hf_to_gguf.py script.

python3 convert_hf_to_gguf.py ./models/openai/gpt-oss-20b \
  --outfile gpt-oss-20b.gguf
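To verify the result, you can inspect the converted file with the gguf Python package that ships alongside llama.cpp (pip install gguf). This is a minimal sketch under that assumption; it only lists tensor names, quantization types, and shapes.

# Minimal sketch, assuming the `gguf` Python package: open the converted file
# and list a few tensors with their quantization types. The reader memory-maps
# the file instead of loading all weights into RAM.
from gguf import GGUFReader

reader = GGUFReader("gpt-oss-20b.gguf")  # file produced by the conversion command above

for tensor in reader.tensors[:5]:
    # tensor_type reports the quantization scheme stored for each tensor
    print(tensor.name, tensor.tensor_type.name, list(tensor.shape))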

Deploy the model locally from the command line with llama-cli, or start a web UI and HTTP server with llama-server. Add the -hf flag to download and load a model directly from the Hub.

llama-cli -hf ggml-org/gpt-oss-20b-GGUF
llama-server -hf ggml-org/gpt-oss-20b-GGUF
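Once llama-server is running, it exposes an OpenAI-compatible HTTP API. The sketch below is a minimal client for that API, assuming the server was started with the command above and is listening on its default address (http://localhost:8080); the prompt and token limit are arbitrary.

# Minimal sketch: query a running llama-server instance through its
# OpenAI-compatible /v1/chat/completions endpoint (default port 8080 assumed).
import requests

response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Explain the GGUF file format in one sentence."}
        ],
        "max_tokens": 128,
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])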

Transformers integration

The convert_hf_to_gguf.py script relies on the Transformers library to read the source checkpoint before writing the GGUF file (a short sketch of the first two steps follows the list):
  1. AutoConfig.from_pretrained() loads the model’s config.json file to extract metadata.
  2. AutoTokenizer.from_pretrained() extracts the vocabulary and tokenizer configuration.
  3. Based on the architectures field in the config, the script selects a converter class from its internal registry. The registry maps Transformers architecture names (like LlamaForCausalLM) to corresponding converter classes.
  4. The converter maps Transformers tensor names (for example, model.layers.0.self_attn.q_proj.weight) to GGUF tensor names, transforms tensors, and packages the vocabulary.
  5. The output is a single GGUF file containing the model weights, tokenizer, and metadata.
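The first two steps map directly onto Transformers APIs. The sketch below loads a model's config and tokenizer the same way and reads the architectures field that the converter registry is keyed on; the local path is just an example mirroring the conversion command above.

# Minimal sketch of steps 1-2: load the config and tokenizer and inspect the
# `architectures` field used to select a converter class.
from transformers import AutoConfig, AutoTokenizer

model_id = "./models/openai/gpt-oss-20b"  # local checkout or a Hub model ID

config = AutoConfig.from_pretrained(model_id)        # reads config.json
tokenizer = AutoTokenizer.from_pretrained(model_id)  # reads the tokenizer files

print(config.architectures)  # e.g. ["LlamaForCausalLM"] for a Llama checkpoint
print(tokenizer.vocab_size)  # vocabulary that gets packaged into the GGUF file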
