vLLM

vLLM is a high-throughput inference engine for serving LLMs at scale. It continuously batches requests and keeps KV cache memory compact with PagedAttention.

Set model_impl="transformers" to load a model using the Transformers modeling backend.

from vllm import LLM

# model_impl="transformers" selects the Transformers modeling backend
llm = LLM(model="meta-llama/Llama-3.2-1B", model_impl="transformers")

outputs = llm.generate(["The capital of France is"])
print(outputs[0].outputs[0].text)

Pass --model-impl transformers to the vllm serve command for online serving.

vllm serve meta-llama/Llama-3.2-1B \
    --task generate \
    --model-impl transformers
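
The server exposes an OpenAI-compatible API, by default at http://localhost:8000. The snippet below is a minimal sketch of querying the completions endpoint with the requests library; the port and the max_tokens value are assumptions for illustration.

import requests

# `vllm serve` speaks the OpenAI-compatible API; localhost:8000 is the default address.
response = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "meta-llama/Llama-3.2-1B",
        "prompt": "The capital of France is",
        "max_tokens": 16,  # illustrative value
    },
)
print(response.json()["choices"][0]["text"])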

Transformers integration

  1. AutoConfig.from_pretrained() loads the model’s config.json from the Hub or your Hugging Face cache. vLLM checks the architectures field against its internal model registry to determine which vLLM model class to use (see the sketch after this list).
  2. If the model isn’t in the registry, vLLM falls back to AutoModel.from_config() to load the Transformers model implementation instead.
  3. AutoTokenizer.from_pretrained() loads the tokenizer files. vLLM caches some tokenizer internals to reduce overhead during inference.
  4. Model weights are downloaded from the Hub in safetensors format.
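
The first and third steps use standard Transformers entry points, so you can reproduce them directly to see what vLLM inspects. This is a minimal sketch; the model name is only an example, and the registry check itself happens inside vLLM.

from transformers import AutoConfig, AutoTokenizer

# The architectures field of config.json is what vLLM matches against
# its internal model registry.
config = AutoConfig.from_pretrained("meta-llama/Llama-3.2-1B")
print(config.architectures)  # e.g. ["LlamaForCausalLM"]

# The tokenizer is loaded the same way before vLLM caches its internals.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")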

Setting model_impl="transformers" skips the registry check and loads the model implementation directly from Transformers. vLLM still replaces most of the model's modules (MoE, attention, and linear layers) with its own optimized versions while keeping the Transformers model structure.

Resources
