vLLM
vLLM is a high-throughput inference engine for serving LLMs at scale. It continuously batches requests and keeps KV cache memory compact with PagedAttention.
Set model_impl="transformers" to load a model using the Transformers modeling backend.
from vllm import LLM
llm = LLM(model="meta-llama/Llama-3.2-1B", model_impl="transformers")
outputs = llm.generate(["The capital of France is"])
print(outputs[0].outputs[0].text)
Pass --model-impl transformers to the vllm serve command for online serving.
vllm serve meta-llama/Llama-3.2-1B \
--task generate \
--model-impl transformers
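The server exposes an OpenAI-compatible API, so any OpenAI client can query it. Below is a minimal sketch, assuming the server above is running locally on vLLM's default port (8000) and the openai Python package is installed; api_key="EMPTY" is a placeholder since no key was configured.
from openai import OpenAI

# Point the client at the local vLLM server started with `vllm serve`.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="meta-llama/Llama-3.2-1B",
    prompt="The capital of France is",
    max_tokens=32,
)
print(completion.choices[0].text)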
Transformers integration
Loading a model goes through the following steps (a simplified sketch follows this list).
- AutoConfig.from_pretrained() loads the model’s config.json from the Hub or your Hugging Face cache. vLLM checks the architectures field against its internal model registry to determine which vLLM model class to use.
- If the model isn’t in the registry, vLLM calls AutoModel.from_config() to load the Transformers model implementation instead.
- AutoTokenizer.from_pretrained() loads the tokenizer files. vLLM caches some tokenizer internals to reduce overhead during inference.
- Model weights are downloaded from the Hub in safetensors format.
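The fallback path above maps onto ordinary Transformers calls. The following is an illustrative sketch of that flow, not vLLM's internal code; the model ID is only an example.
from transformers import AutoConfig, AutoModel, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B"  # example model

# 1. Load config.json and read the architectures field that vLLM checks
#    against its model registry.
config = AutoConfig.from_pretrained(model_id)
print(config.architectures)  # e.g. ["LlamaForCausalLM"]

# 2. If the architecture is not in the registry, build the model from the
#    Transformers implementation (weights are loaded separately from safetensors).
model = AutoModel.from_config(config)

# 3. Load the tokenizer files.
tokenizer = AutoTokenizer.from_pretrained(model_id)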
Setting model_impl="transformers" bypasses the vLLM model registry and loads directly from Transformers. vLLM replaces most model modules (MoE, attention, linear layers) with its own optimized versions while keeping the Transformers model structure.
Resources
- vLLM docs for more usage examples and tips.
- Integration with Hugging Face explains how vLLM integrates with Transformers.