Transformers documentation

Candle


Candle is a machine learning framework that provides native Rust implementations of Transformers models. It supports safetensors natively, so Transformers checkpoints can be loaded directly.

// load the model config
let config: Config =
    serde_json::from_reader(std::fs::File::open(config_filename)?)?;

// load the safetensors files and memory-map them
let vb = unsafe {
    VarBuilder::from_mmaped_safetensors(&filenames, dtype, &device)?
};

// materialize tensors from the VarBuilder into the model struct
let model = Model::new(args.use_flash_attn, &config, vb)?;
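The safetensors layout that `from_mmaped_safetensors` memory-maps is simple: an 8-byte little-endian header length, a JSON header describing each tensor's dtype, shape, and byte offsets, then the raw tensor bytes. The following std-only sketch builds and re-parses such a buffer by hand; `build_file` and `parse_file` are invented helper names for illustration, not part of Candle or the safetensors crate.

```rust
// Build a minimal safetensors-style buffer:
// [8-byte LE header length][JSON header][raw tensor bytes].
fn build_file(header: &[u8], data: &[f32]) -> Vec<u8> {
    let mut file = Vec::new();
    file.extend_from_slice(&(header.len() as u64).to_le_bytes());
    file.extend_from_slice(header);
    for v in data {
        file.extend_from_slice(&v.to_le_bytes());
    }
    file
}

// Split such a buffer back into (json_header, tensor_bytes),
// mirroring what a loader reads before mmap-ing the data section.
fn parse_file(file: &[u8]) -> (&str, &[u8]) {
    let header_len = u64::from_le_bytes(file[..8].try_into().unwrap()) as usize;
    let json = std::str::from_utf8(&file[8..8 + header_len]).unwrap();
    (json, &file[8 + header_len..])
}

fn main() {
    // Hand-built file holding one 2-element f32 tensor named "weight".
    let header = br#"{"weight":{"dtype":"F32","shape":[2],"data_offsets":[0,8]}}"#;
    let file = build_file(header, &[1.0, 2.0]);

    let (json, tensor_bytes) = parse_file(&file);
    let first = f32::from_le_bytes(tensor_bytes[..4].try_into().unwrap());
    println!("header: {json}");
    println!("first element: {first}");
}
```

Because only the small JSON header needs to be parsed up front, the tensor data itself can stay on disk and be memory-mapped lazily, which is what makes `from_mmaped_safetensors` cheap even for large checkpoints.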

Transformers integration

  1. The hf-hub crate checks your local Hugging Face cache for a model. If it isn’t there, it downloads model weights and configs from the Hub.
  2. VarBuilder lazily loads the safetensor files. It maps state-dict key names to Rust structs representing model layers. This mirrors how Transformers organizes its weights.
  3. Candle parses config.json to extract the model metadata and instantiates the matching Rust model struct with the weights from VarBuilder.
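Step 2's key mapping can be sketched with a toy prefix-tracking builder. `ToyVarBuilder` below is an invented stand-in, not Candle's API: the real VarBuilder returns tensors on a device, but its `pp` method composes dotted state-dict prefixes in essentially this way.

```rust
use std::collections::HashMap;

// Toy stand-in for Candle's VarBuilder: holds a flat state-dict-style
// map plus the current dotted prefix. `pp` descends into a sub-module;
// `get` resolves a parameter name against that prefix.
#[derive(Clone)]
struct ToyVarBuilder<'a> {
    tensors: &'a HashMap<String, Vec<f32>>,
    prefix: String,
}

impl<'a> ToyVarBuilder<'a> {
    fn new(tensors: &'a HashMap<String, Vec<f32>>) -> Self {
        Self { tensors, prefix: String::new() }
    }

    // Push a prefix component, e.g. "encoder" then "layer0".
    fn pp(&self, name: &str) -> Self {
        let prefix = if self.prefix.is_empty() {
            name.to_string()
        } else {
            format!("{}.{}", self.prefix, name)
        };
        Self { tensors: self.tensors, prefix }
    }

    // Look up "<prefix>.<name>" in the flat tensor map.
    fn get(&self, name: &str) -> Option<&Vec<f32>> {
        self.tensors.get(&format!("{}.{}", self.prefix, name))
    }
}

fn main() {
    // Flat keys as they appear in a safetensors header ("encoder",
    // "layer0", and "weight" are hypothetical names for illustration).
    let mut tensors = HashMap::new();
    tensors.insert("encoder.layer0.weight".to_string(), vec![0.1f32, 0.2]);

    let layer_vb = ToyVarBuilder::new(&tensors).pp("encoder").pp("layer0");
    println!("found weight: {}", layer_vb.get("weight").is_some());
}
```

Because each sub-module only ever sees its own prefixed view, the Rust structs for individual layers can load their parameters without knowing where they sit in the overall model hierarchy.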
