Kernels CLI Reference
The kernels CLI provides commands for managing compute kernels.
Commands
| Command | Description |
|---|---|
| upload | Upload kernels to the Hub |
| benchmark | Run benchmarks for a kernel |
| check | Check a kernel for compliance |
| versions | Show kernel versions |
| lock | Lock kernel revisions |
| download | Download locked kernels |
| skills | Add skills for AI coding assistants |
Quick Start
For building and writing kernels, please refer to the building kernels and writing kernels guides.
Use kernels in your project
Directly from the Hub
```python
import torch
from kernels import get_kernel

# Download optimized kernels from the Hugging Face Hub
my_kernel = get_kernel("my-username/my-kernel", version=1)

# Random tensor
x = torch.randn((10, 10), dtype=torch.float16, device="cuda")

# Run the kernel
y = torch.empty_like(x)
my_kernel.my_kernel_function(y, x)
print(y)
```

or
Locked and downloaded
Add to `pyproject.toml`:

```toml
[tool.kernels.dependencies]
"my-username/my-kernel" = "1"
```

Then lock and download:

```bash
kernels lock .
kernels download .
```
See help
```bash
kernels --help
```