RedHatAI/quantization
kernel · License: apache-2.0 · 1.38 GB · 6 likes · Red Hat AI (2.22k followers)
2 contributors · History: 36 commits
Latest commit: d26f884, "Sync on vLLM 20240402" by danieldk (HF Staff), about 1 year ago
Name                 Size       Last commit message                                   Last updated
build/                          Build (aarch64)                                       about 1 year ago
compressed_tensors/             Sync with vLLM                                        over 1 year ago
core/                           Sync with vLLM                                        over 1 year ago
cutlass_extensions/             Sync on vLLM 20240402                                 about 1 year ago
cutlass_w8a8/                   Sync on vLLM 20240402                                 about 1 year ago
fp8/                            Sync with vLLM                                        over 1 year ago
gptq_marlin/                    Sync with vLLM                                        over 1 year ago
marlin/                         Add full Marlin support and tests for Marlin/CUTLASS  over 1 year ago
tests/                          Add full Marlin support and tests for Marlin/CUTLASS  over 1 year ago
torch-ext/                      Sync on vLLM 20240402                                 about 1 year ago
.gitattributes       1.56 kB    Build                                                 over 1 year ago
LICENSE              11.4 kB    Add cutlass_w8a8                                      over 1 year ago
README.md            195 Bytes  Update README.md (#1)                                 about 1 year ago
build.toml           3.25 kB    Sync capabilities with upstream                       about 1 year ago
cuda_utils.h         1.41 kB    Sync on vLLM 20240402                                 about 1 year ago
dispatch_utils.h     1.49 kB    Add `scaled_(int|fp8)_quant` and `fp8_marlin_gemm`    over 1 year ago
flake.lock           3.03 kB    Update flake                                          about 1 year ago
flake.nix            335 Bytes  Add support for ROCm                                  about 1 year ago
utils.cuh            1.84 kB    Sync on vLLM 20240402                                 about 1 year ago
vectorization.cuh    778 Bytes  Sync with vLLM                                        over 1 year ago