hf_text-generation-inference/server/marlin
Daniël de Kok f1f98e369f
Add support for Marlin 2:4 sparsity (#2102)
This change adds support for 2:4 sparsity when using Marlin
quantization. The 2:4 kernel is used when:

* the quantizer is `marlin`;
* the quantizer checkpoint format is `marlin_24`.

Fixes #2098.
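
The kernel-selection rule above can be sketched as a small predicate. This is an illustrative helper, not TGI's actual dispatch code; the function and parameter names are hypothetical.

```python
def use_marlin_24_kernel(quantize: str, checkpoint_format: str) -> bool:
    """Return True when the 2:4 sparse Marlin kernel should be used.

    Hypothetical sketch of the dispatch described in the commit message:
    the 2:4 kernel applies only when Marlin quantization is requested
    AND the checkpoint was exported in the `marlin_24` format.
    """
    return quantize == "marlin" and checkpoint_format == "marlin_24"
```

Dense Marlin checkpoints (format other than `marlin_24`) keep using the regular Marlin kernel under this rule.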
2024-06-25 21:09:42 +02:00
| File | Last commit | Date |
| --- | --- | --- |
| marlin_kernels | Add support for Marlin 2:4 sparsity (#2102) | 2024-06-25 21:09:42 +02:00 |
| COPYRIGHT | Add support for GPTQ Marlin (#2052) | 2024-06-14 09:45:42 +02:00 |
| setup.py | Add support for Marlin 2:4 sparsity (#2102) | 2024-06-25 21:09:42 +02:00 |