hf_text-generation-inference/server/marlin/marlin_kernels
Daniël de Kok · f1f98e369f · Add support for Marlin 2:4 sparsity (#2102)
This change adds support for 2:4 sparsity when using Marlin
quantization. The 2:4 kernel is used when:

* the quantizer is `marlin`, and
* the checkpoint format is `marlin_24`.

Fixes #2098.
2024-06-25 21:09:42 +02:00
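The selection rule above can be sketched as a small predicate. This is a minimal illustration of the two conditions named in the commit message, not TGI's actual code; the function and parameter names are hypothetical.

```python
# Hypothetical sketch of the kernel-selection rule from the commit message:
# the 2:4 sparse Marlin kernel applies only when the quantizer is `marlin`
# and the checkpoint format is `marlin_24`.

def use_marlin_24_kernel(quantize: str, checkpoint_format: str) -> bool:
    """Return True when the 2:4 sparse Marlin kernel should be used."""
    return quantize == "marlin" and checkpoint_format == "marlin_24"
```

Any other combination (e.g. a `marlin` quantizer with a dense checkpoint, or a different quantizer entirely) falls back to the other kernels in this directory.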
| Name | Last commit | Date |
| --- | --- | --- |
| sparse/ | Add support for Marlin 2:4 sparsity (#2102) | 2024-06-25 21:09:42 +02:00 |
| __init__.pyi | Add support for Marlin 2:4 sparsity (#2102) | 2024-06-25 21:09:42 +02:00 |
| ext.cpp | Add support for Marlin 2:4 sparsity (#2102) | 2024-06-25 21:09:42 +02:00 |
| ext.hh | Add support for Marlin 2:4 sparsity (#2102) | 2024-06-25 21:09:42 +02:00 |
| gptq_marlin.cu | Add support for GPTQ Marlin (#2052) | 2024-06-14 09:45:42 +02:00 |
| gptq_marlin.cuh | Add support for GPTQ Marlin (#2052) | 2024-06-14 09:45:42 +02:00 |
| gptq_marlin_dtypes.cuh | Add support for GPTQ Marlin (#2052) | 2024-06-14 09:45:42 +02:00 |
| gptq_marlin_repack.cu | Add support for GPTQ Marlin (#2052) | 2024-06-14 09:45:42 +02:00 |
| marlin_cuda_kernel.cu | Add support for GPTQ Marlin (#2052) | 2024-06-14 09:45:42 +02:00 |
| py.typed | Add support for GPTQ Marlin (#2052) | 2024-06-14 09:45:42 +02:00 |