hf_text-generation-inference/server/marlin/marlin_kernels
Daniël de Kok 093a27c528
Add support for GPTQ Marlin (#2052)
Add support for GPTQ Marlin kernels

GPTQ Marlin extends the Marlin kernels to support common GPTQ
configurations:

- bits: 4 or 8
- groupsize: -1, 32, 64, or 128
- desc_act: true/false

Using the GPTQ Marlin kernels requires repacking the parameters in the
Marlin quantizer format.
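A minimal sketch of a compatibility check for the configurations listed above, assuming hypothetical function and constant names (this is not the actual text-generation-inference or vLLM API):

```python
# Illustrative sketch: decide whether a GPTQ checkpoint's configuration
# falls within the ranges the GPTQ Marlin kernels support, per the list
# above. All names here are assumptions for illustration only.

SUPPORTED_BITS = {4, 8}
SUPPORTED_GROUPSIZES = {-1, 32, 64, 128}  # -1 means per-column (no groups)

def is_marlin_compatible(bits: int, groupsize: int, desc_act: bool) -> bool:
    """Return True if this GPTQ config could be repacked for Marlin.

    desc_act (activation reordering) is accepted either way, since both
    true and false are listed as supported.
    """
    return bits in SUPPORTED_BITS and groupsize in SUPPORTED_GROUPSIZES
```

A config such as `bits=4, groupsize=128, desc_act=True` would pass this check, while `bits=3` or `groupsize=16` would not.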

The kernels were contributed by Neural Magic to vLLM. We vendor them
here for convenience.
2024-06-14 09:45:42 +02:00
__init__.pyi
ext.cpp
ext.hh
gptq_marlin.cu
gptq_marlin.cuh
gptq_marlin_dtypes.cuh
gptq_marlin_repack.cu
marlin_cuda_kernel.cu
py.typed