hf_text-generation-inference/server/marlin
Daniël de Kok 093a27c528
Add support for GPTQ Marlin (#2052)
Add support for GPTQ Marlin kernels

GPTQ Marlin extends the Marlin kernels to support common GPTQ
configurations:

- bits: 4 or 8
- groupsize: -1, 32, 64, or 128
- desc_act: true/false

Using the GPTQ Marlin kernels requires repacking the parameters into the
Marlin quantizer format.
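
The supported configurations above can be summarized in a small check. This is an illustrative sketch, not the actual TGI API; the function name and structure are hypothetical, but the supported values come from the commit message:

```python
# Hypothetical helper (not the actual TGI API): checks whether a GPTQ
# configuration falls within the ranges the GPTQ Marlin kernels support.

SUPPORTED_BITS = {4, 8}
SUPPORTED_GROUPSIZES = {-1, 32, 64, 128}  # -1 means per-column quantization


def gptq_marlin_compatible(bits: int, groupsize: int, desc_act: bool) -> bool:
    """Return True if this GPTQ config can use the Marlin kernels.

    desc_act does not constrain compatibility: both true and false
    are supported.
    """
    return bits in SUPPORTED_BITS and groupsize in SUPPORTED_GROUPSIZES


print(gptq_marlin_compatible(4, 128, True))   # a common 4-bit GPTQ config
print(gptq_marlin_compatible(3, 128, False))  # 3-bit is not supported
```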

The kernels were contributed by Neural Magic to vLLM. We vendor them
here for convenience.
2024-06-14 09:45:42 +02:00
Directory contents (all last touched by "Add support for GPTQ Marlin (#2052)", 2024-06-14 09:45:42 +02:00):

- marlin_kernels
- COPYRIGHT
- setup.py