hf_text-generation-inference/server/text_generation_server/layers/gptq
Daniël de Kok 2ce8019480
Use GPTQ-Marlin for supported GPTQ configurations (#2111)
GPTQ-Marlin is currently the best-performing kernel for GPTQ models. So
let's use it by default if the kernels are installed, the GPU supports
it, and the kernels support the configuration.
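As a rough illustration of that dispatch, here is a minimal sketch of the three checks (kernels installed, GPU capable, configuration supported). The `marlin_kernels` module name and the exact constraint set are assumptions for illustration, not the code living in this directory:

```python
# Hedged sketch of GPTQ-Marlin eligibility; module name and constraints are assumed.
import importlib.util
from dataclasses import dataclass

import torch


@dataclass
class GPTQParams:
    bits: int
    groupsize: int
    sym: bool


def can_use_gptq_marlin(params: GPTQParams) -> bool:
    """Return True when the GPTQ-Marlin kernel could serve this configuration."""
    # The kernels must be installed.
    if importlib.util.find_spec("marlin_kernels") is None:
        return False
    # The GPU must support the kernel (assumed: compute capability >= 8.0).
    if not torch.cuda.is_available():
        return False
    major, _minor = torch.cuda.get_device_capability()
    if major < 8:
        return False
    # The kernel must support the configuration (assumed: 4- or 8-bit weights,
    # a supported group size, and symmetric quantization only).
    return (
        params.bits in (4, 8)
        and params.groupsize in (-1, 32, 64, 128)
        and params.sym
    )
```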

For models generated by `text-generation-server quantize`, use
`sym=False`. This subcommand has performed asymmetric quantization since
the beginning, and incorrectly reporting the model as symmetric would
select GPTQ-Marlin (which does not support asymmetric quantization).
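To make that concrete, here is a hedged sketch of recording `sym=False` in the quantization config so the loader does not route an asymmetric checkpoint to GPTQ-Marlin. The file name and field names follow the common GPTQ config convention and are an assumption, not necessarily exactly what this repository writes:

```python
# Illustrative only: persist sym=False so downstream loading treats the
# checkpoint as asymmetric and falls back to a kernel that supports it.
import json

quantize_config = {
    "bits": 4,
    "group_size": 128,
    "desc_act": False,
    # The quantize subcommand produces asymmetric quantization, so report it as such.
    "sym": False,
    "quant_method": "gptq",
}

with open("quantize_config.json", "w") as f:
    json.dump(quantize_config, f, indent=2)
```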
2024-07-01 12:59:12 +02:00
__init__.py Use GPTQ-Marlin for supported GPTQ configurations (#2111) 2024-07-01 12:59:12 +02:00
custom_autotune.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
exllama.py Fix GPTQWeight import (#2020) 2024-06-05 14:49:15 +02:00
exllamav2.py Do not initialize scratch space when there are no ExLlamaV2 layers (#2015) 2024-06-05 10:45:47 +02:00
quant_linear.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
quantize.py Fix `text-generation-server quantize` (#2103) 2024-06-21 15:28:51 +02:00