hf_text-generation-inference/server/text_generation_server/layers/gptq
Daniël de Kok dbb23fbfa8
Use symmetric quantization in the `quantize` subcommand (#2120)
Packing of asymmetric quantization is broken: all (q)zeros values
of `0` get reset to `1`, resulting in a loss of accuracy. So we
use symmetric quantization instead. To be able to distinguish models with
symmetric and asymmetric quantization, a new config tensor `gptq_sym` is
added. If this tensor is not present, we assume `sym=False`.
2024-07-12 12:20:12 +02:00
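Since `gptq_sym` is stored as a tensor in the checkpoint rather than as a field in `config.json`, a loader can probe for it directly and fall back to `sym=False` when it is absent. Below is a minimal sketch of that fallback logic, assuming a safetensors checkpoint; the function name and file handling are illustrative, not the repository's actual loader code.

```python
# Minimal sketch (not the repository's loader): recover the quantization
# symmetry flag described in the commit message above. Only the tensor
# name `gptq_sym` comes from the commit; the rest is illustrative.
from safetensors import safe_open

def read_gptq_sym(checkpoint_path: str) -> bool:
    """Return True if the checkpoint was quantized symmetrically.

    Checkpoints produced before this change have no `gptq_sym` config
    tensor, so its absence means asymmetric quantization (`sym=False`).
    """
    with safe_open(checkpoint_path, framework="pt") as f:
        if "gptq_sym" not in f.keys():
            return False  # old checkpoint: assume asymmetric
        return bool(f.get_tensor("gptq_sym").item())
```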
| File | Last commit | Date |
|------|-------------|------|
| `__init__.py` | Use symmetric quantization in the `quantize` subcommand (#2120) | 2024-07-12 12:20:12 +02:00 |
| `custom_autotune.py` | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00 |
| `exllama.py` | Fix GPTQWeight import (#2020) | 2024-06-05 14:49:15 +02:00 |
| `exllamav2.py` | Do not initialize scratch space when there are no ExLlamaV2 layers (#2015) | 2024-06-05 10:45:47 +02:00 |
| `quant_linear.py` | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00 |
| `quantize.py` | Use symmetric quantization in the `quantize` subcommand (#2120) | 2024-07-12 12:20:12 +02:00 |