hf_text-generation-inference/server/text_generation_server/layers/gptq
Daniël de Kok 34f7dcfd80
Handle GPTQ-Marlin loading in `GPTQMarlinWeightLoader` (#2300)
The `GPTQWeightsLoader` was structured like this in pseudocode:

if marlin:
    ...  # set up tensors in the layout GPTQ-Marlin expects
else:
    ...  # set up tensors in the layout ExLlama/GPTQ/AWQ expect

However, the GPTQ-Marlin implementation details really belong in the
`marlin` module, so move the former branch out to a separate
`GPTQMarlinWeightsLoader`.
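
A minimal sketch of the resulting split. The two loader class names come
from this commit; the `get_weights` method and its signature are
assumptions for illustration, not necessarily the repository's actual
`WeightsLoader` interface:

class GPTQWeightsLoader:
    # After the split, this loader only handles the
    # ExLlama/GPTQ/AWQ tensor layout; the marlin branch is gone.
    def get_weights(self, weights, prefix):
        ...  # load qweight/qzeros/scales as ExLlama/GPTQ/AWQ expect


class GPTQMarlinWeightsLoader:
    # Moved next to the other Marlin code in the `marlin` module;
    # owns the tensor setup from the former `if marlin:` branch.
    def get_weights(self, weights, prefix):
        ...  # set up (repacked) tensors as GPTQ-Marlin expects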
2024-07-31 13:08:41 +02:00
__init__.py Handle GPTQ-Marlin loading in `GPTQMarlinWeightLoader` (#2300) 2024-07-31 13:08:41 +02:00
custom_autotune.py Some small fixes for the Torch 2.4.0 update (#2304) 2024-07-25 13:34:44 +02:00
exllama.py Fix GPTQWeight import (#2020) 2024-06-05 14:49:15 +02:00
exllamav2.py feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) 2024-07-20 19:02:04 +02:00
quant_linear.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
quantize.py server quantize: store quantizer config in standard format (#2299) 2024-07-30 15:16:20 +02:00
utils.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00