hf_text-generation-inference/server/marlin/marlin_kernels
Daniël de Kok 9935720c87
Add support for repacking AWQ weights for GPTQ-Marlin (#2278)
* Add support for repacking AWQ weights for GPTQ-Marlin

So far we couldn't support AWQ because virtually all AWQ models use
asymmetric quantization, which GPTQ-Marlin did not support. GPTQ-Marlin
has recently added support for AWQ repacking and AWQ asymmetric
quantization (zero_point=True).

This change updates all GPTQ-Marlin kernels from upstream and wires up
AWQ support. For now enabling AWQ using Marlin requires running TGI with
`--quantize gptq`.
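
As a sketch of what the commit message describes, an AWQ checkpoint could be served through the GPTQ-Marlin path like this (the model id is illustrative, not from the source; `text-generation-launcher` is TGI's standard entry point):

```
# Hypothetical invocation: serve an AWQ-quantized model via the
# GPTQ-Marlin kernels by requesting GPTQ quantization, which triggers
# the AWQ -> GPTQ-Marlin weight repacking added in this change.
text-generation-launcher \
  --model-id TheBloke/Llama-2-7B-AWQ \
  --quantize gptq
```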

* Enable Marlin for supported AWQ configurations by default

This makes the dedicated AWQ -> GPTQ repack test redundant, since the
path is now exercised by the regular AWQ test.
2024-07-23 13:08:20 +02:00
sparse Add support for Marlin 2:4 sparsity (#2102) 2024-06-25 21:09:42 +02:00
__init__.pyi Add support for FP8 on compute capability >=8.0, <8.9 (#2213) 2024-07-11 16:03:26 +02:00
awq_marlin_repack.cu Add support for repacking AWQ weights for GPTQ-Marlin (#2278) 2024-07-23 13:08:20 +02:00
ext.cpp Add support for repacking AWQ weights for GPTQ-Marlin (#2278) 2024-07-23 13:08:20 +02:00
ext.hh Add support for repacking AWQ weights for GPTQ-Marlin (#2278) 2024-07-23 13:08:20 +02:00
fp8_marlin.cu Add support for repacking AWQ weights for GPTQ-Marlin (#2278) 2024-07-23 13:08:20 +02:00
gptq_marlin.cu Add support for repacking AWQ weights for GPTQ-Marlin (#2278) 2024-07-23 13:08:20 +02:00
gptq_marlin_repack.cu Add support for repacking AWQ weights for GPTQ-Marlin (#2278) 2024-07-23 13:08:20 +02:00
marlin.cuh Add support for repacking AWQ weights for GPTQ-Marlin (#2278) 2024-07-23 13:08:20 +02:00
marlin_cuda_kernel.cu Add support for repacking AWQ weights for GPTQ-Marlin (#2278) 2024-07-23 13:08:20 +02:00
marlin_dtypes.cuh Add support for repacking AWQ weights for GPTQ-Marlin (#2278) 2024-07-23 13:08:20 +02:00
py.typed Add support for GPTQ Marlin (#2052) 2024-06-14 09:45:42 +02:00