hf_text-generation-inference/server/marlin/marlin_kernels
Latest commit: cb150eb295 by Daniël de Kok (2024-07-11 16:03:26 +02:00)

Add support for FP8 on compute capability >=8.0, <8.9 (#2213)
Use FP8 GPTQ-Marlin kernels to enable FP8 support on CUDA GPUs with compute capability >=8.0 and <8.9.

Co-authored-by: Florian Zimmermeister <flozi00.fz@gmail.com>
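The gating rule this commit describes (FP8 Marlin kernels only for compute capability >=8.0 and <8.9) can be sketched as a small predicate. This is an illustrative helper, not TGI's actual dispatch code; the function name and `(major, minor)` tuple interface are assumptions:

```python
def supports_fp8_marlin(capability: tuple[int, int]) -> bool:
    """Return True if a GPU with the given (major, minor) CUDA compute
    capability falls in the range targeted by this commit: >=8.0, <8.9.

    Hypothetical helper for illustration only; TGI's real kernel
    selection logic lives in the server code, not here.
    """
    major, minor = capability
    cc = major * 10 + minor  # e.g. (8, 6) -> 86
    return 80 <= cc < 89
```

Under this rule an A100 (8.0) or RTX 3090 (8.6) would take the FP8 GPTQ-Marlin path, while an L4 (8.9) or H100 (9.0) falls outside the range, since those architectures have native FP8 support.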
sparse                  Add support for Marlin 2:4 sparsity (#2102)                    2024-06-25 21:09:42 +02:00
__init__.pyi            Add support for FP8 on compute capability >=8.0, <8.9 (#2213)  2024-07-11 16:03:26 +02:00
ext.cpp                 Add support for FP8 on compute capability >=8.0, <8.9 (#2213)  2024-07-11 16:03:26 +02:00
ext.hh                  Add support for FP8 on compute capability >=8.0, <8.9 (#2213)  2024-07-11 16:03:26 +02:00
fp8_marlin.cu           Add support for FP8 on compute capability >=8.0, <8.9 (#2213)  2024-07-11 16:03:26 +02:00
gptq_marlin.cu          Add support for GPTQ Marlin (#2052)                            2024-06-14 09:45:42 +02:00
gptq_marlin.cuh         Add support for GPTQ Marlin (#2052)                            2024-06-14 09:45:42 +02:00
gptq_marlin_dtypes.cuh  Add support for GPTQ Marlin (#2052)                            2024-06-14 09:45:42 +02:00
gptq_marlin_repack.cu   Add support for GPTQ Marlin (#2052)                            2024-06-14 09:45:42 +02:00
marlin_cuda_kernel.cu   Add support for GPTQ Marlin (#2052)                            2024-06-14 09:45:42 +02:00
py.typed                Add support for GPTQ Marlin (#2052)                            2024-06-14 09:45:42 +02:00