hf_text-generation-inference/server/text_generation_server/layers/marlin
Latest commit 46a5a7e73e by Daniël de Kok, 2024-11-20 18:25:23 +01:00:
Add support for wNa16 int 2:4 compressed-tensors checkpoints (#2758)
This change adds support for wNa16 int checkpoints with 2:4 sparsity using Marlin 2:4 kernels.
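For context on the commit description above: "wNa16" refers to N-bit integer weights with 16-bit activations, and "2:4 sparsity" means that at most 2 of every 4 consecutive weights are nonzero, the structured pattern that sparse Marlin (2:4) kernels exploit. The sketch below is only a conceptual illustration of those two ideas, not TGI's checkpoint format or the interface of marlin.py; the helper names (prune_2_4, pack_int4), the 4-bit width, and the nibble packing order are assumptions made for this example.

```python
# Conceptual sketch only (hypothetical helpers, not TGI code).
import numpy as np


def prune_2_4(weights: np.ndarray) -> np.ndarray:
    """Keep the 2 largest-magnitude values in every group of 4 weights and
    zero the rest: the 2:4 structured-sparsity pattern."""
    rows, cols = weights.shape
    assert cols % 4 == 0, "2:4 sparsity is defined over groups of 4 weights"
    groups = weights.reshape(rows, cols // 4, 4)
    keep = np.argsort(np.abs(groups), axis=-1)[..., 2:]  # 2 largest per group
    mask = np.zeros_like(groups, dtype=bool)
    np.put_along_axis(mask, keep, True, axis=-1)
    return np.where(mask, groups, 0.0).reshape(rows, cols)


def pack_int4(q: np.ndarray) -> np.ndarray:
    """Pack 8 unsigned 4-bit integers into one int32 word, the kind of dense
    packing GPTQ-style integer ("wNa16") checkpoints commonly use."""
    rows, cols = q.shape
    assert cols % 8 == 0, "8 nibbles per 32-bit word"
    nibbles = q.astype(np.uint32).reshape(rows, cols // 8, 8)
    shifts = np.arange(8, dtype=np.uint32) * 4
    return (nibbles << shifts).sum(axis=-1, dtype=np.uint32).view(np.int32)


if __name__ == "__main__":
    w = np.random.randn(4, 16).astype(np.float32)  # toy dense weight matrix
    sparse = prune_2_4(w)
    # Toy 4-bit quantization (zero point 8), purely for the demo.
    q = np.clip(np.round(sparse / np.abs(sparse).max() * 7) + 8, 0, 15)
    packed = pack_int4(q)
    print((sparse.reshape(4, 4, 4) != 0).sum(axis=-1))  # 2 nonzeros per group
    print(packed.shape)  # (4, 2): sixteen int4 values fit in two int32 words
```

Running it prints two nonzeros in every group of four, which is the structure a 2:4-sparse checkpoint is expected to satisfy before the sparse kernels can consume it.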
File         Last commit                                                           Last change
__init__.py  Handle GPTQ-Marlin loading in `GPTQMarlinWeightLoader` (#2300)        2024-07-31 13:08:41 +02:00
fp8.py       Fp8 e4m3_fnuz support for rocm (#2588)                                2024-10-16 09:54:50 +02:00
gptq.py      Add initial support for compressed-tensors checkpoints (#2732)        2024-11-10 13:54:07 +01:00
marlin.py    Add support for wNa16 int 2:4 compressed-tensors checkpoints (#2758)  2024-11-20 18:25:23 +01:00
util.py      Split up `layers.marlin` into several files (#2292)                   2024-07-24 16:33:26 +02:00