hf_text-generation-inference/server/text_generation_server/models/custom_modeling
Daniël de Kok 093a27c528
Add support for GPTQ Marlin (#2052)
Add support for GPTQ Marlin kernels

GPTQ Marlin extends the Marlin kernels to support common GPTQ
configurations:

- bits: 4 or 8
- groupsize: -1, 32, 64, or 128
- desc_act: true/false

Using the GPTQ Marlin kernels requires repacking the parameters in the
Marlin quantizer format.

The kernels were contributed by Neural Magic to vLLM. We vendor them
here for convenience.
2024-06-14 09:45:42 +02:00
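The supported-configuration rules above can be sketched as a small compatibility check. This is an illustrative sketch, not TGI's actual API: the function name `is_gptq_marlin_compatible` is hypothetical, and it simply encodes the bits/groupsize/desc_act values listed in the commit message.

```python
def is_gptq_marlin_compatible(bits: int, groupsize: int, desc_act: bool) -> bool:
    """Hypothetical helper: check whether a GPTQ config falls within the
    configurations the GPTQ Marlin kernels support (per the commit above).

    desc_act is accepted with either value, so it does not constrain the
    result; it is kept as a parameter only to mirror the GPTQ config shape.
    """
    return bits in (4, 8) and groupsize in (-1, 32, 64, 128)
```

For example, a 4-bit model with groupsize 128 is compatible, while a 3-bit model is not; incompatible models would still need the non-Marlin GPTQ path.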
__init__.py feat(server): flash santacoder (#153) 2023-04-03 19:06:42 +02:00
bloom_modeling.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
clip.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
flash_cohere_modeling.py Add support for GPTQ Marlin (#2052) 2024-06-14 09:45:42 +02:00
flash_dbrx_modeling.py Add Phi-3 medium support (#2039) 2024-06-10 09:22:29 +02:00
flash_gemma_modeling.py Add support for Marlin-quantized models 2024-06-06 13:16:52 +02:00
flash_gpt2_modeling.py Add Phi-3 medium support (#2039) 2024-06-10 09:22:29 +02:00
flash_llama_modeling.py Add Phi-3 medium support (#2039) 2024-06-10 09:22:29 +02:00
flash_mistral_modeling.py Purely refactors paged/attention into `layers/attention` and make hardware differences more obvious with 1 file per hardware. (#1986) 2024-05-31 17:57:01 +02:00
flash_mixtral_modeling.py Add support for Marlin-quantized models 2024-06-06 13:16:52 +02:00
flash_neox_modeling.py feat: move allocation logic to rust (#1835) 2024-06-05 12:18:38 +02:00
flash_pali_gemma_modeling.py Pali gemma modeling (#1895) 2024-05-16 06:58:47 +02:00
flash_phi_modeling.py Add support for Marlin-quantized models 2024-06-06 13:16:52 +02:00
flash_qwen2_modeling.py Add support for Marlin-quantized models 2024-06-06 13:16:52 +02:00
flash_rw_modeling.py feat: move allocation logic to rust (#1835) 2024-06-05 12:18:38 +02:00
flash_santacoder_modeling.py Add support for GPTQ Marlin (#2052) 2024-06-14 09:45:42 +02:00
flash_starcoder2_modeling.py Add support for GPTQ Marlin (#2052) 2024-06-14 09:45:42 +02:00
idefics2.py MLPSpeculator. (#1865) 2024-05-14 12:33:18 +02:00
idefics_config.py chore: add pre-commit (#1569) 2024-02-16 11:58:58 +01:00
idefics_image_processing.py chore: formatting 2023-12-11 14:49:52 +01:00
idefics_modeling.py reenable xpu for tgi (#1939) 2024-05-23 14:11:08 +02:00
idefics_perceiver.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
idefics_processing.py chore: add pre-commit (#1569) 2024-02-16 11:58:58 +01:00
idefics_vision.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
llava_next.py MLPSpeculator. (#1865) 2024-05-14 12:33:18 +02:00
mamba_modeling.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
mpt_modeling.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
neox_modeling.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
opt_modeling.py fix(server): fix OPT implementation (#2061) 2024-06-12 18:22:20 +02:00
phi_modeling.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
siglip.py Removing some unused code. (#1915) 2024-05-17 11:35:49 +02:00
t5_modeling.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
vlm.py Pali gemma modeling (#1895) 2024-05-16 06:58:47 +02:00