hf_text-generation-inference/server/text_generation_server/layers
Daniël de Kok a785000842
Add initial support for compressed-tensors checkpoints (#2732)
compressed-tensors is a safetensors extension for sparse, quantized
tensors. The format is more expressive than the earlier AWQ/GPTQ/FP8
quantization formats because:

- Different quantizer configurations can be used for different targets.
- The format can specify input/output quantizers in addition to weight
  quantizers.
- Modules can be selectively excluded from quantization, as the sketch below illustrates.
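
To make the three points above concrete, here is a rough sketch of the
`quantization_config` section such a checkpoint might carry in its
`config.json`, shown as a Python dict. The field values are hypothetical;
check the compressed-tensors documentation for the authoritative schema.

```python
# Illustrative only: a compressed-tensors style "quantization_config".
quantization_config = {
    "quant_method": "compressed-tensors",
    "config_groups": {
        # Different quantizer configurations can target different layers.
        "group_0": {
            "targets": ["Linear"],  # apply to Linear modules
            "weights": {
                "num_bits": 8,
                "type": "float",      # FP8 weights (the "W8" part)
                "strategy": "tensor",
                "symmetric": True,
            },
            # An input quantizer in addition to the weight quantizer
            # (the "A8" part).
            "input_activations": {
                "num_bits": 8,
                "type": "float",
                "strategy": "tensor",
                "symmetric": True,
            },
        },
    },
    # Configurable exclusions: leave these modules unquantized.
    "ignore": ["lm_head"],
}
```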

This change adds a dependency on the `compressed-tensors` package for
its configuration parsing and layer matching functionality.
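
As a rough illustration of what "layer matching" means here, the following
is a minimal, hypothetical sketch rather than the compressed-tensors
package's real API: it resolves a module path against the `targets` and
`ignore` patterns of a config like the one above. The `re:` regex prefix
mirrors a convention the format uses; `scheme_for_layer` and `_matches`
are made-up names.

```python
import re
from typing import Optional

def _matches(pattern: str, layer_name: str, layer_type: str) -> bool:
    # Target patterns may name a module class ("Linear") or use a
    # "re:"-prefixed regex over the module path.
    if pattern.startswith("re:"):
        return re.match(pattern[len("re:"):], layer_name) is not None
    return pattern == layer_type or layer_name.endswith(pattern)

def scheme_for_layer(config: dict, layer_name: str, layer_type: str) -> Optional[dict]:
    # Exclusions win first.
    if any(_matches(p, layer_name, layer_type) for p in config.get("ignore", [])):
        return None
    # Otherwise, the first config group whose targets match claims the layer.
    for group in config["config_groups"].values():
        if any(_matches(p, layer_name, layer_type) for p in group["targets"]):
            return group
    return None  # unmatched layers stay unquantized

# e.g. scheme_for_layer(quantization_config,
#                       "model.layers.0.self_attn.q_proj", "Linear")
```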

The following types of quantization are supported in this PR:

- W8A16 and W4A16 INT using GPTQ-Marlin kernels.
- W8A8 and W8A16 FP using FP8-Marlin and CUTLASS kernels.

Support for other quantization types will be added in subsequent PRs.
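
For intuition about what "W8A8 FP" means numerically, here is a minimal,
illustrative sketch of per-tensor FP8 (e4m3) weight quantization with a
single dequantization scale. The PR itself routes this through the
FP8-Marlin and CUTLASS kernels; this only shows the scale arithmetic.

```python
import torch

def fp8_quantize(w: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    # Scale so that the largest magnitude maps to the FP8 e4m3 maximum.
    finfo = torch.finfo(torch.float8_e4m3fn)
    scale = w.abs().max().clamp(min=1e-12) / finfo.max
    q = (w / scale).clamp(finfo.min, finfo.max).to(torch.float8_e4m3fn)
    return q, scale  # dequantize with q.to(w.dtype) * scale
```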
2024-11-10 13:54:07 +01:00
attention Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
awq fix incorrect output of Qwen2-7B-Instruct-GPTQ-Int4 and Qwen2-7B-Inst… (#2717) 2024-11-04 16:07:51 +01:00
compressed_tensors Add initial support for compressed-tensors checkpoints (#2732) 2024-11-10 13:54:07 +01:00
gptq fix incorrect output of Qwen2-7B-Instruct-GPTQ-Int4 and Qwen2-7B-Inst… (#2717) 2024-11-04 16:07:51 +01:00
marlin Add initial support for compressed-tensors checkpoints (#2732) 2024-11-10 13:54:07 +01:00
moe
__init__.py
bnb.py
conv.py
eetq.py
exl2.py
fp8.py Add initial support for compressed-tensors checkpoints (#2732) 2024-11-10 13:54:07 +01:00
layernorm.py
linear.py
lora.py
medusa.py
mlp.py
rotary.py Support qwen2 vl (#2689) 2024-10-30 12:40:51 -04:00
speculative.py
tensor_parallel.py