hf_text-generation-inference/server/text_generation_server
Daniël de Kok a785000842
Add initial support for compressed-tensors checkpoints (#2732)
compressed-tensors is a safetensors extension for sparse, quantized
tensors. The format is more expressive than the earlier AWQ/GPTQ/FP8
quantization formats because (see the configuration sketch after this
list):

- Different quantizer configurations can be used for different targets.
- The format can specify input/output quantizers in addition to weight
  quantizers.
- Layers can be excluded from quantization through configurable
  exclusions.
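
For concreteness, here is a minimal sketch of what such a configuration
can look like in a checkpoint's config.json (shown as a Python dict).
The key names follow the compressed-tensors format, but the concrete
values and the group/target choices are illustrative assumptions, not an
authoritative example:

    # Illustrative sketch of a compressed-tensors "quantization_config".
    # Key names follow the compressed-tensors format; the concrete values
    # are assumptions for illustration.
    quantization_config = {
        "quant_method": "compressed-tensors",
        "config_groups": {
            # Different quantizer configurations for different targets:
            "group_0": {
                "targets": ["Linear"],
                # Weight quantizer: 4-bit symmetric INT (a W4A16 scheme).
                "weights": {"type": "int", "num_bits": 4, "symmetric": True},
                # Input quantizers can be specified as well; None keeps
                # activations in 16-bit floating point.
                "input_activations": None,
            },
        },
        # Configurable exclusions: modules listed here stay unquantized.
        "ignore": ["lm_head"],
    }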

This change adds a dependency on the `compressed-tensors` package for
its configuration parsing and layer matching functionality.
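
To make the layer-matching part concrete, the sketch below shows the
kind of matching such a configuration implies: a target either names a
module class or, with a "re:" prefix, a regex over the module path. The
function name and details here are assumptions for illustration, not
the package's actual API:

    import re

    from torch import nn

    def matches_target(name: str, module: nn.Module, targets: list[str]) -> bool:
        # Hypothetical matcher sketch, not compressed-tensors' actual API.
        for target in targets:
            if target.startswith("re:"):
                # "re:" targets match against the dotted module path, e.g.
                # "re:.*q_proj" matches "model.layers.0.self_attn.q_proj".
                if re.match(target[len("re:"):], name):
                    return True
            elif type(module).__name__ == target:
                # Bare targets match the module class name, e.g. "Linear".
                return True
        return False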

The following types of quantization are supported in this PR (see the
kernel-dispatch sketch after the list):

- W8A16 and W4A16 INT using GPTQ-Marlin kernels.
- W8A8 and W8A16 FP using FP8-Marlin and CUTLASS kernels.
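
As a rough map from these schemes to kernel families, the following
hypothetical helper dispatches on the parsed weight and activation
settings. It assumes W8A8 takes the CUTLASS path and W8A16 the
FP8-Marlin path; all names are illustrative, not TGI's actual loader
code:

    def select_kernel(weight_type: str, weight_bits: int, activation_bits: int | None) -> str:
        # Hypothetical dispatch sketch; not TGI's actual loader logic.
        if weight_type == "int" and weight_bits in (4, 8) and activation_bits is None:
            # W8A16 / W4A16 INT: weight-only integer quantization.
            return "gptq-marlin"
        if weight_type == "float" and weight_bits == 8:
            if activation_bits == 8:
                # W8A8 FP: FP8 weights and activations.
                return "cutlass"
            # W8A16 FP: FP8 weights, 16-bit activations.
            return "fp8-marlin"
        raise ValueError("quantization scheme not supported in this PR")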

Support for other quantization types will be added in subsequent PRs.
2024-11-10 13:54:07 +01:00
Name            | Last commit                                                                 | Date
adapters/       | feat: add ruff and resolve issue (#2262)                                    | 2024-07-26 10:29:09 -04:00
layers/         | Add initial support for compressed-tensors checkpoints (#2732)              | 2024-11-10 13:54:07 +01:00
models/         | Add initial support for compressed-tensors checkpoints (#2732)              | 2024-11-10 13:54:07 +01:00
pb/             | chore: add pre-commit (#1569)                                               | 2024-02-16 11:58:58 +01:00
utils/          | Add initial support for compressed-tensors checkpoints (#2732)              | 2024-11-10 13:54:07 +01:00
__init__.py     |                                                                             |
cache.py        |                                                                             |
cli.py          | Add initial support for compressed-tensors checkpoints (#2732)              | 2024-11-10 13:54:07 +01:00
interceptor.py  | feat: prefill chunking (#2600)                                              | 2024-10-16 12:49:33 +02:00
server.py       | Choosing input/total tokens automatically based on available VRAM? (#2673)  | 2024-10-28 04:59:49 +01:00
tracing.py      | Add OTLP Service Name Environment Variable (#2076)                          | 2024-06-25 09:33:01 +02:00