hf_text-generation-inference/docs/source
Commit a785000842 by Daniël de Kok: Add initial support for compressed-tensors checkpoints (#2732)
`compressed-tensors` is a safetensors extension for sparse, quantized
tensors. The format is more expressive than the earlier AWQ/GPTQ/FP8
quantization formats, because:

- Different quantizer configurations can be used for different targets.
- The format can specify input/output quantizers in addition to weight
  quantizers.
- Modules can be excluded from quantization through configurable ignore
  lists.
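To make the bullets above concrete, here is a sketch of what a `quantization_config` section in a checkpoint's `config.json` can look like under this format. The field names follow the general shape of compressed-tensors configs (`config_groups`, `targets`, `weights`, `input_activations`, `ignore`), but the specific group names and values are illustrative, not taken from a real checkpoint:

```json
{
  "quantization_config": {
    "quant_method": "compressed-tensors",
    "config_groups": {
      "group_0": {
        "targets": ["Linear"],
        "weights": {
          "num_bits": 8,
          "type": "float",
          "symmetric": true,
          "strategy": "tensor"
        },
        "input_activations": {
          "num_bits": 8,
          "type": "float"
        }
      }
    },
    "ignore": ["lm_head"]
  }
}
```

Note how one group can carry both a weight quantizer and an input-activation quantizer, and how `ignore` keeps selected modules (here `lm_head`) in full precision.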

This change adds a dependency on the `compressed-tensors` package, which
is used for its configuration parsing and layer-matching functionality.
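The layer-matching step can be pictured as follows. This is a hypothetical, self-contained sketch (not the `compressed-tensors` API): each scheme lists glob-style target patterns plus a weight quantizer and an optional input-activation quantizer, and an ignore list exempts modules from quantization entirely:

```python
# Hypothetical sketch of scheme-to-layer matching; the class and function
# names here are illustrative, not the compressed-tensors package's API.
from dataclasses import dataclass
from fnmatch import fnmatch
from typing import List, Optional


@dataclass
class QuantizerArgs:
    num_bits: int
    type: str  # "int" or "float"


@dataclass
class Scheme:
    targets: List[str]  # glob patterns for module names
    weights: QuantizerArgs
    input_activations: Optional[QuantizerArgs] = None


def match_scheme(
    module_name: str, schemes: List[Scheme], ignore: List[str]
) -> Optional[Scheme]:
    """Return the scheme that applies to a module, or None if it stays
    unquantized (either ignored or matched by no target pattern)."""
    if any(fnmatch(module_name, pat) for pat in ignore):
        return None
    for scheme in schemes:
        if any(fnmatch(module_name, pat) for pat in scheme.targets):
            return scheme
    return None


# Example: FP8 weights+activations for attention, INT4 weights for MLPs.
schemes = [
    Scheme(
        targets=["*.self_attn.*"],
        weights=QuantizerArgs(num_bits=8, type="float"),
        input_activations=QuantizerArgs(num_bits=8, type="float"),
    ),
    Scheme(
        targets=["*.mlp.*"],
        weights=QuantizerArgs(num_bits=4, type="int"),
    ),
]
ignore = ["lm_head"]
```

With this setup, `model.layers.0.mlp.gate_proj` resolves to the INT4 weight-only scheme, an attention projection resolves to the FP8 weight+activation scheme, and `lm_head` is left in full precision.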

The following types of quantization are supported in this PR:

- W8A16 and W4A16 INT using GPTQ-Marlin kernels.
- W8A8 and W8A16 FP using FP8-Marlin and CUTLASS kernels.

Support for other quantization types will be added in subsequent PRs.
2024-11-10 13:54:07 +01:00
| Name | Last commit | Date |
| --- | --- | --- |
| basic_tutorials | chore: prepare 2.4.0 release (#2695) | 2024-10-25 21:10:49 +00:00 |
| conceptual | chore: prepare 2.4.0 release (#2695) | 2024-10-25 21:10:49 +00:00 |
| reference | Add initial support for compressed-tensors checkpoints (#2732) | 2024-11-10 13:54:07 +01:00 |
| _toctree.yml | Small fixes for supported models (#2471) | 2024-10-14 15:26:39 +02:00 |
| architecture.md | Update architecture.md (#2577) | 2024-09-30 08:56:20 +02:00 |
| index.md | | |
| installation.md | | |
| installation_amd.md | chore: prepare 2.4.0 release (#2695) | 2024-10-25 21:10:49 +00:00 |
| installation_gaudi.md | | |
| installation_inferentia.md | | |
| installation_intel.md | chore: prepare 2.4.0 release (#2695) | 2024-10-25 21:10:49 +00:00 |
| installation_nvidia.md | chore: prepare 2.4.0 release (#2695) | 2024-10-25 21:10:49 +00:00 |
| quicktour.md | chore: prepare 2.4.0 release (#2695) | 2024-10-25 21:10:49 +00:00 |
| supported_models.md | Support qwen2 vl (#2689) | 2024-10-30 12:40:51 -04:00 |
| usage_statistics.md | feat: allow any supported payload on /invocations (#2683) | 2024-10-23 11:26:01 +00:00 |