hf_text-generation-inference/docs
Daniël de Kok a785000842
Add initial support for compressed-tensors checkpoints (#2732)
compressed-tensors is a safetensors extension for sparse, quantized
tensors. The format is more powerful than earlier AWQ/GPTQ/FP8
quantization, because:

- Different quantizer configurations can be used for different targets.
- The format can specify input/output quantizers in addition to weight
  quantizers.
- Exclusions from quantization are configurable (an example configuration
  is sketched below).
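
As an illustration of these three points, here is a minimal sketch of the `quantization_config` a compressed-tensors checkpoint carries in its `config.json`, written out as a Python dict. The field names follow the compressed-tensors format; the concrete values are invented for this example, not taken from the PR:

```python
# Illustrative compressed-tensors configuration (values are examples).
quantization_config = {
    "quant_method": "compressed-tensors",
    "config_groups": {
        # Each group pairs quantizer settings with a set of targets,
        # so different targets can use different configurations.
        "group_0": {
            # Weight quantizer: 4-bit symmetric ints with per-group scales.
            "weights": {
                "num_bits": 4,
                "type": "int",
                "symmetric": True,
                "strategy": "group",
                "group_size": 128,
            },
            # Input quantizer, specified alongside the weight quantizer.
            "input_activations": {
                "num_bits": 8,
                "type": "float",
                "strategy": "tensor",
            },
            "targets": ["Linear"],
        },
    },
    # Configurable exclusions: these modules are never quantized.
    "ignore": ["lm_head"],
}
```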

This change adds a dependency on the `compressed-tensors` package for
its configuration parsing and layer matching functionality.
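
The layer matching boils down to deciding, per module, which config group (if any) applies. A minimal sketch of that logic — `match_config_group` is a hypothetical helper written for this example, not the `compressed-tensors` API:

```python
from typing import Optional


def match_config_group(
    module_name: str, module_type: str, config: dict
) -> Optional[dict]:
    """Return the config group that applies to a module, or None."""
    # Modules on the ignore list are never quantized.
    if module_name in config.get("ignore", []):
        return None
    for group in config.get("config_groups", {}).values():
        # A target may name a module class (e.g. "Linear") or a module path.
        targets = group.get("targets", [])
        if module_type in targets or module_name in targets:
            return group
    return None


# With the example config above: "lm_head" returns None (ignored),
# while any other Linear module matches "group_0".
```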

The following types of quantization are supported in this PR (WnAm
denotes n-bit weights and m-bit activations; a sketch follows the list):

- W8A16 and W4A16 INT using GPTQ-Marlin kernels.
- W8A8 and W8A16 FP using FP8-Marlin and cutlass kernels.
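
For intuition, W4A16 quantizes only the weights and keeps activations in 16-bit floats, so the weights must be dequantized for the matmul. A minimal PyTorch sketch with a single symmetric per-tensor scale (illustrative only; the real GPTQ-Marlin kernels use packed int4 layouts with per-group scales and fuse dequantization into the GEMM):

```python
import torch


def quantize_w4(w: torch.Tensor):
    """Symmetric 4-bit quantization; the int4 range is [-8, 7]."""
    scale = w.abs().max() / 7.0
    q = torch.clamp(torch.round(w / scale), -8, 7).to(torch.int8)
    return q, scale


def w4a16_linear(x: torch.Tensor, q: torch.Tensor, scale: torch.Tensor):
    # Activations stay in 16-bit; weights are dequantized on the fly.
    # (Real kernels fuse this dequantization into the matmul itself.)
    return x @ (q.to(x.dtype) * scale).T


w = torch.randn(64, 32, dtype=torch.float16)
q, scale = quantize_w4(w)
x = torch.randn(4, 32, dtype=torch.float16)
y = w4a16_linear(x, q, scale)  # shape (4, 64), approximates x @ w.T
```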

Support for other quantization types will be added in subsequent PRs.
2024-11-10 13:54:07 +01:00
| Name         | Last commit                                                     | Date                       |
|--------------|-----------------------------------------------------------------|----------------------------|
| source       | Add initial support for compressed-tensors checkpoints (#2732)  | 2024-11-10 13:54:07 +01:00 |
| README.md    | Update documentation version to 2.0.4 (#1980)                   | 2024-05-31 16:03:24 +02:00 |
| index.html   | chore: add pre-commit (#1569)                                   | 2024-02-16 11:58:58 +01:00 |
| openapi.json | fix: add chat_tokenize endpoint to api docs (#2710)             | 2024-11-04 06:44:59 +01:00 |

README.md

Documentation available at: https://huggingface.co/docs/text-generation-inference

Release

When making a release, please update the latest version in the documentation with:

```shell
export OLD_VERSION="2\.0\.3"
export NEW_VERSION="2\.0\.4"
find . -name '*.md' -exec sed -i -e "s/$OLD_VERSION/$NEW_VERSION/g" {} \;
```
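
The dots in the version strings are escaped (`2\.0\.3`) so that `sed` matches them as literal characters rather than as regex wildcards; without the backslashes, `2.0.3` would also match strings such as `2a0b3`. Reviewing the result with `git diff` before committing is a quick sanity check.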