hf_text-generation-inference/nix
Daniël de Kok a785000842
Add initial support for compressed-tensors checkpoints (#2732)
compressed-tensors is a safetensors extension for sparse, quantized
tensors. The format is more powerful than the earlier AWQ/GPTQ/FP8
quantization formats, because:

- Different quantizer configurations can be used for different targets.
- The format can specify input/output quantizers in addition to weight
  quantizers.
- Exclusions from quantization can be configured.
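
The three capabilities above can be sketched as a small lookup over a config dict. The key names below (`config_groups`, `targets`, `weights`, `input_activations`, `ignore`) follow the general shape of compressed-tensors configuration, but this is an illustrative sketch, not the library's actual schema or API:

```python
# Sketch of per-target quantizer selection. The dict layout is an
# assumption for illustration, not the real compressed-tensors schema.

EXAMPLE_CONFIG = {
    "config_groups": {
        "group_0": {
            # Different quantizer configurations per target module type.
            "targets": ["Linear"],
            # Weight quantizer; input/output quantizers are also possible.
            "weights": {"num_bits": 4, "type": "int", "symmetric": True},
            "input_activations": None,
        },
    },
    # Configurable exclusions: these modules stay unquantized.
    "ignore": ["lm_head"],
}


def quantizer_for(layer_name: str, layer_type: str, config: dict):
    """Return the weight-quantizer config for a layer, or None if excluded."""
    if any(layer_name.startswith(skip) for skip in config["ignore"]):
        return None
    for group in config["config_groups"].values():
        if layer_type in group["targets"]:
            return group["weights"]
    return None
```

With this sketch, `quantizer_for("model.layers.0.mlp.down_proj", "Linear", EXAMPLE_CONFIG)` yields the 4-bit weight config, while `lm_head` is skipped entirely.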

This change adds a dependency on the `compressed-tensors` package for
its configuration parsing and layer matching functionality.

The following types of quantization are supported in this PR:

- W8A16 and W4A16 INT using GPTQ-Marlin kernels.
- W8A8 and W8A16 FP using FP8-Marlin and CUTLASS kernels.

Support for other quantization types will be added in subsequent PRs.
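
The supported schemes above amount to a small dispatch table. The sketch below is illustrative only (it is not TGI's actual kernel-selection code), and the pairing of W8A16 FP with FP8-Marlin versus W8A8 FP with CUTLASS is an assumption based on FP8-Marlin being a weight-only kernel:

```python
# Illustrative mapping from the quantization schemes listed above to
# kernels; an assumption, not TGI's actual dispatch logic.
KERNELS = {
    ("int", 4, 16): "gptq-marlin",   # W4A16 INT
    ("int", 8, 16): "gptq-marlin",   # W8A16 INT
    ("float", 8, 16): "fp8-marlin",  # W8A16 FP (weight-only, assumed)
    ("float", 8, 8): "cutlass",      # W8A8 FP (assumed)
}


def kernel_for(weight_type: str, weight_bits: int, act_bits: int) -> str:
    """Pick a kernel for a (weight type, weight bits, activation bits) scheme."""
    key = (weight_type, weight_bits, act_bits)
    if key not in KERNELS:
        raise ValueError(f"unsupported quantization scheme: {key}")
    return KERNELS[key]
```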
2024-11-10 13:54:07 +01:00
File                 Last commit                                                         Date
client.nix           Stream options. (#2533)                                             2024-09-19 20:50:37 +02:00
crate-overrides.nix  nix: support Python tokenizer conversion in the router (#2515)      2024-09-12 10:44:01 +02:00
docker.nix           nix: experimental support for building a Docker container (#2470)   2024-10-01 18:02:06 +02:00
impure-shell.nix     feat: natively support Granite models (#2682)                       2024-10-23 10:04:05 +00:00
overlay.nix          nix: example of local package overrides during development (#2607)  2024-10-04 16:52:42 +02:00
server.nix           Add initial support for compressed-tensors checkpoints (#2732)      2024-11-10 13:54:07 +01:00