hf_text-generation-inference/server/text_generation_server/utils
Daniël de Kok 8511669cb2
Move quantized weight handling out of the `Weights` class (#2194)
Quantized weights were loaded in the `Weights` class, but this was
getting quite unwieldy: every higher-level weight-loading method was
a long conditional covering all the different quantizers.

This change moves loading of quantized weights out of the `Weights`
class. This is done by defining a simple `WeightsLoader` interface
that is implemented by `Exl2WeightsLoader`, `GPTQWeightsLoader`,
and `MarlinWeightsLoader`. These implementations are in the quantizers'
respective modules. The `Weights` class provides the low-level load
operations (such as loading tensors or sharded tensors), but delegates
loads that need quantizer-specific weight processing to a loader. The
loaders still use the low-level functionality provided by `Weights`.

I initially tried making a hierarchy where a class like `GPTQWeights`
would inherit from `Weights`, but that was not very flexible (e.g. it
does not work well with the new weight storage mock used in tests) and
the implicit indirection made the code harder to follow.
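
The delegation described above can be sketched roughly as follows. This is an illustrative, simplified model of the pattern, not the actual TGI code: the class names `DefaultWeightsLoader` and `FakeGPTQWeightsLoader`, the method signatures, and the list-based tensor store standing in for real tensors are all assumptions made for the example.

```python
from abc import ABC, abstractmethod
from typing import Dict, List


class WeightsLoader(ABC):
    """Interface for quantizer-specific weight processing (sketch)."""

    @abstractmethod
    def get_weights(self, weights: "Weights", prefix: str) -> List[float]:
        ...


class Weights:
    """Provides low-level load operations and delegates quantizer-specific
    loads to the configured loader."""

    def __init__(self, tensors: Dict[str, List[float]], loader: WeightsLoader):
        self._tensors = tensors
        self._loader = loader

    def get_tensor(self, name: str) -> List[float]:
        # Low-level load operation kept in the Weights class.
        return self._tensors[name]

    def get_weights(self, prefix: str) -> List[float]:
        # Quantizer-specific processing is delegated to the loader,
        # which in turn uses the low-level ops above.
        return self._loader.get_weights(self, prefix)


class DefaultWeightsLoader(WeightsLoader):
    # Unquantized case: fetch the tensor as-is.
    def get_weights(self, weights: Weights, prefix: str) -> List[float]:
        return weights.get_tensor(f"{prefix}.weight")


class FakeGPTQWeightsLoader(WeightsLoader):
    # Hypothetical quantized case: dequantize by combining auxiliary
    # tensors; a real GPTQ loader handles qzeros, g_idx, kernels, etc.
    def get_weights(self, weights: Weights, prefix: str) -> List[float]:
        qweight = weights.get_tensor(f"{prefix}.qweight")
        scales = weights.get_tensor(f"{prefix}.scales")
        return [q * s for q, s in zip(qweight, scales)]


store = {
    "linear.weight": [1.0, 2.0],
    "linear.qweight": [2.0, 4.0],
    "linear.scales": [0.5, 0.5],
}
plain = Weights(store, DefaultWeightsLoader())
quant = Weights(store, FakeGPTQWeightsLoader())
print(plain.get_weights("linear"))  # [1.0, 2.0]
print(quant.get_weights("linear"))  # [1.0, 2.0]
```

The point of the design is that `Weights` never branches on the quantization scheme; swapping the loader (including a test mock) changes behavior without touching `Weights` itself.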
2024-07-09 20:04:03 +02:00
merges Enable multiple LoRa adapters (#2010) 2024-06-25 14:46:27 -04:00
__init__.py feat(server): Add native support for PEFT Lora models (#762) 2023-08-03 17:22:45 +02:00
adapter.py Enable multiple LoRa adapters (#2010) 2024-06-25 14:46:27 -04:00
chunks.py server: use chunked inputs 2024-06-07 08:09:04 +02:00
convert.py Force weights_only (before fully breaking pickle files anyway). (#1710) 2024-04-05 19:23:57 +02:00
dist.py Removing IPEX_AVAIL. (#2115) 2024-06-25 13:20:57 +02:00
hub.py Enable multiple LoRa adapters (#2010) 2024-06-25 14:46:27 -04:00
import_utils.py refine get xpu free memory/enable Qwen2/gemma2/gemma/phi in intel platform (#2132) 2024-07-01 14:32:54 +02:00
log.py v1.3.4 2023-12-22 15:46:04 +01:00
logits_process.py Fixing frequency penalty (#1811) 2024-04-30 12:13:23 +02:00
peft.py Enable multiple LoRa adapters (#2010) 2024-06-25 14:46:27 -04:00
quantization.py Move quantized weight handling out of the `Weights` class (#2194) 2024-07-09 20:04:03 +02:00
segments.py Enable multiple LoRa adapters (#2010) 2024-06-25 14:46:27 -04:00
sgmv.py Enable multiple LoRa adapters (#2010) 2024-06-25 14:46:27 -04:00
speculate.py chore: formatting 2023-12-11 14:49:52 +01:00
tokens.py Use the generation config. (#1808) 2024-04-25 19:41:50 +02:00
watermark.py Fixing watermark. (#851) 2023-08-16 07:17:26 +02:00
weights.py Move quantized weight handling out of the `Weights` class (#2194) 2024-07-09 20:04:03 +02:00