hf_text-generation-inference/server/text_generation_server/models
Daniël de Kok 8511669cb2
Move quantized weight handling out of the `Weights` class (#2194)
Quantized weights were loaded in the `Weights` class, but this was
getting quite unwieldy: every higher-level method for loading weights
was a long conditional covering all the different quantizers.

This change moves loading of quantized weights out of the `Weights`
class. This is done by defining a simple `WeightsLoader` interface
that is implemented by `Exl2WeightsLoader`, `GPTQWeightsLoader`,
and `MarlinWeightsLoader`. These implementations are in the quantizers'
respective modules. The `Weights` class provides the low-level load
operations (such as loading tensors or sharded tensors), but delegates
loads that need quantizer-specific weight processing to a loader. The
loaders still use the low-level functionality provided by `Weights`.
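
A minimal sketch of this delegation pattern is below. The method names
(`get_weights_col`, `get_tensor`) and tensor-name suffixes are illustrative
assumptions, not the exact TGI signatures; the point is only that `Weights`
stays quantizer-agnostic and hands quantizer-specific loads to its loader.

```python
# Sketch of the WeightsLoader delegation described above; names other than
# `Weights`, `WeightsLoader`, and `GPTQWeightsLoader` are hypothetical.
from abc import ABC, abstractmethod
from typing import Dict

import torch


class Weights:
    """Low-level access to (possibly sharded) tensors; quantizer-specific
    processing is delegated to a `WeightsLoader`."""

    def __init__(self, tensors: Dict[str, torch.Tensor], loader: "WeightsLoader"):
        self._tensors = tensors
        self.loader = loader

    def get_tensor(self, name: str) -> torch.Tensor:
        # In TGI this reads from safetensors shards; a dict stands in here.
        return self._tensors[name]

    def get_weights_col(self, prefix: str):
        # Higher-level loads no longer branch on the quantization scheme;
        # they hand off to the configured loader.
        return self.loader.get_weights_col(self, prefix)


class WeightsLoader(ABC):
    """Interface each quantizer implements in its own module."""

    @abstractmethod
    def get_weights_col(self, weights: Weights, prefix: str):
        ...


class GPTQWeightsLoader(WeightsLoader):
    def get_weights_col(self, weights: Weights, prefix: str):
        # Uses the low-level `Weights` API to fetch the quantizer-specific
        # tensors (tensor names are illustrative) and returns them together.
        return (
            weights.get_tensor(f"{prefix}.qweight"),
            weights.get_tensor(f"{prefix}.scales"),
        )
```

Under this split, constructing a model only requires picking the loader that
matches the configured quantizer; `Weights` itself needs no knowledge of any
quantization format.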

I initially tried making a hierarchy where a class like `GPTQWeights`
would inherit from `Weights`, but that approach was not very flexible
(e.g. it does not work well with the new weight storage mock used in
tests), and the implicit indirection made the code harder to follow.
2024-07-09 20:04:03 +02:00
| File | Last commit | Date |
| --- | --- | --- |
| custom_modeling | Move quantized weight handling out of the `Weights` class (#2194) | 2024-07-09 20:04:03 +02:00 |
| __init__.py | Falcon/DBRX: get correct number of key-value heads (#2205) | 2024-07-08 13:22:38 +02:00 |
| bloom.py | Refactor dead code - Removing all `flash_xxx.py` files. (#2166) | 2024-07-05 10:29:56 +02:00 |
| causal_lm.py | Move quantized weight handling out of the `Weights` class (#2194) | 2024-07-09 20:04:03 +02:00 |
| flash_causal_lm.py | Move quantized weight handling out of the `Weights` class (#2194) | 2024-07-09 20:04:03 +02:00 |
| flash_mistral.py | Refactor dead code - Removing all `flash_xxx.py` files. (#2166) | 2024-07-05 10:29:56 +02:00 |
| galactica.py | Refactor dead code - Removing all `flash_xxx.py` files. (#2166) | 2024-07-05 10:29:56 +02:00 |
| globals.py | [Major Change][Undecided yet] Move to FlashDecoding instead of PagedAttention kernel. (#1940) | 2024-07-01 23:28:00 +02:00 |
| idefics.py | Move quantized weight handling out of the `Weights` class (#2194) | 2024-07-09 20:04:03 +02:00 |
| idefics_causal_lm.py | Enable multiple LoRa adapters (#2010) | 2024-06-25 14:46:27 -04:00 |
| mamba.py | Move quantized weight handling out of the `Weights` class (#2194) | 2024-07-09 20:04:03 +02:00 |
| model.py | Hotfixing after refactor. | 2024-07-05 09:25:29 +00:00 |
| pali_gemma.py | Refactor dead code - Removing all `flash_xxx.py` files. (#2166) | 2024-07-05 10:29:56 +02:00 |
| seq2seq_lm.py | Move quantized weight handling out of the `Weights` class (#2194) | 2024-07-09 20:04:03 +02:00 |
| types.py | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| vlm_causal_lm.py | Refactor dead code - Removing all `flash_xxx.py` files. (#2166) | 2024-07-05 10:29:56 +02:00 |