hf_text-generation-inference/server/text_generation_server/layers
Daniël de Kok 8511669cb2
Move quantized weight handling out of the `Weights` class (#2194)
Quantized weights were loaded in the `Weights` class, but this was
getting quite unwieldy: every higher-level method for loading weights
was a long conditional covering all the different quantizers.

This change moves loading of quantized weights out of the `Weights`
class. This is done by defining a simple `WeightsLoader` interface
that is implemented by `Exl2WeightsLoader`, `GPTQWeightsLoader`,
and `MarlinWeightsLoader`. These implementations are in the quantizers'
respective modules. The `Weights` class provides the low-level load
operations (such as loading tensors or sharded tensors), but delegates
loads that need quantizer-specific weight processing to a loader. The
loaders still use the low-level functionality provided by `Weights`.
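
In rough outline, the design looks like the following minimal sketch.
The method name `get_weights_col`, the tensor suffixes
`q_weight`/`q_scale`, and the constructor signature are illustrative
assumptions, not the exact TGI API:

```python
from abc import ABC, abstractmethod

import torch


class WeightsLoader(ABC):
    """Interface for quantizer-specific weight loading (illustrative)."""

    @abstractmethod
    def get_weights_col(self, weights: "Weights", prefix: str):
        """Load and post-process the weights at the given prefix."""


class Exl2WeightsLoader(WeightsLoader):
    """Loads exl2-quantized weights; the tensor suffixes are made up here."""

    def get_weights_col(self, weights: "Weights", prefix: str):
        # Quantizer-specific processing, built on the low-level
        # primitives that `Weights` still provides.
        q_weight = weights.get_tensor(f"{prefix}.q_weight")
        q_scale = weights.get_tensor(f"{prefix}.q_scale")
        return q_weight, q_scale


class Weights:
    def __init__(self, tensors: dict, loader: WeightsLoader):
        self.tensors = tensors
        self.loader = loader

    # Low-level primitive: no knowledge of any quantizer.
    def get_tensor(self, name: str) -> torch.Tensor:
        return self.tensors[name]

    # Higher-level loads delegate to the quantizer's loader instead of
    # branching on the quantization scheme.
    def get_weights_col(self, prefix: str):
        return self.loader.get_weights_col(self, prefix)


if __name__ == "__main__":
    tensors = {
        "model.layers.0.q_weight": torch.zeros(4, 4),
        "model.layers.0.q_scale": torch.ones(4),
    }
    weights = Weights(tensors, loader=Exl2WeightsLoader())
    print(weights.get_weights_col("model.layers.0"))
```

The key property is that supporting a new quantizer means writing a new
`WeightsLoader` implementation rather than adding another branch to
every load method in `Weights`.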

I initially tried making a hierarchy where a class like `GPTQWeights`
would inherit from `Weights`. But that approach was not very flexible
(e.g., it did not work well with the new weight storage mock used in
tests), and the implicit indirection made the code harder to follow.
2024-07-09 20:04:03 +02:00
attention Fixing rocm. (#2164) 2024-07-02 12:01:08 +02:00
awq Support AWQ quantization with bias (#2117) 2024-06-25 21:09:00 +02:00
gptq Move quantized weight handling out of the `Weights` class (#2194) 2024-07-09 20:04:03 +02:00
__init__.py Enable multiple LoRa adapters (#2010) 2024-06-25 14:46:27 -04:00
bnb.py [Bug Fix] Update torch import reference in bnb quantization (#1902) 2024-05-15 21:08:32 +02:00
conv.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
eetq.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
exl2.py Move quantized weight handling out of the `Weights` class (#2194) 2024-07-09 20:04:03 +02:00
fp8.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
layernorm.py Removing IPEX_AVAIL. (#2115) 2024-06-25 13:20:57 +02:00
linear.py Use GPTQ-Marlin for supported GPTQ configurations (#2111) 2024-07-01 12:59:12 +02:00
lora.py Enable multiple LoRa adapters (#2010) 2024-06-25 14:46:27 -04:00
marlin.py Move quantized weight handling out of the `Weights` class (#2194) 2024-07-09 20:04:03 +02:00
medusa.py fix: use path inside of speculator config (#1935) 2024-05-22 20:46:29 +02:00
mlp.py MLPSpeculator. (#1865) 2024-05-14 12:33:18 +02:00
rotary.py Adding "longrope" for Phi-3 (#2172) (#2179) 2024-07-05 09:46:41 +02:00
speculative.py MLPSpeculator. (#1865) 2024-05-14 12:33:18 +02:00
tensor_parallel.py Move quantized weight handling out of the `Weights` class (#2194) 2024-07-09 20:04:03 +02:00