34f7dcfd80
The `GPTQWeightsLoader` was structured like this in pseudocode:

    if marlin:
        Set up tensors in a way that GPTQ-Marlin expects
    else:
        Set up tensors in a way that ExLlama/GPTQ/AWQ expect

However, the GPTQ-Marlin implementation details should really be in the `marlin` module. So move the former part out to a separate `GPTQMarlinWeightsLoader`.
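For illustration, a minimal sketch of what the split might look like; the method name `get_weights_col_packed` and its signature are assumptions and may not match the actual loader interface in this repository.

```python
# Hypothetical sketch of the refactor described above; the real loader
# interface in this repository may differ.


class GPTQWeightsLoader:
    """Loads quantized weights for the ExLlama/GPTQ/AWQ kernels."""

    def get_weights_col_packed(self, weights, prefix, block_sizes):
        # Assumption: set up qweight/qzeros/scales/g_idx tensors in the
        # layout that the ExLlama/GPTQ/AWQ kernels expect.
        raise NotImplementedError


class GPTQMarlinWeightsLoader:
    """Loads quantized weights repacked for the GPTQ-Marlin kernel."""

    def get_weights_col_packed(self, weights, prefix, block_sizes):
        # Assumption: set up tensors in the layout GPTQ-Marlin expects;
        # previously this lived behind an `if marlin:` branch in
        # GPTQWeightsLoader.
        raise NotImplementedError
```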
__init__.py | ||
custom_autotune.py | ||
exllama.py | ||
exllamav2.py | ||
quant_linear.py | ||
quantize.py | ||
utils.py