text-generation-inference/server/text_generation_server/layers
Daniël de Kok dbb23fbfa8
Use symmetric quantization in the `quantize` subcommand (#2120)
Packing of asymmetric quantization is broken: all (q)zeros values
of `0` get reset to `1`, resulting in a loss of accuracy. So we use
symmetric quantization instead. To be able to distinguish models with
symmetric quantization from models with asymmetric quantization, a new
config tensor `gptq_sym` is added. If this tensor is not present, we
assume `sym=False`.
2024-07-12 12:20:12 +02:00
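
The detection rule described above is simple: the presence of the `gptq_sym` config tensor marks a symmetrically quantized checkpoint, and checkpoints without it fall back to `sym=False`. Below is a minimal sketch of that check, assuming the flag is stored as a scalar tensor alongside the weights in a safetensors file; the helper name and file layout are illustrative, not TGI's actual loader API.

```python
# Minimal sketch (not TGI's actual loader): decide whether a GPTQ
# checkpoint was packed with symmetric quantization by looking for
# the `gptq_sym` config tensor.
from safetensors import safe_open


def gptq_is_symmetric(checkpoint_path: str) -> bool:
    """Return True if the checkpoint carries a truthy `gptq_sym` flag."""
    with safe_open(checkpoint_path, framework="pt") as f:
        if "gptq_sym" in f.keys():
            # The config tensor holds a single scalar flag.
            return bool(f.get_tensor("gptq_sym").item())
    # Checkpoints produced before this change lack the tensor entirely,
    # so we fall back to the old asymmetric behaviour (sym=False).
    return False
```

Defaulting to `sym=False` when the tensor is absent keeps checkpoints quantized before this change loading exactly as they did before.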
| Name | Last commit | Date |
| --- | --- | --- |
| `attention` | Fixing rocm. (#2164) | 2024-07-02 12:01:08 +02:00 |
| `awq` | Support AWQ quantization with bias (#2117) | 2024-06-25 21:09:00 +02:00 |
| `gptq` | Use symmetric quantization in the `quantize` subcommand (#2120) | 2024-07-12 12:20:12 +02:00 |
| `__init__.py` | Enable multiple LoRa adapters (#2010) | 2024-06-25 14:46:27 -04:00 |
| `bnb.py` | [Bug Fix] Update torch import reference in bnb quantization (#1902) | 2024-05-15 21:08:32 +02:00 |
| `conv.py` | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00 |
| `eetq.py` | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00 |
| `exl2.py` | Move quantized weight handling out of the `Weights` class (#2194) | 2024-07-09 20:04:03 +02:00 |
| `fp8.py` | Add support for FP8 on compute capability >=8.0, <8.9 (#2213) | 2024-07-11 16:03:26 +02:00 |
| `layernorm.py` | Removing IPEX_AVAIL. (#2115) | 2024-06-25 13:20:57 +02:00 |
| `linear.py` | Add support for FP8 on compute capability >=8.0, <8.9 (#2213) | 2024-07-11 16:03:26 +02:00 |
| `lora.py` | Enable multiple LoRa adapters (#2010) | 2024-06-25 14:46:27 -04:00 |
| `marlin.py` | Add support for FP8 on compute capability >=8.0, <8.9 (#2213) | 2024-07-11 16:03:26 +02:00 |
| `medusa.py` | fix: use path inside of speculator config (#1935) | 2024-05-22 20:46:29 +02:00 |
| `mlp.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `rotary.py` | [fix] Modifying base in yarn embedding (#2212) | 2024-07-12 10:04:51 +02:00 |
| `speculative.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `tensor_parallel.py` | Move quantized weight handling out of the `Weights` class (#2194) | 2024-07-09 20:04:03 +02:00 |