hf_text-generation-inference/server/text_generation_server/layers
Daniël de Kok 5e0fb46821
Make handling of FP8 scales more consistent (#2666)
Change `fp8_quantize` so that we can pass reciprocals everywhere; this way,
scales are always handled in the checkpoint format.

I also noticed that we ignore any input scales that we might have when
fbgemm is available. Skip this path if we already have a scale.
2024-10-19 09:05:01 +02:00
attention Break cycle between the attention implementations and KV cache (#2627) 2024-10-17 14:54:22 +02:00
awq CI job. Gpt awq 4 (#2665) 2024-10-18 17:55:53 +02:00
gptq CI job. Gpt awq 4 (#2665) 2024-10-18 17:55:53 +02:00
marlin Fp8 e4m3_fnuz support for rocm (#2588) 2024-10-16 09:54:50 +02:00
moe Add support for fused MoE Marlin for AWQ (#2616) 2024-10-08 11:56:41 +02:00
__init__.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
bnb.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
conv.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
eetq.py feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) 2024-07-20 19:02:04 +02:00
exl2.py Add support for Deepseek V2 (#2224) 2024-07-19 17:23:20 +02:00
fp8.py Make handling of FP8 scales more consistent (#2666) 2024-10-19 09:05:01 +02:00
layernorm.py Removing IPEX_AVAIL. (#2115) 2024-06-25 13:20:57 +02:00
linear.py Update ROCM libs and improvements (#2579) 2024-09-30 10:54:32 +02:00
lora.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
medusa.py Prefix caching (#2402) 2024-08-20 11:15:30 +02:00
mlp.py Tied embeddings in MLP speculator. (#2473) 2024-08-29 17:44:54 +02:00
rotary.py feat: support phi3.5 moe (#2479) 2024-09-30 11:15:09 +02:00
speculative.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
tensor_parallel.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00