hf_text-generation-inference/server/text_generation_server/utils
Daniël de Kok 3c9df21ff8
Add support for compressed-tensors w8a8 int checkpoints (#2745)
* Add support for compressed-tensors w8a8 int checkpoints

This change adds a loader for w8a8 int checkpoints. One major benefit of
int8 support is that the corresponding CUTLASS matmul kernels also work on
compute capability 7.5 (Turing) GPUs.
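As an illustrative sketch of what w8a8 int quantization means (symmetric int8 weights and activations, int32 accumulation, dequantization via the two scales) — this is not the TGI/CUTLASS kernel code, and the function names are hypothetical:

```python
import numpy as np

def quantize_sym_int8(x, axis):
    # Symmetric quantization: pick a scale so the max-magnitude value
    # maps to 127, then round and clip into the int8 range.
    # Assumes the reduced slices are not all-zero.
    scale = np.abs(x).max(axis=axis, keepdims=True) / 127.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def w8a8_matmul(a_fp32, w_fp32):
    # Activations: one scale per token (row); weights: one scale per
    # output channel (column). The int8 product is accumulated in int32
    # and dequantized by multiplying the two scales back in.
    a_q, a_scale = quantize_sym_int8(a_fp32, axis=1)
    w_q, w_scale = quantize_sym_int8(w_fp32, axis=0)
    acc = a_q.astype(np.int32) @ w_q.astype(np.int32)
    return acc.astype(np.float32) * a_scale * w_scale
```

The real kernels fuse quantization, the int8 GEMM, and dequantization on the GPU; the sketch only shows the arithmetic.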

Evaluation on neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w8a8:

|     Tasks     |Version|     Filter     |n-shot|        Metric         |   |Value |   |Stderr|
|---------------|------:|----------------|-----:|-----------------------|---|-----:|---|------|
|gsm8k_cot_llama|      3|flexible-extract|     8|exact_match            |↑  |0.8431|±  |0.0100|
|               |       |strict-match    |     8|exact_match            |↑  |0.8393|±  |0.0101|
|ifeval         |      4|none            |     0|inst_level_loose_acc   |↑  |0.8597|±  |   N/A|
|               |       |none            |     0|inst_level_strict_acc  |↑  |0.8201|±  |   N/A|
|               |       |none            |     0|prompt_level_loose_acc |↑  |0.7967|±  |0.0173|
|               |       |none            |     0|prompt_level_strict_acc|↑  |0.7468|±  |0.0187|

These results are in the same ballpark as vLLM's.

As usual, lots of thanks to Neural Magic/vLLM for the kernels.

* Always use dynamic input quantization for w8a8 int

Dynamic quantization is far less flaky than static input scales and gives better output.
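To illustrate why (an assumed sketch of the semantics, not the actual marlin-kernels code): a static input scale is calibrated once and reused for every input, so an outlier activation clips hard, while dynamic quantization recomputes the scale per token at runtime:

```python
import numpy as np

def quantize_static(x, scale):
    # One fixed, pre-calibrated scale for all inputs; outliers clip at 127.
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8), scale

def quantize_dynamic_per_token(x):
    # Scale recomputed from each token's own max at runtime, so the
    # largest value in every row always maps to 127 without clipping.
    scale = np.abs(x).max(axis=-1, keepdims=True) / 127.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

# A token with an outlier activation (5.0) next to small values:
x = np.array([[0.1, -0.2, 5.0, 0.05]], dtype=np.float32)
q_s, s_s = quantize_static(x, 0.01)  # scale calibrated on small inputs
q_d, s_d = quantize_dynamic_per_token(x)
```

Dequantizing `q_s` recovers at most `127 * 0.01 = 1.27` for the outlier, while `q_d` round-trips it almost exactly.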

* Use marlin-kernels 0.3.5

* Fix a typo

Co-authored-by: drbh <david.richard.holtz@gmail.com>

* Small fixes

---------

Co-authored-by: drbh <david.richard.holtz@gmail.com>
2024-11-18 17:20:31 +01:00
| File | Last commit | Date |
|------|-------------|------|
| merges | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| __init__.py | feat(server): Add native support for PEFT Lora models (#762) | 2023-08-03 17:22:45 +02:00 |
| adapter.py | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00 |
| chunks.py | server: use chunked inputs | 2024-06-07 08:09:04 +02:00 |
| convert.py | Force weights_only (before fully breaking pickle files anyway). (#1710) | 2024-04-05 19:23:57 +02:00 |
| dist.py | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 19:02:04 +02:00 |
| hub.py | Micro cleanup. (#2555) | 2024-09-24 11:19:24 +02:00 |
| import_utils.py | feat: enable pytorch xpu support for non-attention models (#2561) | 2024-10-14 18:28:49 +02:00 |
| log.py | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 19:02:04 +02:00 |
| logits_process.py | Upgrading our deps. (#2750) | 2024-11-15 14:03:27 +01:00 |
| peft.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| prefill_chunking.py | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00 |
| quantization.py | Add initial support for compressed-tensors checkpoints (#2732) | 2024-11-10 13:54:07 +01:00 |
| segments.py | fix: improve find_segments via numpy diff (#2686) | 2024-11-18 09:51:06 -05:00 |
| sgmv.py | fix: allocate tmp based on sgmv kernel if available (#2345) | 2024-08-12 17:24:32 +02:00 |
| speculate.py | chore: formatting | 2023-12-11 14:49:52 +01:00 |
| tokens.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| watermark.py | Fixing watermark. (#851) | 2023-08-16 07:17:26 +02:00 |
| weights.py | Add support for compressed-tensors w8a8 int checkpoints (#2745) | 2024-11-18 17:20:31 +01:00 |