hf_text-generation-inference/server/text_generation_server/utils
Latest commit f1f98e369f by Daniël de Kok (2024-06-25 21:09:42 +02:00):
Add support for Marlin 2:4 sparsity (#2102)
This change adds support for 2:4 sparsity when using Marlin
quantization. The 2:4 kernel is used when both of the following hold (a minimal sketch follows below):

* the quantizer is `marlin`;
* the quantizer checkpoint format is `marlin_24`.

Fixes #2098.
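
As a rough illustration of the selection rule above, here is a minimal, hypothetical sketch. The function name `uses_marlin_24_kernel` and its `quantize` / `checkpoint_format` parameters are assumptions made for the example, not the actual text-generation-inference API:

```python
def uses_marlin_24_kernel(quantize: str, checkpoint_format: str) -> bool:
    """Illustrative only: True when the sparse 2:4 Marlin kernel would apply.

    Both conditions from the commit description must hold: the quantizer is
    `marlin` and the checkpoint was exported in the `marlin_24` format.
    """
    return quantize == "marlin" and checkpoint_format == "marlin_24"


assert uses_marlin_24_kernel("marlin", "marlin_24")
# Presumably a dense Marlin checkpoint falls back to the regular (dense) kernel.
assert not uses_marlin_24_kernel("marlin", "marlin")
```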
Name               | Last commit message                                                      | Last commit date
merges             | Enable multiple LoRa adapters (#2010)                                    | 2024-06-25 14:46:27 -04:00
__init__.py        | feat(server): Add native support for PEFT Lora models (#762)             | 2023-08-03 17:22:45 +02:00
adapter.py         | Enable multiple LoRa adapters (#2010)                                    | 2024-06-25 14:46:27 -04:00
chunks.py          | server: use chunked inputs                                               | 2024-06-07 08:09:04 +02:00
convert.py         | Force weights_only (before fully breaking pickle files anyway). (#1710)  | 2024-04-05 19:23:57 +02:00
dist.py            | Removing IPEX_AVAIL. (#2115)                                             | 2024-06-25 13:20:57 +02:00
hub.py             | Enable multiple LoRa adapters (#2010)                                    | 2024-06-25 14:46:27 -04:00
import_utils.py    | Removing IPEX_AVAIL. (#2115)                                             | 2024-06-25 13:20:57 +02:00
log.py             | v1.3.4                                                                   | 2023-12-22 15:46:04 +01:00
logits_process.py  | Fixing frequency penalty (#1811)                                         | 2024-04-30 12:13:23 +02:00
peft.py            | Enable multiple LoRa adapters (#2010)                                    | 2024-06-25 14:46:27 -04:00
segments.py        | Enable multiple LoRa adapters (#2010)                                    | 2024-06-25 14:46:27 -04:00
sgmv.py            | Enable multiple LoRa adapters (#2010)                                    | 2024-06-25 14:46:27 -04:00
speculate.py       | chore: formatting                                                        | 2023-12-11 14:49:52 +01:00
tokens.py          | Use the generation config. (#1808)                                       | 2024-04-25 19:41:50 +02:00
watermark.py       | Fixing watermark. (#851)                                                 | 2023-08-16 07:17:26 +02:00
weights.py         | Add support for Marlin 2:4 sparsity (#2102)                              | 2024-06-25 21:09:42 +02:00