hf_text-generation-inference/server/text_generation_server
Daniël de Kok f1f98e369f
Add support for Marlin 2:4 sparsity (#2102)
This change adds support for 2:4 sparsity when using Marlin
quantization. The 2:4 kernel is used when (see the sketch below):

* the quantizer is `marlin`, and
* the quantizer checkpoint format is `marlin_24`.

Fixes #2098.
2024-06-25 21:09:42 +02:00
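
Below is a minimal, hypothetical sketch of the dispatch the commit message describes. The names (`QuantizerConfig`, `MarlinLinear`, `Marlin24Linear`, `get_linear`) are illustrative placeholders rather than the actual symbols in `text_generation_server/layers`; only the two selection conditions come from the commit message itself.

```python
from dataclasses import dataclass


@dataclass
class QuantizerConfig:
    quantize: str             # e.g. "marlin", "gptq"
    checkpoint_format: str    # e.g. "marlin", "marlin_24"


class MarlinLinear:
    """Placeholder for a dense Marlin quantized linear layer."""


class Marlin24Linear:
    """Placeholder for a 2:4-sparse Marlin quantized linear layer."""


def get_linear(config: QuantizerConfig):
    # The 2:4 kernel is selected only when both conditions above hold:
    # the quantizer is `marlin` and the checkpoint uses the `marlin_24` format.
    if config.quantize == "marlin" and config.checkpoint_format == "marlin_24":
        return Marlin24Linear()
    if config.quantize == "marlin":
        return MarlinLinear()
    raise NotImplementedError(f"Unsupported quantizer: {config.quantize!r}")


# A checkpoint exported in the 2:4-sparse format picks the sparse kernel.
layer = get_linear(QuantizerConfig(quantize="marlin", checkpoint_format="marlin_24"))
assert isinstance(layer, Marlin24Linear)
```

Keeping the condition in a single helper makes both the sparse and dense paths easy to exercise in tests; treat this purely as a sketch of the selection rule, not the server's actual loader code.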
| Name | Last commit | Date |
| --- | --- | --- |
| `adapters` | Enable multiple LoRa adapters (#2010) | 2024-06-25 14:46:27 -04:00 |
| `layers` | Add support for Marlin 2:4 sparsity (#2102) | 2024-06-25 21:09:42 +02:00 |
| `models` | Enable multiple LoRa adapters (#2010) | 2024-06-25 14:46:27 -04:00 |
| `pb` | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| `utils` | Add support for Marlin 2:4 sparsity (#2102) | 2024-06-25 21:09:42 +02:00 |
| `__init__.py` | feat(clients): Python client (#103) | 2023-03-07 18:52:22 +01:00 |
| `cache.py` | fix(server): decrease memory fragmentation (#557) | 2023-07-06 14:28:33 +02:00 |
| `cli.py` | Enable multiple LoRa adapters (#2010) | 2024-06-25 14:46:27 -04:00 |
| `interceptor.py` | v2.0.0 (#1736) | 2024-04-12 18:38:34 +02:00 |
| `server.py` | Enable multiple LoRa adapters (#2010) | 2024-06-25 14:46:27 -04:00 |
| `tracing.py` | Add OTLP Service Name Environment Variable (#2076) | 2024-06-25 09:33:01 +02:00 |