hf_text-generation-inference/server/text_generation_server/utils
Latest commit: Daniël de Kok · 5726a9ca81 · "Move to moe-kernels package and switch to common MoE layer" (2024-09-16 10:57:44 +00:00)

This change introduces the new `moe-kernels` package:

- Add `moe-kernels` as a dependency.
- Introduce a `SparseMoELayer` module that can be used by MoE models.
- Port over Mixtral and Deepseek.
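The idea behind a common sparse MoE layer can be sketched as follows, assuming standard top-k softmax routing. This is an illustrative toy in NumPy, not the actual `moe-kernels` or `SparseMoELayer` API; the function and parameter names are hypothetical:

```python
import numpy as np

def sparse_moe_forward(x, router_w, experts, k=2):
    """Toy sparse MoE forward pass (illustrative only, not moe-kernels).

    x:        (tokens, d_model) input activations
    router_w: (d_model, n_experts) router projection
    experts:  list of callables mapping (d_model,) -> (d_model,)
    k:        number of experts each token is routed to
    """
    logits = x @ router_w                        # (tokens, n_experts)
    topk = np.argsort(logits, axis=-1)[:, -k:]   # top-k expert indices per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, topk[t]]
        w = np.exp(sel - sel.max())
        w /= w.sum()                             # softmax over the selected experts
        for weight, e in zip(w, topk[t]):
            out[t] += weight * experts[e](x[t])  # weighted sum of expert outputs
    return out
```

Because only k experts run per token, compute scales with k rather than with the total number of experts; sharing this routing logic across Mixtral and Deepseek is what motivates a common layer.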
| File | Last commit | Date |
| --- | --- | --- |
| `merges` | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| `__init__.py` | | |
| `adapter.py` | fix: pass missing revision arg for lora adapter when loading multiple… (#2510) | 2024-09-12 17:04:52 +02:00 |
| `chunks.py` | | |
| `convert.py` | | |
| `dist.py` | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 19:02:04 +02:00 |
| `hub.py` | Enable multiple LoRa adapters (#2010) | 2024-06-25 14:46:27 -04:00 |
| `import_utils.py` | Pr 2337 ci branch (#2379) | 2024-08-08 12:30:29 -04:00 |
| `log.py` | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 19:02:04 +02:00 |
| `logits_process.py` | patch-error-on-invalid-grammar (#2282) | 2024-07-29 10:09:25 -04:00 |
| `peft.py` | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| `quantization.py` | Handle GPTQ-Marlin loading in `GPTQMarlinWeightLoader` (#2300) | 2024-07-31 13:08:41 +02:00 |
| `segments.py` | Enable multiple LoRa adapters (#2010) | 2024-06-25 14:46:27 -04:00 |
| `sgmv.py` | fix: allocate tmp based on sgmv kernel if available (#2345) | 2024-08-12 17:24:32 +02:00 |
| `speculate.py` | | |
| `tokens.py` | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| `watermark.py` | | |
| `weights.py` | Move to moe-kernels package and switch to common MoE layer | 2024-09-16 10:57:44 +00:00 |