hf_text-generation-inference/server/text_generation_server/utils

Latest commit: 6cb42f49ae by drbh — feat: support lora revisions and qkv_proj weights (#2482)
  * feat: support lora revisions and qkv_proj weights
  * fix: add qkv_proj weights to weight test
  2024-09-02 13:09:06 -04:00
merges/             feat: add ruff and resolve issue (#2262)                             2024-07-26 10:29:09 -04:00
__init__.py
adapter.py          feat: support lora revisions and qkv_proj weights (#2482)            2024-09-02 13:09:06 -04:00
chunks.py           server: use chunked inputs                                           2024-06-07 08:09:04 +02:00
convert.py
dist.py             feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248)  2024-07-20 19:02:04 +02:00
hub.py              Enable multiple LoRa adapters (#2010)                                2024-06-25 14:46:27 -04:00
import_utils.py     Pr 2337 ci branch (#2379)                                            2024-08-08 12:30:29 -04:00
log.py              feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248)  2024-07-20 19:02:04 +02:00
logits_process.py   patch-error-on-invalid-grammar (#2282)                               2024-07-29 10:09:25 -04:00
peft.py             feat: add ruff and resolve issue (#2262)                             2024-07-26 10:29:09 -04:00
quantization.py     Handle GPTQ-Marlin loading in `GPTQMarlinWeightLoader` (#2300)       2024-07-31 13:08:41 +02:00
segments.py         Enable multiple LoRa adapters (#2010)                                2024-06-25 14:46:27 -04:00
sgmv.py             fix: allocate tmp based on sgmv kernel if available (#2345)          2024-08-12 17:24:32 +02:00
speculate.py
tokens.py           feat: add ruff and resolve issue (#2262)                             2024-07-26 10:29:09 -04:00
watermark.py
weights.py          fix(server): fix fp8 weight loading (#2268)                          2024-07-22 15:51:32 +00:00