hf_text-generation-inference/server/text_generation_server/utils
OlivierDehaene a6a0c97ed9
feat: prefill chunking (#2600)
* wip

* rollback

* refactor to use prefix/postfix naming + fix all_input_ids_tensor

* maybe patching vlms?

* fix filter and concat

* wip, no filter, no concat

* current

* add prepare_for_prefill

* working

* load tested

* re-create slots

* re-create slots

* fix slot_filtering_indices

* feedback loop

* remove log

* fix benchmarker

* fix vlm and seq2seq

* rename to cache and input lengths

* fix prefill logprobs

* fix launcher

* fix logprobs?

* idk at this point

* max input length

* omfg

* remove debugging lines

* fix tests

* fix mllama

* fix cargo tests

* remove support chunking for paged

* Fixing non blocked attentions

* Fixing dtype + AMD, Ipex targets.

* lint fix.

* rename

* Fix prefix_caching variable, remove defaults in server (confusing a lot
of the time).

* Add simple resolution when user specifies ATTENTION=paged.

* Put back non default simple tests.

* Fix env name

---------

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
2024-10-16 12:49:33 +02:00
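The commit above introduces prefill chunking: rather than running an entire prompt through the model in a single prefill pass, the prompt is split into fixed-size chunks that are prefilled sequentially while the KV cache grows. A minimal sketch of that idea follows; the names (`chunk_prefill`, `cache_length`) and the standalone generator are illustrative assumptions, not TGI's actual internal API.

```python
# Hypothetical sketch of prefill chunking. Each step feeds one chunk of
# the prompt to the model; cache_length is how many tokens are already
# in the KV cache before the chunk, and len(chunk) is that step's input
# length (matching the commit's "rename to cache and input lengths").
from typing import Iterator, List, Tuple


def chunk_prefill(
    input_ids: List[int], chunk_size: int
) -> Iterator[Tuple[int, List[int]]]:
    """Yield (cache_length, chunk) pairs covering the full prompt."""
    for start in range(0, len(input_ids), chunk_size):
        yield start, input_ids[start : start + chunk_size]


# Example: an 11-token prompt prefilled in chunks of 4 tokens.
steps = list(chunk_prefill(list(range(11)), 4))
```

The scheduler can interleave other requests' decode steps between these chunks, which is the main motivation for chunked prefill.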
merges feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
__init__.py feat(server): Add native support for PEFT Lora models (#762) 2023-08-03 17:22:45 +02:00
adapter.py feat: prefill chunking (#2600) 2024-10-16 12:49:33 +02:00
chunks.py server: use chunked inputs 2024-06-07 08:09:04 +02:00
convert.py Force weights_only (before fully breaking pickle files anyway). (#1710) 2024-04-05 19:23:57 +02:00
dist.py feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) 2024-07-20 19:02:04 +02:00
hub.py Micro cleanup. (#2555) 2024-09-24 11:19:24 +02:00
import_utils.py feat: enable pytorch xpu support for non-attention models (#2561) 2024-10-14 18:28:49 +02:00
log.py feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) 2024-07-20 19:02:04 +02:00
logits_process.py patch-error-on-invalid-grammar (#2282) 2024-07-29 10:09:25 -04:00
peft.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
prefill_chunking.py feat: prefill chunking (#2600) 2024-10-16 12:49:33 +02:00
quantization.py Handle GPTQ-Marlin loading in `GPTQMarlinWeightLoader` (#2300) 2024-07-31 13:08:41 +02:00
segments.py feat: prefill chunking (#2600) 2024-10-16 12:49:33 +02:00
sgmv.py fix: allocate tmp based on sgmv kernel if available (#2345) 2024-08-12 17:24:32 +02:00
speculate.py chore: formatting 2023-12-11 14:49:52 +01:00
tokens.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
watermark.py Fixing watermark. (#851) 2023-08-16 07:17:26 +02:00
weights.py Fp8 e4m3_fnuz support for rocm (#2588) 2024-10-16 09:54:50 +02:00