hf_text-generation-inference/server/text_generation_server/models/custom_modeling
Nicolas Patry e415b690a6
Lots of improvements (Still 2 allocators) (#2449)
* Making prefix/flashinfer the default and testing the full release tests.

* Include flashinfer in the docker.

* Using prebuilt.

* Allowing window_left_size (dummy version).

* Disabling flashinfer/prefix caching on odd head_dim
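
A minimal sketch of what this guard might look like (the function name and the evenness requirement are assumptions for illustration, not the actual TGI code):

```python
def use_prefix_caching(head_dim: int) -> bool:
    # Hypothetical guard: assume the flashinfer kernels used here
    # require an even head dimension, so prefix caching is turned
    # off for models with an odd head_dim.
    return head_dim % 2 == 0
```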

* Disable prefix caching for lora.

* More specific codes.

* Update lock

* Updating integration tests with new values with FI/FD.

Remove paged as a default too, and using FD everywhere.

* Update cargo lock?


* Upgrade to 1.80 because of bitstream...

* Everywhere 1.80

* Forgot last default place.

* Apply suggestions from code review

Co-authored-by: drbh <david.richard.holtz@gmail.com>

* Updated flake lock

* Tmp

* Upgrade the resolution system for fewer resolution errors.

* Remove lambda for cleaner function.

* Handling debugger.

* Override the env in server tests.

* Is this enough to make it work?

* This seems to be working.

* Downgrade some logs.

* Fixing the default for vlm.

* Don't enable prefix caching on VLM just yet.

* Change `add_special_tokens` in order to have the correct tokens for chat
input (since it's super important with the prefixing now)
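
To illustrate why this matters, here is a toy tokenizer sketch (the vocabulary, function, and ids are all hypothetical): a rendered chat template usually already contains the BOS token as text, so encoding it with `add_special_tokens=True` prepends a second BOS id, which also changes the token prefix that the radix cache would match on.

```python
def encode(text: str, vocab: dict, bos_id: int = 1,
           add_special_tokens: bool = True) -> list:
    # Toy whitespace tokenizer: map each word to an id, optionally
    # prepending the BOS id the way a real tokenizer would.
    ids = [vocab[w] for w in text.split()]
    return ([bos_id] + ids) if add_special_tokens else ids

vocab = {"<s>": 1, "hello": 7}
# The rendered chat prompt already starts with the BOS text "<s>":
with_dup = encode("<s> hello", vocab, add_special_tokens=True)   # [1, 1, 7] (duplicated BOS)
clean = encode("<s> hello", vocab, add_special_tokens=False)     # [1, 7]
```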

* Fixing prefix caching for flashdecoding.

* Update all models.

* Fixed flashinfer version.

* add_special_tokens is internal only

* Fixing seqlen with the new vlms.

* Fixing the issue with `add_special_tokens` not being passed around.

* Fixing the test.

* Removing encoder_decoder (seq2seq).

* Update the chat test.

* Fixing the batching tokenization in flash causal lm.

* Truncating left for radix purposes.
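
A sketch of left truncation (assumption for illustration only: dropping the oldest tokens from the left keeps the most recent context, and truncating from a single fixed side keeps token sequences stable for the radix/prefix cache):

```python
def truncate_left(input_ids: list, max_input_tokens: int) -> list:
    # Keep the most recent max_input_tokens tokens, dropping from the left.
    if len(input_ids) <= max_input_tokens:
        return input_ids
    return input_ids[-max_input_tokens:]
```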

* Oops this doesn't belong here.

* Put back default pure shell.

* Update server tests

- Default to throughput test in k6
- Use TGI_WIGGLE_ROOM to adjust wiggle room
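
A sketch of how such an environment knob could be read (the default value and function name are assumptions, not the actual test code):

```python
import os

def wiggle_room(default: float = 0.95) -> float:
    # Read an allowed-deviation factor from TGI_WIGGLE_ROOM so the
    # server tests can loosen or tighten their throughput thresholds
    # via the environment instead of editing the tests.
    return float(os.environ.get("TGI_WIGGLE_ROOM", default))
```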

* Only n_heads / process_group.size() are necessary.
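
With tensor parallelism each process only holds its shard of the attention heads, so sizing only needs the per-rank count. A minimal sketch (helper name and error message are hypothetical):

```python
def heads_per_rank(n_heads: int, world_size: int) -> int:
    # Each rank holds n_heads // process_group.size() heads; the full
    # head count is only needed when world_size == 1.
    assert n_heads % world_size == 0, (
        f"{n_heads} heads are not evenly divisible across {world_size} shards"
    )
    return n_heads // world_size
```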

* Revert the integration tests change (seems linked to the head_size
modification).

* Adding error message when assert is violated.

* Fixing the free algorithm to handle cases where the common prefix is
smaller.
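
A sketch of the shape of such a fix (both helpers are hypothetical simplifications, not the actual allocator): the free path must compute how much of a sequence is shared with the cache and release only the non-shared tail, even when that shared prefix is smaller than expected.

```python
def common_prefix_len(a: list, b: list) -> int:
    # Length of the shared token prefix between two sequences.
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def blocks_to_free(allocated: list, shared_prefix: int) -> list:
    # Keep the blocks covering the common prefix, free everything after it.
    return allocated[shared_prefix:]
```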

* Apply suggestions from code review

Co-authored-by: OlivierDehaene <olivier@huggingface.co>

* Update server/text_generation_server/layers/attention/common.py

Co-authored-by: OlivierDehaene <olivier@huggingface.co>

* Fix disabling prefix caching - Fix windowing checks.

* Revert the Cohere tokenizer change (for now using a revision instead).

* Fmt.

---------

Co-authored-by: drbh <david.richard.holtz@gmail.com>
Co-authored-by: OlivierDehaene <olivier@huggingface.co>
2024-08-29 16:29:01 +02:00
__init__.py feat(server): flash santacoder (#153) 2023-04-03 19:06:42 +02:00
bloom_modeling.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
clip.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
flash_cohere_modeling.py Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
flash_dbrx_modeling.py Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
flash_deepseek_v2_modeling.py Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
flash_gemma2_modeling.py Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
flash_gemma_modeling.py Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
flash_gpt2_modeling.py Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
flash_gptj_modeling.py Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
flash_llama_modeling.py Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
flash_mistral_modeling.py Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
flash_mixtral_modeling.py Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
flash_neox_modeling.py Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
flash_pali_gemma_modeling.py Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
flash_phi_modeling.py Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
flash_qwen2_modeling.py Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
flash_rw_modeling.py Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
flash_santacoder_modeling.py Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
flash_starcoder2_modeling.py Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
idefics2.py Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
idefics_config.py chore: add pre-commit (#1569) 2024-02-16 11:58:58 +01:00
idefics_image_processing.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
idefics_modeling.py enable HuggingFaceM4/idefics-9b in intel gpu (#2338) 2024-08-01 11:08:36 +02:00
idefics_perceiver.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
idefics_processing.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
idefics_vision.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
llava_next.py Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
mamba_modeling.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
mpt_modeling.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
neox_modeling.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
opt_modeling.py Fix the prefix for OPT model in opt_modelling.py #2370 (CI RUN) (#2371) 2024-08-07 23:14:02 -04:00
phi_modeling.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
siglip.py Fix: don't apply post layernorm in SiglipVisionTransformer (#2459) 2024-08-26 17:04:46 -04:00
t5_modeling.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
vlm.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00