hf_text-generation-inference/integration-tests/models
Nicolas Patry e415b690a6
Lots of improvements (Still 2 allocators) (#2449)
* Making prefix/flashinfer the default and running the full release tests.

* Include flashinfer in the docker.

* Using prebuilt.

* Allowing window_left_size (dummy version).

* Disabling flashinfer/prefix caching on odd head_dim

* Disable prefix caching for lora.

* More specific codes.

* Update lock

* Updating integration tests with new values with FI/FD.

Remove paged as a default too, and use FD everywhere.

* Update cargo lock?

* Upgrade to 1.80 because of bitstream...

* Everywhere 1.80

* Forgot last default place.

* Apply suggestions from code review

Co-authored-by: drbh <david.richard.holtz@gmail.com>

* Updated flake lock

* Tmp

* Upgrade the resolution system for fewer resolution errors.

* Remove lambda for cleaner function.

* Handling debugger.

* Override the env in server tests.

* Is this enough to make it work?

* This seems to be working.

* Downgrade some logs.

* Fixing the default for vlm.

* Don't enable prefix caching on VLM just yet.

* Change `add_special_tokens` so that chat input gets the correct tokens
(this is super important with the prefixing now)

* Fixing prefix caching for flashdecoding.

* Update all models.

* Fixed flashinfer version.

* add_special_tokens is internal only

* Fixing seqlen with the new vlms.

* Fixing the issue with `add_special_tokens` not being passed around.

* Fixing the test.

* Removing encoder_decoder (seq2seq).

* Update the chat test.

* Fixing the batching tokenization in flash causal lm.

* Truncating left for radix purposes.
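The bullet above refers to truncating prompts on the left so that the end of the input, which the model continues from, is preserved. A minimal sketch of the idea (the function name is hypothetical, not TGI's actual implementation):

```python
def truncate_left(input_ids: list[int], max_input_length: int) -> list[int]:
    """Keep the most recent tokens (the suffix) when a prompt is too long.

    Left truncation drops the oldest tokens rather than the newest ones,
    so the text immediately preceding generation is never cut off.
    """
    if len(input_ids) <= max_input_length:
        return input_ids
    # Keep only the last `max_input_length` tokens.
    return input_ids[-max_input_length:]
```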

* Oops this doesn't belong here.

* Put back default pure shell.

* Update server tests

- Default to throughput test in k6
- Use TGI_WIGGLE_ROOM to adjust wiggle room
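The `TGI_WIGGLE_ROOM` variable mentioned above adjusts how much headroom the memory sizing leaves. A minimal sketch of how such an env override might be read (the default value and validation bounds are assumptions, not TGI's exact behavior):

```python
import os


def get_wiggle_room(default: float = 0.95) -> float:
    """Read the memory 'wiggle room' factor from the environment.

    The factor scales down the measured free memory so allocations
    leave some headroom; it must stay in (0, 1].
    """
    value = float(os.environ.get("TGI_WIGGLE_ROOM", str(default)))
    if not 0.0 < value <= 1.0:
        raise ValueError("TGI_WIGGLE_ROOM must be in (0, 1]")
    return value
```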

* Only n_heads / process_group.size() are necessary.
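The bullet above points out that under tensor parallelism each rank only needs its own shard of the attention heads. A minimal sketch of that per-rank computation (function name hypothetical; the divisibility check mirrors the "error message when assert is violated" bullet below):

```python
def heads_per_rank(n_heads: int, world_size: int) -> int:
    """Number of attention heads handled by each tensor-parallel rank.

    Heads are split evenly across the process group, so the total must
    be divisible by the world size.
    """
    if n_heads % world_size != 0:
        raise ValueError(
            f"n_heads ({n_heads}) must be divisible by world size ({world_size})"
        )
    return n_heads // world_size
```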

* Revert the integration tests change (seems linked to the head_size
modification).

* Adding error message when assert is violated.

* Fixing the free algorithm to handle times where the common prefix is
smaller.
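The fix above concerns the case where, on free, the prefix shared with a cached entry is shorter than the block being returned. The core primitive is computing the shared token prefix between two sequences; a minimal sketch (not the actual allocator code):

```python
def common_prefix_len(a: list[int], b: list[int]) -> int:
    """Length of the token prefix shared by two sequences.

    A radix-style cache matches on this prefix; the free path must handle
    matches shorter than the full sequence being released.
    """
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n
```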

* Apply suggestions from code review

Co-authored-by: OlivierDehaene <olivier@huggingface.co>

* Update server/text_generation_server/layers/attention/common.py

Co-authored-by: OlivierDehaene <olivier@huggingface.co>

* Fix disabling prefix caching - Fix windowing checks.

* Revert the Cohere tokenizer change (for now using a revision instead).

* Fmt.

---------

Co-authored-by: drbh <david.richard.holtz@gmail.com>
Co-authored-by: OlivierDehaene <olivier@huggingface.co>
2024-08-29 16:29:01 +02:00
__snapshots__ Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
test_bloom_560m.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_bloom_560m_sharded.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_chat_llama.py Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
test_completion_prompts.py Improve the handling of quantized weights (#2250) 2024-07-19 09:37:39 +02:00
test_flash_awq.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_awq_sharded.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_deepseek_v2.py Add support for Deepseek V2 (#2224) 2024-07-19 17:23:20 +02:00
test_flash_falcon.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_gemma.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_gemma2.py Softcapping for gemma2. (#2273) 2024-07-22 18:27:10 +02:00
test_flash_gemma_gptq.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_gpt2.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_grammar_llama.py fix: correctly index into mask when applying grammar (#1618) 2024-03-01 18:22:01 +01:00
test_flash_llama.py feat(server): only compute prefill logprobs when asked (#406) 2023-06-02 17:12:30 +02:00
test_flash_llama_exl2.py Fixing exl2 and other quanize tests again. (#2419) 2024-08-15 11:12:51 +02:00
test_flash_llama_fp8.py Further fixes. (#2426) 2024-08-16 13:21:44 +02:00
test_flash_llama_gptq.py GPTQ CI improvements (#2151) 2024-07-05 14:12:16 +02:00
test_flash_llama_marlin.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_llama_marlin_24.py Improve the handling of quantized weights (#2250) 2024-07-19 09:37:39 +02:00
test_flash_medusa.py Revamp medusa implementation so that every model can benefit. (#1588) 2024-02-26 19:49:28 +01:00
test_flash_mistral.py fix(router): fix openapi and add jsonschema validation (#1578) 2024-02-21 11:05:32 +01:00
test_flash_neox.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_neox_sharded.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_pali_gemma.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
test_flash_phi.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_qwen2.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_santacoder.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_starcoder.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_starcoder2.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_starcoder_gptq.py Upgrading the tests to match the current workings. (#2423) 2024-08-15 13:28:42 +02:00
test_grammar_llama.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_grammar_response_format_llama.py Upgrade fbgemm (#2398) 2024-08-12 14:08:38 +02:00
test_idefics.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
test_idefics2.py Refactor dead code - Removing all `flash_xxx.py` files. (#2166) 2024-07-05 10:29:56 +02:00
test_llava_next.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_lora_mistral.py feat: simple mistral lora integration tests (#2180) 2024-07-15 09:16:15 -04:00
test_mamba.py All integration tests back everywhere (too many failed CI). (#2428) 2024-08-16 21:19:46 +02:00
test_mpt.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_mt0_base.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_neox.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_neox_sharded.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_opt.py Fix the prefix for OPT model in opt_modelling.py #2370 (CI RUN) (#2371) 2024-08-07 23:14:02 -04:00
test_t5_sharded.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_tools_llama.py Pr 2451 ci branch (#2454) 2024-08-26 20:19:38 -04:00