hf_text-generation-inference/integration-tests/models
Daniël de Kok 2ce8019480
Use GPTQ-Marlin for supported GPTQ configurations (#2111)
GPTQ-Marlin is currently the best-performing kernel for GPTQ models. So
let's use it by default if the kernels are installed, the GPU supports
it, and the kernels support the configuration.

For models generated by `text-generation-server quantize`, use
`sym=False`. This subcommand has used asymmetric quantization since the
beginning, and incorrectly reporting the model as symmetric would cause
GPTQ-Marlin to be used (which does not support asymmetric quantization).
2024-07-01 12:59:12 +02:00
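The selection policy described in the commit message can be sketched roughly as follows. This is a hypothetical illustration, not TGI's actual API: the function name, parameters, and the exact capability checks are assumptions; the real implementation lives in the server's quantization-loading code.

```python
# Hypothetical sketch of the GPTQ-Marlin selection logic described above.
# All names and parameters are illustrative, not text-generation-inference's
# actual internals.

def can_use_gptq_marlin(
    quantize: str,
    sym: bool,
    bits: int,
    marlin_kernels_installed: bool,
    gpu_has_capability: bool,
) -> bool:
    """Return True when GPTQ-Marlin can serve this GPTQ configuration."""
    if quantize != "gptq":
        return False
    # The kernels must be installed and the GPU must support them.
    if not (marlin_kernels_installed and gpu_has_capability):
        return False
    # GPTQ-Marlin supports only symmetric quantization, so models that
    # report sym=False (e.g. output of `text-generation-server quantize`)
    # fall back to the regular GPTQ kernels.
    return sym and bits in (4, 8)
```

Under this sketch, a model quantized with `text-generation-server quantize` reports `sym=False` and therefore keeps using the standard GPTQ path, which is exactly the behavior the commit fixes.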
__snapshots__ Use GPTQ-Marlin for supported GPTQ configurations (#2111) 2024-07-01 12:59:12 +02:00
test_bloom_560m.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_bloom_560m_sharded.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_chat_llama.py Fix seeded output. (#1949) 2024-05-24 15:36:13 +02:00
test_completion_prompts.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_awq.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_awq_sharded.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_falcon.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_gemma.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_gemma_gptq.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_gpt2.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_grammar_llama.py fix: correctly index into mask when applying grammar (#1618) 2024-03-01 18:22:01 +01:00
test_flash_llama.py feat(server): only compute prefill logprobs when asked (#406) 2023-06-02 17:12:30 +02:00
test_flash_llama_exl2.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_llama_gptq.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_llama_marlin.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_medusa.py Revamp medusa implementation so that every model can benefit. (#1588) 2024-02-26 19:49:28 +01:00
test_flash_mistral.py fix(router): fix openapi and add jsonschema validation (#1578) 2024-02-21 11:05:32 +01:00
test_flash_neox.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_neox_sharded.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_pali_gemma.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_phi.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_qwen2.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_santacoder.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_starcoder.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_starcoder2.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_flash_starcoder_gptq.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_grammar_llama.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_grammar_response_format_llama.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_idefics.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_idefics2.py Support different image sizes in prefill in VLMs (#2065) 2024-06-17 10:49:41 +02:00
test_llava_next.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_mamba.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_mpt.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_mt0_base.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_neox.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_neox_sharded.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_t5_sharded.py Add pytest release marker (#2114) 2024-06-25 16:53:20 +02:00
test_tools_llama.py feat: improve tools to include name and add tests (#1693) 2024-04-16 09:02:46 -04:00