| Name | Last commit message | Last commit date |
| --- | --- | --- |
| test_bloom_560m | fix: run bloom in non release and update snapshots | 2024-08-12 11:47:57 -04:00 |
| test_bloom_560m_sharded | fix: adjust test snapshots and small refactors (#2323) | 2024-07-29 11:38:38 -04:00 |
| test_chat_llama | Fix seeded output. (#1949) | 2024-05-24 15:36:13 +02:00 |
| test_completion_prompts | fix: adjust test snapshots and small refactors (#2323) | 2024-07-29 11:38:38 -04:00 |
| test_flash_awq | Add AWQ quantization inference support (#1019) (#1054) | 2023-09-25 15:31:27 +02:00 |
| test_flash_awq_sharded | Add AWQ quantization inference support (#1019) (#1054) | 2023-09-25 15:31:27 +02:00 |
| test_flash_deepseek_v2 | fix: update deepseek and gemma tests | 2024-08-12 11:47:57 -04:00 |
| test_flash_falcon | feat(server): add retry on download (#384) | 2023-05-31 10:57:53 +02:00 |
| test_flash_gemma | fix: update deepseek and gemma tests | 2024-08-12 11:47:57 -04:00 |
| test_flash_gemma2 | Softcapping for gemma2. (#2273) | 2024-07-22 18:27:10 +02:00 |
| test_flash_gemma_gptq | fix: adjust test snapshots and small refactors (#2323) | 2024-07-29 11:38:38 -04:00 |
| test_flash_gpt2 | Add GPT-2 with flash attention (#1889) | 2024-05-15 13:31:22 +02:00 |
| test_flash_grammar_llama | fix: correctly index into mask when applying grammar (#1618) | 2024-03-01 18:22:01 +01:00 |
| test_flash_llama | Remove the stripping of the prefix space (and any other mangling that tokenizers might do). (#1065) | 2023-09-27 12:13:45 +02:00 |
| test_flash_llama_exl2 | Add support for exl2 quantization | 2024-05-30 11:28:05 +02:00 |
| test_flash_llama_fp8 | fix: marlin repeat scale for fp8 and bump snapshots | 2024-08-12 11:48:07 -04:00 |
| test_flash_llama_gptq | GPTQ CI improvements (#2151) | 2024-07-05 14:12:16 +02:00 |
| test_flash_llama_marlin | Add support for Marlin-quantized models | 2024-06-06 13:16:52 +02:00 |
| test_flash_llama_marlin_24 | Improve the handling of quantized weights (#2250) | 2024-07-19 09:37:39 +02:00 |
| test_flash_medusa | Speculative (#1308) | 2023-12-11 12:46:30 +01:00 |
| test_flash_mistral | feat: add mistral model (#1071) | 2023-09-28 09:55:47 +02:00 |
| test_flash_neox | fix(server): fix init for flash causal lm (#352) | 2023-05-22 15:05:32 +02:00 |
| test_flash_neox_sharded | fix(server): fix init for flash causal lm (#352) | 2023-05-22 15:05:32 +02:00 |
| test_flash_pali_gemma | Some small fixes for the Torch 2.4.0 update (#2304) | 2024-07-25 13:34:44 +02:00 |
| test_flash_phi | feat: adds phi model (#1442) | 2024-01-25 15:37:53 +01:00 |
| test_flash_qwen2 | feat: Qwen2 (#1608) | 2024-02-28 15:50:31 +01:00 |
| test_flash_santacoder | feat(integration-tests): improve comparison and health checks (#336) | 2023-05-16 20:22:11 +02:00 |
| test_flash_starcoder | fix: adjust test snapshots and small refactors (#2323) | 2024-07-29 11:38:38 -04:00 |
| test_flash_starcoder2 | fix: adjust test snapshots and small refactors (#2323) | 2024-07-29 11:38:38 -04:00 |
| test_flash_starcoder_gptq | ROCm AWQ support (#1514) | 2024-02-09 10:45:16 +01:00 |
| test_grammar_llama | fix: correctly index into mask when applying grammar (#1618) | 2024-03-01 18:22:01 +01:00 |
| test_grammar_response_format_llama | Support chat response format (#2046) | 2024-06-11 10:44:56 -04:00 |
| test_idefics | Support different image sizes in prefill in VLMs (#2065) | 2024-06-17 10:49:41 +02:00 |
| test_idefics2 | Fixing idefics on g6 tests. (#2306) | 2024-07-25 14:44:21 +02:00 |
| test_llava_next | Idefics2. (#1756) | 2024-04-23 23:04:44 +02:00 |
| test_lora_mistral | feat: simple mistral lora integration tests (#2180) | 2024-07-15 09:16:15 -04:00 |
| test_mamba | fix: update mamba snap and run other release tests | 2024-08-12 11:47:57 -04:00 |
| test_mpt | feat(server): Add Non flash MPT. (#514) | 2023-07-03 13:01:46 +02:00 |
| test_mt0_base | fix: update mt0, mamba and grammar tests | 2024-08-12 11:47:57 -04:00 |
| test_neox | feat(server): Rework model loading (#344) | 2023-06-08 14:51:52 +02:00 |
| test_neox_sharded | feat(server): Rework model loading (#344) | 2023-06-08 14:51:52 +02:00 |
| test_server_gptq_quantized | GPTQ CI improvements (#2151) | 2024-07-05 14:12:16 +02:00 |
| test_t5_sharded | feat(server): support fp16 for t5 (#360) | 2023-05-23 18:16:48 +02:00 |
| test_tools_llama | v2.0.1 | 2024-04-18 17:20:36 +02:00 |