| Name | Latest commit | Date |
|------|---------------|------|
| __snapshots__ | feat(server): Add exllama GPTQ CUDA kernel support #553 (#666) | 2023-07-21 10:59:00 +02:00 |
| test_bloom_560m.py | feat(server): only compute prefill logprobs when asked (#406) | 2023-06-02 17:12:30 +02:00 |
| test_bloom_560m_sharded.py | feat(server): only compute prefill logprobs when asked (#406) | 2023-06-02 17:12:30 +02:00 |
| test_flash_falcon.py | feat(server): only compute prefill logprobs when asked (#406) | 2023-06-02 17:12:30 +02:00 |
| test_flash_llama.py | feat(server): only compute prefill logprobs when asked (#406) | 2023-06-02 17:12:30 +02:00 |
| test_flash_llama_gptq.py | feat: add cuda memory fraction (#659) | 2023-07-24 11:43:58 +02:00 |
| test_flash_neox.py | feat(server): add paged attention to flash models (#516) | 2023-06-30 19:09:59 +02:00 |
| test_flash_neox_sharded.py | feat(server): only compute prefill logprobs when asked (#406) | 2023-06-02 17:12:30 +02:00 |
| test_flash_santacoder.py | feat(server): only compute prefill logprobs when asked (#406) | 2023-06-02 17:12:30 +02:00 |
| test_flash_starcoder.py | feat(server): only compute prefill logprobs when asked (#406) | 2023-06-02 17:12:30 +02:00 |
| test_flash_starcoder_gptq.py | feat: add cuda memory fraction (#659) | 2023-07-24 11:43:58 +02:00 |
| test_mpt.py | feat(server): Add Non flash MPT. (#514) | 2023-07-03 13:01:46 +02:00 |
| test_mt0_base.py | feat(server): only compute prefill logprobs when asked (#406) | 2023-06-02 17:12:30 +02:00 |
| test_neox.py | feat(server): Rework model loading (#344) | 2023-06-08 14:51:52 +02:00 |
| test_neox_sharded.py | feat(server): Rework model loading (#344) | 2023-06-08 14:51:52 +02:00 |
| test_t5_sharded.py | feat(server): only compute prefill logprobs when asked (#406) | 2023-06-02 17:12:30 +02:00 |