hf_text-generation-inference/integration-tests
Latest commit by Daniël de Kok (0f346a3296), 2024-10-25 16:40:47 +02:00:

Switch from fbgemm-gpu w8a8 scaled matmul to vLLM/marlin-kernels (#2688)

* Switch from fbgemm-gpu w8a8 scaled matmul to vLLM/marlin-kernels

  Performance and accuracy of these kernels are on par (tested with Llama
  70B and 405B). Removes a dependency and resolves some stability issues
  we have been seeing.

* Update test snapshots
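For context on the operation the commit refers to, a "w8a8 scaled matmul" multiplies int8-quantized weights and activations and rescales the int32 accumulator with float scales. The sketch below is a minimal illustrative version in NumPy with symmetric per-tensor scales; all function names are hypothetical and it does not reflect the actual fbgemm-gpu or vLLM/marlin kernel APIs.

```python
import numpy as np

def quantize_per_tensor(x: np.ndarray):
    """Symmetric int8 quantization with a single per-tensor float scale."""
    scale = float(np.abs(x).max()) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def w8a8_scaled_matmul(a_q, a_scale, w_q, w_scale):
    """int8 x int8 matmul accumulated in int32, rescaled back to float."""
    acc = a_q.astype(np.int32) @ w_q.astype(np.int32)
    return acc.astype(np.float32) * (a_scale * w_scale)

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8)).astype(np.float32)   # activations
w = rng.standard_normal((8, 16)).astype(np.float32)  # weights

a_q, a_s = quantize_per_tensor(a)
w_q, w_s = quantize_per_tensor(w)
approx = w8a8_scaled_matmul(a_q, a_s, w_q, w_s)
exact = a @ w  # float reference; approx should be close to this
```

Real kernels fuse the int8 matmul and rescale on the GPU and often use per-channel or per-token scales, but the arithmetic is the same shape as above.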
Name              Last commit                                                                     Date
images            Pali gemma modeling (#1895)                                                     2024-05-16 06:58:47 +02:00
models            Switch from fbgemm-gpu w8a8 scaled matmul to vLLM/marlin-kernels (#2688)        2024-10-25 16:40:47 +02:00
conftest.py       feat: prefill chunking (#2600)                                                  2024-10-16 12:49:33 +02:00
poetry.lock       Prefix test - Different kind of load test to trigger prefix test bugs. (#2490)  2024-09-11 18:10:40 +02:00
pyproject.toml    nix: add black and isort to the closure (#2619)                                 2024-10-09 11:08:02 +02:00
pytest.ini        chore: add pre-commit (#1569)                                                   2024-02-16 11:58:58 +01:00
requirements.txt  Prefix test - Different kind of load test to trigger prefix test bugs. (#2490)  2024-09-11 18:10:40 +02:00