huggingface/text-generation-inference/server

Latest commit: fxmarty, 26b3916612 (2024-04-22 16:09:19 +02:00)

Make `--cuda-graphs` work as expected (bis) (#1768)

This option was ignored until now, even with `--cuda-graphs 0`. With this fix, `--cuda-graphs` is obeyed.
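For reference, `--cuda-graphs` is accepted by `text-generation-launcher`. A minimal sketch of disabling CUDA graphs, as the commit message describes; the model ID is only an example and the exact invocation may differ between versions:

```shell
# Sketch: pass 0 to disable CUDA graphs entirely, per the commit message above.
# The model ID is an arbitrary example.
text-generation-launcher \
    --model-id bigscience/bloom-560m \
    --cuda-graphs 0
```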
| File | Last commit | Date |
| --- | --- | --- |
| custom_kernels | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| exllama_kernels | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| exllamav2_kernels | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| tests | feat(server): add frequency penalty (#1541) | 2024-02-08 18:41:25 +01:00 |
| text_generation_server | Make `--cuda-graphs` work as expected (bis) (#1768) | 2024-04-22 16:09:19 +02:00 |
| .gitignore | Impl simple mamba model (#1480) | 2024-02-08 10:19:45 +01:00 |
| Makefile | fix: fix CohereForAI/c4ai-command-r-plus (#1707) | 2024-04-10 17:20:25 +02:00 |
| Makefile-awq | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| Makefile-eetq | Upgrade EETQ (Fixes the cuda graphs). (#1729) | 2024-04-12 08:15:28 +02:00 |
| Makefile-flash-att | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| Makefile-flash-att-v2 | fix: fix CohereForAI/c4ai-command-r-plus (#1707) | 2024-04-10 17:20:25 +02:00 |
| Makefile-selective-scan | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| Makefile-vllm | fix: fix CohereForAI/c4ai-command-r-plus (#1707) | 2024-04-10 17:20:25 +02:00 |
| README.md | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| poetry.lock | Upgrading all versions. (#1759) | 2024-04-18 17:17:40 +02:00 |
| pyproject.toml | v2.0.1 | 2024-04-18 17:20:36 +02:00 |
| requirements_cuda.txt | Upgrading all versions. (#1759) | 2024-04-18 17:17:40 +02:00 |
| requirements_rocm.txt | Upgrading all versions. (#1759) | 2024-04-18 17:17:40 +02:00 |

README.md

Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference

Install

`make install`
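If you are curious what `make install` roughly amounts to, here is a hedged sketch under the assumption that it installs the pinned requirements and the package itself in editable mode; the Makefile in this directory is the authoritative reference:

```shell
# Assumed rough equivalent of `make install`; check the Makefile for the
# exact, authoritative steps (kernel builds, extras, etc. are omitted here).
pip install -r requirements_cuda.txt
pip install -e .
```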

Run

`make run-dev`
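`make run-dev` starts a development instance through the Makefile. Once installed, the package also provides a `text-generation-server` CLI that can be invoked directly; a brief sketch, where the model ID is only an example and available options may vary by version:

```shell
# Download the weights, then start the gRPC server for that model.
# The model ID is an example; run `text-generation-server --help` for options.
text-generation-server download-weights bigscience/bloom-560m
text-generation-server serve bigscience/bloom-560m
```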