# Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference

## Install

```shell
make install
```

## Run

```shell
make run-dev
```