
# Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference.

## Install

```shell
make install
```

## Run

```shell
make run-dev
```
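Once the server is running, you can verify that its gRPC endpoint is reachable before wiring up a client. This is a minimal sketch using the `grpcio` package; the address below is an assumption for illustration (the actual listen address depends on how the server is launched), not something specified in this README.

```python
import grpc  # third-party package: grpcio


def check_server(address: str, timeout: float = 5.0) -> bool:
    """Return True if a gRPC server answers at `address` within `timeout` seconds."""
    channel = grpc.insecure_channel(address)
    try:
        # Block until the channel is connected, or raise on timeout.
        grpc.channel_ready_future(channel).result(timeout=timeout)
        return True
    except grpc.FutureTimeoutError:
        return False
    finally:
        channel.close()


if __name__ == "__main__":
    # "localhost:50051" is a hypothetical address used for illustration only.
    print(check_server("localhost:50051", timeout=2.0))
```

Actual requests require the client stubs generated from the server's protobuf definitions; this snippet only checks connectivity.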