Contents of `hf_text-generation-inference/server`:
| Name | Last commit | Date |
| --- | --- | --- |
| custom_kernels | | |
| exllama_kernels | feat: add cuda memory fraction (#659) | 2023-07-24 |
| tests | feat: format code (#1070) | 2023-09-27 |
| text_generation_server | Hotfixing idefics base64 parsing. (#1103) | 2023-10-05 |
| .gitignore | Support eetq weight only quantization (#1068) | 2023-09-27 |
| Makefile | Support eetq weight only quantization (#1068) | 2023-09-27 |
| Makefile-awq | Add AWQ quantization inference support (#1019) (#1054) | 2023-09-25 |
| Makefile-eetq | Support eetq weight only quantization (#1068) | 2023-09-27 |
| Makefile-flash-att | | |
| Makefile-flash-att-v2 | feat: add mistral model (#1071) | 2023-09-28 |
| Makefile-vllm | feat: add mistral model (#1071) | 2023-09-28 |
| README.md | | |
| poetry.lock | feat: support cuda 12.1 | 2023-10-10 |
| pyproject.toml | feat: support cuda 12.1 | 2023-10-10 |
| requirements.txt | feat: support cuda 12.1 | 2023-10-10 |


# Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference
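Once running, the server typically listens for gRPC on a Unix domain socket rather than a TCP port. A minimal smoke test sketch, assuming a running server with gRPC reflection enabled and an example socket path of `/tmp/text-generation-server-0` (both the reflection support and the exact path are assumptions here, not guarantees):

```shell
# List the services exposed by the server (requires grpcurl).
# The socket path below is an example; the real path is chosen at launch time.
grpcurl -plaintext -unix /tmp/text-generation-server-0 list
```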

## Install

```shell
make install
```
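Beyond the base install, the `Makefile-*` files listed above build optional accelerated kernels (flash attention, vLLM, AWQ, EETQ). A sketch of invoking them, assuming each file defines an install target named after the feature it builds (treat the exact target names as assumptions):

```shell
# Optional kernel builds; each target lives in the matching Makefile-* file.
make install-flash-attention     # Makefile-flash-att
make install-flash-attention-v2  # Makefile-flash-att-v2
make install-vllm                # Makefile-vllm
make install-awq                 # Makefile-awq
make install-eetq                # Makefile-eetq
```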

## Run

```shell
make run-dev
```
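In a full Text Generation Inference deployment this server is spawned by the Rust launcher; `make run-dev` is a development shortcut. You can also call the `text-generation-server` entry point installed by `make install` directly. A minimal sketch; the model id is just an example:

```shell
# Fetch weights, then start the gRPC server for the chosen model.
text-generation-server download-weights bigscience/bloom-560m
text-generation-server serve bigscience/bloom-560m
```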