# Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference. The router talks to this server over gRPC to run prefill and decode on the loaded model.

## Install

```shell
make install
```
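
For context, `make install` also generates the Python gRPC stubs from the shared proto definitions before installing the package. A minimal sketch of the equivalent manual steps, assuming the proto file sits at `../proto/generate.proto` and the generated stubs land in `text_generation_server/pb` (paths are assumptions, not verbatim Makefile contents):

```shell
# Hedged sketch of roughly what `make install` automates; paths are assumptions.
pip install grpcio-tools

# Generate the Python gRPC stubs from the shared proto definition.
python -m grpc_tools.protoc -I../proto \
    --python_out=text_generation_server/pb \
    --grpc_python_out=text_generation_server/pb \
    ../proto/generate.proto

# Install the server package itself.
pip install -e .
```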

## Run

```shell
make run-dev
```
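
`make run-dev` starts the server against a small test model for local development. To serve a different model, the package also installs a `text-generation-server` CLI entrypoint; a hedged sketch of its use (the model id is illustrative, and available flags vary between versions, so check `text-generation-server --help`):

```shell
# Hedged example: fetch weights, then serve the model over gRPC.
# bigscience/bloom-560m is an illustrative model id, not a requirement.
text-generation-server download-weights bigscience/bloom-560m
text-generation-server serve bigscience/bloom-560m
```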