hf_text-generation-inference/server
Latest commit 0ac184ce77 by OlivierDehaene: feat(server): add special token bool (#85), 2023-02-24 15:55:57 +01:00
Name             Last commit                                                              Date
tests            feat(server): pre-allocate max attention mask (#75)                     2023-02-24 12:49:21 +01:00
text_generation  feat(server): add special token bool (#85)                              2023-02-24 15:55:57 +01:00
.gitignore       feat(server): Support all AutoModelForCausalLM on a best effort basis  2022-10-28 19:24:00 +02:00
Makefile         feat: add distributed tracing (#62)                                     2023-02-13 13:02:45 +01:00
README.md        feat(router): refactor API and add openAPI schemas (#53)               2023-02-03 12:43:37 +01:00
poetry.lock      feat(server): enable hf-transfer (#76)                                  2023-02-18 14:04:11 +01:00
pyproject.toml   v0.3.1 (#84)                                                            2023-02-24 13:27:41 +01:00

README.md

Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference
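The actual service lives in the text_generation package, with its gRPC interface generated from the repository's protobuf definitions. For orientation only, the sketch below shows the general shape of a Python gRPC server bound to a unix domain socket; the socket path and the use of the standard health service are illustrative assumptions, not this repository's code.

```python
# Generic sketch of a Python gRPC server on a unix domain socket.
# NOT the text_generation server itself: the socket path and the
# standard health service below are illustrative assumptions.
from concurrent import futures

import grpc
from grpc_health.v1 import health, health_pb2_grpc


def serve(uds_path: str = "/tmp/text-generation-0.sock") -> None:
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=1))
    # A real server would register the servicer generated from the
    # repository's .proto files here instead of (or alongside) health.
    health_pb2_grpc.add_HealthServicer_to_server(health.HealthServicer(), server)
    server.add_insecure_port(f"unix://{uds_path}")
    server.start()
    server.wait_for_termination()


if __name__ == "__main__":
    serve()
```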

Install

make install

Run

make run-dev
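
In normal operation the Rust router is the gRPC client of this server, but any gRPC client can verify that the process is up. A minimal connectivity check is sketched below; the unix-socket path is a placeholder and calls to the stubs generated from the repository's protobufs are intentionally omitted.

```python
# Sketch: wait until a gRPC server on a unix socket is reachable.
# The socket path is a placeholder assumption; requests against the
# generated text generation stubs are not shown here.
import grpc

channel = grpc.insecure_channel("unix:///tmp/text-generation-0.sock")
try:
    grpc.channel_ready_future(channel).result(timeout=10)
    print("server is reachable")
except grpc.FutureTimeoutError:
    print("server did not come up within 10s")
finally:
    channel.close()
```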