text-generation-inference/server
| Path | Latest commit | Date |
|------|---------------|------|
| tests | fix(server): fix generate_stream by forcing tokens to be decoded correctly (#100) | 2023-03-06 13:22:58 +01:00 |
| text_generation | feat: allow local models (#101) | 2023-03-06 14:39:36 +01:00 |
| .gitignore | feat(server): Support all AutoModelForCausalLM on a best effort basis | 2022-10-28 19:24:00 +02:00 |
| Makefile | v0.3.2 (#97) | 2023-03-03 18:42:20 +01:00 |
| README.md | feat(router): refactor API and add openAPI schemas (#53) | 2023-02-03 12:43:37 +01:00 |
| poetry.lock | feat(server): update to hf_transfer==0.1.2 (#93) | 2023-03-03 11:26:27 +01:00 |
| pyproject.toml | v0.3.2 (#97) | 2023-03-03 18:42:20 +01:00 |

README.md

# Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference

## Install

```bash
make install
```
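
The `install` target in the Makefile is the source of truth. As a rough sketch (an assumption based on the `pyproject.toml` and `poetry.lock` shipped alongside this README, which suggest a Poetry-managed project), it amounts to something like:

```bash
# Hypothetical approximation of `make install`; see the Makefile for the exact steps.
pip install poetry   # the project is managed through pyproject.toml / poetry.lock
poetry install       # install the server package and its dependencies into the environment
```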

## Run

```bash
make run-dev
```
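
`make run-dev` starts the server for local development; the exact command it wraps is defined in the Makefile. A hypothetical illustration (assuming the `text_generation` package exposes a `serve` CLI command that takes a model id) would be:

```bash
# Hypothetical dev launch; the real target lives in the Makefile.
# Assumes the text_generation package provides a `serve` command that loads a small model.
python -m text_generation.cli serve bigscience/bloom-560m
```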