hf_text-generation-inference/server

Directory contents: tests/, text_generation/, .gitignore, Makefile, README.md, poetry.lock, pyproject.toml

README.md

# Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference

## Install

```shell
make install
```
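
The directory also contains a `pyproject.toml` and a `poetry.lock`, which suggests the package is managed with Poetry. If you want to install the dependencies by hand, a minimal sketch is shown below; this is an assumption based on those files, and `make install` may do additional work (for example generating gRPC stubs), so prefer the Makefile target when in doubt.

```shell
# Assumption: the server package is managed with Poetry (pyproject.toml / poetry.lock are present).
# `make install` may perform extra steps beyond this, so use it when in doubt.
pip install poetry
poetry install
```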

## Run

```shell
make run-dev
```
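
The exact command behind `run-dev` is defined in the Makefile. A hedged sketch for inspecting it, and for launching the server manually, is below; the `text-generation-server` entry-point name, the `serve` subcommand, and the model id are assumptions not taken from this README, so verify them against the Makefile and `pyproject.toml` first.

```shell
# Inspect the actual run-dev recipe in the Makefile.
grep -A 3 '^run-dev:' Makefile

# Hypothetical manual launch: the entry-point name, subcommand, and model id are
# assumptions; check pyproject.toml and the Makefile for the real invocation.
poetry run text-generation-server serve bigscience/bloom-560m
```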