# text-generation-inference/server

Latest commit 91d9beec90 by OlivierDehaene: fix(server): fix init for flash causal lm (#352), 2023-05-22. Fixes #347.

| Entry | Last commit | Date |
| --- | --- | --- |
| tests | fix(server): fix decode token (#334) | 2023-05-16 |
| text_generation_server | fix(server): fix init for flash causal lm (#352) | 2023-05-22 |
| .gitignore | feat(clients): Python client (#103) | 2023-03-07 |
| Makefile | fix(server): fix decode token (#334) | 2023-05-16 |
| Makefile-flash-att | fea(dockerfile): better layer caching (#159) | 2023-04-14 |
| Makefile-transformers | chore(server): update transformers (#250) | 2023-04-27 |
| README.md | feat(router): refactor API and add openAPI schemas (#53) | 2023-02-03 |
| poetry.lock | chore(server): update safetensors version (#235) | 2023-04-25 |
| pyproject.toml | fix(server): fix init for flash causal lm (#352) | 2023-05-22 |
| requirements.txt | chore(server): update safetensors version (#235) | 2023-04-25 |

# Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference

## Install

```shell
make install
```
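The actual recipe lives in the Makefile next to this README. As a rough, non-authoritative sketch, `install` chains together stub generation from the shared proto definition and a pip install of the package; the paths, pins, and extras below are assumptions based on the repo layout, so check the Makefile for the real steps:

```shell
# Sketch of the steps `make install` wraps; the Makefile is authoritative.
pip install grpcio-tools mypy-protobuf        # tooling to compile the protos
mkdir -p text_generation_server/pb
python -m grpc_tools.protoc -I../proto \
    --python_out=text_generation_server/pb \
    --grpc_python_out=text_generation_server/pb \
    ../proto/generate.proto                   # generate the gRPC stubs
pip install -r requirements.txt               # pinned runtime dependencies
pip install -e .                              # editable install of the server package
```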

## Run

```shell
make run-dev
```
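`run-dev` starts the Python server directly for local development, without going through the rest of the Text Generation Inference stack. A sketch of the kind of command the target wraps, assuming a multi-GPU dev box; the model id, process count, and flags here are illustrative, so check the Makefile for the actual recipe:

```shell
# Sketch of what `make run-dev` roughly expands to (illustrative values).
# Spawns one shard per GPU; each shard serves gRPC on a unix socket.
SAFETENSORS_FAST_GPU=1 python -m torch.distributed.run --nproc_per_node=2 \
    text_generation_server/cli.py serve bigscience/bloom-560m --sharded
```

After `make install`, the same entry point is also exposed as the `text-generation-server` console script, which is how shards are normally started in a full deployment.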