text-generation-inference / server
Latest commit: a88c54bb4c by OlivierDehaene, 2023-04-19 12:52:37 +02:00
feat(server): check cuda capability when importing flash models (#201), closes #198
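The commit above gates flash-attention model imports on the GPU's compute capability. The snippet below is a minimal sketch of that idea, not the repository's exact code: it assumes PyTorch's torch.cuda.get_device_capability(), a minimum capability of 7.5 (Turing), and a hypothetical FlashLlama import path.

import torch

FLASH_ATTENTION = False
try:
    if not torch.cuda.is_available():
        raise ImportError("CUDA is not available")
    # get_device_capability() returns (major, minor), e.g. (8, 0) for an A100.
    major, minor = torch.cuda.get_device_capability()
    if (major, minor) < (7, 5):  # assumed minimum; the repo's threshold may differ
        raise ImportError(
            f"GPU with CUDA capability {major}.{minor} is not supported for flash attention"
        )
    # Hypothetical import path, for illustration only.
    from text_generation_server.models.flash_llama import FlashLlama
    FLASH_ATTENTION = True
except ImportError:
    # Fall back to standard attention instead of failing at import time.
    FLASH_ATTENTION = False

Checking at import time lets the server fall back to the non-flash code path on older GPUs rather than crashing later when a flash model is requested.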
Name                    Last commit                                                             Date
tests                   feat(server): add flash attention llama (#144)                          2023-04-11 16:38:22 +02:00
text_generation_server  feat(server): check cuda capability when importing flash models (#201)  2023-04-19 12:52:37 +02:00
.gitignore              feat(clients): Python client (#103)                                     2023-03-07 18:52:22 +01:00
Makefile                fix(docker): fix docker image dependencies (#187)                       2023-04-17 00:26:47 +02:00
Makefile-flash-att      fea(dockerfile): better layer caching (#159)                            2023-04-14 10:12:21 +02:00
Makefile-transformers   fea(dockerfile): better layer caching (#159)                            2023-04-14 10:12:21 +02:00
README.md               feat(router): refactor API and add openAPI schemas (#53)               2023-02-03 12:43:37 +01:00
poetry.lock             fix(docker): fix docker image dependencies (#187)                       2023-04-17 00:26:47 +02:00
pyproject.toml          fix(docker): fix docker image dependencies (#187)                       2023-04-17 00:26:47 +02:00
requirements.txt        fix(docker): fix docker image dependencies (#187)                       2023-04-17 00:26:47 +02:00

README.md

Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference
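As context for what a Python gRPC server entails here, the skeleton below shows the generic grpcio serving pattern. It is a sketch under assumptions: the real servicer classes are generated from the repository's proto definitions, and the generate_pb2_grpc names and Unix-socket path are hypothetical placeholders.

from concurrent import futures

import grpc

def serve(uds_path: str = "/tmp/text-generation-server-0") -> None:
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=1))
    # Register the servicer generated from the proto files, e.g.:
    # generate_pb2_grpc.add_TextGenerationServiceServicer_to_server(
    #     TextGenerationService(model), server
    # )
    # gRPC accepts Unix domain sockets as well as TCP addresses.
    server.add_insecure_port(f"unix://{uds_path}")
    server.start()
    server.wait_for_termination()

if __name__ == "__main__":
    serve()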

Install

make install

Run

make run-dev