# BLOOM Inference Python gRPC Server

A Python gRPC server for BLOOM Inference

## Install

    make install

## Run

    make run-dev
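
Once `make run-dev` has started the server, clients talk to it over gRPC. This README does not document the service definition or the listening address, so the sketch below only verifies that a gRPC channel can be established; the `localhost:50051` address is an assumption for illustration, not taken from the project.

```python
# Hypothetical connectivity check for a locally running server.
# The host/port and the absence of generated stubs here are assumptions;
# the real address and service come from the text_generation package.
import grpc

channel = grpc.insecure_channel("localhost:50051")  # assumed address
try:
    # Block until the channel is ready or the timeout expires.
    grpc.channel_ready_future(channel).result(timeout=5)
    print("gRPC server is reachable")
except grpc.FutureTimeoutError:
    print("gRPC server did not become ready in time")
finally:
    channel.close()
```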