hf_text-generation-inference/server
# BLOOM Inference Python gRPC Server

A Python gRPC server for BLOOM inference.

## Install

```shell
make install
```

## Run

```shell
make run-dev
```