Large Language Model Text Generation Inference

Text Generation Inference

(Architecture diagram)

A Rust and gRPC server for text-generation inference. Used in production at Hugging Face to power the BLOOM, BLOOMZ and MT0-XXL api-inference widgets.

Features

Officially supported models

Other models are supported on a best-effort basis using:

AutoModelForCausalLM.from_pretrained(<model>, device_map="auto")

or

AutoModelForSeq2SeqLM.from_pretrained(<model>, device_map="auto")
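Put together, the best-effort path suggested by the two calls above could look like the sketch below. This is an illustration only, assuming the `transformers` package; the try-causal-LM-then-fall-back-on-`ValueError` shape is an assumption, not the server's actual code:

```python
def load_best_effort(model_id: str, device_map: str = "auto"):
    """Try a causal LM first, then fall back to a seq2seq model.

    A sketch of the best-effort loading described above; requires the
    `transformers` package at call time.
    """
    from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM

    try:
        return AutoModelForCausalLM.from_pretrained(model_id, device_map=device_map)
    except ValueError:
        # AutoModelForCausalLM raises ValueError when the checkpoint's
        # configuration is not a causal-LM architecture.
        return AutoModelForSeq2SeqLM.from_pretrained(model_id, device_map=device_map)
```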

Load Tests for BLOOM

See k6/load_test.js

                     avg     min       med     max      p(90)   p(95)   RPS
Original code        8.9s    1s        9.12s   16.69s   13.7s   14.26s  5.9
New batching logic   5.44s   959.53ms  5.28s   13.12s   7.78s   8.92s   9.08
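Reading the table, the new batching logic yields roughly a 1.6x reduction in average latency and a 1.5x increase in throughput (simple arithmetic on the reported numbers):

```python
# Figures taken from the BLOOM load-test table above.
orig_avg_s, new_avg_s = 8.9, 5.44
orig_rps, new_rps = 5.9, 9.08

latency_gain = orig_avg_s / new_avg_s    # ~1.64x lower average latency
throughput_gain = new_rps / orig_rps     # ~1.54x higher requests per second

print(f"latency {latency_gain:.2f}x, throughput {throughput_gain:.2f}x")
# → latency 1.64x, throughput 1.54x
```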

Install

make install

Run

BLOOM-560m

make run-bloom-560m

BLOOM

First you need to download the weights:

make download-bloom
make run-bloom # Requires 8xA100 80GB

You can also quantize the weights with bitsandbytes to reduce the VRAM requirement:

make run-bloom-quantize # Requires 8xA100 40GB

Test

curl 127.0.0.1:3000/generate \
    -v \
    -X POST \
    -d '{"inputs":"Testing API","parameters":{"max_new_tokens":9}}' \
    -H 'Content-Type: application/json'
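The same request can be issued from Python. This is a sketch: it assumes a server listening on 127.0.0.1:3000 as in the curl example, and the third-party `requests` package for the actual HTTP call:

```python
import json

# Same payload as the curl example above.
payload = {
    "inputs": "Testing API",
    "parameters": {"max_new_tokens": 9},
}

def generate(url: str = "http://127.0.0.1:3000/generate"):
    """POST the payload to a running server (requires the `requests` package)."""
    import requests

    resp = requests.post(url, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(generate())
```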

Develop

make server-dev
make router-dev