
Text Generation Inference benchmarking tool

A lightweight benchmarking tool inspired by oha and powered by tui.

Install

make install-benchmark

Run

First, start text-generation-inference:

text-generation-launcher --model-id bigscience/bloom-560m

Then run the benchmarking tool:

text-generation-benchmark --tokenizer-name bigscience/bloom-560m
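As a rough sketch, the two steps above can be combined in one shell session. Note that any flags beyond `--model-id` and `--tokenizer-name` shown here are assumptions about the tool's CLI; check `text-generation-benchmark --help` for the actual option names and defaults.

```shell
# Start text-generation-inference in the background
# (same model as in the steps above).
text-generation-launcher --model-id bigscience/bloom-560m &

# Give the server time to load the model before benchmarking.
# A fixed sleep is a crude placeholder; polling the server's
# health endpoint would be more robust.
sleep 60

# Run the benchmark against the running server.
# --sequence-length, --decode-length, and --batch-size are
# assumed flag names, not confirmed by this README.
text-generation-benchmark \
    --tokenizer-name bigscience/bloom-560m \
    --sequence-length 512 \
    --decode-length 128 \
    --batch-size 1 \
    --batch-size 8
```

The benchmark opens a terminal UI, so it should be run in an interactive terminal rather than piped to a file.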