hf_text-generation-inference/benchmark

README.md

Text Generation Inference benchmarking tool

[benchmark screenshot]

A lightweight benchmarking tool inspired by oha and powered by tui.

Install

make install-benchmark
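
The Makefile target builds and installs the benchmark binary with cargo. As a rough sketch, the manual equivalent (assuming a standard Rust toolchain and that you start from the repository root; the actual target may add flags) is:

cd benchmark && cargo install --path .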

Run

First, start text-generation-inference:

text-generation-launcher --model-id bigscience/bloom-560m
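
The launcher accepts further options, for example the number of shards. The line below is only illustrative; check text-generation-launcher --help for the full list of flags:

text-generation-launcher --model-id bigscience/bloom-560m --num-shard 1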

Then run the benchmarking tool:

text-generation-benchmark --tokenizer-name bigscience/bloom-560m
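
The benchmark also exposes parameters to shape the workload, such as batch sizes and prompt/decode lengths. The example below is a sketch with assumed flag names; check text-generation-benchmark --help for the exact options:

text-generation-benchmark \
    --tokenizer-name bigscience/bloom-560m \
    --batch-size 1 \
    --batch-size 8 \
    --sequence-length 512 \
    --decode-length 128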