Text Generation Inference benchmarking tool


A lightweight benchmarking tool inspired by oha and powered by tui.

Install

make install-benchmark

Run

First, start text-generation-inference:

text-generation-launcher --model-id bigscience/bloom-560m

Then run the benchmarking tool:

text-generation-benchmark --tokenizer-name bigscience/bloom-560m
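The benchmark accepts additional options to control the workload it generates. As a hedged sketch (the exact flags and their defaults depend on your version of the tool, so confirm them with `text-generation-benchmark --help` before relying on this):

```shell
# Illustrative invocation only — flag names assumed from the benchmark CLI
# at the time of writing; verify with `text-generation-benchmark --help`.
text-generation-benchmark \
    --tokenizer-name bigscience/bloom-560m \
    --sequence-length 512 \
    --decode-length 128 \
    --batch-size 1 --batch-size 4 --batch-size 8
```

Here `--sequence-length` sets the prompt length, `--decode-length` sets the number of generated tokens, and repeating `--batch-size` benchmarks several batch sizes in one run. The server started with `text-generation-launcher` must still be running when the benchmark is launched.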