text-generation-inference/benchmark
Commit fc52ba61ab by Daniël de Kok (2024-06-03): router: send the input as chunks to the backend
Before this change, the generation input was sent to the backend as a
single string, encoding images as Base64 and packing them in
Markdown-style links.
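
For instance, a prompt with one image presumably traveled as a single string along these lines (the data URI is truncated and purely illustrative):

What is shown in this picture? ![](data:image/png;base64,iVBORw0KGgo...)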

This change adds a new chunked input representation that separates text
chunks from image chunks. Image chunks contain binary data (for smaller
message sizes) and the image's MIME type.

The stringly-typed inputs are still sent to support backends that do not
support chunked inputs yet.


Text Generation Inference benchmarking tool


A lightweight benchmarking tool inspired by oha and powered by tui.

Install

make install-benchmark

Run

First, start text-generation-inference:

text-generation-launcher --model-id bigscience/bloom-560m

Then run the benchmarking tool:

text-generation-benchmark --tokenizer-name bigscience/bloom-560m
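
The tool accepts further options to shape the benchmark. The exact flag names below are assumptions; consult text-generation-benchmark --help for the authoritative list. An invocation might look like:

text-generation-benchmark --tokenizer-name bigscience/bloom-560m --sequence-length 512 --decode-length 64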