Text Generation Inference


A Rust, Python and gRPC server for text generation inference. Used in production at Hugging Face to power Hugging Chat, the Inference API and Inference Endpoints.


Features

Optimized architectures

Other architectures are supported on a best effort basis using:

AutoModelForCausalLM.from_pretrained(<model>, device_map="auto")

or

AutoModelForSeq2SeqLM.from_pretrained(<model>, device_map="auto")
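
For illustration, a minimal sketch of loading a fallback architecture directly through transformers (the model id below is just a placeholder example, and accelerate must be installed for device_map="auto"):

from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model id; any causal LM without an optimized implementation is loaded the same way
model_id = "bigscience/bloom-560m"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("What is Deep Learning?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))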

Get started

Docker

The easiest way of getting started is using the official Docker container:

model=tiiuae/falcon-7b-instruct
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.0.0 --model-id $model

Note: To use GPUs, you need to install the NVIDIA Container Toolkit. We also recommend using NVIDIA drivers with CUDA version 11.8 or higher.

To see all the options to serve your models (in the code or in the CLI):

text-generation-launcher --help

You can then query the model using either the /generate or /generate_stream routes:

curl 127.0.0.1:8080/generate \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
    -H 'Content-Type: application/json'
curl 127.0.0.1:8080/generate_stream \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
    -H 'Content-Type: application/json'
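
The same /generate request can also be issued from plain Python with the requests library; a minimal sketch, assuming the server above is listening on 127.0.0.1:8080:

import requests

# Non-streaming generation; /generate_stream returns server-sent events instead
response = requests.post(
    "http://127.0.0.1:8080/generate",
    json={"inputs": "What is Deep Learning?", "parameters": {"max_new_tokens": 20}},
)
print(response.json()["generated_text"])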

or from Python, using the text-generation client:

pip install text-generation
from text_generation import Client

client = Client("http://127.0.0.1:8080")
print(client.generate("What is Deep Learning?", max_new_tokens=20).generated_text)

text = ""
for response in client.generate_stream("What is Deep Learning?", max_new_tokens=20):
    if not response.token.special:
        text += response.token.text
print(text)
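
The text-generation package also ships an asynchronous client; a minimal sketch (assuming the same local server) could look like:

import asyncio
from text_generation import AsyncClient

async def main():
    client = AsyncClient("http://127.0.0.1:8080")
    response = await client.generate("What is Deep Learning?", max_new_tokens=20)
    print(response.generated_text)

asyncio.run(main())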

API documentation

You can consult the OpenAPI documentation of the text-generation-inference REST API using the /docs route. The Swagger UI is also available at: https://huggingface.github.io/text-generation-inference.

Using a private or gated model

You can use the HUGGING_FACE_HUB_TOKEN environment variable to configure the token used by text-generation-inference, giving it access to protected resources.

For example, if you want to serve the gated Llama V2 model variants:

  1. Go to https://huggingface.co/settings/tokens
  2. Copy your cli READ token
  3. Export HUGGING_FACE_HUB_TOKEN=<your cli READ token>

or with Docker:

model=meta-llama/Llama-2-7b-chat-hf
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
token=<your cli READ token>

docker run --gpus all --shm-size 1g -e HUGGING_FACE_HUB_TOKEN=$token -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.0.0 --model-id $model
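
Before launching the server, you can optionally check that the token really grants access to the gated repository; a quick sketch with huggingface_hub (assuming it is installed and the environment variable above is set):

import os
from huggingface_hub import model_info

# Raises an HTTP error if the token cannot access the gated repository
info = model_info("meta-llama/Llama-2-7b-chat-hf", token=os.environ["HUGGING_FACE_HUB_TOKEN"])
print(info.sha)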

A note on Shared Memory (shm)

NCCL is a communication framework used by PyTorch to do distributed training/inference. text-generation-inference makes use of NCCL to enable Tensor Parallelism to dramatically speed up inference for large language models.

In order to share data between the different devices of an NCCL group, NCCL might fall back to using the host memory if peer-to-peer using NVLink or PCI is not possible.

To allow the container to use 1G of shared memory and support SHM sharing, we add --shm-size 1g to the command above.

If you are running text-generation-inference inside Kubernetes, you can also add shared memory to the container by creating a volume with:

- name: shm
  emptyDir:
    medium: Memory
    sizeLimit: 1Gi

and mounting it to /dev/shm.

Finally, you can also disable SHM sharing by using the NCCL_SHM_DISABLE=1 environment variable. However, note that this will impact performance.

Distributed Tracing

text-generation-inference is instrumented with distributed tracing using OpenTelemetry. You can use this feature by setting the address to an OTLP collector with the --otlp-endpoint argument.

Local install

You can also opt to install text-generation-inference locally.

First install Rust and create a Python virtual environment with at least Python 3.9, e.g. using conda:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

conda create -n text-generation-inference python=3.9
conda activate text-generation-inference

You may also need to install Protoc.

On Linux:

PROTOC_ZIP=protoc-21.12-linux-x86_64.zip
curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v21.12/$PROTOC_ZIP
sudo unzip -o $PROTOC_ZIP -d /usr/local bin/protoc
sudo unzip -o $PROTOC_ZIP -d /usr/local 'include/*'
rm -f $PROTOC_ZIP

On macOS, using Homebrew:

brew install protobuf

Then run:

BUILD_EXTENSIONS=True make install # Install repository and HF/transformers fork with CUDA kernels
make run-falcon-7b-instruct

Note: on some machines, you may also need the OpenSSL libraries and gcc. On Linux machines, run:

sudo apt-get install libssl-dev gcc -y

CUDA Kernels

The custom CUDA kernels are only tested on NVIDIA A100s. If you have any installation or runtime issues, you can remove the kernels by using the DISABLE_CUSTOM_KERNELS=True environment variable.

Be aware that the official Docker image has them enabled by default.

Run Falcon

Run

make run-falcon-7b-instruct

Quantization

You can also quantize the weights with bitsandbytes to reduce the VRAM requirement:

make run-falcon-7b-instruct-quantize

4-bit quantization is available using the NF4 and FP4 data types from bitsandbytes. It can be enabled by providing --quantize bitsandbytes-nf4 or --quantize bitsandbytes-fp4 as a command line argument to text-generation-launcher.

Develop

make server-dev
make router-dev

Testing

# python
make python-server-tests
make python-client-tests
# or both server and client tests
make python-tests
# rust cargo tests
make rust-tests
# integration tests
make integration-tests

Other supported hardware

TGI is also supported on the following AI hardware accelerators:

  • Habana first-gen Gaudi and Gaudi2: check out how to serve models with TGI on Gaudi and Gaudi2 with Optimum Habana