Text Generation Inference


A Rust, Python and gRPC server for text generation inference. Used in production at Hugging Face to power Hugging Chat, the Inference API and Inference Endpoints.

Text Generation Inference (TGI) is a toolkit for deploying and serving Large Language Models (LLMs). TGI enables high-performance text generation for the most popular open-source LLMs, including Llama, Falcon, StarCoder, BLOOM, GPT-NeoX, and more. TGI implements many features, such as:

  • Simple launcher to serve most popular LLMs
  • Production ready (distributed tracing with Open Telemetry, Prometheus metrics)
  • Tensor Parallelism for faster inference on multiple GPUs
  • Token streaming using Server-Sent Events (SSE)
  • Continuous batching of incoming requests for increased total throughput
  • Messages API compatible with the OpenAI Chat Completions API
  • Optimized transformers code for inference using Flash Attention and Paged Attention on the most popular architectures
  • Quantization with bitsandbytes, GPT-Q, EETQ, AWQ, Marlin, and fp8
  • Safetensors weight loading
  • Watermarking with A Watermark for Large Language Models
  • Logits warper (temperature scaling, top-p, top-k, repetition penalty; for more details see transformers.LogitsProcessor)
  • Stop sequences
  • Log probabilities
  • Speculation (~2x lower latency)
  • Guidance/JSON: specify the output format to speed up inference and make sure the output is valid according to some specs.
  • Custom Prompt Generation: Easily generate text by providing custom prompts to guide the model's output
  • Fine-tuning Support: Utilize fine-tuned models for specific tasks to achieve higher accuracy and performance

Hardware support

Get Started

Docker

For a detailed starting guide, please see the Quick Tour. The easiest way of getting started is using the official Docker container:

model=HuggingFaceH4/zephyr-7b-beta
# share a volume with the Docker container to avoid downloading weights every run
volume=$PWD/data

docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:2.4.1 --model-id $model

And then you can make requests like

curl 127.0.0.1:8080/generate_stream \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
    -H 'Content-Type: application/json'
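Each chunk of the stream arrives as a server-sent event whose data field holds a JSON payload with the generated token. As a rough sketch of consuming that stream in Python (the helper function and the sample lines below are illustrative; the field names follow the /generate_stream response schema):

```python
import json

def parse_sse_events(lines):
    # Collect token texts from TGI-style server-sent event lines.
    # Each event looks like: data:{"token": {"text": ...}, ...}
    tokens = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank separator lines between events
        payload = json.loads(line[len("data:"):])
        token_text = payload.get("token", {}).get("text")
        if token_text is not None:
            tokens.append(token_text)
    return tokens

# Two illustrative events, shaped like TGI's streaming responses
sample = [
    'data:{"token": {"id": 1, "text": "Deep", "special": false}}',
    'data:{"token": {"id": 2, "text": " learning", "special": false}}',
]
print("".join(parse_sse_events(sample)))  # prints "Deep learning"
```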

You can also use TGI's Messages API to obtain responses compatible with the OpenAI Chat Completions API.

curl localhost:8080/v1/chat/completions \
    -X POST \
    -d '{
  "model": "tgi",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "What is deep learning?"
    }
  ],
  "stream": true,
  "max_tokens": 20
}' \
    -H 'Content-Type: application/json'
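The same request can be built from Python with nothing but the standard library. This is a minimal sketch mirroring the curl example above (actually sending it of course requires a running TGI server):

```python
import json
import urllib.request

# Build the same Chat Completions request as the curl example above.
payload = {
    "model": "tgi",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is deep learning?"},
    ],
    "stream": False,  # set to True for token-by-token SSE streaming
    "max_tokens": 20,
}
request = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# With a server running, send it and read the OpenAI-compatible reply:
# with urllib.request.urlopen(request) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```

Because the Messages API is OpenAI-compatible, any OpenAI-style client pointed at http://localhost:8080/v1 should work as well.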

Note: To use NVIDIA GPUs, you need to install the NVIDIA Container Toolkit. We also recommend using NVIDIA drivers with CUDA version 12.2 or higher. To run the Docker container on a machine with no GPUs or CUDA support, remove the --gpus all flag and add --disable-custom-kernels. Please note that CPU is not the intended platform for this project, so performance might be subpar.

Note: TGI supports AMD Instinct MI210 and MI250 GPUs. Details can be found in the Supported Hardware documentation. To use AMD GPUs, please use docker run --device /dev/kfd --device /dev/dri --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:2.4.1-rocm --model-id $model instead of the command above.

To see all options to serve your models (in the code or in the cli):

text-generation-launcher --help

API documentation

You can consult the OpenAPI documentation of the text-generation-inference REST API using the /docs route. The Swagger UI is also available at: https://huggingface.github.io/text-generation-inference.

Using a private or gated model

You can use the HF_TOKEN environment variable to configure the token used by text-generation-inference, which gives you access to protected resources.

For example, if you want to serve the gated Llama V2 model variants:

  1. Go to https://huggingface.co/settings/tokens
  2. Copy your cli READ token
  3. Export HF_TOKEN=<your cli READ token>

or with Docker:

model=meta-llama/Meta-Llama-3.1-8B-Instruct
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
token=<your cli READ token>

docker run --gpus all --shm-size 1g -e HF_TOKEN=$token -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:2.4.1 --model-id $model

A note on Shared Memory (shm)

NCCL is a communication framework used by PyTorch to do distributed training/inference. text-generation-inference makes use of NCCL to enable Tensor Parallelism to dramatically speed up inference for large language models.

In order to share data between the different devices of a NCCL group, NCCL might fall back to using the host memory if peer-to-peer using NVLink or PCI is not possible.

To allow the container to use 1G of Shared Memory and support SHM sharing, we add --shm-size 1g to the above command.

If you are running text-generation-inference inside Kubernetes, you can also add Shared Memory to the container by creating a volume with:

- name: shm
  emptyDir:
   medium: Memory
   sizeLimit: 1Gi

and mounting it to /dev/shm.
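On the container side, the volume then needs a matching mount; as a sketch, the volumeMounts entry could look like this (the container name here is illustrative):

```yaml
containers:
  - name: text-generation-inference  # illustrative container name
    volumeMounts:
      - name: shm
        mountPath: /dev/shm
```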

Finally, you can also disable SHM sharing by using the NCCL_SHM_DISABLE=1 environment variable. However, note that this will impact performance.

Distributed Tracing

text-generation-inference is instrumented with distributed tracing using OpenTelemetry. You can use this feature by setting the address of an OTLP collector with the --otlp-endpoint argument. The default service name can be overridden with the --otlp-service-name argument.

Architecture

TGI architecture

Detailed blogpost by Adyen on TGI inner workings: LLM inference at scale with TGI (Martin Iglesias Goyanes - Adyen, 2024)

Local install

You can also opt to install text-generation-inference locally.

First install Rust and create a Python virtual environment with at least Python 3.9, e.g. using conda:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

conda create -n text-generation-inference python=3.11
conda activate text-generation-inference

You may also need to install Protoc.

On Linux:

PROTOC_ZIP=protoc-21.12-linux-x86_64.zip
curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v21.12/$PROTOC_ZIP
sudo unzip -o $PROTOC_ZIP -d /usr/local bin/protoc
sudo unzip -o $PROTOC_ZIP -d /usr/local 'include/*'
rm -f $PROTOC_ZIP

On MacOS, using Homebrew:

brew install protobuf

Then run:

BUILD_EXTENSIONS=True make install # Install repository and HF/transformer fork with CUDA kernels
text-generation-launcher --model-id mistralai/Mistral-7B-Instruct-v0.2

Note: on some machines, you may also need the OpenSSL libraries and gcc. On Linux machines, run:

sudo apt-get install libssl-dev gcc -y

Local install (Nix)

Another option is to install text-generation-inference locally using Nix. Currently, we only support Nix on x86_64 Linux with CUDA GPUs. When using Nix, all dependencies can be pulled from a binary cache, removing the need to build them locally.

First follow the instructions to install Cachix and enable the TGI cache. Setting up the cache is important, otherwise Nix will build many of the dependencies locally, which can take hours.

After that you can run TGI with nix run:

nix run . -- --model-id meta-llama/Llama-3.1-8B-Instruct

Note: when you are using Nix on a non-NixOS system, you have to make some symlinks to make the CUDA driver libraries visible to Nix packages.

For TGI development, you can use the impure dev shell:

nix develop .#impure

# Only needed the first time the devshell is started or after updating the protobuf.
(
cd server
mkdir text_generation_server/pb || true
python -m grpc_tools.protoc -I../proto/v3 --python_out=text_generation_server/pb \
       --grpc_python_out=text_generation_server/pb --mypy_out=text_generation_server/pb ../proto/v3/generate.proto
find text_generation_server/pb/ -type f -name "*.py" -print0 -exec sed -i -e 's/^\(import.*pb2\)/from . \1/g' {} \;
touch text_generation_server/pb/__init__.py
)

All development dependencies (cargo, Python, Torch, etc.) are available in this dev shell.

Optimized architectures

TGI works out of the box to serve optimized models for all modern architectures. They can be found in this list.

Other architectures are supported on a best-effort basis using:

AutoModelForCausalLM.from_pretrained(<model>, device_map="auto")

or

AutoModelForSeq2SeqLM.from_pretrained(<model>, device_map="auto")

Run locally

Run

text-generation-launcher --model-id mistralai/Mistral-7B-Instruct-v0.2

Quantization

You can also run pre-quantized weights (AWQ, GPTQ, Marlin) or quantize weights on the fly with bitsandbytes, EETQ, or fp8 to reduce the VRAM requirement:

# pick a quantization scheme, e.g. bitsandbytes
text-generation-launcher --model-id mistralai/Mistral-7B-Instruct-v0.2 --quantize bitsandbytes

4bit quantization is available using the NF4 and FP4 data types from bitsandbytes. It can be enabled by providing --quantize bitsandbytes-nf4 or --quantize bitsandbytes-fp4 as a command line argument to text-generation-launcher.

Read more about quantization in the Quantization documentation.

Develop

make server-dev
make router-dev

Testing

# python
make python-server-tests
make python-client-tests
# or both server and client tests
make python-tests
# rust cargo tests
make rust-tests
# integration tests
make integration-tests