
# Launching with Docker

The easiest way to get started is to use the official Docker container:

```shell
model=tiiuae/falcon-7b-instruct
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:1.0.0 --model-id $model
```
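Once the container is running, you can send requests to the server. As a minimal check, assuming the default `-p 8080:80` port mapping above, the sketch below queries the `/generate` endpoint with `curl`; the prompt and `max_new_tokens` value are only examples:

```shell
# Query the server on the host port mapped above (8080).
curl 127.0.0.1:8080/generate \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
    -H 'Content-Type: application/json'
```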

Note: To use GPUs, you need to install the NVIDIA Container Toolkit. We also recommend using NVIDIA drivers with CUDA version 11.8 or higher.
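To verify that Docker can see your GPUs, a common check is to run `nvidia-smi` inside a CUDA base container. This is only a sketch; the image tag used here is an example and any recent CUDA base image works:

```shell
# Should list your GPUs if the NVIDIA Container Toolkit is installed correctly.
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
```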

Note: To see all options available for serving your models, consult the launcher source code or run the CLI with the `--help` flag:

```shell
text-generation-launcher --help
```
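Launcher flags can be appended after the image name in the `docker run` command. As a sketch, the flags and values below are illustrative only; check the `--help` output of your image version for the flags it actually supports:

```shell
# Pass launcher flags after the image name; the values here are examples, not recommendations.
docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data \
    ghcr.io/huggingface/text-generation-inference:1.0.0 \
    --model-id $model \
    --max-input-length 1024 \
    --max-total-tokens 2048
```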