diff --git a/docs/source/quicktour.md b/docs/source/quicktour.md
index 109e3c9a..33832964 100644
--- a/docs/source/quicktour.md
+++ b/docs/source/quicktour.md
@@ -15,6 +15,15 @@
 docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data \
     --model-id $model
 ```
+
+
+If you want to serve gated or private models, which provide
+controlled access to sensitive or proprietary content, refer to
+[this guide](https://huggingface.co/docs/text-generation-inference/en/basic_tutorials/gated_model_access)
+for detailed instructions.
+
+
+
 ### Supported hardware
 
 TGI supports various hardware. Make sure to check the [Using TGI with Nvidia GPUs](./installation_nvidia), [Using TGI with AMD GPUs](./installation_amd), [Using TGI with Intel GPUs](./installation_intel), [Using TGI with Gaudi](./installation_gaudi), [Using TGI with Inferentia](./installation_inferentia) guides depending on which hardware you would like to deploy TGI on.
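
The added paragraph points readers at the gated-model guide; as a companion to this patch, the step it references can be sketched as below. This is a hedged sketch, not part of the diff: the image tag, the example model id, and the `HF_TOKEN` variable name are assumptions (older TGI releases read `HUGGING_FACE_HUB_TOKEN`) — serving a gated model means passing a Hugging Face access token into the container on top of the `docker run` invocation shown in the hunk above.

```shell
# Hedged sketch: extend the quicktour's docker run command with an access
# token for gated models. The model id, token value, and image tag below are
# illustrative placeholders, not values from the patch.
model=meta-llama/Llama-2-7b-hf   # hypothetical gated model id
volume=$PWD/data                 # share a volume to avoid re-downloading weights
token=hf_xxx                     # placeholder: your Hugging Face access token

# Echo the command instead of executing it, so the sketch can be inspected
# without Docker or a GPU present; drop the leading `echo` to actually run it.
echo docker run --gpus all --shm-size 1g -p 8080:80 \
    -e "HF_TOKEN=$token" \
    -v "$volume:/data" \
    ghcr.io/huggingface/text-generation-inference:latest \
    --model-id "$model"
```

The token is passed as an environment variable rather than baked into the image so the same container image can serve different accounts' gated models.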