From 9883f3b40e8e76cecc9807c438fc08c11e602b7a Mon Sep 17 00:00:00 2001
From: "Wang, Yi"
Date: Thu, 29 Aug 2024 23:42:02 +0800
Subject: [PATCH] update doc with intel cpu part (#2420)

* update doc with intel cpu part

Signed-off-by: Wang, Yi A

* Apply suggestions from code review

We do not ever use `latest` in documentation; it causes too many issues for users.
The release number gets updated on every release.

---------

Signed-off-by: Wang, Yi A
Co-authored-by: Nicolas Patry
---
 docs/source/installation_intel.md | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/docs/source/installation_intel.md b/docs/source/installation_intel.md
index b3843490..3084a436 100644
--- a/docs/source/installation_intel.md
+++ b/docs/source/installation_intel.md
@@ -12,7 +12,24 @@ volume=$PWD/data # share a volume with the Docker container to avoid downloading
 docker run --rm --privileged --cap-add=sys_nice \
     --device=/dev/dri \
     --ipc=host --shm-size 1g --net host -v $volume:/data \
-    ghcr.io/huggingface/text-generation-inference:2.2.0-intel \
+    ghcr.io/huggingface/text-generation-inference:2.2.0-intel-xpu \
+    --model-id $model --cuda-graphs 0
+```
+
+# Using TGI with Intel CPUs
+
+Intel® Extension for PyTorch (IPEX) also provides further optimizations for Intel CPUs. IPEX provides optimized operations such as flash attention, paged attention, fused Add + LayerNorm, ROPE, and more.
+
+On a server powered by Intel CPUs, TGI can be launched with the following command:
+
+```bash
+model=teknium/OpenHermes-2.5-Mistral-7B
+volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
+
+docker run --rm --privileged --cap-add=sys_nice \
+    --device=/dev/dri \
+    --ipc=host --shm-size 1g --net host -v $volume:/data \
+    ghcr.io/huggingface/text-generation-inference:2.2.0-intel-cpu \
     --model-id $model --cuda-graphs 0
 ```
 
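Once either container (XPU or CPU) is up, the deployment can be smoke-tested with a request to TGI's `/generate` endpoint. A minimal sketch, assuming the server is reachable on the host (the `--net host` flag above exposes the container's ports directly) and listens on TGI's default port 80; the prompt and port are illustrative, so adjust if `--port` was passed to the launcher:

```shell
# Send a single generation request to the locally running TGI server.
# Port 80 assumes TGI's default; change it if --port was set at launch.
curl 127.0.0.1:80/generate \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
    -H 'Content-Type: application/json'
```

The response is a JSON object whose `generated_text` field contains the model's completion.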