From 743ecbca3aa5a708f438ae473f17ede3c15452b6 Mon Sep 17 00:00:00 2001
From: Brandon Royal <2762697+brandonroyal@users.noreply.github.com>
Date: Tue, 30 Apr 2024 05:39:52 -0400
Subject: [PATCH] Add reference to TPU support (#1760)

# What does this PR do?

This PR makes a small addition to the README that references new TGI support for TPUs via Optimum TPU (https://huggingface.co/docs/optimum-tpu/howto/serving).

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index ad66e328..74616748 100644
--- a/README.md
+++ b/README.md
@@ -64,6 +64,7 @@ Text Generation Inference (TGI) is a toolkit for deploying and serving Large Lan
 - [Inferentia](https://github.com/huggingface/optimum-neuron/tree/main/text-generation-inference)
 - [Intel GPU](https://github.com/huggingface/text-generation-inference/pull/1475)
 - [Gaudi](https://github.com/huggingface/tgi-gaudi)
+- [Google TPU](https://huggingface.co/docs/optimum-tpu/howto/serving)

 ## Get Started
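
For context, the linked Optimum TPU guide covers serving through the standard TGI HTTP interface, so a deployment on TPU is queried the same way as any other TGI backend. Below is a minimal sketch of calling the `/generate` endpoint; the host, port, prompt, and token budget are illustrative assumptions, not part of this patch.

```python
# Minimal sketch: query a running TGI server (for example one started by
# following the Optimum TPU serving guide). The URL, prompt, and
# max_new_tokens value are illustrative assumptions.
import requests

TGI_URL = "http://localhost:8080/generate"  # assumed local TGI endpoint

payload = {
    "inputs": "What is Text Generation Inference?",
    "parameters": {"max_new_tokens": 64},
}

response = requests.post(TGI_URL, json=payload, timeout=60)
response.raise_for_status()

# TGI's /generate endpoint returns the completion under "generated_text".
print(response.json()["generated_text"])
```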