Add reference to TPU support (#1760)
# What does this PR do?

This PR makes a small addition to the README that references the new TGI support for TPUs via Optimum TPU (https://huggingface.co/docs/optimum-tpu/howto/serving).
parent 04d4765bad
commit 743ecbca3a
@@ -64,6 +64,7 @@ Text Generation Inference (TGI) is a toolkit for deploying and serving Large Lan
 - [Inferentia](https://github.com/huggingface/optimum-neuron/tree/main/text-generation-inference)
 - [Intel GPU](https://github.com/huggingface/text-generation-inference/pull/1475)
 - [Gaudi](https://github.com/huggingface/tgi-gaudi)
+- [Google TPU](https://huggingface.co/docs/optimum-tpu/howto/serving)


 ## Get Started
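The Optimum TPU serving guide linked above deploys TGI with a TPU backend behind TGI's usual HTTP interface, so a client should be able to talk to it the same way it talks to the other hardware targets listed in the README. As a minimal sketch (not part of this commit), the snippet below posts a prompt to TGI's standard `/generate` route; the host and port are placeholder assumptions, and only `max_new_tokens` is set so the request stays backend-agnostic.

```python
import requests

# Placeholder endpoint (assumption): point this at wherever the TGI server
# (GPU, Inferentia, Gaudi, or a TPU deployment via Optimum TPU) is exposed.
TGI_URL = "http://localhost:8080"

# Standard TGI text-generation request body: a prompt plus generation parameters.
payload = {
    "inputs": "What is Text Generation Inference?",
    "parameters": {"max_new_tokens": 64},
}

# POST to TGI's /generate route and print the completion returned
# under "generated_text".
response = requests.post(f"{TGI_URL}/generate", json=payload, timeout=60)
response.raise_for_status()
print(response.json()["generated_text"])
```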