Update the docs

Nicolas Patry 2024-01-26 10:13:23 +01:00
parent 9c320e260b
commit 17b7b75e65
2 changed files with 4 additions and 2 deletions


@@ -198,7 +198,7 @@ Be aware that the official Docker image has them enabled by default.
 ## Optimized architectures
-TGI works out of the box to serve optimized models in [this list](https://huggingface.co/docs/text-generation-inference/supported_models).
+TGI works out of the box to serve optimized models for all modern models. They can be found in [this list](https://huggingface.co/docs/text-generation-inference/supported_models).
 Other architectures are supported on a best-effort basis using:


@@ -19,7 +19,9 @@ The following models are optimized and can be served with TGI, which uses custom
 - [MPT](https://huggingface.co/mosaicml/mpt-30b)
 - [Llama V2](https://huggingface.co/meta-llama)
 - [Code Llama](https://huggingface.co/codellama)
-- [Mistral](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
+- [Mistral](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
+- [Mixtral](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
+- [Phi](https://huggingface.co/microsoft/phi-2)
 If the above list lacks the model you would like to serve, depending on the model's pipeline type, you can try to initialize and serve the model anyways to see how well it performs, but performance isn't guaranteed for non-optimized models:
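For context on the page being edited: serving a model with TGI, optimized or not, goes through the same launcher entry point. A minimal sketch using the official Docker image, where the model id and volume path are illustrative placeholders, not values from this commit:

```shell
# Illustrative TGI invocation (model id and volume are placeholders).
model=bigscience/bloom-560m
volume=$PWD/data  # shared volume so weights are not re-downloaded on each run

docker run --gpus all --shm-size 1g -p 8080:80 \
  -v "$volume":/data \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id "$model"
```

For an architecture outside the optimized list, TGI falls back to a best-effort path, so the command stays the same but throughput and latency are not guaranteed.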