From 17b7b75e652394379931c058a8c2db3a000b4225 Mon Sep 17 00:00:00 2001
From: Nicolas Patry
Date: Fri, 26 Jan 2024 10:13:23 +0100
Subject: [PATCH] Update the docs

---
 README.md                       | 2 +-
 docs/source/supported_models.md | 4 +++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 1b3041d3..73356f28 100644
--- a/README.md
+++ b/README.md
@@ -198,7 +198,7 @@ Be aware that the official Docker image has them enabled by default.
 
 ## Optimized architectures
 
-TGI works out of the box to serve optimized models in [this list](https://huggingface.co/docs/text-generation-inference/supported_models).
+TGI works out of the box to serve all modern models. They can be found in [this list](https://huggingface.co/docs/text-generation-inference/supported_models).
 
 Other architectures are supported on a best-effort basis using:
 
diff --git a/docs/source/supported_models.md b/docs/source/supported_models.md
index dce4f2f9..004790ab 100644
--- a/docs/source/supported_models.md
+++ b/docs/source/supported_models.md
@@ -19,7 +19,9 @@ The following models are optimized and can be served with TGI, which uses custom
 - [MPT](https://huggingface.co/mosaicml/mpt-30b)
 - [Llama V2](https://huggingface.co/meta-llama)
 - [Code Llama](https://huggingface.co/codellama)
-- [Mistral](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
+- [Mistral](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
+- [Mixtral](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
+- [Phi](https://huggingface.co/microsoft/phi-2)
 
 If the above list lacks the model you would like to serve, depending on the model's pipeline type, you can try to initialize and serve the model anyways to see how well it performs, but performance isn't guaranteed for non-optimized models: