Small fixes for supported models (#2471)

* Small improvements for docs

* Update _toctree.yml

* Update the doc (we keep the list, actually)

---------

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
Author: Omar Sanseviero (committed by GitHub)
Date: 2024-10-14 15:26:39 +02:00
Commit: ce28ee88d5 (parent 0c478846c5)

4 changed files with 10 additions and 11 deletions

docs/openapi.json

@@ -2186,4 +2186,4 @@
       "description": "Hugging Face Text Generation Inference API"
     }
   ]
-}
\ No newline at end of file
+}

docs/source/_toctree.yml

@@ -3,6 +3,8 @@
     title: Text Generation Inference
   - local: quicktour
     title: Quick Tour
+  - local: supported_models
+    title: Supported Models
   - local: installation_nvidia
     title: Using TGI with Nvidia GPUs
   - local: installation_amd
@@ -15,8 +17,7 @@
     title: Using TGI with Intel GPUs
   - local: installation
     title: Installation from source
-  - local: supported_models
-    title: Supported Models and Hardware
+
   - local: architecture
     title: Internal Architecture
   - local: usage_statistics

docs/source/supported_models.md

@@ -1,9 +1,7 @@
-# Supported Models and Hardware
+# Supported Models
 
-Text Generation Inference enables serving optimized models on specific hardware for the highest performance. The following sections list which models (VLMs & LLMs) are supported.
-
-## Supported Models
+Text Generation Inference enables serving optimized models. The following sections list which models (VLMs & LLMs) are supported.
 
 - [Deepseek V2](https://huggingface.co/deepseek-ai/DeepSeek-V2)
 - [Idefics 2](https://huggingface.co/HuggingFaceM4/idefics2-8b) (Multimodal)
@@ -38,6 +36,7 @@ Text Generation Inference enables serving optimized models on specific hardware
 - [Mllama](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) (Multimodal)
 
 If the above list lacks the model you would like to serve, depending on the model's pipeline type, you can try to initialize and serve the model anyways to see how well it performs, but performance isn't guaranteed for non-optimized models:
+
 ```python

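The ```python fence above is cut off at the hunk boundary. As a rough sketch of the fallback the doc describes (loading an unlisted model with the plain transformers API to see how it behaves; the model id below is only a placeholder, and device_map="auto" assumes accelerate is installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai-community/gpt2"  # placeholder: swap in the model you want to try

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" needs the accelerate package; drop it to load on CPU
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, TGI!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
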
update_doc.py

@@ -5,14 +5,13 @@ import json
 import os
 
 TEMPLATE = """
-# Supported Models and Hardware
+# Supported Models
 
-Text Generation Inference enables serving optimized models on specific hardware for the highest performance. The following sections list which models (VLMs & LLMs) are supported.
-
-## Supported Models
+Text Generation Inference enables serving optimized models. The following sections list which models (VLMs & LLMs) are supported.
 
 SUPPORTED_MODELS
 
 If the above list lacks the model you would like to serve, depending on the model's pipeline type, you can try to initialize and serve the model anyways to see how well it performs, but performance isn't guaranteed for non-optimized models:
+
 ```python
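SUPPORTED_MODELS acts as a placeholder that the script substitutes with the generated model list, which keeps the doc and the code in sync. The substitution logic itself sits outside this diff; a hypothetical sketch of how such a template could be rendered (the model subset and output path here are illustrative only):

```python
# Hypothetical rendering of the TEMPLATE above; update_doc.py's real
# logic is not part of this diff, so everything below is illustrative.
TEMPLATE = """# Supported Models

Text Generation Inference enables serving optimized models. The following sections list which models (VLMs & LLMs) are supported.

SUPPORTED_MODELS
"""

# (name, hub URL) pairs -- a small subset of the real list
MODELS = [
    ("Deepseek V2", "https://huggingface.co/deepseek-ai/DeepSeek-V2"),
    ("Mllama", "https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct"),
]

# Build the markdown bullet list and splice it into the template
bullets = "\n".join(f"- [{name}]({url})" for name, url in MODELS)
with open("docs/source/supported_models.md", "w") as f:
    f.write(TEMPLATE.replace("SUPPORTED_MODELS", bullets))
```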