Small fixes for supported models (#2471)
* Small improvements for docs
* Update _toctree.yml
* Updating the doc (we keep the list actually).

---------

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
parent 0c478846c5
commit ce28ee88d5
docs/openapi.json

```diff
@@ -2186,4 +2186,4 @@
       "description": "Hugging Face Text Generation Inference API"
     }
   ]
 }
```
docs/source/_toctree.yml

```diff
@@ -3,6 +3,8 @@
     title: Text Generation Inference
   - local: quicktour
     title: Quick Tour
+  - local: supported_models
+    title: Supported Models
   - local: installation_nvidia
     title: Using TGI with Nvidia GPUs
   - local: installation_amd
@@ -15,8 +17,7 @@
     title: Using TGI with Intel GPUs
   - local: installation
     title: Installation from source
-  - local: supported_models
-    title: Supported Models and Hardware
   - local: architecture
     title: Internal Architecture
   - local: usage_statistics
```
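Net effect of the two hunks: the Supported Models entry moves up the sidebar to sit directly after the Quick Tour, and its sidebar title drops the "and Hardware" suffix.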
docs/source/supported_models.md

````diff
@@ -1,9 +1,7 @@
 
-# Supported Models and Hardware
+# Supported Models
 
-Text Generation Inference enables serving optimized models on specific hardware for the highest performance. The following sections list which models (VLMs & LLMs) are supported.
+Text Generation Inference enables serving optimized models. The following sections list which models (VLMs & LLMs) are supported.
 
-## Supported Models
-
 - [Deepseek V2](https://huggingface.co/deepseek-ai/DeepSeek-V2)
 - [Idefics 2](https://huggingface.co/HuggingFaceM4/idefics2-8b) (Multimodal)
@@ -38,6 +36,7 @@ Text Generation Inference enables serving optimized models on specific hardware
 - [Mllama](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) (Multimodal)
 
 
+
 If the above list lacks the model you would like to serve, depending on the model's pipeline type, you can try to initialize and serve the model anyways to see how well it performs, but performance isn't guaranteed for non-optimized models:
 
 ```python
````
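The ```python fence at the end of this hunk is cut off by the diff capture. The fallback it introduces is the plain transformers path; a minimal sketch of that pattern, assuming a causal-LM checkpoint (the model ID and generation settings below are illustrative, not taken from the commit):

```python
# Hedged sketch of the transformers fallback the docs refer to: if a model
# is not on the optimized list, try loading it with the generic auto-classes.
# The model ID is illustrative, not part of the commit.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # any text-generation checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For text-to-text models the same pattern applies with `AutoModelForSeq2SeqLM`; performance is not guaranteed for such non-optimized models.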
update_doc.py

````diff
@@ -5,14 +5,13 @@ import json
 import os
 
 TEMPLATE = """
-# Supported Models and Hardware
+# Supported Models
 
-Text Generation Inference enables serving optimized models on specific hardware for the highest performance. The following sections list which models (VLMs & LLMs) are supported.
+Text Generation Inference enables serving optimized models. The following sections list which models (VLMs & LLMs) are supported.
 
-## Supported Models
 
 SUPPORTED_MODELS
 
 If the above list lacks the model you would like to serve, depending on the model's pipeline type, you can try to initialize and serve the model anyways to see how well it performs, but performance isn't guaranteed for non-optimized models:
 
 ```python
````
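supported_models.md is generated from this script's TEMPLATE string, which is why the same heading and sentence change appear twice in the commit. A minimal sketch of the placeholder substitution such a generator performs, with an invented registry (TEMPLATE is abbreviated here, and none of these names are the script's actual API):

```python
# Illustrative sketch of template-based doc generation: a SUPPORTED_MODELS
# placeholder in a Markdown template is replaced by a bulleted list built
# from a model registry. The registry below is invented for the example.
TEMPLATE = """
# Supported Models

Text Generation Inference enables serving optimized models. The following sections list which models (VLMs & LLMs) are supported.

SUPPORTED_MODELS
"""

MODELS = {
    "Deepseek V2": "https://huggingface.co/deepseek-ai/DeepSeek-V2",
    "Idefics 2": "https://huggingface.co/HuggingFaceM4/idefics2-8b",
}

bullets = "\n".join(f"- [{name}]({url})" for name, url in MODELS.items())
print(TEMPLATE.replace("SUPPORTED_MODELS", bullets))
```

Keeping the edit in the template rather than only in the generated file means the change survives the next regeneration run.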