hf_text-generation-inference/server/text_generation_server/models
fxmarty 232e8d5227
MI300 compatibility (#1764)
Adds support for AMD Instinct MI300 in TGI.

The main changes are:
* Support PyTorch TunableOp
(https://github.com/pytorch/pytorch/tree/main/aten/src/ATen/cuda/tunable) to
select the fastest GEMM/GEMV kernels for decoding. TunableOp is disabled by
default and can be enabled with `PYTORCH_TUNABLEOP_ENABLED=1` (a minimal
sketch follows this list).
* Update the ROCm Dockerfile to PyTorch 2.3 (patched with the changes
from https://github.com/pytorch/pytorch/pull/124362).
* Support the custom SiLU and Linear kernels contributed by AMD.
* Update the vLLM paged attention kernels to https://github.com/fxmarty/rocm-vllm/,
branched from the much more recent commit
3489ce7936
* Support the FlashAttention-2 (FA2) Triton kernel recommended by AMD. It can
be enabled by setting `ROCM_USE_FLASH_ATTN_V2_TRITON=1`.
* Update the Dockerfile to ROCm 6.1.
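
As a reference, here is a minimal sketch (not TGI's actual integration code) of what enabling TunableOp looks like from Python. `PYTORCH_TUNABLEOP_ENABLED` is the variable mentioned above; `PYTORCH_TUNABLEOP_FILENAME` and the GEMM shape are illustrative assumptions based on PyTorch's TunableOp documentation.

```python
# Minimal sketch, not TGI code: enable TunableOp via environment variables.
# PYTORCH_TUNABLEOP_ENABLED comes from this PR; PYTORCH_TUNABLEOP_FILENAME is
# assumed from PyTorch's TunableOp docs and may differ across PyTorch versions.
import os

os.environ["PYTORCH_TUNABLEOP_ENABLED"] = "1"  # must be set before the first tunable GEMM runs
os.environ["PYTORCH_TUNABLEOP_FILENAME"] = "/data/tunableop_results.csv"  # assumed variable name

import torch

# A fp16 GEMM shaped like a decode step (a handful of tokens against a large
# weight matrix). With tuning enabled, TunableOp benchmarks the available
# rocBLAS/hipBLASLt solutions for this shape and records the fastest one.
x = torch.randn(4, 8192, dtype=torch.float16, device="cuda")
w = torch.randn(8192, 28672, dtype=torch.float16, device="cuda")
y = x @ w
torch.cuda.synchronize()
```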

By default, TunableOp tuning results are saved in `/data` (e.g.
`/data/tunableop_meta-llama-Llama-2-70b-chat-hf_tp1_rank0.csv`) so that the
tuning does not have to be rerun on every `docker run`.

Example of a saved tuning results file:
```
Validator,PT_VERSION,2.3.0
Validator,ROCM_VERSION,6.1.0.0-82-5fabb4c
Validator,HIPBLASLT_VERSION,0.7.0-1549b021
Validator,GCN_ARCH_NAME,gfx942:sramecc+:xnack-
Validator,ROCBLAS_VERSION,4.1.0-cefa4a9b-dirty
GemmTunableOp_Half_TN,tn_8192_7_28672,Gemm_Rocblas_45475,0.132098
GemmTunableOp_Half_TN,tn_10240_4_8192,Gemm_Rocblas_45546,0.0484431
GemmTunableOp_Half_TN,tn_32000_6_8192,Default,0.149546
GemmTunableOp_Half_TN,tn_32000_3_8192,Gemm_Rocblas_45520,0.147119
GemmTunableOp_Half_TN,tn_8192_3_28672,Gemm_Rocblas_45475,0.132645
GemmTunableOp_Half_TN,tn_10240_3_8192,Gemm_Rocblas_45546,0.0482971
GemmTunableOp_Half_TN,tn_57344_5_8192,Gemm_Rocblas_45520,0.255694
GemmTunableOp_Half_TN,tn_10240_7_8192,Gemm_Rocblas_45517,0.0482522
GemmTunableOp_Half_TN,tn_8192_3_8192,Gemm_Rocblas_45546,0.0444671
GemmTunableOp_Half_TN,tn_8192_5_8192,Gemm_Rocblas_45546,0.0445834
GemmTunableOp_Half_TN,tn_57344_7_8192,Gemm_Rocblas_45520,0.25622
GemmTunableOp_Half_TN,tn_8192_2_28672,Gemm_Rocblas_45475,0.132122
GemmTunableOp_Half_TN,tn_8192_4_8192,Gemm_Rocblas_45517,0.0453191
GemmTunableOp_Half_TN,tn_10240_5_8192,Gemm_Rocblas_45517,0.0482514
GemmTunableOp_Half_TN,tn_8192_5_28672,Gemm_Rocblas_45542,0.133914
GemmTunableOp_Half_TN,tn_8192_2_8192,Gemm_Rocblas_45517,0.0446516
GemmTunableOp_Half_TN,tn_8192_1_28672,Gemm_Hipblaslt_TN_10814,0.131953
GemmTunableOp_Half_TN,tn_10240_2_8192,Gemm_Rocblas_45546,0.0481043
GemmTunableOp_Half_TN,tn_32000_4_8192,Gemm_Rocblas_45520,0.147497
GemmTunableOp_Half_TN,tn_8192_6_28672,Gemm_Rocblas_45529,0.134895
GemmTunableOp_Half_TN,tn_57344_2_8192,Gemm_Rocblas_45520,0.254716
GemmTunableOp_Half_TN,tn_57344_4_8192,Gemm_Rocblas_45520,0.255731
GemmTunableOp_Half_TN,tn_10240_6_8192,Gemm_Rocblas_45517,0.0484816
GemmTunableOp_Half_TN,tn_57344_3_8192,Gemm_Rocblas_45520,0.254701
GemmTunableOp_Half_TN,tn_8192_4_28672,Gemm_Rocblas_45475,0.132159
GemmTunableOp_Half_TN,tn_32000_2_8192,Default,0.147524
GemmTunableOp_Half_TN,tn_32000_5_8192,Default,0.147074
GemmTunableOp_Half_TN,tn_8192_6_8192,Gemm_Rocblas_45546,0.0454045
GemmTunableOp_Half_TN,tn_57344_6_8192,Gemm_Rocblas_45520,0.255582
GemmTunableOp_Half_TN,tn_32000_7_8192,Default,0.146705
GemmTunableOp_Half_TN,tn_8192_7_8192,Gemm_Rocblas_45546,0.0445489
```
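
Since the file is plain CSV, it is easy to inspect which solution was selected for each GEMM shape. Below is a hypothetical helper (not part of TGI) that prints one line per tuned op; the column layout is taken from the example above, and the time column's units follow TunableOp's own convention.

```python
# Hypothetical helper, not part of TGI: summarize a TunableOp results CSV.
# Non-"Validator" rows have the form: op_name, shape_key, chosen_solution, time.
import csv
from pathlib import Path

def summarize_tunableop_results(path: str) -> None:
    with Path(path).open(newline="") as f:
        for row in csv.reader(f):
            if not row or row[0] == "Validator":
                continue  # skip the PT_VERSION / ROCM_VERSION / ... header rows
            op, shape, solution, time = row[0], row[1], row[2], float(row[3])
            print(f"{op:<28} {shape:<20} -> {solution} ({time:g})")

if __name__ == "__main__":
    # Path matches the default location mentioned above.
    summarize_tunableop_results(
        "/data/tunableop_meta-llama-Llama-2-70b-chat-hf_tp1_rank0.csv"
    )
```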

---------

Co-authored-by: Mohit Sharma <mohit21sharma.ms@gmail.com>
2024-05-17 15:30:47 +02:00
| File | Last commit | Last updated |
| --- | --- | --- |
| `custom_modeling` | MI300 compatibility (#1764) | 2024-05-17 15:30:47 +02:00 |
| `__init__.py` | Pali gemma modeling (#1895) | 2024-05-16 06:58:47 +02:00 |
| `bloom.py` | MI300 compatibility (#1764) | 2024-05-17 15:30:47 +02:00 |
| `cache_manager.py` | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00 |
| `causal_lm.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `flash_causal_lm.py` | MI300 compatibility (#1764) | 2024-05-17 15:30:47 +02:00 |
| `flash_cohere.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `flash_dbrx.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `flash_gemma.py` | Pali gemma modeling (#1895) | 2024-05-16 06:58:47 +02:00 |
| `flash_gpt2.py` | MI300 compatibility (#1764) | 2024-05-17 15:30:47 +02:00 |
| `flash_llama.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `flash_mistral.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `flash_mixtral.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `flash_neox.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `flash_phi.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `flash_qwen2.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `flash_rw.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `flash_santacoder.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `flash_starcoder2.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `galactica.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `globals.py` | MI300 compatibility (#1764) | 2024-05-17 15:30:47 +02:00 |
| `gpt_neox.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `idefics.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `idefics2.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `idefics_causal_lm.py` | Adding Llava-Next (Llava 1.6) with full support. (#1709) | 2024-04-09 21:32:00 +02:00 |
| `llava_next.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `mamba.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `model.py` | Use the generation config. (#1808) | 2024-04-25 19:41:50 +02:00 |
| `mpt.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `opt.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `pali_gemma.py` | Pali gemma modeling (#1895) | 2024-05-16 06:58:47 +02:00 |
| `phi.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `rw.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `santacoder.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `seq2seq_lm.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `t5.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `types.py` | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| `vlm_causal_lm.py` | Pali gemma modeling (#1895) | 2024-05-16 06:58:47 +02:00 |