This PR adds support for AMD Instinct MI210 and MI250 GPUs, including paged
attention and Flash Attention v2 (FAv2) support.
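As a rough illustration only (not the actual TGI dispatch code), ROCm builds of PyTorch expose `torch.version.hip`, which is one way a server could decide at runtime whether to load the ROCm ports of the FAv2 / paged attention kernels on MI210/MI250:

```python
# Hypothetical sketch: detect a ROCm (HIP) build of PyTorch at runtime.
# This is NOT the TGI launcher logic, only an illustration of the idea.
import torch

def is_rocm_build() -> bool:
    # torch.version.hip is a version string on ROCm builds and None on CUDA builds.
    return torch.version.hip is not None

if torch.cuda.is_available():
    if is_rocm_build():
        # On MI210/MI250, the ROCm Flash Attention v2 and paged attention
        # kernels would be selected here.
        backend = "rocm"
    else:
        backend = "cuda"
else:
    backend = "cpu"

print(f"Selected attention backend: {backend}")
```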
Remaining items to discuss (there may be others):
* Should we have a
`ghcr.io/huggingface/text-generation-inference:1.1.0+rocm` hosted image,
or is it too early?
* Should we set up CI on MI210/MI250? I don't have access to TGI's runners,
though.
* Are we comfortable with those changes being directly in TGI, or do we
need a fork?
---------
Co-authored-by: Felix Marty <felix@hf.co>
Co-authored-by: OlivierDehaene <olivier@huggingface.co>
Added a note on serving supported models from a different folder without
re-downloading them.
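For reference, a minimal sketch of the idea behind that note, assuming the `huggingface_hub` library (the model id and target directory below are illustrative, and the exact flags in the docs may differ): download a model once into a chosen folder, then point the server at that path instead of the Hub repo id.

```python
# Hedged sketch (not the exact snippet from the docs): pre-download a model
# into a local folder so it can later be served without re-downloading.
from huggingface_hub import snapshot_download

# "bigscience/bloom-560m" and the destination folder are illustrative values.
local_path = snapshot_download(
    repo_id="bigscience/bloom-560m",
    local_dir="/data/models/bloom-560m",  # folder the server can be pointed at
)
print(f"Model files available at: {local_path}")
```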
---------
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
I added a ToC for docs v1 and started setting up doc-builder. cc @Narsil
@osanseviero
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: osanseviero <osanseviero@gmail.com>
Co-authored-by: Mishig <mishig.davaadorj@coloradocollege.edu>