hf_text-generation-inference/server/text_generation_server/layers/attention
Mohit Sharma f9e561eced
Update ROCm libs and improvements (#2579)
* style
* update torch
* fix issues
* fix clone
* revert mkl
* added custom PA
* style
* fix style
* style
* hide env var
* fix mixtral model
* add skinny kernel and merge fixes
* fixed style
* fix issue for sliding window models
* addressed review comments
* fix import
* improved error message
* updated default value
* remove import
* fix imports after rebase
* float16 dep
* improve dockerfile
* cleaned dockerfile
2024-09-30 10:54:32 +02:00
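The squashed commits above mention adding a custom paged-attention (PA) kernel and a skinny-GEMM kernel for ROCm, hidden behind an environment variable. Below is a minimal, hypothetical sketch of that gating pattern; the variable names and helper are illustrative assumptions, not TGI's actual flags or code:

```python
import os

def _env_flag(name: str, default: bool = False) -> bool:
    """Parse a boolean-ish environment variable ("1"/"true"/"yes")."""
    return os.getenv(name, "1" if default else "0").lower() in ("1", "true", "yes")

# Hypothetical opt-in switches for the custom ROCm kernels, in the spirit
# of the "added custom PA" / "hide env var" items above. Stock kernels
# remain the default unless the user explicitly enables the custom ones.
USE_CUSTOM_PAGED_ATTN = _env_flag("ROCM_USE_CUSTOM_PAGED_ATTN")
USE_SKINNY_GEMM = _env_flag("ROCM_USE_SKINNY_GEMM")
```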
File                    Last commit                                             Date
__init__.py             Improve support for GPUs with capability < 8 (#2575)   2024-09-27 16:19:42 +02:00
common.py               Update ROCm libs and improvements (#2579)               2024-09-30 10:54:32 +02:00
cuda.py                 Improve support for GPUs with capability < 8 (#2575)   2024-09-27 16:19:42 +02:00
flash_attn_triton.py    feat: add ruff and resolve issue (#2262)                 2024-07-26 10:29:09 -04:00
flashinfer.py           flashinfer: pass window size and dtype (#2574)           2024-09-28 18:41:41 +02:00
ipex.py                 Improve support for GPUs with capability < 8 (#2575)   2024-09-27 16:19:42 +02:00
rocm.py                 Update ROCm libs and improvements (#2579)               2024-09-30 10:54:32 +02:00
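The directory holds one module per attention backend (cuda.py, rocm.py, ipex.py, flashinfer.py, plus a Triton flash-attention port and shared helpers in common.py). Below is a minimal sketch of how a dispatcher such as __init__.py might pick among them; the detection logic and names are assumptions for illustration, not the repository's actual implementation:

```python
import os
import torch

def select_attention_backend() -> str:
    """Pick an attention backend matching the modules in this directory."""
    # Treat flashinfer as an optional, explicitly enabled CUDA backend.
    if os.getenv("USE_FLASHINFER", "0") == "1":
        return "flashinfer"  # flashinfer.py
    # ROCm builds of torch report a HIP version instead of a CUDA one.
    if torch.version.hip is not None:
        return "rocm"        # rocm.py
    if torch.cuda.is_available():
        return "cuda"        # cuda.py
    # Fall back to Intel's PyTorch extension on CPU/XPU systems.
    return "ipex"            # ipex.py
```

In the real package, the chosen module's attention functions would then be re-exported so model code can stay backend-agnostic.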