hf_text-generation-inference/server/text_generation_server/layers/attention
Latest commit: fix style (Mohit Sharma, 2024-06-24 14:30:26 +00:00)
__init__.py           Purely refactors paged/attention into `layers/attention` and make hardware differences more obvious with 1 file per hardware. (#1986)   2024-05-31 17:57:01 +02:00
cuda.py               fix style   2024-06-24 14:30:26 +00:00
flash_attn_triton.py  Purely refactors paged/attention into `layers/attention` and make hardware differences more obvious with 1 file per hardware. (#1986)   2024-05-31 17:57:01 +02:00
rocm.py               fix style   2024-06-24 14:30:26 +00:00
xpu.py                rebase and update   2024-06-24 08:15:36 +00:00