hf_text-generation-inference/server/text_generation_server/layers/attention
Latest commit: dd8691b7c5 by Nicolas Patry, 2024-09-24 23:57:26 +02:00
More tensor cores. (#2558)
  * More tensor cores.
  * Fixing the logic.
  * Gemma is modified by this.
__init__.py            Prefix caching (#2402)                                                          2024-08-20 11:15:30 +02:00
common.py              Lots of improvements (Still 2 allocators) (#2449)                               2024-08-29 16:29:01 +02:00
cuda.py                Lots of improvements (Still 2 allocators) (#2449)                               2024-08-29 16:29:01 +02:00
flash_attn_triton.py   feat: add ruff and resolve issue (#2262)                                        2024-07-26 10:29:09 -04:00
flashinfer.py          More tensor cores. (#2558)                                                      2024-09-24 23:57:26 +02:00
ipex.py                hotfix : enable intel ipex cpu and xpu in python3.11 (#2517)                    2024-09-12 17:23:49 +02:00
rocm.py                Using an enum for flash backens (paged/flashdecoding/flashinfer) (#2385)        2024-08-09 16:41:17 +02:00