hf_text-generation-inference/server/text_generation_server/layers/attention
Nicolas Patry 9e2fdf57c0
Removing IPEX_AVAIL. (#2115)
* Removing IPEX_AVAIL.

Chose to unify CPU and XPU under `ipex`. Most of the code is identical
except in a few spots.

The largest differences are in the kv-cache layout and the flash_xxx.py
files. Since those files should be removed and factored away soon, we
should not need them.

* Forgot a few places.

* Unrelated change.

* Fixing HF_TOKEN.

* HF_TOKEN
2024-06-25 13:20:57 +02:00
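The unification described above boils down to a per-hardware dispatch where CPU and XPU now resolve to the same `ipex` backend, with `cuda` and `rocm` keeping their own modules (mirroring the files in this directory). The sketch below is illustrative only: the function name, mapping, and detection string are assumptions, not the server's actual API.

```python
def attention_backend(system: str) -> str:
    """Map a detected hardware system to its attention backend module.

    Illustrative sketch: after removing IPEX_AVAIL, both "cpu" and "xpu"
    share the single `ipex` backend instead of branching on a flag.
    The mapping keys and this helper are hypothetical.
    """
    backends = {
        "cuda": "cuda",  # cuda.py
        "rocm": "rocm",  # rocm.py
        "cpu": "ipex",   # ipex.py (unified)
        "xpu": "ipex",   # ipex.py (unified)
    }
    if system not in backends:
        raise ImportError(f"No attention backend for system: {system!r}")
    return backends[system]
```

The point of the change is visible in the mapping: there is no separate availability flag to check, only one shared module for both Intel targets.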
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | Removing IPEX_AVAIL. (#2115) | 2024-06-25 13:20:57 +02:00 |
| cuda.py | Purely refactors paged/attention into `layers/attention` and make hardware differences more obvious with 1 file per hardware. (#1986) | 2024-05-31 17:57:01 +02:00 |
| flash_attn_triton.py | Purely refactors paged/attention into `layers/attention` and make hardware differences more obvious with 1 file per hardware. (#1986) | 2024-05-31 17:57:01 +02:00 |
| ipex.py | Removing IPEX_AVAIL. (#2115) | 2024-06-25 13:20:57 +02:00 |
| rocm.py | ROCm and sliding windows fixes (#2033) | 2024-06-10 15:09:50 +08:00 |