hf_text-generation-inference/server/text_generation_server/models/custom_modeling
Nicolas Patry 9e2fdf57c0
Removing IPEX_AVAIL. (#2115)
* Removing IPEX_AVAIL.

Chose to unify CPU and XPU under `ipex`. Most of the code is identical,
except in a very few spots.

The largest number of differences is in the kv-cache layout and the
flash_xxx.py files.
Since those files should soon be removed and factored away, we should
not need them.

* Forgot a few places.

* Unrelated change.

* Fixing HF_TOKEN.

* HF_TOKEN
2024-06-25 13:20:57 +02:00
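The unification the commit describes, collapsing a separate IPEX_AVAIL boolean into a single `ipex` system value that covers both CPU and XPU, could be sketched roughly as below. This is a hedged illustration only: the function name `get_system` and the returned labels are hypothetical and are not TGI's actual API; the real detection logic in the repository differs.

```python
# Hypothetical sketch of unifying CPU and XPU under one "ipex" system label,
# replacing a standalone IPEX_AVAIL flag. Not the actual TGI implementation.
from importlib.util import find_spec


def get_system() -> str:
    """Return a backend label; CPU and XPU builds are both reported as "ipex"."""
    # intel_extension_for_pytorch serves both CPU and XPU, so a single
    # presence check can replace the old IPEX_AVAIL boolean.
    if find_spec("intel_extension_for_pytorch") is not None:
        return "ipex"
    return "cuda"
```

With one label for both Intel targets, callers branch on `get_system() == "ipex"` in the few spots where behavior differs, instead of threading a separate availability flag through the code.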
__init__.py | feat(server): flash santacoder (#153) | 2023-04-03 19:06:42 +02:00
bloom_modeling.py | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00
clip.py | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00
flash_cohere_modeling.py | Add support for GPTQ Marlin (#2052) | 2024-06-14 09:45:42 +02:00
flash_dbrx_modeling.py | Removing IPEX_AVAIL. (#2115) | 2024-06-25 13:20:57 +02:00
flash_gemma_modeling.py | Add support for Marlin-quantized models | 2024-06-06 13:16:52 +02:00
flash_gpt2_modeling.py | Add Phi-3 medium support (#2039) | 2024-06-10 09:22:29 +02:00
flash_llama_modeling.py | Add Phi-3 medium support (#2039) | 2024-06-10 09:22:29 +02:00
flash_mistral_modeling.py | Purely refactors paged/attention into `layers/attention` and make hardware differences more obvious with 1 file per hardware. (#1986) | 2024-05-31 17:57:01 +02:00
flash_mixtral_modeling.py | Removing IPEX_AVAIL. (#2115) | 2024-06-25 13:20:57 +02:00
flash_neox_modeling.py | feat: move allocation logic to rust (#1835) | 2024-06-05 12:18:38 +02:00
flash_pali_gemma_modeling.py | Pali gemma modeling (#1895) | 2024-05-16 06:58:47 +02:00
flash_phi_modeling.py | Add support for Marlin-quantized models | 2024-06-06 13:16:52 +02:00
flash_qwen2_modeling.py | Support exl2-quantized Qwen2 models (#2085) | 2024-06-20 07:56:16 +02:00
flash_rw_modeling.py | feat: move allocation logic to rust (#1835) | 2024-06-05 12:18:38 +02:00
flash_santacoder_modeling.py | Add support for GPTQ Marlin (#2052) | 2024-06-14 09:45:42 +02:00
flash_starcoder2_modeling.py | Add support for GPTQ Marlin (#2052) | 2024-06-14 09:45:42 +02:00
idefics2.py | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00
idefics_config.py | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00
idefics_image_processing.py | chore: formatting | 2023-12-11 14:49:52 +01:00
idefics_modeling.py | reenable xpu for tgi (#1939) | 2024-05-23 14:11:08 +02:00
idefics_perceiver.py | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00
idefics_processing.py | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00
idefics_vision.py | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00
llava_next.py | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00
mamba_modeling.py | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00
mpt_modeling.py | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00
neox_modeling.py | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00
opt_modeling.py | fix(server): fix OPT implementation (#2061) | 2024-06-12 18:22:20 +02:00
phi_modeling.py | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00
siglip.py | Removing some unused code. (#1915) | 2024-05-17 11:35:49 +02:00
t5_modeling.py | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00
vlm.py | Pali gemma modeling (#1895) | 2024-05-16 06:58:47 +02:00