hf_text-generation-inference/server/text_generation_server/models/custom_modeling

Latest commit: Nicolas Patry, "Fixing Phi3." (2024-06-01 08:47:00 +00:00)
Files last touched by "Purely refactors paged/attention into `layers/attention` and make hardware differences more obvious with 1 file per hardware." (#1986), 2024-05-31 17:57:01 +02:00:

    flash_cohere_modeling.py
    flash_dbrx_modeling.py
    flash_gemma_modeling.py
    flash_gpt2_modeling.py
    flash_mistral_modeling.py
    flash_mixtral_modeling.py
    flash_neox_modeling.py
    flash_phi_modeling.py
    flash_qwen2_modeling.py
    flash_rw_modeling.py
    flash_santacoder_modeling.py
    flash_starcoder2_modeling.py

Files last touched by "Fixing Phi3.", 2024-06-01 08:47:00 +00:00:

    flash_llama_modeling.py

Files last touched by "Pali gemma modeling" (#1895), 2024-05-16 06:58:47 +02:00:

    flash_pali_gemma_modeling.py
    vlm.py

Files last touched by "MLPSpeculator." (#1865), 2024-05-14 12:33:18 +02:00:

    idefics2.py
    llava_next.py

Files last touched by "reenable xpu for tgi" (#1939), 2024-05-23 14:11:08 +02:00:

    idefics_modeling.py

Files last touched by "Removing some unused code." (#1915), 2024-05-17 11:35:49 +02:00:

    siglip.py

Files with no last-commit information in the listing:

    __init__.py
    bloom_modeling.py
    clip.py
    idefics_config.py
    idefics_image_processing.py
    idefics_perceiver.py
    idefics_processing.py
    idefics_vision.py
    mamba_modeling.py
    mpt_modeling.py
    neox_modeling.py
    opt_modeling.py
    phi_modeling.py
    t5_modeling.py