File | Last commit message | Last commit date
__init__.py | feat(server): flash santacoder (#153) | 2023-04-03 19:06:42 +02:00
bloom_modeling.py | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00
clip.py | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00
flash_cohere_modeling.py | [Major Change][Undecided yet] Move to FlashDecoding instead of PagedAttention kernel. (#1940) | 2024-07-01 23:28:00 +02:00
flash_dbrx_modeling.py | [Major Change][Undecided yet] Move to FlashDecoding instead of PagedAttention kernel. (#1940) | 2024-07-01 23:28:00 +02:00
flash_gemma2_modeling.py | [Major Change][Undecided yet] Move to FlashDecoding instead of PagedAttention kernel. (#1940) | 2024-07-01 23:28:00 +02:00
flash_gemma_modeling.py | [Major Change][Undecided yet] Move to FlashDecoding instead of PagedAttention kernel. (#1940) | 2024-07-01 23:28:00 +02:00
flash_gpt2_modeling.py | [Major Change][Undecided yet] Move to FlashDecoding instead of PagedAttention kernel. (#1940) | 2024-07-01 23:28:00 +02:00
flash_llama_modeling.py | [Major Change][Undecided yet] Move to FlashDecoding instead of PagedAttention kernel. (#1940) | 2024-07-01 23:28:00 +02:00
flash_mistral_modeling.py | fix: use the base layers weight in mistral rocm (#2155) | 2024-07-02 11:56:25 +02:00
flash_mixtral_modeling.py | [Major Change][Undecided yet] Move to FlashDecoding instead of PagedAttention kernel. (#1940) | 2024-07-01 23:28:00 +02:00
flash_neox_modeling.py | [Major Change][Undecided yet] Move to FlashDecoding instead of PagedAttention kernel. (#1940) | 2024-07-01 23:28:00 +02:00
flash_pali_gemma_modeling.py | Enable multiple LoRa adapters (#2010) | 2024-06-25 14:46:27 -04:00
flash_phi_modeling.py | [Major Change][Undecided yet] Move to FlashDecoding instead of PagedAttention kernel. (#1940) | 2024-07-01 23:28:00 +02:00
flash_qwen2_modeling.py | Hotfixing qwen2 and starcoder2 (which also get clamping). (#2167) | 2024-07-02 14:26:47 +02:00
flash_rw_modeling.py | [Major Change][Undecided yet] Move to FlashDecoding instead of PagedAttention kernel. (#1940) | 2024-07-01 23:28:00 +02:00
flash_santacoder_modeling.py | [Major Change][Undecided yet] Move to FlashDecoding instead of PagedAttention kernel. (#1940) | 2024-07-01 23:28:00 +02:00
flash_starcoder2_modeling.py | Hotfixing qwen2 and starcoder2 (which also get clamping). (#2167) | 2024-07-02 14:26:47 +02:00
idefics2.py | Enable multiple LoRa adapters (#2010) | 2024-06-25 14:46:27 -04:00
idefics_config.py | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00
idefics_image_processing.py | chore: formatting | 2023-12-11 14:49:52 +01:00
idefics_modeling.py | reenable xpu for tgi (#1939) | 2024-05-23 14:11:08 +02:00
idefics_perceiver.py | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00
idefics_processing.py | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00
idefics_vision.py | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00
llava_next.py | Idefics2: sync added image tokens with transformers (#2080) | 2024-06-27 15:54:35 +02:00
mamba_modeling.py | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00
mpt_modeling.py | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00
neox_modeling.py | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00
opt_modeling.py | fix(server): fix OPT implementation (#2061) | 2024-06-12 18:22:20 +02:00
phi_modeling.py | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00
siglip.py | Removing some unused code. (#1915) | 2024-05-17 11:35:49 +02:00
t5_modeling.py | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00
vlm.py | Pali gemma modeling (#1895) | 2024-05-16 06:58:47 +02:00