hf_text-generation-inference/server/text_generation_server/models

Latest commit: add numa to improve cpu inference perf (#2330)
Wang, Yi A <yi.a.wang@intel.com>, commit 59922f9bc1, 2024-08-13 15:33:55 +02:00
| File | Last commit | Date |
|------|-------------|------|
| custom_modeling | fix: prefer hidden_activation over hidden_act in gemma2 (#2381) | 2024-08-08 14:08:56 -04:00 |
| __init__.py | feat: validate template variables before apply and improve sliding wi… (#2403) | 2024-08-12 10:58:40 -04:00 |
| bloom.py | Refactor dead code - Removing all `flash_xxx.py` files. (#2166) | 2024-07-05 10:29:56 +02:00 |
| causal_lm.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| flash_causal_lm.py | add numa to improve cpu inference perf (#2330) | 2024-08-13 15:33:55 +02:00 |
| galactica.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| globals.py | Add support for prefix caching to the v3 router (#2392) | 2024-08-12 14:59:17 +02:00 |
| idefics.py | enable HuggingFaceM4/idefics-9b in intel gpu (#2338) | 2024-08-01 11:08:36 +02:00 |
| idefics_causal_lm.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| mamba.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| model.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| pali_gemma.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| seq2seq_lm.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| types.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| vlm_causal_lm.py | fix crash in multi-modal (#2245) | 2024-07-24 10:39:08 +02:00 |