hf_text-generation-inference/server/text_generation_server/models/custom_modeling

Latest commit: bd6e8b3c13 by drbh, "fix: adjust llama MLP name from dense to mlp to correctly apply lora" (#2760), 2024-11-19 15:10:22 -05:00 (a short sketch of why this rename matters for LoRA follows the file listing)
__init__.py feat(server): flash santacoder (#153) 2023-04-03 19:06:42 +02:00
bloom_modeling.py Fixing auto bloom test. (#2699) 2024-10-28 06:14:11 +01:00
clip.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
flash_cohere_modeling.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
flash_dbrx_modeling.py Simplify two ipex conditions (#2755) 2024-11-19 08:04:23 +01:00
flash_deepseek_v2_modeling.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
flash_gemma2_modeling.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
flash_gemma_modeling.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
flash_gpt2_modeling.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
flash_gptj_modeling.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
flash_llama_modeling.py fix: adjust llama MLP name from dense to mlp to correctly apply lora (#2760) 2024-11-19 15:10:22 -05:00
flash_mistral_modeling.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
flash_mixtral_modeling.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
flash_neox_modeling.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
flash_pali_gemma_modeling.py Support qwen2 vl (#2689) 2024-10-30 12:40:51 -04:00
flash_phi_modeling.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
flash_phi_moe_modeling.py feat: support phi3.5 moe (#2479) 2024-09-30 11:15:09 +02:00
flash_qwen2_modeling.py Support qwen2 vl (#2689) 2024-10-30 12:40:51 -04:00
flash_rw_modeling.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
flash_santacoder_modeling.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
flash_starcoder2_modeling.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
idefics2.py Support qwen2 vl (#2689) 2024-10-30 12:40:51 -04:00
idefics_config.py chore: add pre-commit (#1569) 2024-02-16 11:58:58 +01:00
idefics_image_processing.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
idefics_modeling.py enable HuggingFaceM4/idefics-9b in intel gpu (#2338) 2024-08-01 11:08:36 +02:00
idefics_perceiver.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
idefics_processing.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
idefics_vision.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
llava_next.py Support qwen2 vl (#2689) 2024-10-30 12:40:51 -04:00
mamba_modeling.py Fix: Change embeddings to embedding (#2738) 2024-11-15 13:16:15 +01:00
mllama.py feat: prefill chunking (#2600) 2024-10-16 12:49:33 +02:00
mpt_modeling.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
neox_modeling.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
opt_modeling.py Fix the prefix for OPT model in opt_modelling.py #2370 (CI RUN) (#2371) 2024-08-07 23:14:02 -04:00
phi_modeling.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
qwen2_vl.py feat: support flash attention 2 in qwen2 vl vision blocks (#2721) 2024-11-18 12:46:40 -05:00
siglip.py Fix: don't apply post layernorm in SiglipVisionTransformer (#2459) 2024-08-26 17:04:46 -04:00
t5_modeling.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
vlm.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
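
The latest commit above renames the llama block's MLP attribute from "dense" to "mlp". This matters because LoRA adapters are typically routed to submodules by their dotted module path, so the attribute name decides whether an adapter pattern finds its target layers. Below is a minimal sketch in plain PyTorch, not TGI's actual LoRA plumbing; the Block class and the target_modules set are hypothetical and exist only to illustrate the name-matching idea.

```python
# Minimal sketch (not TGI's implementation): LoRA targeting is usually done by
# matching a submodule's dotted path, so renaming a parent attribute from
# "dense" to "mlp" changes which modules a pattern like "mlp" can match.
from torch import nn

class Block(nn.Module):  # hypothetical transformer block for illustration
    def __init__(self, hidden: int = 16):
        super().__init__()
        # If this attribute were called "dense", the paths below would read
        # "0.dense..." and a LoRA config targeting "mlp" would never match.
        self.mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.GELU())

model = nn.ModuleList([Block(), Block()])

target_modules = {"mlp"}  # hypothetical adapter target pattern

for name, _ in model.named_modules():
    if name and name.split(".")[-1] in target_modules:
        print(f"would attach a LoRA adapter under {name}")
# prints: would attach a LoRA adapter under 0.mlp and 1.mlp
```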