hf_text-generation-inference/server/text_generation_server/models/custom_modeling
Daniël de Kok e52be9bba2
Add support for Deepseek V2 (#2224)
Deepseek V2 is a MoE model from Deepseek. Relevant variations
compared to other models:

- Grouped top-K in expert selection.
- mscale in yarn is calculated using the `mscale` and `mscale_all_dim`
  configuration options.
- `mscale_all_dim` is also used in scaling attention softmax.
- Permuting of the query/key representations before applying rotary
  embeddings.
- Some projections cannot be sharded (`q_a_proj`, `kv_a_proj_with_mqa`),
  so we need weight loading that supports quantized weights. To this
  end, `{Weights,WeightLoader}.get_weight` was added.
- The query/key head dimensionality differs from that of the value,
  so we need to pad during attention.
- Heads of size 192 need an extension to our paged attention fork,
  and we need to ensure that the KV cache is allocated with the
  correct size.
- Shared experts.
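
The grouped top-K expert selection mentioned above can be sketched as follows. This is a minimal, illustrative version of the idea (restrict routing to the best-scoring expert groups, then pick the top experts within them); the function and argument names are assumptions for illustration, not the actual implementation in `flash_deepseek_v2_modeling.py`:

```python
def grouped_topk(scores, n_groups, topk_group, top_k):
    """Pick top_k experts, restricted to the topk_group best-scoring groups.

    scores: per-expert router scores for one token, length divisible by n_groups.
    Returns the indices of the selected experts.
    """
    group_size = len(scores) // n_groups
    groups = [scores[g * group_size:(g + 1) * group_size] for g in range(n_groups)]
    # Score each group by its best expert and keep the topk_group best groups.
    best = sorted(range(n_groups), key=lambda g: max(groups[g]), reverse=True)[:topk_group]
    # Only experts inside the selected groups are eligible.
    allowed = [i for g in best for i in range(g * group_size, (g + 1) * group_size)]
    # Take the top_k experts among the eligible ones.
    return sorted(allowed, key=lambda i: scores[i], reverse=True)[:top_k]
```

In the model itself this is done batched over tokens with tensor ops, but the selection logic is the same: group pruning first, then a plain top-K over the surviving experts.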
2024-07-19 17:23:20 +02:00
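
The yarn `mscale` computation referred to in the bullets above follows the log-scaled correction from Deepseek's reference code; a sketch, with names chosen for illustration:

```python
import math

def get_mscale(scaling_factor: float, mscale: float) -> float:
    """Yarn magnitude-scaling correction: identity below a scaling factor
    of 1, otherwise a log-scaled adjustment controlled by mscale."""
    if scaling_factor <= 1.0:
        return 1.0
    return 0.1 * mscale * math.log(scaling_factor) + 1.0
```

The same helper is evaluated once with `mscale` for the rotary embeddings and once with `mscale_all_dim` when scaling the attention softmax, which is why both configuration options matter here.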
__init__.py feat(server): flash santacoder (#153) 2023-04-03 19:06:42 +02:00
bloom_modeling.py Consistently take `prefix` in model constructors (#2191) 2024-07-05 16:07:48 +02:00
clip.py Consistently take `prefix` in model constructors (#2191) 2024-07-05 16:07:48 +02:00
flash_cohere_modeling.py Improve the handling of quantized weights (#2250) 2024-07-19 09:37:39 +02:00
flash_dbrx_modeling.py Improve the handling of quantized weights (#2250) 2024-07-19 09:37:39 +02:00
flash_deepseek_v2_modeling.py Add support for Deepseek V2 (#2224) 2024-07-19 17:23:20 +02:00
flash_gemma2_modeling.py Hotfix: fix of use of unquantized weights in Gemma GQA loading (#2255) 2024-07-19 12:55:59 +02:00
flash_gemma_modeling.py Hotfix: fix of use of unquantized weights in Gemma GQA loading (#2255) 2024-07-19 12:55:59 +02:00
flash_gpt2_modeling.py Improve the handling of quantized weights (#2250) 2024-07-19 09:37:39 +02:00
flash_llama_modeling.py Improve the handling of quantized weights (#2250) 2024-07-19 09:37:39 +02:00
flash_mistral_modeling.py Consistently take `prefix` in model constructors (#2191) 2024-07-05 16:07:48 +02:00
flash_mixtral_modeling.py Improve the handling of quantized weights (#2250) 2024-07-19 09:37:39 +02:00
flash_neox_modeling.py Hotfix: various GPT-based model fixes (#2256) 2024-07-19 14:42:19 +02:00
flash_pali_gemma_modeling.py Enable multiple LoRa adapters (#2010) 2024-06-25 14:46:27 -04:00
flash_phi_modeling.py Improve the handling of quantized weights (#2250) 2024-07-19 09:37:39 +02:00
flash_qwen2_modeling.py Consistently take `prefix` in model constructors (#2191) 2024-07-05 16:07:48 +02:00
flash_rw_modeling.py Improve the handling of quantized weights (#2250) 2024-07-19 09:37:39 +02:00
flash_santacoder_modeling.py Improve the handling of quantized weights (#2250) 2024-07-19 09:37:39 +02:00
flash_starcoder2_modeling.py Hotfix: various GPT-based model fixes (#2256) 2024-07-19 14:42:19 +02:00
idefics2.py Improve the handling of quantized weights (#2250) 2024-07-19 09:37:39 +02:00
idefics_config.py chore: add pre-commit (#1569) 2024-02-16 11:58:58 +01:00
idefics_image_processing.py chore: formatting 2023-12-11 14:49:52 +01:00
idefics_modeling.py reenable xpu for tgi (#1939) 2024-05-23 14:11:08 +02:00
idefics_perceiver.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
idefics_processing.py chore: add pre-commit (#1569) 2024-02-16 11:58:58 +01:00
idefics_vision.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
llava_next.py Refactor dead code - Removing all `flash_xxx.py` files. (#2166) 2024-07-05 10:29:56 +02:00
mamba_modeling.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
mpt_modeling.py Hotfix: fix MPT after recent refactor (#2257) 2024-07-19 14:42:35 +02:00
neox_modeling.py Consistently take `prefix` in model constructors (#2191) 2024-07-05 16:07:48 +02:00
opt_modeling.py fix dbrx & opt model prefix bug (#2201) 2024-07-08 09:01:14 +02:00
phi_modeling.py Consistently take `prefix` in model constructors (#2191) 2024-07-05 16:07:48 +02:00
siglip.py Removing some unused code. (#1915) 2024-05-17 11:35:49 +02:00
t5_modeling.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
vlm.py Pali gemma modeling (#1895) 2024-05-16 06:58:47 +02:00