hf_text-generation-inference/server/text_generation_server/models
Daniël de Kok e52be9bba2
Add support for Deepseek V2 (#2224)
Deepseek V2 is a MoE model from Deepseek. Relevant variations
compared to other models:

- Grouped top-K in expert selection (sketched after this list).
- mscale in yarn is calculated using the `mscale` and `mscale_all_dim`
  configuration options.
- `mscale_all_dim` is also used in scaling attention softmax (see the
  yarn sketch below).
- Permuting of the query/key representations before applying rotary
  embeddings (sketched below).
- Some projections cannot be sharded (`q_a_proj`, `kv_a_proj_with_mqa`),
  so we need weight loading that supports quantized weights. To this
  end, `{Weights,WeightLoader}.get_weight` was added (usage sketched
  below).
- The query/key head dimensionality differs from that of the value,
  so we need to pad during attention (sketched below).
- Heads of size 192 need an extension to our paged attention fork,
  and we need to ensure that the KV cache is allocated with the
  correct size (sketched below).
- Shared experts that are always applied in addition to the routed
  experts (sketched below).
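
The sketches below are minimal illustrations of these points; names, shapes, and configuration values are assumptions, not the exact TGI code. First, grouped top-K routing: experts are partitioned into groups, the best groups are selected per token, and the top-k experts are then chosen only among the surviving groups.

```python
import torch


def grouped_topk(
    scores: torch.Tensor,  # (n_tokens, n_experts) router scores
    n_group: int,  # number of expert groups
    topk_group: int,  # groups kept per token
    top_k: int,  # experts selected per token
):
    n_tokens, n_experts = scores.shape
    # Score each group by its best expert.
    group_scores = scores.view(n_tokens, n_group, -1).max(dim=-1).values
    # Keep only the topk_group best groups for each token.
    group_idx = torch.topk(group_scores, k=topk_group, dim=-1).indices
    group_mask = torch.zeros_like(group_scores).scatter_(1, group_idx, 1.0)
    expert_mask = (
        group_mask.unsqueeze(-1)
        .expand(n_tokens, n_group, n_experts // n_group)
        .reshape(n_tokens, n_experts)
    )
    # Top-k among experts in the surviving groups only.
    masked = scores.masked_fill(expert_mask == 0, float("-inf"))
    return torch.topk(masked, k=top_k, dim=-1)
```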
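
The yarn magnitude correction and its use in the softmax scale, as a sketch; `scaling_factor`, `mscale`, `mscale_all_dim`, and `head_dim` stand in for the corresponding configuration values, and the numbers are illustrative only:

```python
import math


def yarn_get_mscale(scale: float, mscale: float) -> float:
    # Standard yarn magnitude correction.
    if scale <= 1.0:
        return 1.0
    return 0.1 * mscale * math.log(scale) + 1.0


# Illustrative values only, not Deepseek V2's real configuration.
scaling_factor, mscale, mscale_all_dim, head_dim = 40.0, 1.0, 1.0, 192

# The rotary embedding magnitude is scaled with the ratio of the two
# corrections, so it depends on both configuration options.
rope_mscale = yarn_get_mscale(scaling_factor, mscale) / yarn_get_mscale(
    scaling_factor, mscale_all_dim
)

# The attention softmax scale gets an extra `mscale_all_dim` correction.
m = yarn_get_mscale(scaling_factor, mscale_all_dim)
softmax_scale = head_dim**-0.5 * m * m
```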
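
The query/key permutation before rotary embeddings, assuming the checkpoint stores the rotary dimensions pairwise interleaved while the kernel expects the half-split layout:

```python
import torch


def permute_for_rope(x: torch.Tensor) -> torch.Tensor:
    # Un-interleave (x0, y0, x1, y1, ...) into the half-split layout
    # (x0, x1, ..., y0, y1, ...) expected by the rotary kernel.
    b, h, s, d = x.shape
    return x.view(b, h, s, d // 2, 2).transpose(3, 4).reshape(b, h, s, d)
```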
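
Hypothetical usage of the new accessor; only the `get_weight` name comes from this change, and `weights` (a `Weights` instance) and `prefix` (the layer prefix) are assumed context:

```python
# Load each projection whole on every rank, but still go through the
# configured WeightLoader so quantized formats are unpacked correctly.
q_a_proj = weights.get_weight(f"{prefix}.q_a_proj")
kv_a_proj = weights.get_weight(f"{prefix}.kv_a_proj_with_mqa")
```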
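
Padding the value heads up to the query/key head size, sketched; `attention` stands in for whatever kernel is used and is passed in as an assumption:

```python
import torch
import torch.nn.functional as F


def attend_with_padded_values(attention, query, key, value):
    # `attention` is any kernel that expects equal q/k/v head sizes.
    qk_head_dim = query.shape[-1]
    v_head_dim = value.shape[-1]
    # Zero-pad the value heads up to the query/key head dimension.
    value = F.pad(value, (0, qk_head_dim - v_head_dim))
    # Drop the padded columns from the output again.
    return attention(query, key, value)[..., :v_head_dim]
```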
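
KV cache allocation with the larger query/key head size; the layout and names here are illustrative, not TGI's actual cache layout:

```python
import torch

num_blocks, block_size, num_kv_heads, head_size = 1024, 16, 8, 192
# Both caches must be allocated with head size 192, matching the
# query/key heads rather than the narrower value heads (the cached
# values are padded as in the sketch above).
key_cache = torch.zeros(num_blocks, block_size, num_kv_heads, head_size)
value_cache = torch.zeros(num_blocks, block_size, num_kv_heads, head_size)
```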
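
Finally, shared experts: a sketch in which a dense MLP runs on every token and its output is summed with the routed mixture:

```python
import torch


def moe_forward(hidden_states, shared_mlp, routed_moe):
    # `shared_mlp` is always active; `routed_moe` dispatches to the
    # experts chosen by the grouped top-k routing sketched above.
    return shared_mlp(hidden_states) + routed_moe(hidden_states)
```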
2024-07-19 17:23:20 +02:00
custom_modeling Add support for Deepseek V2 (#2224) 2024-07-19 17:23:20 +02:00
__init__.py Add support for Deepseek V2 (#2224) 2024-07-19 17:23:20 +02:00
bloom.py Refactor dead code - Removing all `flash_xxx.py` files. (#2166) 2024-07-05 10:29:56 +02:00
causal_lm.py Hotfix: fix MPT after recent refactor (#2257) 2024-07-19 14:42:35 +02:00
flash_causal_lm.py Add support for Deepseek V2 (#2224) 2024-07-19 17:23:20 +02:00
flash_mistral.py Refactor dead code - Removing all `flash_xxx.py` files. (#2166) 2024-07-05 10:29:56 +02:00
galactica.py Refactor dead code - Removing all `flash_xxx.py` files. (#2166) 2024-07-05 10:29:56 +02:00
globals.py [Major Change][Undecided yet] Move to FlashDecoding instead of PagedAttention kernel. (#1940) 2024-07-01 23:28:00 +02:00
idefics.py Move quantized weight handling out of the `Weights` class (#2194) 2024-07-09 20:04:03 +02:00
idefics_causal_lm.py Enable multiple LoRa adapters (#2010) 2024-06-25 14:46:27 -04:00
mamba.py Move quantized weight handling out of the `Weights` class (#2194) 2024-07-09 20:04:03 +02:00
model.py Hotfixing after refactor. 2024-07-05 09:25:29 +00:00
pali_gemma.py Refactor dead code - Removing all `flash_xxx.py` files. (#2166) 2024-07-05 10:29:56 +02:00
seq2seq_lm.py Move quantized weight handling out of the `Weights` class (#2194) 2024-07-09 20:04:03 +02:00
types.py chore: add pre-commit (#1569) 2024-02-16 11:58:58 +01:00
vlm_causal_lm.py Hotfix: pass through model revision in `VlmCausalLM` (#2258) 2024-07-19 15:59:00 +02:00