hf_text-generation-inference/server/text_generation_server/layers/gptq
Daniël de Kok e52be9bba2
Add support for Deepseek V2 (#2224)
Deepseek V2 is a MoE model from Deepseek. Notable differences
compared to other models:

- Grouped top-K in expert selection (this and the `mscale` handling are
  sketched after this commit message).
- The `mscale` used in YaRN is computed from the `mscale` and `mscale_all_dim`
  configuration options.
- `mscale_all_dim` is also used to scale the attention softmax.
- The query/key representations are permuted before applying rotary
  embeddings.
- Some projections cannot be sharded (`q_a_proj`, `kv_a_proj_with_mqa`),
  so we need weight loading that supports quantized weights. To this
  end, `{Weights,WeightLoader}.get_weight` was added.
- The query/key head dimensionality differs from that of the value,
  so we need to pad during attention.
- Heads of size 192 need an extension to our paged attention fork,
  and we need to ensure that the KV cache is allocated with the
  correct size.
- Shared experts.
2024-07-19 17:23:20 +02:00
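
A minimal plain-PyTorch sketch of two of the mechanisms listed above: grouped top-K expert selection and the YaRN `mscale`/`mscale_all_dim` handling. The helper names and parameters here (`yarn_get_mscale`, `softmax_scale`, `grouped_topk`, `n_groups`, `topk_groups`) are illustrative assumptions, not the functions added in #2224; they only show the shape of the computation.

```python
import math

import torch


def yarn_get_mscale(scale: float, mscale: float = 1.0) -> float:
    # YaRN attention-scaling term. The sketch assumes it is evaluated twice,
    # once with `mscale` and once with `mscale_all_dim` from the rope-scaling
    # config, as described in the commit message.
    if scale <= 1.0:
        return 1.0
    return 0.1 * mscale * math.log(scale) + 1.0


def softmax_scale(q_head_dim: int, scaling_factor: float, mscale_all_dim: float) -> float:
    # `mscale_all_dim` also enters the attention softmax scale (squared,
    # since it affects both the query and the key side).
    m = yarn_get_mscale(scaling_factor, mscale_all_dim)
    return q_head_dim**-0.5 * m * m


def grouped_topk(
    scores: torch.Tensor,  # [n_tokens, n_experts] routing probabilities
    n_groups: int,
    topk_groups: int,
    topk: int,
) -> tuple[torch.Tensor, torch.Tensor]:
    # Grouped top-K routing: first pick the `topk_groups` expert groups with
    # the highest per-group maximum score, then pick `topk` experts among the
    # experts of the selected groups only.
    n_tokens, n_experts = scores.shape
    group_scores = scores.view(n_tokens, n_groups, -1).max(dim=-1).values
    group_idx = group_scores.topk(topk_groups, dim=-1).indices
    group_mask = torch.zeros_like(group_scores)
    group_mask.scatter_(1, group_idx, 1.0)
    score_mask = (
        group_mask.unsqueeze(-1)
        .expand(n_tokens, n_groups, n_experts // n_groups)
        .reshape(n_tokens, n_experts)
    )
    # Zero out experts outside the selected groups, then take the final top-k.
    masked_scores = scores.masked_fill(score_mask == 0, 0.0)
    topk_weights, topk_ids = masked_scores.topk(topk, dim=-1)
    return topk_weights, topk_ids
```

For example, with 16 experts split into 4 groups, `grouped_topk(scores, n_groups=4, topk_groups=2, topk=3)` selects 3 experts per token, but only from the 2 best-scoring groups.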
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | Add support for Deepseek V2 (#2224) | 2024-07-19 17:23:20 +02:00 |
| custom_autotune.py | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00 |
| exllama.py | Fix GPTQWeight import (#2020) | 2024-06-05 14:49:15 +02:00 |
| exllamav2.py | Do not initialize scratch space when there are no ExLlamaV2 layers (#2015) | 2024-06-05 10:45:47 +02:00 |
| quant_linear.py | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00 |
| quantize.py | Use symmetric quantization in the `quantize` subcommand (#2120) | 2024-07-12 12:20:12 +02:00 |