Deepseek V2 is a MoE model from Deepseek. Notable differences
compared to other models:
- Grouped top-K in expert selection (sketched below).
- The YaRN mscale is calculated from the `mscale` and `mscale_all_dim`
configuration options.
- `mscale_all_dim` is also used to scale the attention softmax
(see the mscale sketch below).
- Permuting of the query/key representations before applying rotary
embeddings.
- Some projections cannot be sharded (`q_a_proj`, `kv_a_proj_with_mqa`),
so we need unsharded weight loading that supports quantized weights.
To this end, `{Weights,WeightLoader}.get_weight` was added.
- The query/key head dimensionality differs from that of the value,
so we need to pad during attention (see the padding sketch below).
- Heads of size 192 need an extension to our paged attention
fork, and we need to ensure that the KV cache is allocated with the
correct size.
- Shared experts.
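
A minimal sketch of the group-limited top-K routing, assuming softmax
router scores; the function name and signature are illustrative, not
the actual implementation:

```python
import torch


def grouped_topk(
    scores: torch.Tensor,  # [n_tokens, n_experts] router scores (post-softmax)
    n_groups: int,
    topk_groups: int,
    top_k: int,
):
    """Group-limited top-K routing: experts are split into `n_groups`
    groups, the best `topk_groups` groups are kept (ranked by their
    highest expert score), and the final top-k experts are chosen from
    those groups only."""
    n_tokens, n_experts = scores.shape
    group_size = n_experts // n_groups
    grouped = scores.view(n_tokens, n_groups, group_size)

    # Score each group by its best expert and keep the top groups.
    group_scores = grouped.max(dim=-1).values  # [n_tokens, n_groups]
    top_group_idx = group_scores.topk(topk_groups, dim=-1).indices

    # Mask out experts that fall outside the selected groups.
    group_mask = torch.zeros_like(group_scores)
    group_mask.scatter_(1, top_group_idx, 1.0)
    expert_mask = (
        group_mask.unsqueeze(-1)
        .expand(n_tokens, n_groups, group_size)
        .reshape(n_tokens, n_experts)
    )
    masked_scores = scores.masked_fill(expert_mask == 0, float("-inf"))

    # Standard top-K over the remaining experts.
    weights, indices = masked_scores.topk(top_k, dim=-1)
    return weights, indices


# Example: 8 tokens, 16 experts in 4 groups, keep 2 groups, route to 4 experts.
scores = torch.rand(8, 16).softmax(dim=-1)
weights, indices = grouped_topk(scores, n_groups=4, topk_groups=2, top_k=4)
```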
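
A sketch of the YaRN mscale computation and of how `mscale_all_dim`
enters the attention softmax scale, loosely following the reference
DeepSeek V2 modeling code; the configuration values below are made up
for illustration:

```python
import math


def yarn_get_mscale(scale: float = 1.0, mscale: float = 1.0) -> float:
    # No magnitude correction when there is no context extension.
    if scale <= 1.0:
        return 1.0
    return 0.1 * mscale * math.log(scale) + 1.0


# Illustrative config values (normally read from `rope_scaling`).
scaling_factor = 40.0  # rope_scaling["factor"]
mscale = 1.0           # rope_scaling["mscale"]
mscale_all_dim = 1.0   # rope_scaling["mscale_all_dim"]
q_head_dim = 192       # nope + rope head dimensions

# The rotary embeddings are scaled by the ratio of the two mscales.
rotary_mscale = yarn_get_mscale(scaling_factor, mscale) / yarn_get_mscale(
    scaling_factor, mscale_all_dim
)

# `mscale_all_dim` also enters the attention softmax scale.
softmax_scale = q_head_dim ** -0.5
if mscale_all_dim:
    m = yarn_get_mscale(scaling_factor, mscale_all_dim)
    softmax_scale = softmax_scale * m * m
```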
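
A sketch of the value-head padding, assuming 192-dimensional query/key
heads (128 "nope" + 64 rotary dims) and 128-dimensional value heads;
shapes and the use of `scaled_dot_product_attention` are for
illustration only:

```python
import torch
import torch.nn.functional as F

batch, heads, seq, qk_head_dim, v_head_dim = 1, 16, 32, 192, 128
query = torch.randn(batch, heads, seq, qk_head_dim)
key = torch.randn(batch, heads, seq, qk_head_dim)
value = torch.randn(batch, heads, seq, v_head_dim)

# Pad the value heads up to the query/key head size so a standard
# attention kernel can be used, then slice the padding off again.
value_padded = F.pad(value, (0, qk_head_dim - v_head_dim))
out = F.scaled_dot_product_attention(query, key, value_padded)
out = out[..., :v_head_dim]
```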
* draft of usage stats
* fix wrong link
* launcher doesn't need sysinfo dep
* only tokenizer class instead of whole struct
* unused import
* fix clippy errors
* update openAPI doc
* cargo fmt
* fix error in passing flags to router
* try again to update docs
* run pre-commit locally
* Update router/src/main.rs
Co-authored-by: Hugo Larcher <hugo.larcher@huggingface.co>
* Update router/src/main.rs
Co-authored-by: Hugo Larcher <hugo.larcher@huggingface.co>
* on crash use anonymous error event
* delete json_output and ngrok
* more robust way of checking if running in a container
* more robust nvidia smi
* parse xpu more robustly
* fix errors
* add nvidia-smi details in docs
* cargo fmt
* fix clippy
* should make docs check pass
* Update router/src/usage_stats.rs
Co-authored-by: Hugo Larcher <hugo.larcher@huggingface.co>
* error reason can't be in nested json
* cargo fmt
---------
Co-authored-by: Hugo Larcher <hugo.larcher@huggingface.co>
Co-authored-by: Erik Kaunismäki <erikkaum@Eriks-MacBook-Pro.local>
* Improve the handling of quantized weights
Handling of quantized weights was split between two mechanisms:
- For quantized checkpoints, we used the new weight loader
infrastructure.
- For quantization while loading (EETQ, FP8, bitsandbytes), we
  instead relied on conditionals in `get_linear`.
Weight loaders support context managers to selectively load
particular layers with different weight loaders (sketched below),
which is useful for models like Idefics2 AWQ, which uses a quantized
text model but unquantized vision and connector models. However, the
context manager would be overridden by `get_linear`, which
string-checks `quantizer`. Also, the context manager would not work
with EETQ, FP8, and bitsandbytes.
This change migrates all quantizers to the weight loader infrastructure.
This has several benefits:
- We can use context managers with all quantizers.
- All the implementation details move down to the quantizer layers;
  `get_linear` does not need to know how to handle quantized linear
  layers.
- All quantizer weights are strongly typed; we don't pass around
  raw tensors.
- We don't have to pass around the `quantizer` string everywhere.
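
A minimal sketch of what such a per-layer loader context manager can
look like; `use_loader` and the loader class names in the usage
comment are illustrative assumptions, not the exact API:

```python
from contextlib import contextmanager


class Weights:
    """Stripped-down stand-in for the real Weights class."""

    def __init__(self, weights_loader):
        self.weights_loader = weights_loader

    @contextmanager
    def use_loader(self, weights_loader):
        # Temporarily swap the active loader, e.g. to load the
        # unquantized vision tower of an otherwise AWQ-quantized
        # multimodal model.
        old_loader = self.weights_loader
        self.weights_loader = weights_loader
        try:
            yield
        finally:
            self.weights_loader = old_loader


# Usage (loader classes are placeholders):
# weights = Weights(AWQWeightsLoader(...))
# with weights.use_loader(DefaultWeightsLoader()):
#     vision_weights = weights.get_multi_weights_col(...)
```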
* Exclude non-MLP layers when using FP8 quantization with Llama
* feat: simple mistral lora integration tests
* fix: include args in docker launcher
* fix: disable cuda graphs with lora and warn
* fix: adjust docs and precommit issues
* fix: re update docs
Packing of asymmetric quantization is broken: all (q)zeros values
of `0` get reset to `1`, resulting in a loss of accuracy. So we
use symmetric quantization instead. To be able to distinguish models
with symmetric and asymmetric quantization, a new config tensor
`gptq_sym` is added. If this tensor is not present, we assume `sym=False`.
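
A sketch of how a loader could derive `sym` from such a config tensor;
the checkpoint layout and the helper below are assumptions for
illustration:

```python
from safetensors import safe_open


def gptq_is_symmetric(checkpoint_path: str) -> bool:
    """Infer whether a quantized checkpoint uses symmetric quantization.

    Checkpoints written before the `gptq_sym` config tensor existed
    are assumed to be asymmetric (sym=False)."""
    with safe_open(checkpoint_path, framework="pt") as f:
        if "gptq_sym" not in f.keys():
            return False
        return bool(f.get_tensor("gptq_sym").item())
```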
Use FP8 GPTQ-Marlin kernels to enable FP8 support on CUDA GPUs
with compute capability >=8.0 and <8.9.
Co-authored-by: Florian Zimmermeister <flozi00.fz@gmail.com>
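
A minimal sketch of the corresponding capability gate, using the
standard `torch.cuda` device-capability query; the function name is
illustrative:

```python
import torch


def can_use_marlin_fp8() -> bool:
    """FP8 GPTQ-Marlin kernels target compute capability >= 8.0 but
    < 8.9 (8.9+ GPUs have native FP8 support)."""
    if not torch.cuda.is_available():
        return False
    major, minor = torch.cuda.get_device_capability()
    capability = major * 10 + minor
    return 80 <= capability < 89
```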
Quantized weights were loaded in the `Weights` class, but this was
getting quite unwieldy: every higher-level method for loading weights
was a long conditional covering all the different quantizers.
This change moves loading of quantized weights out of the `Weights`
class. This is done by defining a simple `WeightsLoader` interface
that is implemented by `Exl2WeightsLoader`, `GPTQWeightsLoader`,
and `MarlinWeightsLoader`. These implementations are in the quantizers'
respective modules. The `Weights` class provides the low-level load
operations (such as loading tensors or sharded tensors), but delegates
loads that need quantizer-specific weight processing to a loader. The
loaders still use the low-level functionality provided by `Weights`.
I initially tried making a hierarchy where a class like `GPTQWeights`
would inherit from `Weights`. But this is not very flexible (e.g. it
does not work well with the new weight storage mock used in tests), and
the implicit indirections made the code harder to follow.
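
A rough sketch of the shape of such an interface; the method names
below are simplified and illustrative rather than the exact
`WeightsLoader` signatures:

```python
from abc import ABC, abstractmethod


class WeightsLoader(ABC):
    """Interface implemented by Exl2WeightsLoader, GPTQWeightsLoader,
    MarlinWeightsLoader, etc. Each method receives the low-level
    `Weights` object and returns a strongly typed weight object rather
    than raw tensors."""

    @abstractmethod
    def get_weights_col(self, weights, prefix: str):
        """Load a column-sharded weight for `prefix`."""

    @abstractmethod
    def get_multi_weights_col(self, weights, prefixes: list[str], dim: int):
        """Load and concatenate several column-sharded weights (e.g. q/k/v)."""

    @abstractmethod
    def get_weights_row(self, weights, prefix: str):
        """Load a row-sharded weight for `prefix`."""
```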
* fix nccl issue
* add note in dockerfile
* use v2.22.3 that also fixes @samsamoa's repro
* poetry actually can't handle the conflict between torch and nccl
* set LD_PRELOAD
* Add more representative Llama GPTQ test
The Llama GPTQ test is updated to use a model with the commonly-used
quantizer config format and activation sorting. The old test is
kept around (but renamed) since it tests the format produced by
`text-generation-server quantize`.
* Add support for manually triggering a release build
* Refactor dead code.
* First working step.
* Remove a lot of duplicated code.
* More dead code.
* More cleanup.
* Fix Santacoder test.
* Fixing the simple tests.
* Fixing sharding.
* Fixes for VLM.
* Fixing santacoder (num_kv_heads hardcoded).
* Removing more dead code.
* Fixing `config.n_head`.
* Stopping earlier because of `<end_of_utterance>` in idefics2.
* Addresses comments.
* Removing the dead code.
* Fuse back mistral into FlashCausalLM.
* Finish removal.
* Fixing docs + causal_lm `batch_class`.
* Fixing docs + causal_lm.
* Add default to Gemma Causality.
* Default value for gemma/gemma2.
* Wrong default.