Packing of asymmetric quantization is broken: all (q)zeros values
of `0` get reset to `1`, resulting in a loss of accuracy. So use
symmetric quantization instead. To be able to distinguish models with
symmetric and asymmetric quantization, a new config tensor `gptq_sym` is
added. If this tensor is not present, we assume `sym=False`.
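As a rough illustration of the detection logic (a minimal sketch assuming a single-file safetensors checkpoint; `read_gptq_sym` is a hypothetical helper, not the actual loader code):

```python
# Hypothetical sketch: detect the `gptq_sym` marker tensor in a checkpoint
# and fall back to asymmetric quantization when it is absent.
from safetensors import safe_open


def read_gptq_sym(checkpoint_path: str) -> bool:
    """Return True if the checkpoint declares symmetric GPTQ quantization."""
    with safe_open(checkpoint_path, framework="pt") as f:
        if "gptq_sym" in f.keys():
            # Assumes the marker tensor holds a single boolean-like value.
            return bool(f.get_tensor("gptq_sym").item())
    # No marker tensor: assume an older, asymmetric checkpoint (sym=False).
    return False
```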
Use FP8 GPTQ-Marlin kernels to enable FP8 support on CUDA GPUs
with compute capability >=8.0 and <8.9.
Co-authored-by: Florian Zimmermeister <flozi00.fz@gmail.com>
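A hedged sketch of the compute-capability gate described above (the helper name and the exact decision are illustrative, not the server's dispatch code):

```python
# Illustrative sketch: FP8 GPTQ-Marlin targets Ampere-class GPUs that lack
# native FP8 support; Ada/Hopper (>= 8.9) can use native FP8 kernels instead.
import torch


def should_use_fp8_marlin() -> bool:
    major, minor = torch.cuda.get_device_capability()
    capability = major * 10 + minor
    return 80 <= capability < 89
```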
Quantized weights were loaded in the `Weights` class, but this was
getting quite unwieldy: every higher-level method to load weights
was a long conditional covering all the different quantizers.
This change moves loading of quantized weights out of the `Weights`
class. This is done by defining a simple `WeightsLoader` interface
that is implemented by `Exl2WeightsLoader`, `GPTQWeightsLoader`,
and `MarlinWeightsLoader`. These implementations are in the quantizers'
respective modules. The `Weights` class provides the low-level load
operations (such as loading tensors or sharded tensors), but delegates
loads that need quantizer-specific weight processing to a loader. The
loaders still use the low-level functionality provided by `Weights`.
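A minimal sketch of this delegation pattern; the method names, signatures, and dict-backed storage below are illustrative assumptions, not the actual interface:

```python
# Minimal sketch of the delegation described above; method names, signatures,
# and the dict-backed storage are illustrative, not the real API.
from abc import ABC, abstractmethod
from typing import Dict

import torch


class WeightsLoader(ABC):
    """Quantizer-specific weight processing, layered on top of `Weights`."""

    @abstractmethod
    def get_weights_row(self, weights: "Weights", prefix: str):
        """Load and post-process the weights of a row-parallel layer."""


class Weights:
    """Low-level tensor loading; quantizer-specific loads go via the loader."""

    def __init__(self, tensors: Dict[str, torch.Tensor], loader: WeightsLoader):
        self._tensors = tensors
        self.loader = loader

    def get_tensor(self, name: str) -> torch.Tensor:
        return self._tensors[name]

    def get_weights_row(self, prefix: str):
        # Entry point used by model code; delegates to the quantizer's loader.
        return self.loader.get_weights_row(self, prefix)


class GPTQWeightsLoader(WeightsLoader):
    def get_weights_row(self, weights: Weights, prefix: str):
        # Use the low-level primitives of `Weights`, then assemble the
        # GPTQ-specific tensors (qweight/qzeros/scales).
        return (
            weights.get_tensor(f"{prefix}.qweight"),
            weights.get_tensor(f"{prefix}.qzeros"),
            weights.get_tensor(f"{prefix}.scales"),
        )
```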
I initially tried making a hierarchy where a class like `GPTQWeights`
would inherit from `Weights`. But that approach is not very flexible
(e.g. it does not work well with the new weight-storage mock used in
tests), and the implicit indirection made the code harder to follow.
* fix nccl issue
* add note in dockerfile
* use v2.22.3, which also fixes @samsamoa's repro
* poetry actually can't handle the conflict between torch and nccl
* set LD_PRELOAD
* Add more representative Llama GPTQ test
The Llama GPTQ test is updated to use a model with the commonly-used
quantizer config format and activation sorting. The old test is
kept around (but renamed) since it tests the format produced by
`text-generation-server quantize`.
* Add support for manually triggering a release build
* Refactor dead code.
* First working step.
* Remove a lot of duplicated code.
* More dead code.
* More cleanup.
* Fix Santacoder test.
* Fixing the simple tests.
* Fixing sharding.
* Fixes for VLM.
* Fixing santacoder (num_kv_heads hardcoded).
* Removing more dead code.
* Fixing `config.n_head`.
* Stopping earlier because of `<end_of_utterance>` in idefics2.
* Addresses comments.
* Removing the dead code.
* Fuse back mistral into FlashCausalLM.
* Finish removal.
* Fixing docs + causal_lm `batch_class`.
* Fixing docs + causal_lm.
* Add default to Gemma Causality.
* Default value for gemma/gemma2.
* Wrong default.
* feat: add pre commit step to force schema update when router changes
* fix: prefer improved update_doc and start server and compare
* fix: adjust typo
* fix: adjust revert typo
* fix: update workflow to use update_doc md command
* feat: improve workflow to check openapi schema too
* fix: adjust timeout for CI
* fix: adjust raise condition and install server in ci
* fix: install protoc before server
* feat: improve update doc and add command to print router schema
* fix: adjust autodoc workflow
* fix: explicitly install protoc and python
* fix: allow trailing space in openapi schema diff
* Using flash decoding (see the sketch after this list).
Conditional flash decoding.
Fix max_q.
Working kvcache
Working version with flash decoding.
Make it work for mistral.
Fix after rebase.
Less intrusive.
Revert changes in modeling.
Speedup flashdecoding.
Hack to make other models work.
Fixing the non-flash-decoding Llama path.
Router logic knows about page size.
Missing 2 models.
Missing cohere.
Fixing cohere flash decoding.
Revamped all this architecture.
Fix cohere.
Fixing falcon.
Enabling custom block size schedule.
Update router/src/infer.rs
Not sending preallocated output.
* Making it work on non flash decoding.
* Fix Cohere.
* Fix non decoding paths.
* Rebased.
* No need for cache_manager anymore.
* Update?
* "ipex" -> "cpu"
* These do not belong.
* Factoring cu_seqlen_qk for better abstraction over every model.
* Fixing non flash tests/imports.
* Changing return everywhere.
* Update mistral past.
* Fixing Mi{s,x}tral (non-functional in Flash Decoding mode, though).
* Fixup mistral clamping (had issues with cuda graphs).
* No need to recreate anything actually.
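An illustrative sketch of such a conditional decode path; the environment flag, tensor layouts, and the use of the public `flash_attn_with_kvcache` API are assumptions, not the server's actual kernel wiring:

```python
# Hedged sketch only: gate a flash-decoding style kernel behind a flag and
# fall back to the pre-existing attention path otherwise.
import os

import torch
from flash_attn import flash_attn_with_kvcache

FLASH_DECODING = os.environ.get("FLASH_DECODING", "0") == "1"


def decode_attention(
    q: torch.Tensor,              # (batch, 1, num_heads, head_dim)
    k_cache: torch.Tensor,        # (batch, max_seqlen, num_kv_heads, head_dim)
    v_cache: torch.Tensor,        # (batch, max_seqlen, num_kv_heads, head_dim)
    cache_seqlens: torch.Tensor,  # (batch,) actual lengths in the cache
) -> torch.Tensor:
    if FLASH_DECODING:
        # Flash decoding splits the KV cache across blocks and reduces the
        # partial results, which speeds up long-context token generation.
        return flash_attn_with_kvcache(
            q, k_cache, v_cache, cache_seqlens=cache_seqlens, causal=True
        )
    # Non-flash-decoding path elided in this sketch.
    raise NotImplementedError("fallback attention path not shown here")
```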
* refine get xpu free memory (see the sketch below)
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* enable qwen2 in xpu
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* enable gemma/gemma2/phi in intel platform
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
---------
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
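A hedged sketch of the free-memory estimate referenced in the first item above; the property names and the fraction-based heuristic are assumptions, not the actual implementation:

```python
# Illustrative only: XPU has no direct analogue of `torch.cuda.mem_get_info`,
# so estimate free memory from the device total minus what is already reserved.
import torch


def get_xpu_free_memory(device: torch.device, memory_fraction: float = 0.9) -> int:
    total = torch.xpu.get_device_properties(device).total_memory
    reserved = torch.xpu.memory_reserved(device)
    return int(total * memory_fraction) - reserved
```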
GPTQ-Marlin is currently the best-performing kernel for GPTQ models. So
let's use it by default if the kernels are installed, the GPU supports
it, and the kernels support the configuration.
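A rough sketch of that decision, assuming a `marlin_kernels` package name and illustrative supported bit widths and group sizes (not the exact checks used by the server):

```python
# Hedged sketch: only pick GPTQ-Marlin when kernels, hardware, and the
# quantizer configuration all allow it.
import importlib.util

import torch

MARLIN_BITS = (4, 8)
MARLIN_GROUP_SIZES = (-1, 32, 64, 128)


def can_use_gptq_marlin(bits: int, groupsize: int, sym: bool) -> bool:
    if importlib.util.find_spec("marlin_kernels") is None:
        return False
    if not torch.cuda.is_available():
        return False
    major, _ = torch.cuda.get_device_capability()
    return (
        major >= 8  # Marlin kernels need Ampere or newer.
        and sym     # GPTQ-Marlin does not support asymmetric quantization.
        and bits in MARLIN_BITS
        and groupsize in MARLIN_GROUP_SIZES
    )
```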
For models generated by `text-generation-server quantize`, use
`sym=False`. This subcommand has not used symmetric quantization since
the beginning, and incorrectly reporting the model to be symmetric would
cause GPTQ-Marlin to be used (which does not support asymmetric
quantization).
* fix microsoft/Phi-3-mini-4k-instruct crash in batch.slots[batch.slot_indices]
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* Apply suggestions from code review
---------
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>