This change adds support for FlashInfer. FlashInfer can be enabled using
`FLASH_INFER=1` and is currently only implemented in `FlashCausalLM`.
Since this functionality is currently only for testing, FlashInfer is
not installed anywhere yet.
The FlashInfer API is quite different from FlashAttention/vLLM in that
it requires more global bookkeeping:
* A wrapper class needs to be constructed (which we just call *state*).
Since this is fairly expensive (due to pinned host memory allocation),
we only do this once in a FlashCausalLM instance or for each CUDA
Graph size.
* Each model forward call needs to be wrapped in `begin_forward` and
`end_forward`. This sets up data structures that can be reused for all
calls to attention for that forward call.
When calling attention, we need access to the state object. To avoid
passing an argument down the call chain (which would require changes to
all models), we use a context variable.
Each model forward call is wrapped using a context manager that does all
the bookkeeping for such a call:
* Set the context variable to the forward call's state.
* Call `begin_forward` on the state.
* Yield.
* Call `end_forward` on the state.
* Reset the context variable.
We cannot use a single shared global variable for this, since e.g. CUDA
Graphs of different sizes each have their own state.
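A minimal sketch of such a context manager, assuming a `state` object that exposes FlashInfer's `begin_forward`/`end_forward`; the function name and the `begin_forward` arguments shown here are illustrative, not the exact TGI/FlashInfer signatures:

```python
from contextlib import contextmanager
from contextvars import ContextVar
from typing import Any

# Context variable read by the attention implementation to find the
# state of the current forward call.
prefill_state: ContextVar[Any] = ContextVar("prefill_state")

@contextmanager
def use_prefill_state(state, cu_seqlens, num_heads, num_kv_heads, head_size):
    # Make the state visible to every attention call in this forward pass.
    token = prefill_state.set(state)
    try:
        # Set up data structures that are reused by all attention calls
        # within this forward pass.
        state.begin_forward(
            qo_indptr=cu_seqlens,
            kv_indptr=cu_seqlens,
            num_qo_heads=num_heads,
            num_kv_heads=num_kv_heads,
            head_dim=head_size,
        )
        yield
    finally:
        state.end_forward()
        # Reset rather than clear: CUDA Graphs of different sizes each
        # have their own state, so no single global value would do.
        prefill_state.reset(token)
```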
- Always return the hidden states.
- Create the output tensor inside the `attention` and `paged_attention`
functions.
This removes the difference in how the output is handled between
attention (output parameter) and paged attention (return value). It
also removes the assumption that the attention implementation can
write to an output tensor (in preparation for FlashInfer).
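As a rough sketch of the unified convention (simplified; the real signatures take more arguments):

```python
import torch

def attention(q, k, v, seqlen, softmax_scale):
    # The implementation allocates its own output tensor and returns it,
    # instead of writing into a caller-provided buffer.
    out = torch.empty_like(q)
    # ... run the attention kernel, filling `out` ...
    return out

def paged_attention(q, kv_cache, block_tables, input_lengths, softmax_scale):
    # Same convention as `attention`: create and return the output.
    out = torch.empty_like(q)
    # ... run the paged attention kernel, filling `out` ...
    return out
```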
The `GPTQWeightsLoader` was structured like this in pseudocode:

    if marlin:
        Set up tensors in a way that GPTQ-Marlin expects
    else:
        Set up tensors in a way that ExLlama/GPTQ/AWQ expect
However, the GPTQ-Marlin implementation details should really be in the
`marlin` module. So move the former branch out to a separate
`GPTQMarlinWeightsLoader`.
* Fix GPTQ autotune data type to be compatible with Torch 2.4.0
* Update poetry lock file
* Fix small PaliGemma logprob differences after the torch update
* fix crash in multi-modal
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* update according to review comment
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* fix llava_next regression in latest main
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
---------
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* Support passing head_dim through config
* Using `head_dim` as a fallback is necessary since it's a non-standard
key in `MistralConfig` (as defined in transformers); see the sketch
after this list.
* Shorter diff.
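A minimal sketch of the fallback, assuming a transformers-style config object; `get_head_size` is a hypothetical helper name:

```python
def get_head_size(config):
    # Prefer an explicit `head_dim` when the config provides one
    # (non-standard for Mistral-style configs), otherwise derive it.
    head_dim = getattr(config, "head_dim", None)
    if head_dim is not None:
        return head_dim
    return config.hidden_size // config.num_attention_heads
```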
---------
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
* Add support for repacking AWQ weights for GPTQ-Marlin
So far we couldn't support AWQ because virtually all AWQ models use
asymmetric quantization, which GPTQ-Marlin did not support. GPTQ-Marlin
has recently added support for AWQ repacking and AWQ asymmetric
quantization (`zero_point=True`).
This change updates all GPTQ-Marlin kernels from upstream and wires up
AWQ support. For now enabling AWQ using Marlin requires running TGI with
`--quantize gptq`.
* Enable Marlin for supported AWQ configurations by default
This makes the AWQ -> GPTQ repack test redundant, since we are now
testing this with the regular AWQ test.
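A sketch of the kind of check this implies; the helper name and the exact set of supported group sizes are assumptions, not the actual TGI code:

```python
def awq_can_use_marlin(bits: int, group_size: int) -> bool:
    # Marlin repacking of AWQ weights covers 4-bit quantization with the
    # group sizes supported by the GPTQ-Marlin kernels.
    return bits == 4 and group_size in (-1, 32, 64, 128)
```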
Deepseek V2 is a MoE model from Deepseek. Relevant variations
compared to other models:
- Grouped top-K in expert selection.
- mscale in YaRN is calculated using the `mscale` and `mscale_all_dim`
configuration options (see the sketch after this list).
- `mscale_all_dim` is also used in scaling attention softmax.
- Permuting of the query/key representations before applying rotary
embeddings.
- Some projections cannot be sharded (`q_a_proj`, `kv_a_proj_with_mqa`).
So we need weight loading that supports quantized weights. To this
end `{Weights,WeightLoader}.get_weight` was added.
- The query/key head dimensionality differs from that of the value,
so we need to pad during attention.
- Heads with size 192 need an extension to our paged attention
fork, and we need to ensure that the KV cache is allocated with the
correct size.
- Shared experts.
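For the mscale bullet above, a sketch following the published DeepSeek-V2 YaRN formula (variable and function names are illustrative):

```python
import math

def yarn_get_mscale(scale: float, mscale: float) -> float:
    # Standard YaRN magnitude scaling.
    if scale <= 1.0:
        return 1.0
    return 0.1 * mscale * math.log(scale) + 1.0

def softmax_scale(qk_head_dim, scaling_factor, mscale_all_dim):
    # `scaling_factor` and `mscale_all_dim` come from the model's
    # rope_scaling configuration; the mscale factor also scales the
    # attention softmax.
    m = yarn_get_mscale(scaling_factor, mscale_all_dim)
    return qk_head_dim ** -0.5 * m * m
```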
* Improve the handling of quantized weights
Handling of quantized weights was split between two mechanisms:
- For quantized checkpoints, we used the new weight loader
infrastructure.
- For quantization while loading (EETQ, FP8, bitsandbytes) we
instead relied on conditionals in `get_linear`.
Weight loaders support context managers to selectively load
particular layers with different weight loaders. This is useful
for models like Idefics2 AWQ, which uses a quantized text model
but unquantized vision and connector models. However, the context
manager would be overridden by `get_linear`, which string-checks
`quantizer`. Also, the context manager would not work with
EETQ, FP8, and bitsandbytes.
This change migrates all quantizers to the weight loader infrastructure.
This has several benefits:
- We can use context managers with all quantizers.
- All the implementation details move down to the quantizer layers;
`get_linear` does not need to know how to handle quantized linear
layers.
- All quantizer weights are strongly typed; we don't pass around
raw tensors.
- We don't have to pass around the `quantizer` string everywhere.
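A rough sketch of the context-manager pattern this enables; class and method names are illustrative, not the exact TGI API:

```python
from contextlib import contextmanager

class Weights:
    def __init__(self, weights_loader):
        self.weights_loader = weights_loader

    @contextmanager
    def use_loader(self, weights_loader):
        # Temporarily swap the active loader, e.g. to load an unquantized
        # vision tower inside an otherwise AWQ-quantized checkpoint.
        previous = self.weights_loader
        self.weights_loader = weights_loader
        try:
            yield
        finally:
            self.weights_loader = previous
```

A model like Idefics2 AWQ would then wrap the vision and connector weight loading in such a `with`-block while the text model keeps the quantized loader.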
* Exclude non-MLP layers when using FP8 quantization with Llama
* feat: simple mistral lora integration tests
* fix: include args in docker launcher
* fix: disable cuda graphs with lora and warn
* fix: adjust docs and precommit issues
* fix: re update docs
Packing of asymmetric quantization is broken: all (q)zeros values
of `0` get reset to `1`, resulting in a loss of accuracy. So we use
symmetric quantization instead. To be able to distinguish models with
symmetric and asymmetric quantization, a new config tensor `gptq_sym` is
added. If this tensor is not present, we assume `sym=False`.
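A sketch of how the marker could be read back at load time (helper and accessor names are hypothetical):

```python
def checkpoint_is_symmetric(weights) -> bool:
    # Checkpoints produced before this change do not carry the marker
    # tensor, so its absence means asymmetric quantization (sym=False).
    try:
        return bool(weights.get_tensor("gptq_sym").item())
    except (KeyError, RuntimeError):
        return False
```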
Use FP8 GPTQ-Marlin kernels to enable FP8 support on CUDA GPUs
with compute capability >=8.0 and <8.9.
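A minimal capability check matching that range (a sketch, not the exact gating code):

```python
import torch

def fp8_marlin_supported() -> bool:
    # Target Ampere-class GPUs: compute capability 8.0 up to, but not
    # including, 8.9 (which has native FP8 support).
    capability = torch.cuda.get_device_capability()
    return (8, 0) <= capability < (8, 9)
```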
Co-authored-by: Florian Zimmermeister <flozi00.fz@gmail.com>
Quantized weights were loaded in the `Weights` class, but this was
getting quite unwieldy: every higher-level method to load weights
was a long conditional covering all the different quantizers.
This change moves loading of quantized weights out of the `Weights`
class. This is done by defining a simple `WeightsLoader` interface
that is implemented by `Exl2WeightsLoader`, `GPTQWeightsLoader`,
and `MarlinWeightsLoader`. These implementations are in the quantizers'
respective modules. The `Weights` class provides the low-level load
operations (such as loading tensors or sharded tensors), but delegates
loads that need quantizer-specific weight processing to a loader. The
loaders still use the low-level functionality provided by `Weights`.
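A condensed sketch of the split; method names are illustrative and the real interface covers more load operations:

```python
from abc import ABC, abstractmethod

class WeightsLoader(ABC):
    """Quantizer-specific processing on top of low-level tensor loading."""

    @abstractmethod
    def get_weights_col(self, weights: "Weights", prefix: str):
        ...

    @abstractmethod
    def get_weights_row(self, weights: "Weights", prefix: str):
        ...

class Weights:
    def __init__(self, loader: WeightsLoader):
        self.loader = loader

    def get_sharded(self, name: str, dim: int):
        # Low-level operation: load this process's shard of a tensor.
        ...

    def get_weights_col(self, prefix: str):
        # Quantizer-specific handling is delegated to the loader, which
        # in turn uses the low-level operations above.
        return self.loader.get_weights_col(self, prefix)
```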
I initially tried making a hierarchy where a class like `GPTQWeights`
would inherit from `Weights`. But it is not very flexible (e.g. does
not work well with the new weight storage mock used in tests) and
the implicit indirections made the code harder to follow.
* fix nccl issue
* add note in dockerfile
* use v2.22.3 that also fixes @samsamoa's repro
* poetry actually can't handle the conflict between torch and nccl
* set LD_PRELOAD