* Add support for FP8 KV cache scales
Since FP8 only has limited dynamic range, we can scale keys/values
before storing them into the cache (and unscale them in attention). To
avoid rescaling the cache as the absmax values change, good scales are
usually determined per layer using calibration data and stored
in the checkpoint.
This change adds support for using key-value scales and loading them
from checkpoints in the two most common formats:
- Separate per-layer `k_scale` and `v_scale` scalars.
- Per-layer `kv_scale` scalar (older format).
Currently, scales are only used with a `float8_e4m3fn` cache.
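As a rough illustration of handling the two formats listed above, the loading logic could look like the sketch below. The weight names follow the formats mentioned, but the `weights.has_tensor`/`weights.get_tensor` helpers and the overall structure are assumptions, not the actual TGI code:

```python
def load_kv_scales(weights, prefix: str):
    # Older single-scalar format: one `kv_scale` shared by keys and values.
    if weights.has_tensor(f"{prefix}.kv_scale"):
        kv_scale = weights.get_tensor(f"{prefix}.kv_scale")
        return kv_scale, kv_scale
    # Newer format: separate per-layer `k_scale` and `v_scale` scalars.
    k_scale = weights.get_tensor(f"{prefix}.k_scale")
    v_scale = weights.get_tensor(f"{prefix}.v_scale")
    return k_scale, v_scale
```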
Besides adding support for key/value scales, the `fp8_quantize` function
is also extended to support quantization with a kernel vendored from
vLLM. This kernel is slightly faster than the PyTorch implementation and
performs scaling in FP32, potentially improving accuracy.
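For intuition, here is a minimal sketch of per-tensor FP8 quantization with a fixed, checkpoint-format scale. This illustrates the idea only; it is neither the vendored vLLM kernel nor the actual `fp8_quantize` implementation, and the function names are hypothetical:

```python
import torch

def quantize_fp8_e4m3(x: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Scale in FP32, clamp to the representable range, then cast to FP8.
    finfo = torch.finfo(torch.float8_e4m3fn)
    x_scaled = x.to(torch.float32) / scale.to(torch.float32)
    return x_scaled.clamp(finfo.min, finfo.max).to(torch.float8_e4m3fn)

def dequantize_fp8_e4m3(x_fp8: torch.Tensor, scale: torch.Tensor,
                        dtype: torch.dtype = torch.float16) -> torch.Tensor:
    # Multiply back by the same scale to recover the original magnitude.
    return (x_fp8.to(torch.float32) * scale.to(torch.float32)).to(dtype)
```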
* Update FP8 KV cache test to use checkpoint with scales
* `can_scale`: check that the attention is flashinfer
Change `fp8_quantize` so that we no longer need to pass reciprocals
around everywhere; scales are always passed around in the checkpoint format.
I also noticed that we ignore any input scales that we might have when
fbgemm is available. Skip this path if we already have a scale.
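A sketch of the `can_scale` check, under the assumption that KV scales only apply to an FP8 e4m3 cache served by flashinfer (function and argument names are hypothetical, not the actual TGI signature):

```python
import torch

def can_scale(attention_impl: str, kv_cache_dtype: torch.dtype) -> bool:
    # Scales are only applied when the flashinfer backend manages an
    # FP8 e4m3 KV cache; other backends/dtypes ignore them.
    return attention_impl == "flashinfer" and kv_cache_dtype == torch.float8_e4m3fn
```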
* add gptq and awq int4 support in intel platform
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* fix ci failure
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* set kv cache dtype
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* refine the code according to the review comments
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* Simplifying conditionals + reverting integration tests values.
* Unused import
* Fix redundant import.
* Revert change after rebase.
* Upgrading the tests (the TP>1 fix changes them to use different kernels).
* Update server/text_generation_server/layers/gptq/__init__.py
---------
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Wang, Yi A <yi.a.wang@intel.com>
* Simplify the `attention` function
- Use one definition rather than multiple.
- Add `key`/`value` arguments, so that we don't need the
`PREFILL_IN_KVCACHE` constant.
- Make it kwargs-only (to avoid mixing up the various `Tensor` args).
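A rough sketch of what such a single, kwargs-only entry point might look like; the argument names and exact parameter set are assumptions, not the real TGI signature:

```python
import torch

def attention(
    *,  # keyword-only, so the various Tensor arguments cannot be mixed up
    query: torch.Tensor,
    key: torch.Tensor,
    value: torch.Tensor,
    kv_cache: torch.Tensor,
    softmax_scale: float,
) -> torch.Tensor:
    # A single definition would dispatch to the configured backend here,
    # using `key`/`value` directly during prefill instead of relying on a
    # PREFILL_IN_KVCACHE constant.
    raise NotImplementedError
```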
* Fixup flashinfer support
XPU backend is available natively (without IPEX) in PyTorch starting
from PyTorch 2.4. This commit extends TGI to cover the case where a user
has XPU support through PyTorch 2.4 but does not have IPEX installed.
Models that don't require attention can work; for models that require
attention, more work is needed to provide an attention implementation.
Tested with the following models:
* teknium/OpenHermes-2.5-Mistral-7B
* bigscience/bloom-560m
* google/gemma-7b
* google/flan-t5-xxl
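As a minimal sketch of distinguishing IPEX from native PyTorch XPU support (>= 2.4), the detection could look like the following; the function name and return values are illustrative, not the actual TGI code:

```python
import importlib.util
import torch

def detect_xpu_backend() -> str:
    # Prefer IPEX when it is installed.
    if importlib.util.find_spec("intel_extension_for_pytorch") is not None:
        return "ipex"
    # PyTorch >= 2.4 exposes XPU devices natively through torch.xpu.
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return "xpu"
    return "cpu"
```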
Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
To make sure that everything is formatted with the same black version
as CI.
I sometimes use isort for new files to get nicely ordered imports,
so add it as well. Also set the isort configuration to format in a
way that is compatible with black.
* Add basic FP8 KV cache support
This change adds rudimentary FP8 KV cache support. The support is
enabled by passing `--kv-cache-dtype fp8_e5m2` to the launcher. Doing so
uses this type for the KV cache. However, support is still limited:
* Only the `fp8_e5m2` type is supported.
* The KV cache layout is the same as `float16`/`bfloat16` (HND).
* The FP8 KV cache is only supported for FlashInfer.
* Loading of scales is not yet supported.
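As an illustration only, allocating such a cache amounts to using `torch.float8_e5m2` with the same layout as the `float16`/`bfloat16` cache; the shape below is a placeholder, not the exact TGI cache layout:

```python
import torch

num_blocks, num_heads, block_size, head_dim = 1024, 8, 16, 128
key_cache = torch.empty(
    (num_blocks, num_heads, block_size, head_dim),
    dtype=torch.float8_e5m2,
    device="cuda",
)
value_cache = torch.empty_like(key_cache)
```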
* Fix Cargo.toml
* Working loading state.
* Preprocessing.
* Working state ? (Broke idefics1 temporarily).
* Cleaner condition.
* Fix idefics.
* Updating config, removing TODO
* Mllama
* Upgrade transformers to 4.45
* Flashing mllama.
* Starting to get there.
* Working state.
* Integration tests for mllama (cutting to 10 tokens because there seems
to be instability afterwards, meaning the size of the batch matters).
* Updating model link.
* Earlier assert.
* Fix vlm ?
* remove log.
* Force ignore all images but last.
* Default dtype bfloat16.
* Update integration test after switch to bf16.
* Remove dead code.
* Removed dead code.
* Upgrade the flake to latest transformers/tokenizers
* Move to hf tgi-nix
* Upgrade to 0.5.0
* feat: support phi3.5 moe model loading
* fix: prefer llama base model and improve rotary logic
* feat: return reasonable generation and add integration test
* fix: run lint and update docs
* fix: rerun lint for openapi docs
* fix: prefer do_sample false unless temp is set by user, and update chat tests
* fix: small typo adjustments
* fix: consolidate long rope paths
* fix: revert greedy by default and test changes
* Vendor configuration so that we don't have to `trust_remote_code`
* Use SparseMoELayer
* Add support for dense MoE
* Some type annotations
* Add the usual model tests
* Ruff.
---------
Co-authored-by: Daniël de Kok <me@danieldk.eu>
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
This change adds support for MoE models that use GPTQ quantization.
Currently only models with the following properties are supported:
- No `desc_act` with tensor parallelism, unless `group_size=-1`.
- No asymmetric quantization.
- No AWQ.
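The constraints above roughly translate into a check like the following hypothetical helper (not the actual TGI code; parameter names are illustrative):

```python
def gptq_moe_supported(*, quant_method: str, sym: bool, desc_act: bool,
                       group_size: int, world_size: int) -> bool:
    if quant_method == "awq":
        return False  # AWQ is not supported for MoE
    if not sym:
        return False  # asymmetric quantization is not supported
    if desc_act and world_size > 1 and group_size != -1:
        return False  # desc_act with tensor parallelism needs group_size=-1
    return True
```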
* Improve support for GPUs with capability < 8
- For models that cannot use flashinfer, use flash-attn v1 + paged
attention for models with a compute capability older than 8.
- Disable prefix caching when using paged attention.
- When using flash-attn v1, pass the key/value, rather than the
cache, since v1 cannot use block tables.
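A sketch of the fallback selection described above; names are illustrative and the real dispatch in TGI has more cases:

```python
import torch

# Pick a fallback for models that cannot use flashinfer on GPUs with
# compute capability older than 8.
major, _minor = torch.cuda.get_device_capability()
if major < 8:
    attention_backend = "flash-attn-v1+paged"  # v1 for prefill, paged attention for decode
    prefix_caching = False                     # v1 cannot use block tables
```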
* nix: add flash-attn-v1 to the server environment
* Move disabling prefix caching into the block of exceptions
* Capability as `usize`s
* Add support for scalar FP8 weight scales
* Support LLM compressor FP8 checkpoints on H100
On H100, we use fbgemm-gpu, which requires bfloat16 as the input dtype.
However, we wouldn't pick up fp8 quantization for models quantized with
LLM compressor. This change adds enough parsing to detect if models have
FP8-quantized weights.
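A very rough sketch of that detection; the exact layout of LLM compressor quantization configs is an assumption here, so treat the field names as illustrative:

```python
def has_fp8_weights(model_config: dict) -> bool:
    qcfg = model_config.get("quantization_config", {})
    for group in qcfg.get("config_groups", {}).values():
        weights = group.get("weights", {})
        if weights.get("type") == "float" and weights.get("num_bits") == 8:
            return True
    return False
```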
* Remove stray debug print
* Move to moe-kernels package and switch to common MoE layer
This change introduces the new `moe-kernels` package:
- Add `moe-kernels` as a dependency.
- Introduce a `SparseMoELayer` module that can be used by MoE
models.
- Port over Mixtral and Deepseek.
* Make `cargo check` pass
* Update runner
* Fixing odd tokenization self-modifications on the Rust side (load and
resave in Python).
* Fixing the builds ?
* Fix the gh action?
* Fixing the location ?
* Validation is odd.
* Try a faster runner
* Upgrade python version.
* Remove sccache
* No sccache.
* Getting libpython maybe ?
* List stuff.
* Monkey it up.
* have no idea at this point
* Tmp.
* Shot in the dark.
* Tmate the hell out of this.
* Desperation.
* WTF.
* -y.
* Apparently 3.10 is not available anymore.
* Updating the dockerfile to make libpython discoverable at runtime too.
* Put back rust tests.
* Why do we want mkl on AMD ?
* Forcing 3.11 ?
* Adding prefix test.
* [WIP] tmp dump of integration load tests.
* Remove other tensor creation.
* Fixed the radix tree.
Used a slice everywhere in radix.rs to keep the cheap Arc cloning
instead of recomputing the input_ids.
* Fix parsing
* Is it really flashinfer version ?
* Remove some comments.
* Revert the max prefix hit.
* Adding numpy to diff.
* Upgraded flashinfer.
* Upgrading some stuff.
* Are we done yet ?
* Minor fixup
* Remove 1 log and put back the other.
* Add comment for why slot 0 is OK.
* Mounting on the job.
* Get me a debug branch
* Debugging CIs is fun.
* Attempt #28
* wip
* Tmate.
* Praying.
* Updating VLM causal model with updated context.
* Important line got squashed.
* Tmate again.
* Fingers crossed.
* We want only 1 run of integration tests.....
---------
Co-authored-by: Guillaume LEGENDRE <glegendre01@gmail.com>
Fix regression caused by the attention API change: ipex.varlen_attention does not
support paged-cache format KV input now.
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* Tied embeddings in MLP speculator.
* Fixing the scale_weight when users decide not to use as much speculation
as defined in the config.
* Adding scaling support + optimize some ops.
* Making prefix caching/flashinfer the default and running the full release tests.
* Include flashinfer in the docker.
* Using prebuilt.
* Allowing window_left_size (dummy version).
* Disabling flashinfer/prefix caching on odd head_dim
* Disable prefix caching for lora.
* More specific codes.
* Update lock
* Updating integration tests with new values for FI/FD.
Also remove paged as a default, and use FD everywhere.
* Update cargo lock ?
* Upgrade to 1.80 because of bitstream...
* Everywhere 1.80
* Forgot last default place.
* Apply suggestions from code review
Co-authored-by: drbh <david.richard.holtz@gmail.com>
* Updated flake lock
* Tmp
* Upgrade resolution system for fewer errors in resolution.
* Remove lambda for cleaner function.
* Handling debugger.
* Override the env in server tests.
* Is this enough to make it work ?
* This seems to be working.
* Downgrade some logs.
* Fixing the default for vlm.
* Don't enable prefix caching on VLM just yet.
* Change `add_special_tokens` in order to have the correct tokens for chat
input (this is super important with the prefixing now).
* Fixing prefix caching for flashdecoding.
* Update all models.
* Fixed flashinfer version.
* add_special_tokens is internal only
* Fixing seqlen with the new vlms.
* Fixing the issue with `add_special_tokens` not being passed around.
* Fixing the test.
* Removing encoder_decoder (seq2seq).
* Update the chat test.
* Fixing the batching tokenization in flash causal lm.
* Truncating left for radix purposes.
* Oops this doesn't belong here.
* Put back default pure shell.
* Update server tests
- Default to throughput test in k6
- Use TGI_WIGGLE_ROOM to adjust wiggle room
* Only n_heads / process_group.size() are necessary.
* Revert the integration tests change (seems linked to the head_size
modification).
* Adding error message when assert is violated.
* Fixing the free algorithm to handle cases where the common prefix is
smaller.
* Apply suggestions from code review
Co-authored-by: OlivierDehaene <olivier@huggingface.co>
* Update server/text_generation_server/layers/attention/common.py
Co-authored-by: OlivierDehaene <olivier@huggingface.co>
* Fix disabling prefix caching - Fix windowing checks.
* Revert the Cohere tokenizer change (for now using a revision instead).
* Fmt.
---------
Co-authored-by: drbh <david.richard.holtz@gmail.com>
Co-authored-by: OlivierDehaene <olivier@huggingface.co>