* Add support for FP8 KV cache scales
Since FP8 only has limited dynamic range, we can scale keys/values
before storing them into the cache (and unscale them in attention). To
avoid rescaling the cache as the absmax values change, good scales are
usually determined per layer using calibration data and stored
in the checkpoint.
This change adds support for using key-value scales and loading them
from checkpoints in the two most common formats:
- Separate per-layer `k_scale` and `v_scale` scalars.
- Per-layer `kv_scale` scalar (older format).
Currently, scales are only used with a `float8_e4m3fn` cache.
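A minimal sketch of how such scales could be picked up at load time, assuming a flat `weights` mapping and a hypothetical per-layer `prefix`; this is illustrative only, not the actual loader code:

```python
from typing import Dict, Tuple

import torch


def load_kv_scales(weights: Dict[str, torch.Tensor], prefix: str) -> Tuple[float, float]:
    """Illustrative only: read per-layer KV cache scales from checkpoint weights."""
    if f"{prefix}.k_scale" in weights and f"{prefix}.v_scale" in weights:
        # Newer format: separate scalars for keys and values.
        k_scale = weights[f"{prefix}.k_scale"].item()
        v_scale = weights[f"{prefix}.v_scale"].item()
    elif f"{prefix}.kv_scale" in weights:
        # Older format: one scalar shared by keys and values.
        k_scale = v_scale = weights[f"{prefix}.kv_scale"].item()
    else:
        # No calibrated scales in the checkpoint: fall back to identity scaling.
        k_scale = v_scale = 1.0
    return k_scale, v_scale
```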
Besides adding support for key/value scales, the `fp8_quantize` function
is also extended to support quantization with a kernel vendored from
vLLM. This is slightly faster than the PyTorch implementation and also
computes the scales in FP32, potentially improving accuracy.
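For reference, FP8 quantization with a scale amounts to roughly the sketch below. This is a hand-written approximation under an assumed convention (multiply by the scale when quantizing, divide when dequantizing), not the vendored vLLM kernel:

```python
from typing import Optional, Tuple

import torch


def fp8_quantize_sketch(
    x: torch.Tensor, scale: Optional[torch.Tensor] = None
) -> Tuple[torch.Tensor, torch.Tensor]:
    """Illustrative FP8 (e4m3) quantization, not the vendored kernel."""
    finfo = torch.finfo(torch.float8_e4m3fn)
    if scale is None:
        # Dynamic scale from the absmax, computed in FP32 for accuracy.
        absmax = x.abs().max().to(torch.float32).clamp(min=1e-12)
        scale = finfo.max / absmax
    # Scale, clamp to the representable range, then cast down to FP8.
    qx = (x.to(torch.float32) * scale).clamp(min=finfo.min, max=finfo.max)
    return qx.to(torch.float8_e4m3fn), scale
```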
* Update FP8 KV cache test to use checkpoint with scales
* `can_scale`: check that the attention is flashinfer
* Simplify the `attention` function
- Use one definition rather than multiple.
- Add `key`/`value` arguments, so that we don't need the
`PREFILL_IN_KVCACHE` constant.
- Make it kwargs-only (to avoid mixing up the various `Tensor` args).
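A sketch of what the unified, keyword-only signature might look like; parameter names other than `key`/`value` are illustrative assumptions:

```python
from typing import Optional

import torch


def attention(
    *,  # keyword-only: avoids mixing up the many Tensor arguments
    query: torch.Tensor,
    key: torch.Tensor,    # passed explicitly, so no PREFILL_IN_KVCACHE constant
    value: torch.Tensor,
    kv_cache,             # illustrative: the paged KV cache object
    seqlen,               # illustrative: sequence-length metadata
    softmax_scale: float,
    causal: bool = True,
    softcap: Optional[float] = None,
) -> torch.Tensor:
    """One definition dispatching to the selected backend (sketch only)."""
    ...
```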
* Fixup flashinfer support
* Add basic FP8 KV cache support
This change adds rudimentary FP8 KV cache support. The support is
enabled by passing `--kv-cache-dtype fp8_e5m2` to the launcher. Doing so
uses this type for the KV cache. However, support is still limited:
* Only the `fp8_e5m2` type is supported.
* The KV cache layout is the same as `float16`/`bfloat16` (HND).
* The FP8 KV cache is only supported for FlashInfer.
* Loading of scales is not yet supported.
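Conceptually, the flag value maps to a torch dtype that is then used when allocating the cache. The sketch below illustrates this under assumed names and a paged layout; it is not the actual allocation code:

```python
import torch

# Illustrative mapping from the launcher's --kv-cache-dtype value to a torch dtype.
KV_CACHE_DTYPES = {"fp8_e5m2": torch.float8_e5m2}


def allocate_kv_cache(
    num_blocks: int,
    num_heads: int,
    block_size: int,
    head_size: int,
    kv_cache_dtype: str,
    model_dtype: torch.dtype,
    device: torch.device,
):
    """Sketch: same HND-style paged layout as float16/bfloat16, only the dtype changes."""
    dtype = KV_CACHE_DTYPES.get(kv_cache_dtype, model_dtype)
    shape = (num_blocks, num_heads, block_size, head_size)
    key_cache = torch.empty(shape, dtype=dtype, device=device)
    value_cache = torch.empty(shape, dtype=dtype, device=device)
    return key_cache, value_cache
```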
* Fix Cargo.toml
* Improve support for GPUs with capability < 8
- For models that cannot use flashinfer, use flash-attn v1 + paged
  attention on GPUs with a compute capability older than 8.
- Disable prefix caching when using paged attention.
- When using flash-attn v1, pass the key/value, rather than the
cache, since v1 cannot use block tables.
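Roughly, the backend selection amounts to a capability check like the sketch below; `ATTENTION` and `PREFIX_CACHING` are illustrative names, not the actual configuration variables:

```python
import torch

major, _minor = torch.cuda.get_device_capability()

if major < 8:
    # Pre-Ampere GPUs: fall back to flash-attn v1 plus paged attention
    # for models that cannot use flashinfer.
    ATTENTION = "flash-attn-v1+paged"
    # Prefix caching is disabled when using paged attention.
    PREFIX_CACHING = False
else:
    ATTENTION = "flashinfer"
    PREFIX_CACHING = True
```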
* nix: add flash-attn-v1 to the server environment
* Move disabling prefix caching into the block of exceptions
* Capability as `usize`s
* Fix regression caused by the attention API change: ipex.varlen_attention does not
currently support KV input in the paged-cache format.
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* hotfix: fix XPU crash caused by the code refactoring; torch.xpu relies on importing ipex
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* Re-enable gemma2 on XPU
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* Fix regression in ipex flash attention
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
---------
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Wang, Yi A <yi.a.wang@intel.com>
- Always return the hidden states.
- Create the output tensor inside the `attention` and `paged_attention`
functions.
This removes the difference in how the output is handled between
attention (output parameter) and paged attention (return value). It
also removes the assumption that the attention implementation can
write to an output tensor (in preparation for FlashInfer).
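In code, the shift is roughly the following; this is a sketch in which plain SDPA stands in for the real flash/paged attention kernels:

```python
import torch
import torch.nn.functional as F

# Before: the caller pre-allocated the output and the kernel wrote into it:
#     out = torch.empty_like(query)
#     attention(query, key, value, ..., out=out)
#
# After: the output tensor is created inside the function and returned, so
# backends that cannot write into a caller-provided buffer (e.g. FlashInfer)
# fit the same interface.
def attention_sketch(
    query: torch.Tensor, key: torch.Tensor, value: torch.Tensor
) -> torch.Tensor:
    out = F.scaled_dot_product_attention(query, key, value)  # stand-in kernel
    return out
```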
* Using flash decoding
Conditional flashdecoding.
Fix max_q.
Working kvcache
Working version with flash decoding.
Make it work for mistral.
Fix after rebase.
Less intrusive.
Revert changes in modeling.
Speedup flashdecoding.
Hack to make other models work.
Fixing non flash decoding llama path.
Router logic knows about page size.
Missing 2 models.
Missing cohere.
Fixing cohere flash decoding.
Revamped all this architecture.
Fix cohere.
Fixing falcon.
Enabling custom block size schedule.
Update router/src/infer.rs
Not sending preallocated output.
* Making it work on non flash decoding.
* Fix Cohere.
* Fix non decoding paths.
* Rebased.
* No need for cache_manager anymore.
* Update?
* "ipex" -> "cpu"
* These do not belong.
* Factoring cu_seqlen_qk for better abstracting over every model.
* Fixing non flash tests/imports.
* Changing return everywhere.
* Update mistral past.
* Fixing Mi{s,x}tral (non-functional in Flash Decoding mode though).
* Fixup mistral clamping (had issues with cuda graphs).
* No need to recreate anything actually.
* refine get xpu free memory
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* enable qwen2 in xpu
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* enable gemma/gemma2/phi in intel platform
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
---------
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* Removing IPEX_AVAIL.
Chose to unify CPU and XPU under `ipex`. Most of the code is identical
except for a few spots, the main ones being the KV-cache layout and the
flash_xxx.py files.
Since those files should soon be removed and factored away, we should
not need the distinction.
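A sketch of the unified detection, assuming an illustrative `SYSTEM` constant (actual naming may differ):

```python
import importlib.util

import torch

if importlib.util.find_spec("intel_extension_for_pytorch") is not None:
    # One "ipex" flavour covers both CPU and XPU; torch.xpu only works
    # after intel_extension_for_pytorch has been imported.
    import intel_extension_for_pytorch as ipex  # noqa: F401

    SYSTEM = "ipex"
    has_xpu = hasattr(torch, "xpu") and torch.xpu.is_available()
    device = torch.device("xpu") if has_xpu else torch.device("cpu")
else:
    SYSTEM = "cuda" if torch.cuda.is_available() else "cpu"
    device = torch.device(SYSTEM)
```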
* Forgot a few places.
* Unrelated change.
* Fixing HF_TOKEN.
* HF_TOKEN