hf_text-generation-inference/server/text_generation_server/layers/attention
Daniël de Kok 2358c2bb54
Add basic FP8 KV cache support (#2603)
* Add basic FP8 KV cache support

This change adds rudimentary FP8 KV cache support. It is enabled by passing
`--kv-cache-dtype fp8_e5m2` to the launcher, which then uses this type for the
KV cache (a brief allocation sketch follows the list below). However, support
is still limited:

* Only the `fp8_e5m2` type is supported.
* The KV cache layout is the same as `float16`/`bfloat16` (HND).
* The FP8 KV cache is only supported for FlashInfer.
* Loading of scales is not yet supported.
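
To illustrate what the new dtype means at the cache level, here is a minimal,
hypothetical sketch (not the actual TGI code): it assumes a paged KV cache in
the HND layout with illustrative block and head sizes, and shows the cast from
`float16`/`bfloat16` to `torch.float8_e5m2` on write, with no scaling factors
applied (matching the current lack of scale loading). It needs a recent
PyTorch with FP8 dtypes.

```python
# Hypothetical sketch of an fp8_e5m2 KV cache in HND layout; names and
# shapes are illustrative assumptions, not the TGI implementation.
import torch

num_blocks = 128    # pages in the paged KV cache (assumed)
num_kv_heads = 8    # KV heads (assumed)
block_size = 16     # tokens per page (assumed)
head_size = 128     # per-head dimension (assumed)

kv_dtype = torch.float8_e5m2  # selected via `--kv-cache-dtype fp8_e5m2`

# HND layout: heads before tokens within each page, the same layout used
# for float16/bfloat16 caches.
key_cache = torch.empty(
    num_blocks, num_kv_heads, block_size, head_size, dtype=kv_dtype
)
value_cache = torch.empty_like(key_cache)


def write_slot(cache: torch.Tensor, block: int, pos: int, kv: torch.Tensor) -> None:
    """Cast a (num_kv_heads, head_size) float16/bfloat16 slice to fp8_e5m2
    and store it; no per-tensor scales are applied, since scale loading is
    not yet supported."""
    cache[block, :, pos, :] = kv.to(cache.dtype)


write_slot(
    key_cache,
    block=0,
    pos=0,
    kv=torch.randn(num_kv_heads, head_size, dtype=torch.float16),
)
```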

* Fix Cargo.toml
2024-10-04 17:51:48 +02:00
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | Add basic FP8 KV cache support (#2603) | 2024-10-04 17:51:48 +02:00 |
| common.py | Update ROCM libs and improvements (#2579) | 2024-09-30 10:54:32 +02:00 |
| cuda.py | Add basic FP8 KV cache support (#2603) | 2024-10-04 17:51:48 +02:00 |
| flash_attn_triton.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| flashinfer.py | flashinfer: pass window size and dtype (#2574) | 2024-09-28 18:41:41 +02:00 |
| ipex.py | Add basic FP8 KV cache support (#2603) | 2024-10-04 17:51:48 +02:00 |
| kv_cache.py | Add basic FP8 KV cache support (#2603) | 2024-10-04 17:51:48 +02:00 |
| rocm.py | Add basic FP8 KV cache support (#2603) | 2024-10-04 17:51:48 +02:00 |