hf_text-generation-inference/server/text_generation_server/layers/attention
Daniël de Kok 52e48739a5
Remove vLLM dependency for CUDA (#2751)
* Remove vLLM dependency for CUDA

This change adds `attention-kernels` as a dependency for paged
attention and cache reshaping. With that, we don't use vLLM
anywhere for CUDA.

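To make the change concrete, below is a minimal, pure-PyTorch sketch of what the cache-reshaping step does for a paged KV cache: the key/value tensors for the current tokens are scattered into pre-allocated cache blocks at their assigned slot indices. The tensor layout and names here are illustrative assumptions; the real code path in `cuda.py`/`kv_cache.py` calls a fused CUDA kernel from the `attention-kernels` package (formerly taken from vLLM) rather than doing the scatter in Python.

```python
import torch

def reshape_and_cache_reference(
    key: torch.Tensor,         # [num_tokens, num_kv_heads, head_size]
    value: torch.Tensor,       # [num_tokens, num_kv_heads, head_size]
    key_cache: torch.Tensor,   # [num_blocks * block_size, num_kv_heads, head_size] (assumed flat layout)
    value_cache: torch.Tensor, # same shape as key_cache
    slots: torch.Tensor,       # [num_tokens], flat slot index assigned per token
) -> None:
    # Each token's K/V is written into its assigned slot of the paged cache.
    # The fused kernel additionally handles cache dtype / FP8 scales; omitted here.
    key_cache[slots] = key
    value_cache[slots] = value

# Usage with toy sizes (all values are made up for illustration).
num_blocks, block_size, num_kv_heads, head_size = 4, 16, 2, 64
key_cache = torch.zeros(num_blocks * block_size, num_kv_heads, head_size)
value_cache = torch.zeros_like(key_cache)
key = torch.randn(3, num_kv_heads, head_size)
value = torch.randn(3, num_kv_heads, head_size)
slots = torch.tensor([5, 6, 33])  # slots handed out by the block allocator
reshape_and_cache_reference(key, value, key_cache, value_cache, slots)
```
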
Test run (since we don't have paged attention coverage in CI):

```
❯ ATTENTION=paged python -m pytest integration-tests -k "llama and awq" --release
[...]
5 snapshots passed.
```

* Fix clippy warning
2024-11-17 17:34:50 +01:00
__init__.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
common.py feat: prefill chunking (#2600) 2024-10-16 12:49:33 +02:00
cuda.py Remove vLLM dependency for CUDA (#2751) 2024-11-17 17:34:50 +01:00
flash_attn_triton.py feat: prefill chunking (#2600) 2024-10-16 12:49:33 +02:00
flashinfer.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
ipex.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
kv_cache.py Remove vLLM dependency for CUDA (#2751) 2024-11-17 17:34:50 +01:00
rocm.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00