hf_text-generation-inference/server/text_generation_server
Daniël de Kok 52e48739a5
Remove vLLM dependency for CUDA (#2751)
* Remove vLLM dependency for CUDA

This change adds `attention-kernels` as a dependency for paged
attention and cache reshaping. With that, we no longer use vLLM
anywhere for CUDA.
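
As a rough illustration of the swap, here is a minimal sketch of a
cache-reshaping call going through the `attention-kernels` package
instead of vLLM. The kernel name, the tensor layouts, and the trailing
kv-cache dtype/scale arguments (`"auto"`, `1.0`) are assumptions
mirrored from vLLM's kernel signatures, not taken from this diff:

```python
# Sketch only: assumes `attention-kernels` exposes a vLLM-compatible
# `reshape_and_cache` kernel; argument order may differ from the real package.
import torch
import attention_kernels  # replaces the vLLM import on CUDA

num_tokens, num_heads, head_size, block_size, num_blocks = 8, 4, 64, 16, 2

key = torch.randn(num_tokens, num_heads, head_size,
                  device="cuda", dtype=torch.float16)
value = torch.randn_like(key)

# Paged KV-cache layout used by the vLLM-style kernels
# (x = 16 bytes / element size; layout is an assumption).
x = 16 // key.element_size()
key_cache = torch.zeros(num_blocks, num_heads, head_size // x, block_size, x,
                        device="cuda", dtype=key.dtype)
value_cache = torch.zeros(num_blocks, num_heads, head_size, block_size,
                          device="cuda", dtype=key.dtype)
slots = torch.arange(num_tokens, device="cuda", dtype=torch.int64)

# Scatter the new key/value tensors into their paged-cache slots.
attention_kernels.reshape_and_cache(
    key, value, key_cache, value_cache, slots, "auto", 1.0
)
```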

Test run (since paged attention is not covered in CI):

```
❯ ATTENTION=paged python -m pytest integration-tests -k "llama and awq" --release
[...]
5 snapshots passed.
```

* Fix clippy warning
2024-11-17 17:34:50 +01:00
adapters feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
layers Remove vLLM dependency for CUDA (#2751) 2024-11-17 17:34:50 +01:00
models Remove vLLM dependency for CUDA (#2751) 2024-11-17 17:34:50 +01:00
pb chore: add pre-commit (#1569) 2024-02-16 11:58:58 +01:00
utils Upgrading our deps. (#2750) 2024-11-15 14:03:27 +01:00
__init__.py feat(clients): Python client (#103) 2023-03-07 18:52:22 +01:00
cache.py fix(server): decrease memory fragmentation (#557) 2023-07-06 14:28:33 +02:00
cli.py Add initial support for compressed-tensors checkpoints (#2732) 2024-11-10 13:54:07 +01:00
interceptor.py feat: prefill chunking (#2600) 2024-10-16 12:49:33 +02:00
server.py Choosing input/total tokens automatically based on available VRAM? (#2673) 2024-10-28 04:59:49 +01:00
tracing.py Add OTLP Service Name Environment Variable (#2076) 2024-06-25 09:33:01 +02:00