huggingface/text-generation-inference/nix
Latest commit 52e48739a5 by Daniël de Kok: Remove vLLM dependency for CUDA (#2751)
* Remove vLLM dependency for CUDA

This change adds `attention-kernels` as a dependency for paged
attention and cache reshaping. With that, we no longer use vLLM
anywhere for CUDA; a sketch of the dependency change follows after
the commit message.

Test run (since paged attention is not covered in CI):

```
❯ ATTENTION=paged python -m pytest integration-tests -k "llama and awq" --release
[...]
5 snapshots passed.
```

* Fix clippy warning
2024-11-17 17:34:50 +01:00
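
For context, the shape of the change in `server.nix` is roughly the following. This is a minimal sketch, not the actual diff: the argument names (`cudaSupport`, `attention-kernels`, `safetensors`) and the surrounding derivation are assumptions, but it illustrates swapping the CUDA-only vLLM dependency for `attention-kernels`.

```nix
# Minimal sketch only; assumes server.nix is styled as a Python package
# derivation. Names like cudaSupport and attention-kernels are assumptions.
{ lib
, buildPythonPackage
, cudaSupport ? true
, attention-kernels   # paged attention + cache reshaping kernels
, safetensors
}:

buildPythonPackage {
  pname = "text-generation-server";
  version = "0.0.0-sketch";
  src = ./.;

  propagatedBuildInputs = [
    safetensors
    # ... other common dependencies ...
  ]
  # vllm used to be listed here for CUDA builds; attention-kernels now
  # covers paged attention and cache reshaping, so vllm can be dropped.
  ++ lib.optionals cudaSupport [ attention-kernels ];
}
```

The test run above then exercises these kernels through `ATTENTION=paged`.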
| File | Last commit | Date |
| --- | --- | --- |
| client.nix | Stream options. (#2533) | 2024-09-19 20:50:37 +02:00 |
| crate-overrides.nix | nix: support Python tokenizer conversion in the router (#2515) | 2024-09-12 10:44:01 +02:00 |
| docker.nix | nix: experimental support for building a Docker container (#2470) | 2024-10-01 18:02:06 +02:00 |
| impure-shell.nix | feat: natively support Granite models (#2682) | 2024-10-23 10:04:05 +00:00 |
| overlay.nix | nix: example of local package overrides during development (#2607) | 2024-10-04 16:52:42 +02:00 |
| server.nix | Remove vLLM dependency for CUDA (#2751) | 2024-11-17 17:34:50 +01:00 |