hf_text-generation-inference/nix
Daniël de Kok 5b6b74e21d
Improve support for GPUs with capability < 8 (#2575)
* Improve support for GPUs with capability < 8

- For GPUs with a compute capability older than 8, which cannot use
  flashinfer, fall back to flash-attn v1 + paged attention (see the
  sketch after this list).
- Disable prefix caching when using paged attention.
- When using flash-attn v1, pass the key/value tensors rather than the
  cache, since v1 cannot use block tables.
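
A minimal sketch of the kind of capability check this implies (hypothetical:
it assumes `torch` is available, and the `attention_backend` and
`prefix_caching` names are illustrative rather than TGI's actual settings):

    import torch

    # Compute capability of the current CUDA device, e.g. (8, 0) on an A100.
    major, _minor = torch.cuda.get_device_capability()

    if major >= 8:
        # Ampere and newer: flashinfer is usable, prefix caching stays on.
        attention_backend = "flashinfer"
        prefix_caching = True
    else:
        # Older GPUs: fall back to flash-attn v1 + paged attention, and
        # disable prefix caching, which this paged-attention path cannot use.
        attention_backend = "paged"
        prefix_caching = False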

* nix: add flash-attn-v1 to the server environment

* Move disabling prefix caching into the block of exceptions

* Capability as `usize`s
2024-09-27 16:19:42 +02:00
client.nix Stream options. (#2533) 2024-09-19 20:50:37 +02:00
crate-overrides.nix nix: support Python tokenizer conversion in the router (#2515) 2024-09-12 10:44:01 +02:00
impure-shell.nix Improve support for GPUs with capability < 8 (#2575) 2024-09-27 16:19:42 +02:00
server.nix Improve support for GPUs with capability < 8 (#2575) 2024-09-27 16:19:42 +02:00