hf_text-generation-inference/server
Latest commit 5b6b74e21d by Daniël de Kok: Improve support for GPUs with capability < 8 (#2575)
* Improve support for GPUs with capability < 8

- For models that cannot use flashinfer, use flash-attn v1 + paged
  attention for models with a compute capability older than 8.
- Disable prefix caching when using paged attention.
- When using flash-attn v1, pass the key/value, rather than the
  cache, since v1 cannot use block tables.

* nix: add flash-attn-v1 to the server environment

* Move disabling prefix caching into the block of exceptions

* Capability as `usize`s
2024-09-27 16:19:42 +02:00
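The fallback described in the commit message can be sketched roughly as follows. This is a hypothetical simplification for illustration only: the function name `choose_attention_backend` and the returned labels are made up here and are not the actual `text_generation_server` code.

```python
# Hypothetical sketch of the backend selection described in the commit
# message above; names and return values are illustrative only.

def choose_attention_backend(compute_capability: tuple[int, int]):
    """Pick an attention implementation from the GPU compute capability.

    Returns (backend, prefix_caching_enabled).
    """
    major, _minor = compute_capability
    if major >= 8:
        # Ampere (8.x) and newer can use flashinfer with prefix caching.
        return "flashinfer", True
    # Older GPUs fall back to flash-attn v1 + paged attention; prefix
    # caching is disabled because flash-attn v1 cannot use block tables.
    return "flash-attn-v1-paged", False


# A T4 (capability 7.5) takes the fallback path; an A100 (8.0) does not.
assert choose_attention_backend((7, 5)) == ("flash-attn-v1-paged", False)
assert choose_attention_backend((8, 0)) == ("flashinfer", True)
```

In the v1 path, the commit also passes the key/value tensors directly instead of the KV cache, again because v1 has no block-table support.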
| Name | Last commit | Date |
|---|---|---|
| custom_kernels | All integration tests back everywhere (too many failed CI). (#2428) | 2024-08-16 |
| exllama_kernels | MI300 compatibility (#1764) | 2024-05-17 |
| exllamav2_kernels | chore: add pre-commit (#1569) | 2024-02-16 |
| tests | Fix tokenization yi (#2507) | 2024-09-11 |
| text_generation_server | Improve support for GPUs with capability < 8 (#2575) | 2024-09-27 |
| .gitignore | Impl simple mamba model (#1480) | 2024-02-08 |
| Makefile | Lots of improvements (Still 2 allocators) (#2449) | 2024-08-29 |
| Makefile-awq | chore: add pre-commit (#1569) | 2024-02-16 |
| Makefile-eetq | Upgrade EETQ (Fixes the cuda graphs). (#1729) | 2024-04-12 |
| Makefile-exllamav2 | Upgrading exl2. (#2415) | 2024-08-14 |
| Makefile-fbgemm | Add Directory Check to Prevent Redundant Cloning in Build Process (#2486) | 2024-09-07 |
| Makefile-flash-att | Hotfixing `make install`. (#2008) | 2024-06-04 |
| Makefile-flash-att-v2 | Softcapping for gemma2. (#2273) | 2024-07-22 |
| Makefile-flashinfer | Prefix test - Different kind of load test to trigger prefix test bugs. (#2490) | 2024-09-11 |
| Makefile-lorax-punica | Enable multiple LoRa adapters (#2010) | 2024-06-25 |
| Makefile-selective-scan | chore: add pre-commit (#1569) | 2024-02-16 |
| Makefile-vllm | Add support for Deepseek V2 (#2224) | 2024-07-19 |
| README.md | chore: add pre-commit (#1569) | 2024-02-16 |
| poetry.lock | Update to moe-kenels 0.3.1 (#2535) | 2024-09-19 |
| pyproject.toml | Update to moe-kenels 0.3.1 (#2535) | 2024-09-19 |
| requirements_cuda.txt | hotfix: add syrupy to the right subproject (#2499) | 2024-09-06 |
| requirements_intel.txt | hotfix: add syrupy to the right subproject (#2499) | 2024-09-06 |
| requirements_rocm.txt | hotfix: add syrupy to the right subproject (#2499) | 2024-09-06 |

README.md

# Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference.

## Install

    make install

## Run

    make run-dev