hf_text-generation-inference/server
Daniël de Kok 7830de1566
Add FlashInfer support ()
This change adds support for FlashInfer. FlashInfer can be enabled by
setting the environment variable `FLASH_INFER=1` and is currently only
implemented in `FlashCausalLM`.
Since this functionality is currently only for testing, FlashInfer is
not installed anywhere yet.
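
For example, the gate might be read like this (a sketch only; the exact wiring inside `FlashCausalLM` may differ):

```python
import os

# FlashInfer is opt-in while it is still being tested: it is only
# enabled when the FLASH_INFER=1 environment variable is set.
FLASH_INFER = os.environ.get("FLASH_INFER") == "1"
```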

The FlashInfer API is quite different from FlashAttention/vLLM in that
it requires more global bookkeeping:

* A wrapper class needs to be constructed (which we just call *state*).
  Since this is fairly expensive (due to pinned host memory allocation),
  we only do this once per FlashCausalLM instance or per CUDA Graph size
  (see the sketch after this list).
* Each model forward call needs to be wrapped in `begin_forward` and
  `end_forward`. This sets up data structures that can be reused for all
  calls to attention for that forward call.
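
As an illustration of this shape, here is a minimal decode-side sketch against FlashInfer's public wrapper API (roughly as of mid-2024). The workspace size, head counts, shapes, and single-request page tables are assumptions made for the example, not the values used in `FlashCausalLM`:

```python
import torch
import flashinfer

num_qo_heads, num_kv_heads, head_dim, page_size = 32, 8, 128, 16

# The "state": constructed once, since allocating the workspace and the
# wrapper's internal (pinned host memory) buffers is fairly expensive.
workspace = torch.empty(128 * 1024 * 1024, dtype=torch.uint8, device="cuda")
state = flashinfer.BatchDecodeWithPagedKVCacheWrapper(workspace, "NHD")

# A single decoding request occupying one KV-cache page, 5 slots filled.
kv_page_indptr = torch.tensor([0, 1], dtype=torch.int32, device="cuda")
kv_page_indices = torch.tensor([0], dtype=torch.int32, device="cuda")
kv_last_page_len = torch.tensor([5], dtype=torch.int32, device="cuda")

# begin_forward sets up data structures that every attention call in the
# model forward pass then reuses.
state.begin_forward(
    kv_page_indptr, kv_page_indices, kv_last_page_len,
    num_qo_heads, num_kv_heads, head_dim, page_size,
    data_type=torch.float16,
)
q = torch.randn(1, num_qo_heads, head_dim, dtype=torch.float16, device="cuda")
kv_cache = torch.randn(
    1, 2, page_size, num_kv_heads, head_dim, dtype=torch.float16, device="cuda"
)
out = state.forward(q, kv_cache)  # in a real model: one such call per layer
state.end_forward()
```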

When calling attention, we need access to the state object. To avoid
passing an argument down the call chain (which would require changes to
all models), we use a context variable.
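
A minimal sketch of that mechanism (the names `attention_state` and `get_attention_state` are illustrative, not necessarily those used in the tree):

```python
from contextvars import ContextVar

# Holds the FlashInfer state of the model forward call currently running.
attention_state: ContextVar = ContextVar("attention_state")

def get_attention_state():
    # Called from inside the attention implementation, so no extra
    # argument has to be threaded through every model's forward().
    return attention_state.get()
```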

Each model forward call is wrapped using a context manager that does all
the bookkeeping for such a call (sketched in code after this list):

* Set the context variable to the forward call's state.
* Call `begin_forward` on the state.
* Yield.
* Call `end_forward` on the state.
* Reset the context variable.
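
Put together with the context variable above, the context manager might look roughly like this (hypothetical names; `*begin_args` stands in for whatever `begin_forward` needs):

```python
from contextlib import contextmanager

@contextmanager
def use_attention_state(state, *begin_args, **begin_kwargs):
    token = attention_state.set(state)  # point attention calls at this state
    state.begin_forward(*begin_args, **begin_kwargs)
    try:
        yield  # run the model forward call
    finally:
        state.end_forward()
        attention_state.reset(token)  # restore the previous value

# A forward pass would then be wrapped as:
#
#     with use_attention_state(state, ...):
#         logits = model(...)
```

The `try`/`finally` ensures that `end_forward` and the reset also run when the forward call raises.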

We cannot use a single shared global variable for this, since e.g. CUDA
Graphs of different sizes each have their own state.
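
That is, the bookkeeping amounts to something like the following (a hypothetical sketch; the batch sizes and the factory are placeholders):

```python
CUDA_GRAPH_BATCH_SIZES = [1, 2, 4, 8]  # illustrative

def make_state():
    # Stand-in for constructing the FlashInfer wrapper (first sketch above).
    return object()

# One state for eager execution, plus one per captured CUDA Graph size;
# a single shared global variable could not represent this.
eager_state = make_state()
graph_states = {bs: make_state() for bs in CUDA_GRAPH_BATCH_SIZES}
```
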
2024-08-09 11:42:00 +02:00
| Name | Last commit | Date |
|------|-------------|------|
| custom_kernels | chore: add pre-commit () | 2024-02-16 11:58:58 +01:00 |
| exllama_kernels | MI300 compatibility () | 2024-05-17 15:30:47 +02:00 |
| exllamav2_kernels | chore: add pre-commit () | 2024-02-16 11:58:58 +01:00 |
| tests | feat: add ruff and resolve issue () | 2024-07-26 10:29:09 -04:00 |
| text_generation_server | Add FlashInfer support () | 2024-08-09 11:42:00 +02:00 |
| .gitignore | Impl simple mamba model () | 2024-02-08 10:19:45 +01:00 |
| Makefile | hotfix: update nccl | 2024-07-23 23:31:28 +02:00 |
| Makefile-awq | chore: add pre-commit () | 2024-02-16 11:58:58 +01:00 |
| Makefile-eetq | Upgrade EETQ (Fixes the cuda graphs). () | 2024-04-12 08:15:28 +02:00 |
| Makefile-fbgemm | chore: update to torch 2.4 () | 2024-07-23 20:39:43 +00:00 |
| Makefile-flash-att | Hotfixing `make install`. () | 2024-06-04 23:34:03 +02:00 |
| Makefile-flash-att-v2 | Softcapping for gemma2. () | 2024-07-22 18:27:10 +02:00 |
| Makefile-lorax-punica | Enable multiple LoRa adapters () | 2024-06-25 14:46:27 -04:00 |
| Makefile-selective-scan | chore: add pre-commit () | 2024-02-16 11:58:58 +01:00 |
| Makefile-vllm | Add support for Deepseek V2 () | 2024-07-19 17:23:20 +02:00 |
| README.md | chore: add pre-commit () | 2024-02-16 11:58:58 +01:00 |
| poetry.lock | Install Marlin from standalone package () | 2024-07-29 15:37:10 +02:00 |
| pyproject.toml | Install Marlin from standalone package () | 2024-07-29 15:37:10 +02:00 |
| requirements_cuda.txt | hotfix: pin numpy () | 2024-07-23 17:53:19 +02:00 |
| requirements_intel.txt | hotfix: pin numpy () | 2024-07-23 17:53:19 +02:00 |
| requirements_rocm.txt | hotfix: pin numpy () | 2024-07-23 17:53:19 +02:00 |

README.md

# Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference

## Install

```shell
make install
```

## Run

```shell
make run-dev
```