hf_text-generation-inference/server
Nicolas Patry 4327210e6b
[Major Change][Undecided yet] Move to FlashDecoding instead of PagedAttention kernel. (#1940)
* Using flash decoding

Conditional flashdecoding.

Fix max_q.

Working kvcache

Working version with flash decoding.

Make it work for mistral.

Fix after rebase.

Less intrusive.

Revert changes in modeling.

Speedup flashdecoding.

Hack to make other models work.

Fixing the non-flash-decoding llama path.

Router logic knows about page size.

Missing 2 models.

Missing cohere.

Fixing cohere flash decoding.

Revamped all this architecture.

Fix cohere.

Fixing falcon.

Enabling custom block size schedule.

Update router/src/infer.rs

Not sending preallocated output.

* Making it work on non-flash decoding.

* Fix Cohere.

* Fix non decoding paths.

* Rebased.

* No need for cache_manager anymore.

* Update?

* "ipex" -> "cpu"

* These do not belong.

* Factoring cu_seqlen_qk for better abstracting over every model.

* Fixing non flash tests/imports.

* Changing return everywhere.

* Update mistral past.

* Fixing Mi{s,x}tral (non-functional in Flash Decoding mode though).

* Fixup mistral clamping (had issues with cuda graphs).

* No need to recreate anything actually.
2024-07-01 23:28:00 +02:00
| Name | Last commit | Last updated |
| --- | --- | --- |
| custom_kernels | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| exllama_kernels | MI300 compatibility (#1764) | 2024-05-17 15:30:47 +02:00 |
| exllamav2_kernels | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| marlin | Add support for Marlin 2:4 sparsity (#2102) | 2024-06-25 21:09:42 +02:00 |
| tests | Enable multiple LoRa adapters (#2010) | 2024-06-25 14:46:27 -04:00 |
| text_generation_server | [Major Change][Undecided yet] Move to FlashDecoding instead of PagedAttention kernel. (#1940) | 2024-07-01 23:28:00 +02:00 |
| .gitignore | Impl simple mamba model (#1480) | 2024-02-08 10:19:45 +01:00 |
| Makefile | Enable multiple LoRa adapters (#2010) | 2024-06-25 14:46:27 -04:00 |
| Makefile-awq | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| Makefile-eetq | Upgrade EETQ (Fixes the cuda graphs). (#1729) | 2024-04-12 08:15:28 +02:00 |
| Makefile-flash-att | Hotfixing `make install`. (#2008) | 2024-06-04 23:34:03 +02:00 |
| Makefile-flash-att-v2 | Hotfixing `make install`. (#2008) | 2024-06-04 23:34:03 +02:00 |
| Makefile-lorax-punica | Enable multiple LoRa adapters (#2010) | 2024-06-25 14:46:27 -04:00 |
| Makefile-selective-scan | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| Makefile-vllm | Update LLMM1 bound (#2050) | 2024-06-11 19:30:29 +08:00 |
| README.md | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| poetry.lock | Making `make install` work better by default. (#2004) | 2024-06-04 19:38:46 +02:00 |
| pyproject.toml | Making `make install` work better by default. (#2004) | 2024-06-04 19:38:46 +02:00 |
| requirements_cuda.txt | Fix seeded output. (#1949) | 2024-05-24 15:36:13 +02:00 |
| requirements_intel.txt | reable xpu, broken by gptq and setuptool upgrade (#1988) | 2024-06-03 16:07:50 +02:00 |
| requirements_rocm.txt | Fix seeded output. (#1949) | 2024-05-24 15:36:13 +02:00 |

README.md

Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference

Install

make install

Run

make run-dev
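
The two `make` targets above form the typical development flow. A minimal sketch, assuming you run them from the `server/` directory of a `text-generation-inference` checkout (the working directory and the comments are assumptions based on the file listing above, not verified behavior):

```shell
# Sketch of a dev setup for the Python gRPC server.
cd server        # assumed: the server/ directory of the repository checkout
make install     # install text_generation_server and its Python dependencies
make run-dev     # start the gRPC server with development settings
```

The directory also ships per-kernel Makefiles (`Makefile-flash-att`, `Makefile-flash-att-v2`, `Makefile-vllm`, etc.); consult the top-level `Makefile` for how `make install` drives them.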