hf_text-generation-inference/docs/source/conceptual

Latest commit `7a48a84784` by Nicolas Patry (2024-08-09 16:41:17 +02:00):
Using an enum for flash backens (paged/flashdecoding/flashinfer) (#2385)

* Early exit on server too.
* Clippy.
* Fix clippy and fmt.
| File | Last commit | Date |
|------|-------------|------|
| flash_attention.md | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| guidance.md | Add support for exl2 quantization | 2024-05-30 11:28:05 +02:00 |
| lora.md | feat: include local lora adapter loading docs (#2359) | 2024-08-05 12:36:44 -04:00 |
| paged_attention.md | Paged Attention Conceptual Guide (#901) | 2023-09-08 14:18:42 +02:00 |
| quantization.md | Using an enum for flash backens (paged/flashdecoding/flashinfer) (#2385) | 2024-08-09 16:41:17 +02:00 |
| safetensors.md | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| speculation.md | feat: add train medusa head tutorial (#1934) | 2024-05-23 11:34:18 +02:00 |
| streaming.md | fix typos in docs and add small clarifications (#1790) | 2024-04-22 12:15:48 -04:00 |
| tensor_parallelism.md | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |