huggingface/text-generation-inference/docs/source/conceptual
Latest commit: CI (2592): Allow LoRA adapter revision in server launcher (#2602) by drbh, 2024-10-02 10:51:04 -04:00
Co-authored-by: Sida <sida@kulamind.com>
Co-authored-by: teamclouday <teamclouday@gmail.com>
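The headline commit (#2602) lets `text-generation-launcher` pull a LoRA adapter at a pinned revision. A minimal sketch of such an invocation, assuming the `adapter_id@revision` form suggested by the commit title; the model and adapter IDs are illustrative placeholders:

```bash
# Serve a base model with a LoRA adapter pinned to a specific revision.
# The @revision syntax is assumed from commit #2602; IDs are placeholders.
text-generation-launcher \
    --model-id meta-llama/Meta-Llama-3-8B \
    --lora-adapters predibase/customer_support@main
```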
| File | Last commit | Date |
| --- | --- | --- |
| external.md | Add links to Adyen blogpost (#2500) | 2024-09-06 17:00:54 +02:00 |
| flash_attention.md | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| guidance.md | Add support for exl2 quantization | 2024-05-30 11:28:05 +02:00 |
| lora.md | CI (2592): Allow LoRA adapter revision in server launcher (#2602) | 2024-10-02 10:51:04 -04:00 |
| paged_attention.md | Paged Attention Conceptual Guide (#901) | 2023-09-08 14:18:42 +02:00 |
| quantization.md | Using an enum for flash backends (paged/flashdecoding/flashinfer) (#2385) | 2024-08-09 16:41:17 +02:00 |
| safetensors.md | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| speculation.md | feat: add train medusa head tutorial (#1934) | 2024-05-23 11:34:18 +02:00 |
| streaming.md | Add links to Adyen blogpost (#2500) | 2024-09-06 17:00:54 +02:00 |
| tensor_parallelism.md | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |