hf_text-generation-inference/docs/source/conceptual
Latest commit: 36dd16017c "Add support for exl2 quantization" (Daniël de Kok, 2024-05-30 11:28:05 +02:00)

Mostly straightforward changes to existing code:

* Wrap quantizer parameters in a small wrapper to avoid passing
  around untyped tuples and needing to repack them as a dict
  (a sketch of both changes follows this list).
* Move scratch space computation to warmup, because we need the
  maximum input sequence length there to avoid allocating huge
  scratch buffers that OOM.
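As a rough illustration of the two changes described in the commit message, here is a minimal sketch: a typed wrapper replacing an untyped parameter tuple, and a scratch allocation deferred to warmup. All names here (`Exl2Params`, `q_weight`, `q_scale`, `q_invperm`, `q_groups`, `warmup_scratch`) are hypothetical assumptions for illustration and are not taken from the actual text-generation-inference code.

```python
from dataclasses import dataclass

import torch


@dataclass
class Exl2Params:
    """Typed wrapper for exl2 quantizer parameters.

    Replaces an untyped tuple (and ad-hoc repacking into a dict), so
    callers pass one object around and access fields by name. The
    field names below are illustrative assumptions.
    """

    q_weight: torch.Tensor   # packed quantized weights
    q_scale: torch.Tensor    # per-group scales
    q_invperm: torch.Tensor  # inverse permutation (act-order)
    q_groups: torch.Tensor   # group metadata


def warmup_scratch(
    max_input_length: int,
    hidden_size: int,
    dtype: torch.dtype = torch.float16,
) -> torch.Tensor:
    # Deferring this allocation to warmup, when the maximum input
    # sequence length is finally known, avoids sizing the scratch
    # buffer for a worst case that would OOM.
    return torch.empty(max_input_length * hidden_size, dtype=dtype)
```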
| File | Last commit message | Last commit date |
|---|---|---|
| flash_attention.md | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| guidance.md | Add support for exl2 quantization | 2024-05-30 11:28:05 +02:00 |
| paged_attention.md | Paged Attention Conceptual Guide (#901) | 2023-09-08 14:18:42 +02:00 |
| quantization.md | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| safetensors.md | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| speculation.md | feat: add train medusa head tutorial (#1934) | 2024-05-23 11:34:18 +02:00 |
| streaming.md | fix typos in docs and add small clarifications (#1790) | 2024-04-22 12:15:48 -04:00 |
| tensor_parallelism.md | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |