4594e6faba
This change adds support for Marlin-quantized models. Marlin is an FP16xINT4 matmul kernel that provides good speedups when decoding batches of 16-32 tokens. It supports models quantized with symmetric 4-bit quantization and a groupsize of -1 or 128.

Tested with:
- Llama 2
- Llama 3
- Phi 3
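To make the constraints concrete, below is a minimal NumPy sketch of symmetric per-group 4-bit quantization with the groupsizes mentioned above (128, or -1 for a single scale per tensor). This is illustrative only; it shows the quantization scheme, not Marlin's actual weight packing or kernel code.

```python
import numpy as np

def quantize_symmetric_int4(w: np.ndarray, groupsize: int = 128):
    """Symmetric 4-bit quantization, one scale per group of `groupsize` weights.

    groupsize=-1 means a single scale for the whole tensor.
    Returns (int4 values stored in int8, per-group fp32 scales).
    """
    flat = w.reshape(-1).astype(np.float32)
    if groupsize == -1:
        groupsize = flat.size
    groups = flat.reshape(-1, groupsize)
    # Symmetric: no zero-point; the max magnitude in each group maps to the
    # positive int4 limit (7), so zero stays exactly representable.
    scales = np.abs(groups).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(groups / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize_int4(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scales

# Round-trip a small weight matrix through the int4 representation.
w = np.random.randn(4, 128).astype(np.float32)
q, s = quantize_symmetric_int4(w, groupsize=128)
w_hat = dequantize_int4(q, s).reshape(w.shape)
```

The maximum per-element reconstruction error is half the group's scale, which is why smaller groupsizes (128 vs. -1) generally preserve more accuracy at the cost of storing more scales.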