hf_text-generation-inference/server/text_generation_server

Latest commit: 72ab60fdd5 by Daniël de Kok (2024-11-26 08:27:41 +01:00)

Use FP8 KV cache when specified by compressed-tensors (#2761)

The compressed-tensors configuration can also specify the KV cache configuration. Use an FP8 KV cache when the configuration tells us to do so (all other options and types are ignored for now).
| Name | Last commit | Date |
| --- | --- | --- |
| adapters | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| layers | Add support for wNa16 int 2:4 compressed-tensors checkpoints (#2758) | 2024-11-20 18:25:23 +01:00 |
| models | Use FP8 KV cache when specified by compressed-tensors (#2761) | 2024-11-26 08:27:41 +01:00 |
| pb | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| utils | Move JSON grammar -> regex grammar conversion to the router (#2772) | 2024-11-25 18:47:34 +01:00 |
| `__init__.py` | feat(clients): Python client (#103) | 2023-03-07 18:52:22 +01:00 |
| `cache.py` | fix(server): decrease memory fragmentation (#557) | 2023-07-06 14:28:33 +02:00 |
| `cli.py` | Add initial support for compressed-tensors checkpoints (#2732) | 2024-11-10 13:54:07 +01:00 |
| `interceptor.py` | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00 |
| `server.py` | Choosing input/total tokens automatically based on available VRAM? (#2673) | 2024-10-28 04:59:49 +01:00 |
| `tracing.py` | Add OTLP Service Name Environment Variable (#2076) | 2024-06-25 09:33:01 +02:00 |