hf_text-generation-inference/server/text_generation_server

Latest commit: e74bd41e0f by OlivierDehaene, 2023-06-30 19:09:59 +02:00
feat(server): add paged attention to flash models (#516)
Closes #478
Name            Last commit                                                                                          Date
models          feat(server): add paged attention to flash models (#516)                                             2023-06-30 19:09:59 +02:00
pb              feat(server): clear cache on error (#143)                                                            2023-03-28 11:29:35 +02:00
utils           feat(server): add paged attention to flash models (#516)                                             2023-06-30 19:09:59 +02:00
__init__.py     feat(clients): Python client (#103)                                                                  2023-03-07 18:52:22 +01:00
cache.py        feat(server): add paged attention to flash models (#516)                                             2023-06-30 19:09:59 +02:00
cli.py          feat(server): Add inference support for GPTQ (llama + falcon tested) + Quantization script (#438)    2023-06-26 12:27:01 +02:00
interceptor.py  feat(clients): Python client (#103)                                                                  2023-03-07 18:52:22 +01:00
server.py       feat(server): add paged attention to flash models (#516)                                             2023-06-30 19:09:59 +02:00
tracing.py      feat(clients): Python client (#103)                                                                  2023-03-07 18:52:22 +01:00