hf_text-generation-inference/server/text_generation_server

Latest commit: ebc74d5666 by OlivierDehaene (2023-04-24 17:59:00 +02:00)
feat(router): use number of tokens in batch as input for dynamic batching (#226)
Co-authored-by: Nick Hill <nickhill@us.ibm.com>
models/          feat(router): use number of tokens in batch as input for dynamic batching (#226)  2023-04-24 17:59:00 +02:00
pb/              feat(server): clear cache on error (#143)                                          2023-03-28 11:29:35 +02:00
utils/           feat(server): support OPT models (#55)                                             2023-04-11 19:16:41 +02:00
__init__.py      feat(clients): Python client (#103)                                                2023-03-07 18:52:22 +01:00
cache.py         feat(server): clear cache on error (#143)                                          2023-03-28 11:29:35 +02:00
cli.py           fix(docker): fix docker image dependencies (#187)                                  2023-04-17 00:26:47 +02:00
interceptor.py   feat(clients): Python client (#103)                                                2023-03-07 18:52:22 +01:00
server.py        feat(router): use number of tokens in batch as input for dynamic batching (#226)  2023-04-24 17:59:00 +02:00
tracing.py       feat(clients): Python client (#103)                                               2023-03-07 18:52:22 +01:00
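The highlighted commit (#226) changes dynamic batching to budget by token count rather than by request count. The sketch below illustrates that general idea only; the `Request` dataclass, `schedule_batch` function, and field names are hypothetical and are not TGI's actual API.

```python
# Hypothetical sketch of token-budget batch admission, in the spirit of
# commit #226 ("use number of tokens in batch as input for dynamic batching").
# All names here are illustrative, not taken from text_generation_server.
from dataclasses import dataclass
from typing import List


@dataclass
class Request:
    id: int
    input_tokens: int    # prompt length in tokens
    max_new_tokens: int  # upper bound on tokens to generate


def schedule_batch(queue: List[Request], token_budget: int) -> List[Request]:
    """Admit queued requests while the worst-case token total stays in budget.

    Counting tokens (instead of just the number of requests) keeps memory use
    predictable when prompt lengths vary widely between requests.
    """
    batch: List[Request] = []
    tokens = 0
    for req in list(queue):
        cost = req.input_tokens + req.max_new_tokens
        if tokens + cost > token_budget:
            break  # stop at the first request that does not fit (FIFO order)
        batch.append(req)
        queue.remove(req)
        tokens += cost
    return batch


queue = [Request(0, 100, 20), Request(1, 900, 100), Request(2, 50, 10)]
batch = schedule_batch(queue, token_budget=1150)
# admits requests 0 and 1 (1120 tokens worst case); request 2 stays queued
```

Stopping at the first request that exceeds the budget, rather than skipping ahead to smaller ones, preserves FIFO fairness at the cost of some batch fullness; either choice is defensible in a scheduler like this.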