hf_text-generation-inference/server/tests/models

Latest commit: 5a58226130 by OlivierDehaene, 2023-05-16 23:23:27 +02:00
fix(server): fix decode token (#334)
Fixes #333
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
test_bloom.py       | feat(router): use number of tokens in batch as input for dynamic batching (#226) | 2023-04-24 17:59:00 +02:00
test_causal_lm.py   | feat(router): use number of tokens in batch as input for dynamic batching (#226) | 2023-04-24 17:59:00 +02:00
test_model.py       | fix(server): fix decode token (#334)                                             | 2023-05-16 23:23:27 +02:00
test_santacoder.py  | feat(router): make router input validation optional (#164)                       | 2023-04-09 20:22:27 +02:00
test_seq2seq_lm.py  | fix(server): fix decode token (#334)                                             | 2023-05-16 23:23:27 +02:00