Directory listing: hf_text-generation-inference/server/tests/models
Latest commit: 9b205d33cc by OlivierDehaene, "fix(server): fix generate_stream by forcing tokens to be decoded correctly (#100)", 2023-03-06 13:22:58 +01:00
File                 Last commit                                                                           Date
test_bloom.py        feat(server): pre-allocate max attention mask (#75)                                   2023-02-24 12:49:21 +01:00
test_causal_lm.py    feat(server): pre-allocate max attention mask (#75)                                   2023-02-24 12:49:21 +01:00
test_santacoder.py   breaking(router): modify /generate API to only return generated text (#50)            2023-02-02 15:02:04 +01:00
test_seq2seq_lm.py   fix(server): fix generate_stream by forcing tokens to be decoded correctly (#100)     2023-03-06 13:22:58 +01:00