hf_text-generation-inference/server/tests/models
Latest commit: 44ce098c10 by OlivierDehaene, feat(server): pre-allocate max attention mask (#75), 2023-02-24 12:49:21 +01:00
test_bloom.py        feat(server): pre-allocate max attention mask (#75)                         2023-02-24 12:49:21 +01:00
test_causal_lm.py    feat(server): pre-allocate max attention mask (#75)                         2023-02-24 12:49:21 +01:00
test_santacoder.py   breaking(router): modify /generate API to only return generated text (#50)  2023-02-02 15:02:04 +01:00
test_seq2seq_lm.py   feat(server): pre-allocate max attention mask (#75)                         2023-02-24 12:49:21 +01:00
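
The commit touching three of these test files, "pre-allocate max attention mask (#75)", refers to allocating the attention mask once at the maximum total sequence length and filling it in place during decoding, rather than concatenating a new column at every generation step. Below is a minimal sketch of that general idea in plain PyTorch; the variable names, shapes, and loop structure are illustrative assumptions and not the repository's actual implementation.

import torch

# Illustrative sizes; in the server these would come from the request batch.
batch_size = 2
input_length = 5
max_new_tokens = 8
max_total_tokens = input_length + max_new_tokens

# Allocate the mask once, sized for the longest sequence the batch can reach.
attention_mask = torch.zeros(batch_size, max_total_tokens, dtype=torch.bool)
attention_mask[:, :input_length] = True  # prompt tokens are visible from the start

# During decoding, mark one more position per step instead of torch.cat-ing
# a new column onto the mask at every iteration.
for step in range(max_new_tokens):
    current_length = input_length + step + 1
    attention_mask[:, current_length - 1] = True
    step_mask = attention_mask[:, :current_length]  # slice passed to the model forward

The pre-allocation avoids repeated reallocation and copying of the mask tensor as sequences grow, which is what the affected tests (bloom, causal LM, seq2seq LM batching) would need to account for when checking batch state.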