hf_text-generation-inference/server/text_generation/models
Latest commit: 042180d88f "fix(server): Only pad to multiple of 8 on GPUs" by OlivierDehaene, 2022-12-08 19:37:37 +01:00
File            Last commit                                                  Date
__init__.py     feat(server): Add model tests (#6)                           2022-12-08 18:49:33 +01:00
bloom.py        feat(server): Add model tests (#6)                           2022-12-08 18:49:33 +01:00
causal_lm.py    fix(server): Only pad to multiple of 8 on GPUs               2022-12-08 19:37:37 +01:00
galactica.py    feat(server): Add model tests (#6)                           2022-12-08 18:49:33 +01:00
model.py        fix(batching): Avoid theoretical hang in batcher loop (#5)   2022-12-05 10:10:59 +01:00
seq2seq_lm.py   fix(server): Only pad to multiple of 8 on GPUs               2022-12-08 19:37:37 +01:00
types.py        feat(server): Support AutoModelForSeq2SeqLM                  2022-11-04 18:03:04 +01:00