hf_text-generation-inference/server/text_generation/models
Latest commit: 54fec93193 by OlivierDehaene, "fix(server): fix seeding with multiple shards (#44)", 2023-01-31 16:01:15 +01:00
File           Last commit                                                               Date
__init__.py    feat(server): Support SantaCoder (#26)                                    2023-01-20 12:24:39 +01:00
bloom.py       fix(server): fix seeding with multiple shards (#44)                       2023-01-31 16:01:15 +01:00
causal_lm.py   fix(server): fix seeding on gpu (#42)                                     2023-01-31 14:30:33 +01:00
galactica.py   fix(server): fix seeding with multiple shards (#44)                       2023-01-31 16:01:15 +01:00
model.py       fix(server): Minor refactorization using new_zeros (#24)                  2023-01-17 09:10:22 +01:00
santacoder.py  feat: Support sampling seeding (#37)                                      2023-01-30 15:36:16 +01:00
seq2seq_lm.py  fix(server): fix seeding on gpu (#42)                                     2023-01-31 14:30:33 +01:00
types.py       Revert "feat: Add token streaming using ServerSideEvents support" (#40)   2023-01-31 14:21:51 +01:00