hf_text-generation-inference/server/text_generation_server/models
Latest commit: 3f2542bb6a, fix(server): fix escape characters in stop sequence (#155), OlivierDehaene, 2023-04-05 19:37:41 +02:00
Name                   Last commit                                                   Date
custom_modeling/       fix(server): fix escape characters in stop sequence (#155)   2023-04-05 19:37:41 +02:00
__init__.py            feat(server): flash santacoder (#153)                         2023-04-03 19:06:42 +02:00
bloom.py               feat(clients): Python client (#103)                           2023-03-07 18:52:22 +01:00
causal_lm.py           feat(server): flash neoX (#133)                               2023-03-24 14:02:14 +01:00
flash_causal_lm.py     feat(server): flash santacoder (#153)                         2023-04-03 19:06:42 +02:00
flash_neox.py          feat(server): flash santacoder (#153)                         2023-04-03 19:06:42 +02:00
flash_santacoder.py    fix(server): fix escape characters in stop sequence (#155)    2023-04-05 19:37:41 +02:00
galactica.py           fix(server): add position ids to neox (#126)                  2023-03-15 13:12:49 +01:00
gpt_neox.py            fix(server): add position ids to neox (#126)                  2023-03-15 13:12:49 +01:00
model.py               feat(clients): Python client (#103)                           2023-03-07 18:52:22 +01:00
santacoder.py          feat(clients): Python client (#103)                           2023-03-07 18:52:22 +01:00
seq2seq_lm.py          fix(server): use server tokenizer as gt (#128)                2023-03-16 12:12:26 +01:00
t5.py                  feat(clients): Python client (#103)                           2023-03-07 18:52:22 +01:00
types.py               feat(clients): Python client (#103)                           2023-03-07 18:52:22 +01:00