hf_text-generation-inference/server/text_generation/models
Latest commit: 65e2f1624e by OlivierDehaene, fix(server): fix token_is_special (#87), 2023-02-24 17:20:00 +01:00
File            Last commit                                           Date
__init__.py     feat: add safetensors conversion (#63)                2023-02-14 13:02:16 +01:00
bloom.py        feat: add safetensors conversion (#63)                2023-02-14 13:02:16 +01:00
causal_lm.py    fix(server): fix token_is_special (#87)               2023-02-24 17:20:00 +01:00
galactica.py    feat(server): pre-allocate max attention mask (#75)   2023-02-24 12:49:21 +01:00
gpt_neox.py     feat: add safetensors conversion (#63)                2023-02-14 13:02:16 +01:00
model.py        feat(server): add special token bool (#85)            2023-02-24 15:55:57 +01:00
santacoder.py   feat: add safetensors conversion (#63)                2023-02-14 13:02:16 +01:00
seq2seq_lm.py   fix(server): fix token_is_special (#87)               2023-02-24 17:20:00 +01:00
t5.py           feat(server): pre-allocate max attention mask (#75)   2023-02-24 12:49:21 +01:00
types.py        feat(server): add special token bool (#85)            2023-02-24 15:55:57 +01:00