hf_text-generation-inference/server/text_generation/models
Latest commit cd5961b5da by OlivierDehaene (2023-03-06 14:39:36 +01:00): feat: allow local models (#101), closes #99

File            Last updated                 Last commit
__init__.py     2023-03-06 14:39:36 +01:00   feat: allow local models (#101)
bloom.py        2023-03-06 14:39:36 +01:00   feat: allow local models (#101)
causal_lm.py    2023-03-06 13:22:58 +01:00   fix(server): fix generate_stream by forcing tokens to be decoded correctly (#100)
galactica.py    2023-03-06 14:39:36 +01:00   feat: allow local models (#101)
gpt_neox.py     2023-02-14 13:02:16 +01:00   feat: add safetensors conversion (#63)
model.py        2023-03-06 13:22:58 +01:00   fix(server): fix generate_stream by forcing tokens to be decoded correctly (#100)
santacoder.py   2023-02-14 13:02:16 +01:00   feat: add safetensors conversion (#63)
seq2seq_lm.py   2023-03-06 13:22:58 +01:00   fix(server): fix generate_stream by forcing tokens to be decoded correctly (#100)
t5.py           2023-02-24 12:49:21 +01:00   feat(server): pre-allocate max attention mask (#75)
types.py        2023-02-24 15:55:57 +01:00   feat(server): add special token bool (#85)