hf_text-generation-inference/server/text_generation_server/models
Latest commit 73a4d65d26 by OlivierDehaene: feat: add cuda memory fraction (#659), Close #673 (2023-07-24 11:43:58 +02:00)
File | Last commit message | Last commit date
custom_modeling/ | feat: add cuda memory fraction (#659) | 2023-07-24 11:43:58 +02:00
__init__.py | feat(server): flash attention v2 (#624) | 2023-07-18 16:21:18 +02:00
bloom.py | feat: Add the option to force another dtype than `f16`. (#513) | 2023-06-30 20:30:09 +02:00
causal_lm.py | feat: Add the option to force another dtype than `f16`. (#513) | 2023-06-30 20:30:09 +02:00
flash_causal_lm.py | feat: add cuda memory fraction (#659) | 2023-07-24 11:43:58 +02:00
flash_llama.py | fix(server): fix llamav2 config (#635) | 2023-07-18 18:49:42 +02:00
flash_neox.py | feat: Add the option to force another dtype than `f16`. (#513) | 2023-06-30 20:30:09 +02:00
flash_rw.py | fix(server): Fixing RW code (it's remote code so the Arch checking doesn't work to see which weights to keep). (#579) | 2023-07-12 09:51:34 +02:00
flash_santacoder.py | feat: Add the option to force another dtype than `f16`. (#513) | 2023-06-30 20:30:09 +02:00
galactica.py | feat: Add the option to force another dtype than `f16`. (#513) | 2023-06-30 20:30:09 +02:00
gpt_neox.py | feat: Add the option to force another dtype than `f16`. (#513) | 2023-06-30 20:30:09 +02:00
model.py | feat: add cuda memory fraction (#659) | 2023-07-24 11:43:58 +02:00
mpt.py | feat(server): use latest flash attention commit (#543) | 2023-07-04 20:23:55 +02:00
opt.py | feat: Add the option to force another dtype than `f16`. (#513) | 2023-06-30 20:30:09 +02:00
rw.py | feat: Add the option to force another dtype than `f16`. (#513) | 2023-06-30 20:30:09 +02:00
santacoder.py | Directly load GPTBigCode to specified device (#618) | 2023-07-21 11:27:31 +02:00
seq2seq_lm.py | feat: Add the option to force another dtype than `f16`. (#513) | 2023-06-30 20:30:09 +02:00
t5.py | fix(server): T5 weights names. (#582) | 2023-07-12 10:01:42 +02:00
types.py | feat(server): support vectorized warpers in flash causal lm (#317) | 2023-05-26 12:30:27 +02:00
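The most recent change touching this directory, "feat: add cuda memory fraction (#659)", limits how much GPU memory the server process may claim. As a minimal sketch of the general mechanism only, not the repository's actual code, a per-process cap can be applied with PyTorch's `torch.cuda.set_per_process_memory_fraction`; the `CUDA_MEMORY_FRACTION` environment variable name and the default of 1.0 below are assumptions for illustration.

```python
import os

import torch

# Sketch of capping a process's GPU memory share, in the spirit of
# "feat: add cuda memory fraction (#659)". The env var name and the
# default value are illustrative assumptions, not the repository's code.
fraction = float(os.environ.get("CUDA_MEMORY_FRACTION", "1.0"))

if torch.cuda.is_available():
    device = torch.device("cuda:0")
    # Limit this process to `fraction` of the device's total memory.
    # Allocations past the cap raise a CUDA out-of-memory error instead
    # of silently starving other processes sharing the GPU.
    torch.cuda.set_per_process_memory_fraction(fraction, device=device)
```

A cap like this is useful when several processes share one card: each can reserve only its fraction, leaving predictable headroom for the others.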