hf_text-generation-inference/server/text_generation_server/models/custom_modeling

Latest commit: feat(server): support new falcon config (#712), OlivierDehaene, ab96b9aec3, 2023-07-27 18:38:57 +02:00
File                          Last commit                                                                              Date
__init__.py                   feat(server): flash santacoder (#153)                                                    2023-04-03 19:06:42 +02:00
bloom_modeling.py             feat: better errors for warmup and TP (#575)                                             2023-07-10 14:47:15 +02:00
flash_llama_modeling.py       feat(server): Add exllama GPTQ CUDA kernel support #553 (#666)                           2023-07-21 10:59:00 +02:00
flash_neox_modeling.py        feat(server): flash attention v2 (#624)                                                  2023-07-18 16:21:18 +02:00
flash_rw_modeling.py          feat(server): support new falcon config (#712)                                           2023-07-27 18:38:57 +02:00
flash_santacoder_modeling.py  feat(server): Using `quantize_config.json` instead of GPTQ_BITS env variables. (#671)   2023-07-25 13:00:27 +02:00
mpt_modeling.py               feat: better errors for warmup and TP (#575)                                             2023-07-10 14:47:15 +02:00
neox_modeling.py              feat: better errors for warmup and TP (#575)                                             2023-07-10 14:47:15 +02:00
opt_modeling.py               feat(server): Using `quantize_config.json` instead of GPTQ_BITS env variables. (#671)   2023-07-25 13:00:27 +02:00
t5_modeling.py                fix(server): Adding logger import to t5_modeling.py (#585)                               2023-07-12 10:40:32 +02:00