text-generation-inference/server/text_generation_server/utils
Latest commit 8bd0adb135 (OlivierDehaene): fix(server): fix quantization python requirements (#708), 2023-07-27 12:28:10 +02:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| gptq/ | fix(server): fix quantization python requirements (#708) | 2023-07-27 12:28:10 +02:00 |
| __init__.py | feat(server): Rework model loading (#344) | 2023-06-08 14:51:52 +02:00 |
| convert.py | fix(server): blacklist local files (#609) | 2023-07-13 21:54:55 +02:00 |
| dist.py | feat: add cuda memory fraction (#659) | 2023-07-24 11:43:58 +02:00 |
| flash_attn.py | feat(server): flash attention v2 (#624) | 2023-07-18 16:21:18 +02:00 |
| hub.py | feat(server): Adding new ignore_rule for conversion. (#485) | 2023-06-23 12:41:13 +02:00 |
| layers.py | feat: add cuda memory fraction (#659) | 2023-07-24 11:43:58 +02:00 |
| logits_process.py | fix(server): avoid errors for very small top_p values (#544) | 2023-07-04 20:11:33 +02:00 |
| tokens.py | feat(server): add paged attention to flash models (#516) | 2023-06-30 19:09:59 +02:00 |
| watermark.py | fix(server): fix flash-neox scores warping (#137) | 2023-03-24 18:21:41 +01:00 |
| weights.py | feat(server): Using `quantize_config.json` instead of GPTQ_BITS env variables. (#671) | 2023-07-25 13:00:27 +02:00 |
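The dist.py and layers.py rows both land with #659, which adds a CUDA memory fraction. A minimal sketch of that mechanism using PyTorch's public API; the `CUDA_MEMORY_FRACTION` variable name and the helper are assumptions inferred from the PR title, and the real server may apply the cap differently:

```python
import os

import torch


def apply_cuda_memory_fraction() -> None:
    """Cap this process's share of GPU memory.

    Hypothetical sketch, not this repo's actual dist.py/layers.py code:
    read a fraction from an assumed CUDA_MEMORY_FRACTION variable and
    hand it to PyTorch's per-process allocator limit.
    """
    fraction = float(os.environ.get("CUDA_MEMORY_FRACTION", "1.0"))
    if torch.cuda.is_available():
        torch.cuda.set_per_process_memory_fraction(
            fraction, device=torch.cuda.current_device()
        )
```

With a fraction below 1.0, allocations beyond that share raise an out-of-memory error instead of starving other processes on the same GPU.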
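The logits_process.py row (#544) guards against very small top_p values. The failure mode: with a tiny top_p, floating-point rounding can push every token outside the nucleus, leaving an all-`-inf` row to sample from. A sketch of the usual fix, keeping at least one token; names and structure here are assumptions, not the file's actual classes:

```python
import torch

FILTER_VALUE = -float("inf")


def top_p_filter(
    logits: torch.Tensor, top_p: float, min_tokens_to_keep: int = 1
) -> torch.Tensor:
    """Nucleus (top-p) filtering over [batch, vocab_size] logits.

    Illustrative sketch only; not text-generation-inference's actual
    logits_process.py implementation.
    """
    sorted_logits, sorted_indices = torch.sort(logits, descending=False, dim=-1)
    cumulative_probs = sorted_logits.softmax(dim=-1).cumsum(dim=-1)

    # Remove tokens whose cumulative mass falls outside the nucleus.
    sorted_indices_to_remove = cumulative_probs <= (1 - top_p)

    # The guard: with a very small top_p, rounding can mark every token
    # for removal, so always keep the most probable ones.
    sorted_indices_to_remove[..., -min_tokens_to_keep:] = False

    indices_to_remove = sorted_indices_to_remove.scatter(
        -1, sorted_indices, sorted_indices_to_remove
    )
    return logits.masked_fill(indices_to_remove, FILTER_VALUE)
```

Even with `top_p=1e-9`, the guard leaves the single most probable token, so downstream multinomial sampling never sees an empty distribution.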
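The weights.py row (#671) switches GPTQ parameters from GPTQ_BITS-style environment variables to the model's `quantize_config.json`. A sketch of that lookup order with hypothetical names; the real loader resolves the file through the Hub rather than assuming a local directory:

```python
import json
import os
from pathlib import Path
from typing import Tuple


def get_gptq_params(model_dir: Path) -> Tuple[int, int]:
    """Return (bits, groupsize) for GPTQ-quantized weights.

    Hypothetical sketch: prefer the quantize_config.json shipped with
    the model, falling back to the legacy GPTQ_BITS / GPTQ_GROUPSIZE
    environment variables only when the file is absent.
    """
    config_path = model_dir / "quantize_config.json"
    if config_path.exists():
        with config_path.open() as f:
            data = json.load(f)
        # AutoGPTQ-style config exposes "bits" and "group_size" fields.
        return int(data["bits"]), int(data["group_size"])

    # Legacy fallback: the environment variables the PR replaces.
    return int(os.environ["GPTQ_BITS"]), int(os.environ["GPTQ_GROUPSIZE"])
```

Reading the checkpoint's own config removes a per-deployment configuration step and keeps the quantization parameters next to the weights they describe.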