hf_text-generation-inference/server/text_generation_server/utils
Latest commit 6abec14a7e by OlivierDehaene: feat(server): batch tokenization for flash causal lm (#411), 2023-06-05 16:09:41 +02:00
Name                 Last commit message                                                   Last commit date
__init__.py          feat(server): support vectorized warpers in flash causal lm (#317)   2023-05-26 12:30:27 +02:00
convert.py           feat(server): support vectorized warpers in flash causal lm (#317)   2023-05-26 12:30:27 +02:00
dist.py              feat(clients): Python client (#103)                                   2023-03-07 18:52:22 +01:00
hub.py               feat(server): batch tokenization for flash causal lm (#411)           2023-06-05 16:09:41 +02:00
layers.py            feat(server): support RefinedWeb models (#379)                        2023-05-30 18:25:19 +02:00
logits_process.py    feat(server): support vectorized warpers in flash causal lm (#317)   2023-05-26 12:30:27 +02:00
tokens.py            feat(server): support vectorized warpers in flash causal lm (#317)   2023-05-26 12:30:27 +02:00
watermark.py         fix(server): fix flash-neox scores warping (#137)                     2023-03-24 18:21:41 +01:00