hf_text-generation-inference/server/text_generation

Latest commit: 65e2f1624e by OlivierDehaene, "fix(server): fix token_is_special (#87)", 2023-02-24 17:20:00 +01:00

models/         fix(server): fix token_is_special (#87)                                2023-02-24 17:20:00 +01:00
pb/             feat(server): Support all AutoModelForCausalLM on a best effort basis  2022-10-28 19:24:00 +02:00
utils/          feat(server): enable hf-transfer (#76)                                 2023-02-18 14:04:11 +01:00
__init__.py     feat(server): Support all AutoModelForCausalLM on a best effort basis  2022-10-28 19:24:00 +02:00
cache.py        feat(server): Support AutoModelForSeq2SeqLM                            2022-11-04 18:03:04 +01:00
cli.py          feat: add safetensors conversion (#63)                                 2023-02-14 13:02:16 +01:00
interceptor.py  feat(launcher): Log server stdout (#19)                                2023-01-05 12:01:23 +01:00
server.py       feat: add distributed tracing (#62)                                    2023-02-13 13:02:45 +01:00
tracing.py      feat: add distributed tracing (#62)                                    2023-02-13 13:02:45 +01:00