hf_text-generation-inference/server/text_generation

Latest commit: 1ad3250b89 — fix(docker): increase shm size (#60) — OlivierDehaene, 2023-02-08 17:53:33 +01:00
models          feat(server): support t5 (#59)                                          2023-02-07 18:25:17 +01:00
pb              feat(server): Support all AutoModelForCausalLM on a best effort basis   2022-10-28 19:24:00 +02:00
__init__.py     feat(server): Support all AutoModelForCausalLM on a best effort basis   2022-10-28 19:24:00 +02:00
cache.py        feat(server): Support AutoModelForSeq2SeqLM                             2022-11-04 18:03:04 +01:00
cli.py          feat(router): refactor API and add openAPI schemas (#53)                2023-02-03 12:43:37 +01:00
interceptor.py  feat(launcher): Log server stdout (#19)                                 2023-01-05 12:01:23 +01:00
server.py       fix(server): better handling of inference mode (#57)                    2023-02-07 15:38:22 +01:00
utils.py        fix(docker): increase shm size (#60)                                    2023-02-08 17:53:33 +01:00