huggingface/text-generation-inference — server/text_generation

Latest commit: 4f9ac67cfa by OlivierDehaene, 2023-01-31 14:21:51 +01:00
Revert "feat: Add token streaming using ServerSideEvents support" (#40)
Reverts huggingface/text-generation-inference#36
Name             Last commit message                                                        Last commit date
models/          Revert "feat: Add token streaming using ServerSideEvents support" (#40)    2023-01-31 14:21:51 +01:00
pb/              feat(server): Support all AutoModelForCausalLM on a best effort basis      2022-10-28 19:24:00 +02:00
__init__.py      feat(server): Support all AutoModelForCausalLM on a best effort basis      2022-10-28 19:24:00 +02:00
cache.py         feat(server): Support AutoModelForSeq2SeqLM                                2022-11-04 18:03:04 +01:00
cli.py           feat(launcher): Log server stdout (#19)                                    2023-01-05 12:01:23 +01:00
interceptor.py   feat(launcher): Log server stdout (#19)                                    2023-01-05 12:01:23 +01:00
server.py        Revert "feat: Add token streaming using ServerSideEvents support" (#40)    2023-01-31 14:21:51 +01:00
utils.py         feat: Support sampling seeding (#37)                                       2023-01-30 15:36:16 +01:00