hf_text-generation-inference/server/text_generation
OlivierDehaene b1482d9048
breaking(router): modify /generate API to only return generated text (#50)
@njhill, @yk FYI

generated_text was concatenated to the user prompt for legacy reasons. We
want to remove this behaviour, as we don't think it is useful and it is
even detrimental to usability.

We also remove the unused Vec.
2023-02-02 15:02:04 +01:00
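The change described above can be sketched as follows. This is a hypothetical illustration of the response-shaping step, not the router's actual code; the function names are made up for clarity:

```python
# Hypothetical sketch of the /generate response change in PR #50.
# Before: the response echoed the prompt concatenated with the completion.
# After: the response contains only the newly generated text.

def format_response_legacy(prompt: str, generated: str) -> dict:
    # Old behaviour: prompt is prepended to the generated text.
    return {"generated_text": prompt + generated}

def format_response(prompt: str, generated: str) -> dict:
    # New behaviour: only the generated continuation is returned.
    return {"generated_text": generated}
```

Clients that previously stripped the prompt prefix from `generated_text` no longer need to do so after this change.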
models          breaking(router): modify /generate API to only return generated text (#50)   2023-02-02 15:02:04 +01:00
pb              feat(server): Support all AutoModelForCausalLM on a best effort basis        2022-10-28 19:24:00 +02:00
__init__.py     feat(server): Support all AutoModelForCausalLM on a best effort basis        2022-10-28 19:24:00 +02:00
cache.py        feat(server): Support AutoModelForSeq2SeqLM                                  2022-11-04 18:03:04 +01:00
cli.py          feat(server): Support GPT-Neox (#39)                                         2023-01-31 18:53:56 +01:00
interceptor.py  feat(launcher): Log server stdout (#19)                                      2023-01-05 12:01:23 +01:00
server.py       feat(server): Support GPT-Neox (#39)                                         2023-01-31 18:53:56 +01:00
utils.py        fix(server): allow greedy repetition penalty (#51)                           2023-02-02 10:34:35 +01:00