Directory listing: local-llm-server/llm_server/routes/v1
Latest commit 11e84db59c by Cyberes, 2023-09-25 22:32:48 -06:00: update database, make tokenizer handle null prompt, convert top_p to vLLM on OpenAI, actually validate prompt on streaming
__init__.py cache stats in background 2023-09-17 18:55:36 -06:00
generate.py further align OpenAI endpoint with expected responses 2023-09-24 21:45:30 -06:00
generate_stats.py fix typo 2023-09-25 17:24:51 -06:00
generate_stream.py update database, make tokenizer handle null prompt, convert top_p to vLLM on OpenAI, actually validate prompt on streaming 2023-09-25 22:32:48 -06:00
info.py minor changes, add admin token auth system, add route to get backend info 2023-09-24 15:54:35 -06:00
proxy.py fix division by 0, prettify /stats JSON, add JS var to home 2023-09-16 17:37:43 -06:00