local-llm-server/llm_server/routes/v1
Latest commit: 40ac84aa9a by Cyberes, "actually we don't want to emulate openai" (2023-09-12 01:04:11 -06:00)
File                 Last commit message                                                            Date
__init__.py          implement streaming for hf-textgen                                             2023-08-29 17:56:12 -06:00
generate.py          actually we don't want to emulate openai                                       2023-09-12 01:04:11 -06:00
generate_stats.py    actually we don't want to emulate openai                                       2023-09-12 01:04:11 -06:00
generate_stream.py   disable stream for now                                                         2023-08-30 19:58:59 -06:00
info.py              actually we don't want to emulate openai                                       2023-09-12 01:04:11 -06:00
proxy.py             update home, update readme, calculate estimated wait based on database stats   2023-08-24 16:47:14 -06:00