local-llm-server/llm_server/routes/v1
Latest commit: aba2e5b9c0 by Cyberes, "don't use db pooling, add LLM-ST-Errors header to disable formatted errors" (2023-09-26 23:59:22 -06:00)
File                 Last commit message                                                           Date
__init__.py          cache stats in background                                                     2023-09-17 18:55:36 -06:00
generate.py          don't use db pooling, add LLM-ST-Errors header to disable formatted errors    2023-09-26 23:59:22 -06:00
generate_stats.py    convert to gunicorn                                                           2023-09-26 13:32:33 -06:00
generate_stream.py   don't use db pooling, add LLM-ST-Errors header to disable formatted errors    2023-09-26 23:59:22 -06:00
info.py              more work on openai endpoint                                                  2023-09-26 22:09:11 -06:00
proxy.py             more work on openai endpoint                                                  2023-09-26 22:09:11 -06:00