local-llm-server/llm_server/routes
Latest commit: 77edbe779c by Cyberes, "actually validate prompt length lol" (2023-09-14 18:31:13 -06:00)
Name                       Last commit                                                       Last updated
helpers/                   add openai-compatible backend                                     2023-09-12 16:40:09 -06:00
openai/                    set up queue to work with gunicorn processes, other improvements  2023-09-14 17:38:20 -06:00
v1/                        add moderation endpoint to openai api, update config              2023-09-14 15:07:17 -06:00
__init__.py                show total output tokens on stats                                 2023-08-24 20:43:11 -06:00
cache.py                   check if the backend crapped out, print some more stuff           2023-09-14 14:26:25 -06:00
ooba_request_handler.py    set up queue to work with gunicorn processes, other improvements  2023-09-14 17:38:20 -06:00
openai_request_handler.py  set up queue to work with gunicorn processes, other improvements  2023-09-14 17:38:20 -06:00
queue.py                   set up queue to work with gunicorn processes, other improvements  2023-09-14 17:38:20 -06:00
request_handler.py         actually validate prompt length lol                               2023-09-14 18:31:13 -06:00
server_error.py            fix invalid param error, add manual model name                    2023-09-12 10:30:45 -06:00
stats.py                   add openai-compatible backend                                     2023-09-12 16:40:09 -06:00