This repository was archived on 2024-10-27. You can view files and clone it, but you cannot push or open issues or pull requests.
local-llm-server/llm_server/routes
Latest commit e1d3fca6d3 by Cyberes: try to cancel inference if disconnected from client (2023-09-28 09:55:31 -06:00)
Name                        Last commit message                                    Last commit date
helpers/                    redo background processes, reorganize server.py        2023-09-27 23:36:44 -06:00
openai/                     unify error message handling                           2023-09-27 14:48:47 -06:00
v1/                         try to cancel inference if disconnected from client    2023-09-28 09:55:31 -06:00
__init__.py                 show total output tokens on stats                      2023-08-24 20:43:11 -06:00
auth.py                     more work on openai endpoint                           2023-09-26 22:09:11 -06:00
cache.py                    rewrite redis usage                                    2023-09-28 03:44:30 -06:00
ooba_request_handler.py     fix double logging                                     2023-09-28 01:34:15 -06:00
openai_request_handler.py   fix double logging                                     2023-09-28 01:34:15 -06:00
queue.py                    fix negative queue on stats                            2023-09-28 08:47:39 -06:00
request_handler.py          rewrite redis usage                                    2023-09-28 03:44:30 -06:00
server_error.py             fix invalid param error, add manual model name         2023-09-12 10:30:45 -06:00
stats.py                    redo background processes, reorganize server.py        2023-09-27 23:36:44 -06:00
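The latest commit message, "try to cancel inference if disconnected from client", names a common streaming-server pattern. The sketch below is a minimal illustration of that pattern, assuming a Flask-style server (which the routes/ layout and server.py suggest); it is not this repository's actual code, and every helper name in it (generate_tokens, cancel_backend_inference, the route path) is hypothetical. When a streaming client goes away, the WSGI server closes the response generator, raising GeneratorExit at the current yield, and that is the natural hook for cancelling the in-flight backend request.

```python
# Hypothetical sketch: cancel backend inference when a streaming client
# disconnects, using Flask's generator-based streaming responses.
import time
from flask import Flask, Response

app = Flask(__name__)

def cancel_backend_inference():
    # Hypothetical hook: signal the inference backend to stop generating.
    print("client disconnected, cancelling backend inference")

def generate_tokens(prompt):
    # Placeholder token stream standing in for the real LLM backend.
    for token in ["Hello", ",", " world", "!"]:
        time.sleep(0.1)
        yield token

@app.route("/api/v1/generate")
def generate():
    def stream():
        try:
            for token in generate_tokens("example prompt"):
                yield token
        except GeneratorExit:
            # When the client disconnects mid-stream, the WSGI server closes
            # this generator, which raises GeneratorExit at the current yield.
            # That is the point where in-flight backend work can be cancelled.
            cancel_backend_inference()
            raise
    return Response(stream(), mimetype="text/plain")

if __name__ == "__main__":
    app.run()
```

Catching and re-raising GeneratorExit preserves the generator's normal cleanup semantics while still giving the server a place to abort upstream work.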