local-llm-server/llm_server/routes
Latest commit 6459a1c91b by Cyberes: allow setting simultaneous IP limit per-token, fix token use tracker, fix tokens on streaming (2023-09-25 00:55:20 -06:00)
| Name | Last commit message | Date |
| --- | --- | --- |
| helpers/ | minor changes, add admin token auth system, add route to get backend info | 2023-09-24 15:54:35 -06:00 |
| openai/ | further align openai endpoint with expected responses | 2023-09-24 21:45:30 -06:00 |
| v1/ | allow setting simultaneous IP limit per-token, fix token use tracker, fix tokens on streaming | 2023-09-25 00:55:20 -06:00 |
| __init__.py | show total output tokens on stats | 2023-08-24 20:43:11 -06:00 |
| auth.py | handle when auth token is not enabled | 2023-09-24 15:57:39 -06:00 |
| cache.py | further align openai endpoint with expected responses | 2023-09-24 21:45:30 -06:00 |
| ooba_request_handler.py | further align openai endpoint with expected responses | 2023-09-24 21:45:30 -06:00 |
| openai_request_handler.py | further align openai endpoint with expected responses | 2023-09-24 21:45:30 -06:00 |
| queue.py | set up queue to work with gunicorn processes, other improvements | 2023-09-14 17:38:20 -06:00 |
| request_handler.py | allow setting simultaneous IP limit per-token, fix token use tracker, fix tokens on streaming | 2023-09-25 00:55:20 -06:00 |
| server_error.py | fix invalid param error, add manual model name | 2023-09-12 10:30:45 -06:00 |
| stats.py | change proompters 1 min to 5 min | 2023-09-20 21:21:22 -06:00 |