local-llm-server/llm_server/routes
Latest commit: 1582625e09 by Cyberes, "how did this get broken" (2023-09-13 11:56:30 -06:00)
Name                        Last commit message                                   Date
helpers/                    add openai-compatible backend                         2023-09-12 16:40:09 -06:00
openai/                     add openai-compatible backend                         2023-09-12 16:40:09 -06:00
v1/                         add openai-compatible backend                         2023-09-12 16:40:09 -06:00
__init__.py                 show total output tokens on stats                     2023-08-24 20:43:11 -06:00
cache.py                    get working with ooba again, give up on dockerfile    2023-09-11 09:51:01 -06:00
ooba_request_handler.py     how did this get broken                               2023-09-13 11:56:30 -06:00
openai_request_handler.py   didnt test anything                                   2023-09-13 11:51:46 -06:00
queue.py                    implement vllm backend                                2023-09-11 20:47:19 -06:00
request_handler.py          adjust logging, add more vllm stuff                   2023-09-13 11:22:33 -06:00
server_error.py             fix invalid param error, add manual model name        2023-09-12 10:30:45 -06:00
stats.py                    add openai-compatible backend                         2023-09-12 16:40:09 -06:00