This repository was archived on 2024-10-27. Files can still be viewed and cloned, but pushing and opening issues or pull requests are disabled.
local-llm-server/llm_server/routes/openai
Latest commit: 94141b8ecf by Cyberes (2023-10-02 20:53:08 -06:00): fix processing not being decremented on streaming, fix confusion over queue, adjust stop sequences
File                  Last commit message                                                                                   Date
__init__.py           more work on openai endpoint                                                                          2023-09-26 22:09:11 -06:00
chat_completions.py   fix processing not being decremented on streaming, fix confusion over queue, adjust stop sequences    2023-10-02 20:53:08 -06:00
completions.py        fix processing not being decremented on streaming, fix confusion over queue, adjust stop sequences    2023-10-02 20:53:08 -06:00
info.py               set up cluster config and basic background workers                                                    2023-09-28 18:40:24 -06:00
models.py             fix openai models response                                                                            2023-10-01 23:07:49 -06:00
simulated.py          update openai endpoints                                                                               2023-10-01 14:15:01 -06:00
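The module names suggest this directory implements OpenAI-compatible routes (chat completions, completions, model listing). As a minimal sketch of how a client might call such a server, assuming it exposes the standard /v1/chat/completions route and a standard OpenAI-shaped response, the base URL, API key, and model name below are all hypothetical placeholders, not values from this repository:

```python
import requests

# Hypothetical base URL; an actual deployment would use its own host and path.
BASE_URL = "http://localhost:5000/api/openai/v1"

# A standard OpenAI-style chat completion request, of the kind
# chat_completions.py presumably handles; model name is a placeholder.
response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": "Bearer hypothetical-api-key"},
    json={
        "model": "local-model",
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": False,
    },
    timeout=60,
)
response.raise_for_status()

# Assuming the response follows the OpenAI schema, the reply text lives at
# choices[0].message.content.
print(response.json()["choices"][0]["message"]["content"])
```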