This repository was archived on 2024-10-27. You can view and clone its files, but you cannot push or open issues or pull requests.
Directory listing: local-llm-server/llm_server/llm
Latest commit 94141b8ecf by Cyberes (2023-10-02 20:53:08 -06:00): fix processing not being decremented on streaming, fix confusion over queue, adjust stop sequences
Name            Last commit message                                                                                 Date
oobabooga/      functional                                                                                          2023-09-30 19:41:50 -06:00
openai/         fix processing not being decremented on streaming, fix confusion over queue, adjust stop sequences  2023-10-02 20:53:08 -06:00
vllm/           fix processing not being decremented on streaming, fix confusion over queue, adjust stop sequences  2023-10-02 20:53:08 -06:00
__init__.py     fix processing not being decremented on streaming, fix confusion over queue, adjust stop sequences  2023-10-02 20:53:08 -06:00
generator.py    finish openai endpoints                                                                             2023-10-01 16:04:53 -06:00
info.py         functional                                                                                          2023-09-30 19:41:50 -06:00
llm_backend.py  finish openai endpoints                                                                             2023-10-01 16:04:53 -06:00
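The layout suggests a backend-abstraction pattern: llm_backend.py plausibly defines a common interface that the oobabooga/ and vllm/ packages implement, with generator.py handling generation requests and info.py exposing model metadata. As a rough sketch only, where every class and method name below is an assumption rather than the project's actual API, such an interface could look like this in Python:

    # Hypothetical sketch of the kind of interface llm_server/llm/llm_backend.py
    # might define; names here are assumptions, not the project's actual API.
    from abc import ABC, abstractmethod


    class LLMBackend(ABC):
        """Common interface a concrete backend (e.g. vllm, oobabooga) would implement."""

        @abstractmethod
        def generate(self, prompt: str, **params) -> str:
            """Run a completion against the underlying inference server."""

        @abstractmethod
        def get_model_info(self) -> dict:
            """Return metadata about the loaded model (name, context length, etc.)."""


    class VLLMBackend(LLMBackend):
        """Illustrative stub; a real vllm backend would forward requests to a vLLM server."""

        def generate(self, prompt: str, **params) -> str:
            raise NotImplementedError("placeholder: send the request to the vLLM server")

        def get_model_info(self) -> dict:
            return {"backend": "vllm"}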