local-llm-server/llm_server
Archived 2024-10-27: read-only (files can be viewed and cloned, but no pushes, issues, or pull requests).
Latest commit: 4f226ae38e "handle requests to offline backends" (Cyberes, 2023-10-02 11:11:48 -06:00)
Name              Last commit                                       Date
cluster/          finish openai endpoints                           2023-10-01 16:04:53 -06:00
config/           cache the home page in the background             2023-09-30 23:03:42 -06:00
database/         fix ratelimiting                                  2023-10-02 02:05:15 -06:00
llm/              fix ratelimiting                                  2023-10-02 02:05:15 -06:00
pages/            update current model when we generate_stats()     2023-08-24 21:10:00 -06:00
routes/           fix ratelimiting                                  2023-10-02 02:05:15 -06:00
workers/          handle requests to offline backends               2023-10-02 11:11:48 -06:00
__init__.py       MVP                                               2023-08-21 21:28:52 -06:00
custom_redis.py   functional                                        2023-09-30 19:41:50 -06:00
helpers.py        mvp                                               2023-09-29 00:09:44 -06:00
netdata.py        option to disable streaming, improve timeout on requests to backend, fix error handling. reduce duplicate code, misc other cleanup  2023-09-14 14:05:50 -06:00
opts.py           cache the home page in the background             2023-09-30 23:03:42 -06:00
pre_fork.py       functional                                        2023-09-30 19:41:50 -06:00
sock.py           mvp                                               2023-09-29 00:09:44 -06:00