This repository has been archived on 2024-10-27. You can view files and clone it, but cannot push or open issues or pull requests.
local-llm-server/llm_server
Latest commit: b3f0c4b28f by Cyberes, "remove debug print" (2023-10-15 15:14:32 -06:00)
Name              | Last commit                                                        | Date
cluster/          | add model selection to openai endpoint                             | 2023-10-09 23:51:26 -06:00
config/           | fix the queue??                                                    | 2023-10-05 21:37:18 -06:00
database/         | clean up                                                           | 2023-10-11 18:04:15 -06:00
llm/              | trying to fix workers still processing after backend goes offline  | 2023-10-15 15:11:37 -06:00
pages/            | update current model when we generate_stats()                      | 2023-08-24 21:10:00 -06:00
routes/           | remove debug print                                                 | 2023-10-15 15:14:32 -06:00
workers/          | trying to fix workers still processing after backend goes offline  | 2023-10-15 15:11:37 -06:00
__init__.py       | MVP                                                                | 2023-08-21 21:28:52 -06:00
custom_redis.py   | f                                                                  | 2023-10-04 12:47:59 -06:00
helpers.py        | add length penalty param to vllm                                   | 2023-10-11 12:22:50 -06:00
messages.py       | trying to fix workers still processing after backend goes offline  | 2023-10-15 15:11:37 -06:00
netdata.py        | option to disable streaming, improve timeout on requests to backend, fix error handling. reduce duplicate code, misc other cleanup | 2023-09-14 14:05:50 -06:00
opts.py           | fix streaming?                                                     | 2023-10-05 20:14:28 -06:00
pre_fork.py       | functional                                                         | 2023-09-30 19:41:50 -06:00
sock.py           | mvp                                                                | 2023-09-29 00:09:44 -06:00