This repository was archived on 2024-10-27. Files can still be viewed and cloned, but pushing and opening issues or pull requests are disabled.
local-llm-server/llm_server

Latest commit: 763139c949 "fix keyerror" by Cyberes, 2023-10-20 14:02:30 -06:00
| Name            | Last commit message                                               | Date                       |
|-----------------|-------------------------------------------------------------------|----------------------------|
| cluster/        | track down keyerror                                               | 2023-10-20 13:57:20 -06:00 |
| config/         | fix the queue??                                                   | 2023-10-05 21:37:18 -06:00 |
| database/       | get streaming working again                                       | 2023-10-16 16:22:52 -06:00 |
| llm/            | remove timed-out items from queue                                 | 2023-10-17 11:46:39 -06:00 |
| pages/          | update current model when we generate_stats()                     | 2023-08-24 21:10:00 -06:00 |
| routes/         | fix keyerror                                                      | 2023-10-20 14:02:30 -06:00 |
| workers/        | refer to queue for tracking IP count rather than separate value   | 2023-10-18 09:03:10 -06:00 |
| __init__.py     | MVP                                                               | 2023-08-21 21:28:52 -06:00 |
| custom_redis.py | docs and stuff                                                    | 2023-10-18 09:23:54 -06:00 |
| helpers.py      | add length penalty param to vllm                                  | 2023-10-11 12:22:50 -06:00 |
| messages.py     | trying to fix workers still processing after backend goes offline | 2023-10-15 15:11:37 -06:00 |
| opts.py         | don't pickle streaming                                            | 2023-10-16 18:35:10 -06:00 |
| pre_fork.py     | functional                                                        | 2023-09-30 19:41:50 -06:00 |
| sock.py         | get streaming working again                                       | 2023-10-16 16:22:52 -06:00 |
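
A few of the commit messages above hint at recognizable patterns; the sketches below are illustrations only, not code from this repository. The "track down keyerror" / "fix keyerror" pair (cluster/, routes/) points at a lookup on a dictionary key that can disappear, such as a backend entry removed from a cluster map after it goes offline. A minimal sketch of the usual defensive fix in Python; `backends`, `backend_url`, and `get_backend_info` are hypothetical names:

```python
# Hypothetical sketch of the defensive-lookup pattern implied by the
# "fix keyerror" commits; none of these identifiers come from the repo.
from typing import Optional

backends: dict[str, dict] = {}  # backend_url -> info, maintained elsewhere

def get_backend_info(backend_url: str) -> Optional[dict]:
    # dict.get() returns None instead of raising KeyError when the
    # backend has already been removed (e.g. it went offline).
    info = backends.get(backend_url)
    if info is None:
        # Caller must handle a vanished backend rather than crash.
        return None
    return info
```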
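The "remove timed-out items from queue" commit (llm/) suggests pruning stale entries from a Redis-backed queue. A hedged sketch under assumed names: the key "llm:queue", the JSON item layout with an "enqueued_at" field, and the 120-second timeout are all illustrative, not taken from the repository:

```python
# Hedged sketch: delete queue entries older than a cutoff.
import json
import time

import redis

QUEUE_KEY = "llm:queue"  # assumed key name
TIMEOUT_SECS = 120       # assumed timeout

def prune_timed_out(r: redis.Redis) -> int:
    """Remove queue entries that have waited longer than TIMEOUT_SECS."""
    now = time.time()
    removed = 0
    for raw in r.lrange(QUEUE_KEY, 0, -1):
        item = json.loads(raw)
        if now - item.get("enqueued_at", now) > TIMEOUT_SECS:
            # LREM deletes values equal to `raw`; count=1 stops after one hit.
            removed += r.lrem(QUEUE_KEY, 1, raw)
    return removed
```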
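The workers/ commit, "refer to queue for tracking IP count rather than separate value", describes deriving a client's in-flight request count from the queue itself instead of maintaining a separate counter that can drift out of sync. A sketch under the same assumed queue layout:

```python
# Hedged sketch: count a client's queued requests by scanning the queue,
# so the number can never disagree with what is actually enqueued.
import json

import redis

QUEUE_KEY = "llm:queue"  # assumed key name, as above

def ip_request_count(r: redis.Redis, client_ip: str) -> int:
    return sum(
        1
        for raw in r.lrange(QUEUE_KEY, 0, -1)
        if json.loads(raw).get("ip") == client_ip
    )
```

The trade-off is an O(n) scan per lookup in exchange for a count that is always consistent with the queue contents.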