This repository was archived on 2024-10-27. You can view and clone its files, but you cannot push or open issues or pull requests.
local-llm-server/llm_server
Latest commit 0771c2325c by Cyberes: fix inference workers quitting when a backend is offline, start adding logging, improve tokenizer error handling (2023-10-23 17:24:20 -06:00)
Name              Last commit                 Message
cluster/          2023-10-23 17:24:20 -06:00  fix inference workers quitting when a backend is offline, start adding logging, improve tokenizer error handling
config/           2023-10-05 21:37:18 -06:00  fix the queue??
database/         2023-10-16 16:22:52 -06:00  get streaming working again
llm/              2023-10-23 17:24:20 -06:00  fix inference workers quitting when a backend is offline, start adding logging, improve tokenizer error handling
pages/            2023-08-24 21:10:00 -06:00  update current model when we generate_stats()
routes/           2023-10-23 17:24:20 -06:00  fix inference workers quitting when a backend is offline, start adding logging, improve tokenizer error handling
workers/          2023-10-23 17:24:20 -06:00  fix inference workers quitting when a backend is offline, start adding logging, improve tokenizer error handling
__init__.py       2023-08-21 21:28:52 -06:00  MVP
custom_redis.py   2023-10-18 09:23:54 -06:00  docs and stuff
helpers.py        2023-10-11 12:22:50 -06:00  add length penalty param to vllm
logging.py        2023-10-23 17:24:20 -06:00  fix inference workers quitting when a backend is offline, start adding logging, improve tokenizer error handling
messages.py       2023-10-15 15:11:37 -06:00  trying to fix workers still processing after backend goes offline
opts.py           2023-10-23 17:24:20 -06:00  fix inference workers quitting when a backend is offline, start adding logging, improve tokenizer error handling
pre_fork.py       2023-09-30 19:41:50 -06:00  functional
sock.py           2023-10-16 16:22:52 -06:00  get streaming working again