This repository was archived on 2024-10-27. You can view files and clone it, but you cannot push to it or open issues or pull requests.
local-llm-server/llm_server
Latest commit: 40ac84aa9a by Cyberes, "actually we don't want to emulate openai" (2023-09-12 01:04:11 -06:00)
Name           Last commit message                                Date
llm/           actually we don't want to emulate openai           2023-09-12 01:04:11 -06:00
pages/         update current model when we generate_stats()      2023-08-24 21:10:00 -06:00
routes/        actually we don't want to emulate openai           2023-09-12 01:04:11 -06:00
__init__.py    MVP                                                2023-08-21 21:28:52 -06:00
config.py      implement vllm backend                             2023-09-11 20:47:19 -06:00
database.py    actually we don't want to emulate openai           2023-09-12 01:04:11 -06:00
helpers.py     add HF text-generation-inference backend           2023-08-29 13:46:41 -06:00
integer.py     MVP                                                2023-08-21 21:28:52 -06:00
netdata.py     reorganize nvidia stats                            2023-08-25 15:02:40 -06:00
opts.py        implement vllm backend                             2023-09-11 20:47:19 -06:00
stream.py      implement streaming for hf-textgen                 2023-08-29 17:56:12 -06:00
threads.py     actually we don't want to emulate openai           2023-09-12 01:04:11 -06:00