local-llm-server/llm_server/llm/vllm
Latest commit: Cyberes c45e68a8c8 "adjust requests timeout, add service file" (2023-09-14 01:32:49 -06:00)
__init__.py      implement vllm backend                     2023-09-11 20:47:19 -06:00
generate.py      adjust requests timeout, add service file  2023-09-14 01:32:49 -06:00
info.py          actually we don't want to emulate openai   2023-09-12 01:04:11 -06:00
vllm_backend.py  didnt test anything                        2023-09-13 11:51:46 -06:00