local-llm-server/llm_server/llm/vllm
Latest commit: 05a45e6ac6 ("didnt test anything") by Cyberes, 2023-09-13 11:51:46 -06:00
__init__.py        implement vllm backend                     2023-09-11 20:47:19 -06:00
generate.py        add openai-compatible backend              2023-09-12 16:40:09 -06:00
info.py            actually we don't want to emulate openai   2023-09-12 01:04:11 -06:00
vllm_backend.py    didnt test anything                        2023-09-13 11:51:46 -06:00