# local-llm-server

An HTTP API to serve local LLM models.

Install Redis, which the server uses for caching:

```shell
sudo apt install redis
```
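A minimal setup sketch for running the server after Redis is installed, assuming the repository's `requirements.txt` and `server.py` are used as-is (the exact flags and any config file are assumptions, not documented here):

```shell
# Install the Python dependencies listed in the repository.
pip install -r requirements.txt

# Start the HTTP API server (entry point per the repo's file listing).
python server.py
```

Using a virtual environment (`python -m venv venv && source venv/bin/activate`) before installing is recommended to keep dependencies isolated.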