local-llm-server

An HTTP API to serve local LLM models.

The purpose of this server is to abstract your LLM backend from your frontend API. This enables you to switch backends while presenting a stable API to your frontend clients.

Install

  1. sudo apt install redis
  2. python3 -m venv venv
  3. source venv/bin/activate
  4. pip install -r requirements.txt
  5. wget https://git.evulid.cc/attachments/89c87201-58b1-4e28-b8fd-d0b323c810c4 -O /tmp/vllm_gptq-0.1.3-py3-none-any.whl && pip install /tmp/vllm_gptq-0.1.3-py3-none-any.whl && rm /tmp/vllm_gptq-0.1.3-py3-none-any.whl
  6. python3 server.py

An example systemctl service file is provided in other/local-llm.service.
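
To use it, you might install and enable the service like this (a sketch assuming a standard systemd layout; adjust the paths in the unit file to match where you cloned the repo and created the venv):

  sudo cp other/local-llm.service /etc/systemd/system/
  sudo systemctl daemon-reload
  sudo systemctl enable --now local-llm.service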

Configure

First, set up your LLM backend. Currently, only oobabooga/text-generation-webui is supported, but eventually huggingface/text-generation-inference will be the default.

Then, configure this server. A sample config file is provided at config/config.yml.sample; copy it to config/config.yml.

  1. Set backend_url to the base API URL of your backend.
  2. Set token_limit to the configured token limit of the backend. This number is shown to clients and on the home page.
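
For example, a minimal config/config.yml might look like the following (the values are illustrative; point backend_url at your own backend, match token_limit to its configured limit, and see config.yml.sample for the remaining options):

  backend_url: http://127.0.0.1:5000
  token_limit: 4096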

To set up token auth, add rows to the token_auth table in the SQLite database. The columns are:

  • token: the token/password.
  • type: the type of token. Currently unused (maybe for a future web interface?) but required.
  • priority: the lower this value, the higher the priority. Higher-priority tokens are moved up in the queue.
  • uses: how many responses this token has generated. Leave empty.
  • max_uses: how many responses this token is allowed to generate. Leave empty to leave unrestricted.
  • expire: UNIX timestamp of when this token expires and is no longer valid.
  • disabled: mark the token as disabled.
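
For example, a row can be added with the sqlite3 command-line tool (a sketch only; the database filename here is hypothetical, so use the path of the database this server actually creates, and substitute your own token and priority values):

  sqlite3 database.db "INSERT INTO token_auth (token, type, priority, uses, max_uses, expire, disabled) VALUES ('example-token', 'api', 100, NULL, NULL, NULL, 0);"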

Use

To Do

  • Implement streaming
  • Bring streaming endpoint up to the level of the blocking endpoint
  • Add VLLM support
  • Make sure stats work when starting from an empty database
  • Make sure we're correctly canceling requests when the client cancels
  • Make sure the OpenAI endpoint works as expected