local-llm-server

An HTTP API to serve local LLM models.

The purpose of this server is to abstract your LLM backend from your frontend API. This enables you to switch backends while presenting a stable API to your frontend clients.

Install

  1. sudo apt install redis
  2. python3 -m venv venv
  3. source venv/bin/activate
  4. pip install -r requirements.txt
  5. wget https://git.evulid.cc/attachments/89c87201-58b1-4e28-b8fd-d0b323c810c4 -O /tmp/vllm_gptq-0.1.3-py3-none-any.whl && pip install /tmp/vllm_gptq-0.1.3-py3-none-any.whl && rm /tmp/vllm_gptq-0.1.3-py3-none-any.whl
  6. python3 server.py

An example systemd service file is provided in other/local-llm.service.
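If you prefer to write your own unit, a minimal sketch is shown below; the install path, user, and unit name are illustrative assumptions, so prefer the provided other/local-llm.service where they differ.

    [Unit]
    Description=local-llm-server
    After=network.target redis.service

    [Service]
    # Assumed install location; adjust to wherever you cloned the repository.
    WorkingDirectory=/opt/local-llm-server
    # Uses the virtualenv created during the install steps above.
    ExecStart=/opt/local-llm-server/venv/bin/python3 server.py
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Assuming the unit is copied to /etc/systemd/system/local-llm.service, enable it with systemctl enable --now local-llm.service.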

Configure

First, set up your LLM backend. Currently, only oobabooga/text-generation-webui is supported, but eventually huggingface/text-generation-inference will be the default.

Then, configure this server. A sample config file is provided at config/config.yml.sample; copy it to config/config.yml.

  1. Set backend_url to the base API URL of your backend.
  2. Set token_limit to the configured token limit of the backend. This number is shown to clients and on the home page.
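A minimal config/config.yml might then look like the following; the values are placeholders, and any keys beyond backend_url and token_limit depend on the sample file.

    # config/config.yml (copied from config/config.yml.sample)
    backend_url: http://localhost:5000   # base API URL of your backend (example value)
    token_limit: 4096                    # must match the token limit configured on the backend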

To set up token auth, add rows to the token_auth table in the SQLite database (see the example after the field descriptions below).

  • token: the token/password.
  • type: the type of token. Currently unused (possibly reserved for a future web interface) but required.
  • priority: the lower this value, the higher the priority. Higher-priority tokens are moved up in the queue.
  • uses: how many responses this token has generated. Leave empty when creating a token.
  • max_uses: how many responses this token is allowed to generate. Leave empty for unlimited use.
  • expire: UNIX timestamp of when this token expires and is no longer valid.
  • disabled: mark the token as disabled.
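For example, a new token row could be added with an insert along these lines; the token value, type, priority, and the NULL/0 defaults are assumptions, so check your database schema for the exact constraints.

    -- Hedged example: adds a token with no usage cap and no expiry.
    INSERT INTO token_auth (token, type, priority, uses, max_uses, expire, disabled)
    VALUES ('example-secret-token', 'api', 10, NULL, NULL, NULL, 0);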

Use

To Do

  • Implement streaming
  • Bring streaming endpoint up to the level of the blocking endpoint
  • Add VLLM support
  • Make sure stats work when starting from an empty database
  • Make sure we're correctly canceling requests when the client cancels
  • Make sure the OpenAI endpoint works as expected