local-llm-server

An HTTP API to serve local LLM models.

The purpose of this server is to abstract your LLM backend from your frontend API. This enables you to make changes to (or even switch) your backend without affecting your clients.
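
For illustration, a client only ever talks to this server's address. The endpoint path, port, and payload below are assumptions that mirror the oobabooga/text-generation-webui generate API rather than a documented contract of this project:

```python
import requests

# Hypothetical endpoint and payload; adjust to however your deployment exposes the backend API.
response = requests.post(
    "http://127.0.0.1:5000/api/v1/generate",
    json={"prompt": "Hello, world.", "max_new_tokens": 100},
    timeout=120,
)
print(response.json())
```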

Install

  1. sudo apt install redis
  2. python3 -m venv venv
  3. source venv/bin/activate
  4. pip install -r requirements.txt
  5. python3 server.py

An example systemd service file is provided in other/local-llm.service.
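
If you prefer to write your own unit file, a minimal sketch might look like the following; the install path and service user are assumptions, and the shipped other/local-llm.service is the authoritative reference:

```ini
[Unit]
Description=local-llm-server
After=network.target redis-server.service

[Service]
# Assumed install location and user; adjust both to your setup.
User=llm
WorkingDirectory=/opt/local-llm-server
ExecStart=/opt/local-llm-server/venv/bin/python3 server.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```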

Configure

First, set up your LLM backend. Currently, only oobabooga/text-generation-webui is supported, but eventually huggingface/text-generation-inference will be the default.

Then, configure this server. A sample config file is provided at config/config.yml.sample; copy it to config/config.yml and set at least the following (a minimal sketch follows the list):

  1. Set backend_url to the base API URL of your backend.
  2. Set token_limit to the configured token limit of the backend. This number is shown to clients and on the home page.
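
A minimal sketch of those two settings; the values are examples only, and config/config.yml.sample remains the authoritative reference for everything else:

```yaml
# Example values only; see config/config.yml.sample for all available options.
backend_url: http://127.0.0.1:7860   # base API URL of your LLM backend
token_limit: 2048                    # token limit shown to clients and on the home page
```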

To set up token auth, add rows to the token_auth table in the SQLite database. Each row has the following columns (an example insert follows the list):

  • token: the token/password.
  • type: the type of token. Currently unused (maybe for a future web interface?) but required.
  • priority: the lower the value, the higher the priority. Higher-priority tokens are moved up in the queue.
  • uses: how many responses this token has generated. Leave empty.
  • max_uses: how many responses this token is allowed to generate. Leave empty for unrestricted use.
  • expire: UNIX timestamp of when this token expires and is no longer valid.
  • disabled: mark the token as disabled.
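
For example, a row could be added with Python's built-in sqlite3 module. The database filename and the value conventions below (NULL for unlimited/never, 0 for not disabled) are assumptions; check the schema the server creates:

```python
import sqlite3

# Hypothetical database path; point this at the SQLite file the server uses.
conn = sqlite3.connect("local-llm-server.db")
conn.execute(
    "INSERT INTO token_auth (token, type, priority, uses, max_uses, expire, disabled) "
    "VALUES (?, ?, ?, ?, ?, ?, ?)",
    (
        "example-secret-token",  # token: the token/password clients will send
        "api",                   # type: required but currently unused
        10,                      # priority: lower value = higher priority
        None,                    # uses: leave empty; the server tracks this
        None,                    # max_uses: assumed NULL means unrestricted
        None,                    # expire: UNIX timestamp; assumed NULL means never
        0,                       # disabled: assumed 0 = active, 1 = disabled
    ),
)
conn.commit()
conn.close()
```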

Use

DO NOT lose your database. It is used to calculate the estimated wait time from the average TPS and response token counts; if you lose those stats, your numbers will be inaccurate until the database fills back up. If you change GPUs, you should probably clear the generation_time column in the prompts table.
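
A minimal sketch of clearing that column with Python's built-in sqlite3 module; the database filename below is an assumption, so point it at whatever SQLite file the server actually uses:

```python
import sqlite3

# Hypothetical database path; adjust to the SQLite file the server uses.
conn = sqlite3.connect("local-llm-server.db")
# Clear stored generation times so timings from the old GPU don't skew the wait-time estimate.
conn.execute("UPDATE prompts SET generation_time = NULL")
conn.commit()
conn.close()
```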

To Do

  • Implement streaming
  • Add huggingface/text-generation-inference
  • Convince Oobabooga to implement concurrent generation
  • Make sure stats work when starting from an empty database
  • Make sure we're correctly canceling requests when the client cancels