OAI Reverse Proxy

Reverse proxy server for various LLM APIs.

What is this?

This project allows you to run a reverse proxy server for various LLM APIs.

Features

  • Support for multiple APIs
  • Translation from OpenAI-formatted prompts to any other API, including streaming responses
  • Multiple API keys with rotation and rate limit handling
  • Basic user management
    • Simple role-based permissions
    • Per-model token quotas
    • Temporary user accounts
  • Prompt and completion logging
  • Abuse detection and prevention
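
As an illustration of the prompt-translation feature, here is a minimal sketch of converting an OpenAI-style chat request into an Anthropic-style one. The request shapes loosely follow the two public APIs, but the types and the toAnthropic helper are simplified and hypothetical, not the proxy's actual transformer code (which also handles streaming and other APIs):

```typescript
// Hypothetical, simplified sketch of OpenAI -> Anthropic request translation.
// Field names mirror the public API shapes; this is not the proxy's real code.

interface OpenAIChatRequest {
  model: string;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
  max_tokens?: number;
}

interface AnthropicMessagesRequest {
  model: string;
  system?: string;
  messages: { role: "user" | "assistant"; content: string }[];
  max_tokens: number;
}

function toAnthropic(
  req: OpenAIChatRequest,
  targetModel: string
): AnthropicMessagesRequest {
  // Anthropic takes the system prompt as a top-level field, not a message.
  const system =
    req.messages
      .filter((m) => m.role === "system")
      .map((m) => m.content)
      .join("\n") || undefined;

  return {
    model: targetModel,
    system,
    messages: req.messages
      .filter((m) => m.role !== "system")
      .map((m) => ({
        role: m.role as "user" | "assistant",
        content: m.content,
      })),
    // Anthropic requires max_tokens; fall back to an arbitrary default.
    max_tokens: req.max_tokens ?? 1024,
  };
}
```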

Usage Instructions

If you'd like to run your own instance of this server, you'll need to deploy it somewhere and configure it with your API keys. A few easy options are provided below, but you can deploy to any other service that supports Node.js if you know what you're doing.

Self-hosting

See here for instructions on how to self-host the application on your own VPS or local machine.

Ensure you set the TRUSTED_PROXIES environment variable according to your deployment. Refer to .env.example and config.ts for more information.
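
For example, a minimal .env for a deployment behind a single reverse proxy might look like the following. The values (and any variable names beyond TRUSTED_PROXIES) are illustrative; .env.example is the authoritative reference:

```
# Illustrative values only -- consult .env.example for the real variable names.
TRUSTED_PROXIES=1          # number of reverse proxies in front of the app
OPENAI_KEY=sk-xxxx         # placeholder; set your actual API key(s) here
```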

Huggingface (outdated, not advised)

See here for instructions on how to deploy to a Huggingface Space.

Render (outdated, not advised)

See here for instructions on how to deploy to Render.com.

Local Development

To run the proxy locally for development or testing, install Node.js >= 18.0.0 and follow the steps below.

  1. Clone the repo
  2. Install dependencies with npm install
  3. Create a .env file in the root of the project and add your API keys. See the .env.example file for an example.
  4. Start the server in development mode with npm run start:dev.

You can also use npm run start:dev:tsc to enable project-wide type checking at the cost of slower startup times. npm run type-check can be used to run type checking without starting the server.

Building

To build the project, run npm run build. This will compile the TypeScript code to JavaScript and output it to the build directory.

Note that if you are building the server on a very memory-constrained (<= 1GB) VPS, you may need to run the build with NODE_OPTIONS=--max_old_space_size=2048 npm run build to avoid running out of memory, assuming you have swap enabled. The application itself should run fine on a 512MB VPS at most reasonable traffic levels.

Forking

If you fork the repository on GitGud, you may wish to disable GitLab CI/CD, or you will be spammed with emails about failed builds due to not having any CI runners. You can do this by going to Settings > General > Visibility, project features, permissions and disabling the "CI/CD" feature.