Reverse proxy server for various LLM APIs. Features translation between API formats, user management, anti-abuse, API key rotation, DALL-E support, and optional prompt/response logging.
nai-degen 44f667976a simplifies infopage and updates README 2023-04-11 03:19:05 -07:00


---
title: oai-reverse-proxy
emoji: 🔁
colorFrom: green
colorTo: purple
sdk: docker
pinned: false
---

# OAI Reverse Proxy

Reverse proxy server for the OpenAI (and soon Anthropic) APIs. Forwards text generation requests while rejecting administrative/billing requests. Includes optional rate limiting and prompt filtering to prevent abuse.

## Table of Contents

- [What is this?](#what-is-this)
- [Why?](#why)
- [Setup Instructions](#setup-instructions)

## What is this?

If you'd like to give a friend access to an API using keys you own, you can use this proxy to keep your keys safe while still letting them generate text with the API. You can also use it if you'd like to build a client-side application that uses the OpenAI or Anthropic APIs but don't want to build your own backend. You should never embed your real API keys in a client-side application; instead, have your frontend connect to this reverse proxy, which forwards requests to the downstream service.

This keeps your keys safe and allows you to use the rate limiting and prompt filtering features of the proxy to prevent abuse.
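As a minimal sketch of the client side, a frontend can target the proxy's base URL instead of `api.openai.com` and omit the API key entirely (the proxy holds the key). The `PROXY_BASE` URL and request-builder helper below are hypothetical placeholders, not part of this project:

```typescript
// Shape of a prepared HTTP request (kept self-contained; no DOM types needed).
interface ProxyRequest {
  url: string;
  method: string;
  headers: Record<string, string>;
  body: string;
}

// Placeholder: substitute your own deployment's URL here.
const PROXY_BASE = "https://example-proxy.invalid";

// Build a text completion request against the proxy. Note that the client
// sends no Authorization header -- the proxy attaches its own key before
// forwarding the request downstream.
function buildCompletionRequest(prompt: string): ProxyRequest {
  return {
    url: `${PROXY_BASE}/v1/completions`,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, max_tokens: 128 }),
  };
}
```

A frontend would then pass the resulting `url`, `method`, `headers`, and `body` to `fetch` (or any HTTP client) as usual.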

## Why?

OpenAI keys have full account permissions: they can revoke themselves, generate new keys, modify spend quotas, and more. You absolutely should not share them, post them publicly, or embed them in client-side applications, where they can easily be stolen.

This proxy forwards only text generation requests to the downstream service and rejects any request that would otherwise modify your account.
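Conceptually, this filtering amounts to an allowlist of generation endpoints. The sketch below is illustrative only (the path names are assumptions, not the proxy's actual routing code):

```typescript
// Illustrative allowlist: only text generation endpoints are forwarded.
// Anything else (key management, billing, account settings) is rejected.
const ALLOWED_PATHS = new Set(["/v1/completions", "/v1/chat/completions"]);

function isForwardable(path: string): boolean {
  return ALLOWED_PATHS.has(path);
}

// isForwardable("/v1/completions")  -> true  (text generation, forwarded)
// isForwardable("/v1/api-keys")     -> false (would modify the account)
```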


## Setup Instructions

Since this is a server, you'll need to deploy it somewhere. A few options are available:

### Deploy to Huggingface Space

See here for instructions on how to deploy to a Huggingface Space.

### Deploy to Repl.it (WIP)

Still working on this. It's a bit more technical than the Huggingface option; you can give it a shot by clicking the button below.

Run on Repl.it

You'll need to set your secrets in Replit, similar to the Huggingface instructions above. `.env` files currently don't work properly on Replit, so the proxy falls back to its default configuration.