Update .env.example to include MAX_CONTEXT_TOKENS_OPENAI (khanon/oai-reverse-proxy!50)

dllt98 2023-11-08 02:50:19 +00:00 committed by khanon
parent 350d6542cf
commit 08b2196bfb
1 changed file with 4 additions and 0 deletions

.env.example

@@ -14,6 +14,10 @@
# Model requests allowed per minute per user.
# MODEL_RATE_LIMIT=4
# Max number of context tokens a user can request at once.
# Increase this if your proxy allows GPT 32k or 128k context
# MAX_CONTEXT_TOKENS_OPENAI=16384
# Max number of output tokens a user can request at once.
# MAX_OUTPUT_TOKENS_OPENAI=400
# MAX_OUTPUT_TOKENS_ANTHROPIC=400
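
For illustration only (not part of this commit), a deployment that allows 128k-context OpenAI models might uncomment these keys in its own .env; the values below are assumptions, not the project's defaults:

# hypothetical .env values, adjust to your deployment
MODEL_RATE_LIMIT=4               # model requests per minute per user
MAX_CONTEXT_TOKENS_OPENAI=131072 # allow prompts up to 128k tokens
MAX_OUTPUT_TOKENS_OPENAI=400     # cap completion length per request
MAX_OUTPUT_TOKENS_ANTHROPIC=400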